Valgrind provides a generic infrastructure for supervising the execution of programs. This is done by providing a way to instrument programs in very precise ways, making it relatively easy to support activities such as dynamic error detection and profiling.
Although writing a tool is not easy, and requires learning quite a few things about Valgrind, it is much easier than instrumenting a program from scratch yourself.
[Nb: What follows is slightly out of date.]
The key idea behind Valgrind's architecture is the division between its "core" and "tools".
The core provides the common low-level infrastructure to support program instrumentation, including the JIT compiler, low-level memory manager, signal handling and a scheduler (for pthreads). It also provides certain services that are useful to some but not all tools, such as support for error recording and suppression.
But the core leaves certain operations undefined, which must be filled in by tools. Most notably, tools define how program code should be instrumented. They can also call certain functions to indicate to the core that they would like to use certain services, or be notified when certain interesting events occur. But the core takes care of all the hard work.
An important concept to understand before writing a tool is that there are three spaces in which program code executes:
User space: this covers most of the program's execution. The tool is given the code and can instrument it any way it likes, providing (more or less) total control over the code.
Code executed in user space includes all the program code, almost all of the C library (including things like the dynamic linker), and almost all parts of all other libraries.
Core space: a small proportion of the program's execution takes place entirely within Valgrind's core. This includes:
Dynamic memory management (malloc() etc.)
Thread scheduling
Signal handling
A tool has no control over these operations; it never "sees" the code doing this work and thus cannot instrument it. However, the core provides hooks so a tool can be notified when certain interesting events happen, for example when dynamic memory is allocated or freed, the stack pointer is changed, or a pthread mutex is locked, etc.
Note that these hooks only notify tools of events relevant to user space. For example, when the core allocates some memory for its own use, the tool is not notified of this, because it's not directly part of the supervised program's execution.
Kernel space: execution in the kernel. Two kinds:
System calls: can't be directly observed by either the tool or the core. But the core does have some idea of what happens to the arguments, and it provides hooks for a tool to wrap system calls.
Other: all other kernel activity (e.g. process scheduling) is totally opaque and irrelevant to the program.
It should be noted that a tool only has direct control over code executed in user space. This is the vast majority of code executed, but it is not absolutely all of it, so any profiling information recorded by a tool won't be totally accurate.
Before you write a tool, you should have some idea of what it should do. What is it you want to know about your programs of interest? Consider some existing tools:
memcheck: among other things, performs fine-grained validity and addressability checks of every memory reference performed by the program.
cachegrind: tracks every instruction and memory reference to simulate instruction and data caches, tracking cache accesses and misses that occur on every line in the program.
helgrind: tracks every memory access and mutex lock/unlock to determine if a program contains any data races.
lackey: does simple counting of various things: the number of calls to a particular function (_dl_runtime_resolve()); the number of basic blocks, guest instructions, and VEX instructions executed; and the number of branches executed and the proportion of them which were taken.
These examples give a reasonable idea of what kinds of things Valgrind can be used for. The instrumentation can range from very lightweight (e.g. counting the number of times a particular function is called) to very intrusive (e.g. memcheck's memory checking).
Here is a list of ideas we have had for tools that should not be too hard to implement.
branch profiler: A machine's branch prediction hardware could be simulated, and each branch annotated with the number of predicted and mispredicted branches. It would be implemented quite similarly to Cachegrind, and could reuse the cg_annotate script to annotate source code.
The biggest difficulty with this is the simulation; the chip-makers are very cagey about how their chips do branch prediction. But implementing one or more of the basic algorithms could still give good information.
coverage tool: Cachegrind can already be used for doing test coverage, but it's massive overkill to use it just for that.
It would be easy to write a coverage tool that records how many times each basic block was executed. Again, the cg_annotate script could be used for annotating source code with the gathered information. However, cg_annotate is only designed for working with single program runs. It could be extended relatively easily to deal with multiple runs of a program, so that the coverage of a whole test suite could be determined.
In addition to the standard coverage information, such a tool could record extra information that would help a user generate test cases to exercise unexercised paths. For example, for each conditional branch, the tool could record all inputs to the conditional test, and print these out when annotating.
run-time type checking: A nice example of a dynamic checker is given in this paper:
Debugging via Run-Time Type Checking
Alexey Loginov, Suan Hsi Yong, Susan Horwitz and Thomas Reps
Proceedings of Fundamental Approaches to Software Engineering
April 2001.
Similar is the tool described in this paper:
Run-Time Type Checking for Binary Programs
Michael Burrows, Stephen N. Freund, Janet L. Wiener
Proceedings of the 12th International Conference on Compiler Construction (CC 2003)
April 2003.
This approach can find quite a range of bugs, particularly in C and C++ programs, and could be implemented quite nicely as a Valgrind tool.
Ways to speed up this run-time type checking are described in this paper:
Reducing the Overhead of Dynamic Analysis
Suan Hsi Yong and Susan Horwitz
Proceedings of Runtime Verification '02
July 2002.
Valgrind's client requests could be used to pass information to a tool about which elements need instrumentation and which don't.
We would love to hear from anyone who implements these or other tools.
Tools must define various functions for instrumenting programs that are called by Valgrind's core. They are then linked against the coregrind library (libcoregrind.a) that Valgrind provides, as well as the VEX library (libvex.a) that also comes with Valgrind and provides the JIT engine.
Each tool is linked as a statically linked program and placed in the Valgrind library directory, from where Valgrind will load it automatically when the --tool option is used to select it.
To write your own tool, you'll need the Valgrind source code. A normal source distribution should do, although you might want to check out the latest code from the Subversion repository. See the information about how to do so at the Valgrind website.
Valgrind uses GNU automake and autoconf for the creation of Makefiles and configuration. But don't worry, these instructions should be enough to get you started even if you know nothing about those tools.
In what follows, all filenames are relative to Valgrind's top-level directory valgrind/.
Choose a name for the tool, and an abbreviation that can be used as a short prefix. We'll use foobar and fb as an example.
Make a new directory foobar/ which will hold the tool.
Copy none/Makefile.am into foobar/. Edit it by replacing all occurrences of the string "none" with "foobar", and the one occurrence of the string "nl_" with "fb_". It might be worth trying to understand this file, at least a little; you might have to do more complicated things with it later on. In particular, the name of the foobar_SOURCES variable determines the name of the tool, which determines what name must be passed to the --tool option to use the tool.
Copy none/nl_main.c into foobar/, renaming it as fb_main.c. Edit it by changing the lines in pre_clo_init() to something appropriate for the tool. These fields are used in the startup message, except for bug_reports_to, which is used if a tool assertion fails.
Edit Makefile.am, adding the new directory foobar to the SUBDIRS variable.
Edit configure.in, adding foobar/Makefile to the AC_OUTPUT list.
Run:

autogen.sh
./configure --prefix=`pwd`/inst
make install
It should automake, configure and compile without errors, putting copies of the tool in foobar/ and inst/lib/valgrind/.
You can test it with a command like:
inst/bin/valgrind --tool=foobar date
(almost any program should work; date is just an example).
The output should be something like this:
==738== foobar-0.0.1, a foobarring tool for x86-linux.
==738== Copyright (C) 1066AD, and GNU GPL'd, by J. Random Hacker.
==738== Built with valgrind-1.1.0, a program execution monitor.
==738== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward.
==738== Estimated CPU clock rate is 1400 MHz
==738== For more details, rerun with: -v
==738==
Wed Sep 25 10:31:54 BST 2002
==738==
The tool does nothing except run the program uninstrumented.
These steps don't have to be followed exactly - you can choose different names for your source files, and use a different --prefix for ./configure.
Now that we've set up, built and tested the simplest possible tool, on to the interesting stuff...
A tool must define at least these four functions:
pre_clo_init()
post_clo_init()
instrument()
fini()
Also, it must use the macro VG_DETERMINE_INTERFACE_VERSION exactly once in its source code. If it doesn't, you will get a link error involving VG_(tool_interface_version). This macro is used to ensure that the core/tool interface used by the core and a plugged-in tool are binary compatible.
In addition, if a tool wants to use some of the optional services provided by the core, it may have to define other functions and tell the core about them.
Most of the initialisation should be done in pre_clo_init(). Only use post_clo_init() if a tool provides command line options and must do some initialisation after option processing takes place ("clo" stands for "command line options").
First of all, various "details" need to be set for a tool, using the functions VG_(details_*)(). Some are compulsory, some aren't. Some are used when constructing the startup message; detail_bug_reports_to is used if VG_(tool_panic)() is ever called, or a tool assertion fails. Others have other uses.
Second, various "needs" can be set for a tool, using the functions VG_(needs_*)(). They are mostly booleans, and can be left untouched (they default to False). They determine whether a tool can do various things such as: record, report and suppress errors; process command line options; wrap system calls; record extra information about malloc'd blocks, etc.
For example, if a tool wants the core's help in recording and reporting errors, it must call VG_(needs_tool_errors) and provide definitions of eight functions for comparing errors, printing out errors, reading suppressions from a suppressions file, etc. While writing these functions requires some work, it's much less than doing error handling from scratch because the core is doing most of the work. See the function VG_(needs_tool_errors) in include/pub_tool_tooliface.h for full details of all the needs.
Third, the tool can indicate which core events it wants to be notified about, using the functions VG_(track_*)(). These include things such as blocks of memory being malloc'd, the stack pointer changing, a mutex being locked, etc. If a tool wants to know about an event, it should provide a pointer to a function, which will be called when that event happens.
For example, if the tool wants to be notified when a new block of memory is malloc'd, it should call VG_(track_new_mem_heap)() with an appropriate function pointer, and the assigned function will be called each time this happens.
More information about "details", "needs" and "trackable events" can be found in include/pub_tool_tooliface.h.
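Putting the above together, a minimal fb_main.c might look like the sketch below. This is only a hedged illustration: the exact function names and signatures are defined in include/pub_tool_tooliface.h and have changed between Valgrind versions, so treat the names here (which follow recent releases) as assumptions to check against your own tree.

```c
/* fb_main.c -- hypothetical tool skeleton; verify all names against
   include/pub_tool_tooliface.h in your Valgrind source tree. */
#include "pub_tool_basics.h"
#include "pub_tool_tooliface.h"

/* Example trackable-event callback: called on each heap allocation. */
static void fb_new_mem_heap(Addr a, SizeT len, Bool is_inited)
{
   /* record something about the block at [a, a+len) here */
}

static void fb_post_clo_init(void)
{
   /* initialisation that depends on command line options */
}

static IRSB* fb_instrument(VgCallbackClosure* closure,
                           IRSB* sbIn,
                           const VexGuestLayout* layout,
                           const VexGuestExtents* vge,
                           const VexArchInfo* archinfo_host,
                           IRType gWordTy, IRType hWordTy)
{
   return sbIn;   /* no instrumentation yet */
}

static void fb_fini(Int exitcode)
{
   /* print final results here */
}

static void fb_pre_clo_init(void)
{
   /* "details": used in the startup message and for assertion failures */
   VG_(details_name)            ("Foobar");
   VG_(details_version)         (NULL);
   VG_(details_description)     ("a foobarring tool");
   VG_(details_copyright_author)("Copyright (C) 1066AD, by J. Random Hacker.");
   VG_(details_bug_reports_to)  ("you@example.com");

   VG_(basic_tool_funcs)        (fb_post_clo_init,
                                 fb_instrument,
                                 fb_fini);

   /* "needs" and "track" calls go here, e.g.: */
   VG_(track_new_mem_heap)      (fb_new_mem_heap);
}

VG_DETERMINE_INTERFACE_VERSION(fb_pre_clo_init)
```

This fragment only compiles inside a Valgrind build tree, which is why it cannot be run standalone.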
instrument() is the interesting one. It allows you to instrument VEX IR, which is Valgrind's RISC-like intermediate language. VEX IR is described in Introduction to UCode.
The easiest way to instrument VEX IR is to insert calls to C functions when interesting things happen. See the tool "Lackey" (lackey/lk_main.c) for a simple example of this, or Cachegrind (cachegrind/cg_main.c) for a more complex example.
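As a hedged sketch of this technique, in the style of Lackey: an instrument() that counts guest instructions walks the incoming superblock and inserts a "dirty call" to a C helper before each IMark (the statement marking the start of a guest instruction). The VEX type and helper names below follow recent Valgrind versions and may differ in yours.

```c
/* Hypothetical sketch, modelled on lackey/lk_main.c; names are
   assumptions to check against your Valgrind/VEX headers. */
static ULong n_guest_instrs = 0;

static void fb_count_instr(void)
{
   n_guest_instrs++;
}

static IRSB* fb_instrument(VgCallbackClosure* closure,
                           IRSB* sbIn,
                           const VexGuestLayout* layout,
                           const VexGuestExtents* vge,
                           const VexArchInfo* archinfo_host,
                           IRType gWordTy, IRType hWordTy)
{
   Int i;
   /* Start with a copy of the superblock that has no statements. */
   IRSB* sbOut = deepCopyIRSBExceptStmts(sbIn);

   for (i = 0; i < sbIn->stmts_used; i++) {
      IRStmt* st = sbIn->stmts[i];

      /* An IMark marks the start of each guest instruction; insert a
         call to our C helper just before it. */
      if (st->tag == Ist_IMark) {
         IRDirty* di = unsafeIRDirty_0_N(
                          0, "fb_count_instr",
                          VG_(fnptr_to_fnentry)(&fb_count_instr),
                          mkIRExprVec_0());
         addStmtToIRSB(sbOut, IRStmt_Dirty(di));
      }
      addStmtToIRSB(sbOut, st);  /* keep the original statement */
   }
   return sbOut;
}
```

Like the skeleton earlier, this only compiles against Valgrind's tool headers, not standalone.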
This is where you can present the final results, such as a summary of the information collected. Any log files should be written out at this point.
Please note that the core/tool split infrastructure is quite complex and not brilliantly documented. Here are some important points, but there are undoubtedly many others that I should note but haven't thought of.
The files include/pub_tool_*.h contain all the types, macros, functions, etc. that a tool should (hopefully) need, and are the only .h files a tool should need to #include.
In particular, you can't use anything from the C library (there are deep reasons for this, trust us). Valgrind provides an implementation of a reasonable subset of the C library, details of which are in pub_tool_libc*.h.
Similarly, when writing a tool, you shouldn't need to look at any of the code in Valgrind's core, although it might sometimes be useful to help understand something.
The pub_tool_*.h files have a reasonable amount of documentation in them that should hopefully be enough to get you going. But ultimately, the tools distributed (Memcheck, Cachegrind, Lackey, etc.) are probably the best documentation of all, for the moment.
Note that the VG_ macro is used heavily. This just prepends a longer string in front of names to avoid potential namespace clashes.
Writing and debugging tools is not trivial. Here are some suggestions for solving common problems.
If you are getting segmentation faults in C functions used by your tool, the usual GDB command:
gdb <prog> core
usually gives the location of the segmentation fault.
If you want to debug C functions used by your tool, you can achieve this by following these steps:
Set VALGRIND_LAUNCHER to <prefix>/bin/valgrind:

export VALGRIND_LAUNCHER=/usr/local/bin/valgrind
Then run gdb <prefix>/lib/valgrind/<platform>/<tool>:
gdb /usr/local/lib/valgrind/ppc32-linux/lackey
Do handle SIGSEGV SIGILL nostop noprint in GDB to prevent GDB from stopping on a SIGSEGV or SIGILL:

(gdb) handle SIGILL SIGSEGV nostop noprint
Set any breakpoints you want and proceed as normal for GDB:
(gdb) b vgPlain_do_exec
The macro VG_(FUNC) expands to vgPlain_FUNC, so if you want to set a breakpoint on VG_(do_exec), you set it on vgPlain_do_exec, as shown above.
Run the tool with required options:
(gdb) run `pwd`
GDB may be able to give you useful information. Note that by default most of the system is built with -fomit-frame-pointer, and you'll need to get rid of this to extract useful tracebacks from GDB.
If you are having problems with your VEX IR instrumentation, it's likely that GDB won't be able to help at all. In this case, Valgrind's --trace-flags option is invaluable for observing the results of instrumentation.
Once a tool becomes more complicated, there are some extra things you may want/need to do.
If your tool reports errors and you want to suppress some common ones, you can add suppressions to the suppression files. The relevant files are valgrind/*.supp; the final suppression file is aggregated from these files by combining the relevant .supp files depending on the versions of linux, X and glibc on a system.
Suppression types have the form tool_name:suppression_name. The tool_name here is the name you specify for the tool during initialisation with VG_(details_name)().
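For example, a suppression for a hypothetical foobar error kind might look like the following. The error-kind name and the frames shown here are made up for illustration; model real entries on those in the standard .supp files.

```
{
   name_of_this_suppression
   foobar:SomeErrorKind
   fun:some_function
   obj:/usr/X11R6/lib/libX11.so.6.2
}
```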
As of version 3.0.0, Valgrind documentation has been converted to XML. Why? See The XML FAQ.
If you are feeling conscientious and want to write some documentation for your tool, please use XML. The Valgrind Docs use the following toolchain and versions:
xmllint:   using libxml version 20607
xsltproc:  using libxml 20607, libxslt 10102 and libexslt 802
pdfxmltex: pdfTeX (Web2C 7.4.5) 3.14159-1.10b
pdftops:   version 3.00
DocBook:   version 4.2
Latency: you should note that latency is a big problem: DocBook is constantly being updated, but the tools tend to lag behind somewhat. It is important that the versions work with each other, so if you decide to upgrade something, you need to check whether things still work nicely - this *cannot* be assumed.
Stylesheets: The Valgrind docs use various custom stylesheet layers, all of which are in valgrind/docs/lib/. You shouldn't need to modify these in any way.
Catalogs: Catalogs provide a mapping from generic addresses to specific local directories on a given machine. Most recent Linux distributions have adopted a common place for storing catalogs (/etc/xml/). Assuming that you have the various tools listed above installed, you probably won't need to modify your catalogs. But if you do, then just add another group to this file, reflecting your local installation.
Follow these steps (using foobar as the example tool name again):
Make a directory valgrind/foobar/docs/.
Copy the XML documentation file for the tool Nulgrind from valgrind/none/docs/nl-manual.xml to foobar/docs/, and rename it to foobar/docs/fb-manual.xml.
Note: there is a *really stupid* tetex bug with underscores in filenames, so don't use '_'.
Write the documentation. There are some helpful bits and pieces on using xml markup in valgrind/docs/xml/xml_help.txt.
Include it in the User Manual by adding the relevant entry to valgrind/docs/xml/manual.xml. Copy and edit an existing entry.
Validate foobar/docs/fb-manual.xml using the following command from within valgrind/docs/:

% make valid
You will probably get errors that look like this:
./xml/index.xml:5: element chapter: validity error : No declaration for attribute base of element chapter
Ignore (only) these -- they're not important.
Because the xml toolchain is fragile, it is important to ensure that fb-manual.xml won't break the documentation set build. Note that just because an xml file happily transforms to html does not necessarily mean the same holds true for pdf/ps.
You can (re-)generate the HTML docs while you are writing fb-manual.xml to help you see how it's looking. The generated files end up in valgrind/docs/html/. Use the following command, within valgrind/docs/:

% make html-docs
When you have finished, also generate pdf and ps output to check all is well, from within valgrind/docs/:

% make print-docs
Check the output .pdf and .ps files in valgrind/docs/print/.
Valgrind has some support for regression tests. If you want to write regression tests for your tool:
Make a directory foobar/tests/. Make sure the name of the directory is tests/, as the build system assumes that any tests for the tool will be in a directory by that name.
Edit configure.in, adding foobar/tests/Makefile to the AC_OUTPUT list.
Write foobar/tests/Makefile.am. Use memcheck/tests/Makefile.am as an example.
Write the tests: .vgtest test description files, and .stdout.exp and .stderr.exp expected output files. (Note that Valgrind's output goes to stderr.) Some details on writing and running tests are given in the comments at the top of the testing script tests/vg_regtest.
Write a filter for stderr results, foobar/tests/filter_stderr. It can call the existing filters in tests/. See memcheck/tests/filter_stderr for an example; in particular note the $dir trick that ensures the filter works correctly from any directory.
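For instance, a minimal test could consist of a .vgtest description file plus its expected-output files. The file name, program and directives below are hypothetical; see the comments in tests/vg_regtest for the directives actually supported.

```
# foobar/tests/trivial.vgtest (hypothetical example)
prog: trivial
vgopts: -q
```

Alongside this you would provide trivial.stderr.exp (and trivial.stdout.exp if the program prints to stdout) containing the expected, filtered output.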
To profile a tool, use Cachegrind on it. Read README_DEVELOPERS for details on running Valgrind under Valgrind.
To do simple tick-based profiling of a tool, include the line:

#include "vg_profile.c"

in the tool somewhere, and rebuild (you may have to make clean first). Then run Valgrind with the --profile=yes option.
The profiler is stack-based; you can register a profiling event with VG_(register_profile_event)() and then use the VGP_PUSHCC and VGP_POPCC macros to record time spent doing certain things. New profiling event numbers must not overlap with the core profiling event numbers. See include/pub_tool_profile.h for details and Memcheck for an example.
If you add any directories under valgrind/foobar/, you will need to add an appropriate Makefile.am to each, and add a corresponding entry to the AC_OUTPUT list in valgrind/configure.in.
If you add any scripts to your tool (see Cachegrind for an example) you need to add them to the bin_SCRIPTS variable in valgrind/foobar/Makefile.am.
In order to allow the core/tool interface to evolve over time, Valgrind uses a basic interface versioning system. All a tool has to do is use the VG_DETERMINE_INTERFACE_VERSION macro exactly once in its code. If not, a link error will occur when the tool is built.
The interface version number has the form X.Y. Changes in Y indicate binary compatible changes. Changes in X indicate binary incompatible changes. If the core and tool have the same major version number X, they should work together. If X doesn't match, Valgrind will abort execution with an explanation of the problem.
This approach was chosen so that if the interface changes in the future, old tools won't work and the reason will be clearly explained, instead of possibly crashing mysteriously. We have attempted to minimise the potential for binary incompatible changes by means such as minimising the use of naked structs in the interface.
This whole core/tool business is under active development, although it's slowly maturing.
The first consequence of this is that the core/tool interface will continue to change in the future; we have no intention of freezing it and then regretting the inevitable stupidities. Hopefully most of the future changes will be to add new features, hooks, functions, etc, rather than to change old ones, which should cause a minimum of trouble for existing tools, and we've put some effort into future-proofing the interface to avoid binary incompatibility. But we can't guarantee anything. The versioning system should catch any incompatibilities. Just something to be aware of.
The second consequence of this is that we'd love to hear your feedback about it:
If you love it or hate it
If you find bugs
If you write a tool
If you have suggestions for new features, needs, trackable events, functions
If you have suggestions for making tools easier to write
If you have suggestions for improving this documentation
If you don't understand something
or anything else!
Happy programming.