This section describes the Valgrind core services, flags and behaviours. That means it is relevant regardless of what particular tool you are using. A point of terminology: most references to "valgrind" in the rest of this section (Section 2) refer to the Valgrind core services.
Valgrind is designed to be as non-intrusive as possible. It works directly with existing executables. You don't need to recompile, relink, or otherwise modify, the program to be checked.
Simply put valgrind --tool=tool_name at the start of the command line normally used to run the program. For example, if you want to run the command ls -l using the heavyweight memory-checking tool Memcheck, issue the command:

valgrind --tool=memcheck ls -l

(Memcheck is the default, so if you want to use it you can actually omit the --tool flag.)
Regardless of which tool is in use, Valgrind takes control of your program before it starts. Debugging information is read from the executable and associated libraries, so that error messages and other outputs can be phrased in terms of source code locations (if that is appropriate).
Your program is then run on a synthetic CPU provided by the Valgrind core. As new code is executed for the first time, the core hands the code to the selected tool. The tool adds its own instrumentation code to this and hands the result back to the core, which coordinates the continued execution of this instrumented code.
The amount of instrumentation code added varies widely between tools. At one end of the scale, Memcheck adds code to check every memory access and every value computed, increasing the size of the code at least 12 times, and making it run 25-50 times slower than natively. At the other end of the spectrum, the ultra-trivial "none" tool (a.k.a. Nulgrind) adds no instrumentation at all and causes in total "only" about a 4 times slowdown.
Valgrind simulates every single instruction your program executes. Because of this, the active tool checks, or profiles, not only the code in your application but also in all supporting dynamically-linked (.so-format) libraries, including the GNU C library, the X client libraries, Qt if you work with KDE, and so on.
If you're using one of the error-detection tools, Valgrind will often detect errors in libraries, for example the GNU C or X11 libraries, which you have to use. You might not be interested in these errors, since you probably have no control over that code. Therefore, Valgrind allows you to selectively suppress errors, by recording them in a suppressions file which is read when Valgrind starts up. The build mechanism attempts to select suppressions which give reasonable behaviour for the libc and XFree86 versions detected on your machine. To make it easier to write suppressions, you can use the --gen-suppressions=yes option, which tells Valgrind to print out a suppression for each error that appears, which you can then copy into a suppressions file.
Different error-checking tools report different kinds of errors. The suppression mechanism therefore allows you to say which tool or tool(s) each suppression applies to.
First off, consider whether it might be beneficial to recompile your application and supporting libraries with debugging info enabled (the -g flag). Without debugging info, the best Valgrind tools will be able to do is guess which function a particular piece of code belongs to, which makes both error messages and profiling output nearly useless. With -g, you'll hopefully get messages which point directly to the relevant source code lines.

Another flag you might like to consider, if you are working with C++, is -fno-inline. That makes it easier to see the function-call chain, which can help reduce confusion when navigating around large C++ apps. For whatever it's worth, debugging OpenOffice.org with Memcheck is a bit easier when using this flag. You don't have to do this, but doing so helps Valgrind produce more accurate and less confusing error reports. Chances are you're set up like this already, if you intended to debug your program with GNU gdb, or some other debugger.
This paragraph applies only if you plan to use Memcheck: on rare occasions, optimisation levels at -O2 and above have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether. Since this often makes things unmanageably slow, a plausible compromise is to use -O. This gets you the majority of the benefits of higher optimisation levels whilst keeping relatively small the chances of false complaints from Memcheck. All other tools (as far as we know) are unaffected by optimisation level.
Valgrind understands both the older "stabs" debugging format, used by gcc versions prior to 3.1, and the newer DWARF2 format used by gcc 3.1 and later. We continue to refine and debug our debug-info readers, although the majority of effort will naturally enough go into the newer DWARF2 reader.
When you're ready to roll, just run your application as you would normally, but place valgrind --tool=tool_name in front of your usual command-line invocation. Note that you should run the real (machine-code) executable here. If your application is started by, for example, a shell or perl script, you'll need to modify it to invoke Valgrind on the real executables. Running such scripts directly under Valgrind will result in you getting error reports pertaining to /bin/sh, /usr/bin/perl, or whatever interpreter you're using. This may not be what you want and can be confusing. You can force the issue by giving the flag --trace-children=yes, but confusion is still likely.
Valgrind tools write a commentary, a stream of text, detailing error reports and other significant events. All lines in the commentary have the following form:
==12345== some-message-from-Valgrind
The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary, and also easy to differentiate commentaries from different processes which have become merged together, for whatever reason.
By default, Valgrind tools write only essential messages to the commentary, so as to avoid flooding you with information of secondary importance. If you want more information about what is happening, re-run, passing the -v flag to Valgrind. A second -v gives yet more detail.
You can direct the commentary to three different places:
The default: send it to a file descriptor, which is by default 2 (stderr). So, if you give the core no options, it will write commentary to the standard error stream. If you want to send it to some other file descriptor, for example number 9, you can specify --log-fd=9. This is the simplest and most common arrangement, but can cause problems when valgrinding entire trees of processes which expect specific file descriptors, particularly stdin/stdout/stderr, to be available for their own use.
A less intrusive option is to write the commentary to a file, which you specify by --log-file=filename. Note carefully that the commentary is not written to the file you specify, but instead to one called filename.12345, if for example the pid of the traced process is 12345. This is helpful when valgrinding a whole tree of processes at once, since it means that each process writes to its own logfile, rather than the result being jumbled up in one big logfile. If filename.12345 already exists, then it will name new files filename.12345.1 and so on. If you want to specify precisely the file name to use, without the trailing .12345 part, you can instead use --log-file-exactly=filename.
You can also use the --log-file-qualifier=<VAR> option to modify the filename according to the environment variable VAR. This is rarely needed, but very useful in certain circumstances (eg. when running MPI programs). In this case, the trailing .12345 part is replaced by the contents of $VAR. The idea is that you specify a variable which will be set differently for each process in the job, for example BPROC_RANK or whatever is applicable in your MPI setup.
The least intrusive option is to send the commentary to a network socket. The socket is specified as an IP address and port number pair, like this: --log-socket=192.168.0.1:12345 if you want to send the output to host IP 192.168.0.1 port 12345 (I have no idea if 12345 is a port of pre-existing significance). You can also omit the port number: --log-socket=192.168.0.1, in which case a default port of 1500 is used. This default is defined by the constant VG_CLO_DEFAULT_LOGPORT in the sources. Note, unfortunately, that you have to use an IP address here, rather than a hostname.
Writing to a network socket is pretty useless if you don't have something listening at the other end. We provide a simple listener program, valgrind-listener, which accepts connections on the specified port and copies whatever it is sent to stdout. Probably someone will tell us this is a horrible security risk. It seems likely that people will write more sophisticated listeners in the fullness of time.
valgrind-listener can accept simultaneous connections from up to 50 valgrinded processes. In front of each line of output it prints the current number of active connections in round brackets.
valgrind-listener accepts two command-line flags:
-e or --exit-at-zero: when the number of connected processes falls back to zero, exit. Without this, it will run forever, that is, until you send it Control-C.

portnumber: changes the port it listens on from the default (1500). The specified port must be in the range 1024 to 65535. The same restriction applies to port numbers specified by a --log-socket to Valgrind itself.
If a valgrinded process fails to connect to a listener, for whatever reason (the listener isn't running, invalid or unreachable host or port, etc), Valgrind switches back to writing the commentary to stderr. The same goes for any process which loses an established connection to a listener. In other words, killing the listener doesn't kill the processes sending data to it.
Here is an important point about the relationship between the commentary and profiling output from tools. The commentary contains a mix of messages from the Valgrind core and the selected tool. If the tool reports errors, it will report them to the commentary. However, if the tool does profiling, the profile data will be written to a file of some kind, depending on the tool, and independent of what --log-* options are in force. The commentary is intended to be a low-bandwidth, human-readable channel. Profiling data, on the other hand, is usually voluminous and not meaningful without further processing, which is why we have chosen this arrangement.
When one of the error-checking tools (Memcheck, Helgrind) detects something bad happening in the program, an error message is written to the commentary. For example:
==25832== Invalid read of size 4
==25832==    at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
==25832==    by 0x80487AF: main (bogon.cpp:66)
==25832==  Address 0xBFFFF74C is not stack'd, malloc'd or free'd
This message says that the program did an illegal 4-byte read of address 0xBFFFF74C, which, as far as Memcheck can tell, is not a valid stack address, nor corresponds to any currently malloc'd or free'd blocks. The read is happening at line 45 of bogon.cpp, called from line 66 of the same file, etc. For errors associated with an identified malloc'd/free'd block, for example reading free'd memory, Valgrind reports not only the location where the error happened, but also where the associated block was malloc'd/free'd.
Valgrind remembers all error reports. When an error is detected, it is compared against old reports, to see if it is a duplicate. If so, the error is noted, but no further commentary is emitted. This avoids you being swamped with bazillions of duplicate error reports.
If you want to know how many times each error occurred, run with the -v option. When execution finishes, all the reports are printed out, along with, and sorted by, their occurrence counts. This makes it easy to see which errors have occurred most frequently.
Errors are reported before the associated operation actually happens. If you're using a tool (Memcheck) which does address checking, and your program attempts to read from address zero, the tool will emit a message to this effect, and the program will then duly die with a segmentation fault.
In general, you should try and fix errors in the order that they are reported. Not doing so can be confusing. For example, a program which copies uninitialised values to several memory locations, and later uses them, will generate several error messages, when run on Memcheck. The first such error message may well give the most direct clue to the root cause of the problem.
The process of detecting duplicate errors is quite an expensive one and can become a significant performance overhead if your program generates huge quantities of errors. To avoid serious problems, Valgrind will simply stop collecting errors after 1,000 different errors have been seen, or 10,000,000 errors in total have been seen. In this situation you might as well stop your program and fix it, because Valgrind won't tell you anything else useful after this. Note that the 1,000/10,000,000 limits apply after suppressed errors are removed. These limits are defined in m_errormgr.c and can be increased if necessary.
To avoid this cutoff you can use the --error-limit=no flag. Then Valgrind will always show errors, regardless of how many there are. Use this flag carefully, since it may have a bad effect on performance.
The error-checking tools detect numerous problems in the base libraries, such as the GNU C library, and the XFree86 client libraries, which come pre-installed on your GNU/Linux system. You can't easily fix these, but you don't want to see these errors (and yes, there are many!). So Valgrind reads a list of errors to suppress at startup. A default suppression file is cooked up by the ./configure script when the system is built.
You can modify and add to the suppressions file at your leisure, or, better, write your own. Multiple suppression files are allowed. This is useful if part of your project contains errors you can't or don't want to fix, yet you don't want to continuously be reminded of them.
Note: by far the easiest way to add suppressions is to use the --gen-suppressions=yes flag described in Command-line flags for the Valgrind core.
Each error to be suppressed is described very specifically, to minimise the possibility that a suppression-directive inadvertently suppresses a bunch of similar errors which you did want to see. The suppression mechanism is designed to allow precise yet flexible specification of errors to suppress.
If you use the -v flag, at the end of execution, Valgrind prints out one line for each used suppression, giving its name and the number of times it got used. Here's the suppressions used by a run of valgrind --tool=memcheck ls -l:
--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getgrgid_r
--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getpwuid_r
--27579-- supp: 6 strrchr/_dl_map_object_from_fd/_dl_map_object
Multiple suppressions files are allowed. By default, Valgrind uses $PREFIX/lib/valgrind/default.supp. You can ask to add suppressions from another file by specifying --suppressions=/path/to/file.supp.
If you want to understand more about suppressions, look at an existing suppressions file whilst reading the following documentation. The file glibc-2.2.supp, in the source distribution, provides some good examples.
Each suppression has the following components:
First line: its name. This merely gives a handy name to the suppression, by which it is referred to in the summary of used suppressions printed out when a program finishes. It's not important what the name is; any identifying string will do.
Second line: name of the tool(s) that the suppression is for (if more than one, comma-separated), and the name of the suppression itself, separated by a colon (Nb: no spaces are allowed), eg:
tool_name1,tool_name2:suppression_name
Recall that Valgrind is a modular system, in which different instrumentation tools can observe your program whilst it is running. Since different tools detect different kinds of errors, it is necessary to say which tool(s) the suppression is meaningful to.
Tools will complain, at startup, if a tool does not understand any suppression directed to it. Tools ignore suppressions which are not directed to them. As a result, it is quite practical to put suppressions for all tools into the same suppression file.
Valgrind's core can detect certain PThreads API errors, for which this line reads:
core:PThread
Next line: a small number of suppression types have extra information after the second line (eg. the Param suppression for Memcheck).
Remaining lines: This is the calling context for the error -- the chain of function calls that led to it. There can be up to four of these lines.
Locations may be either names of shared objects/executables or wildcards matching function names. They begin obj: and fun: respectively. Function and object names to match against may use the wildcard characters * and ?.
Important note: C++ function names must be mangled. If you are writing suppressions by hand, use the --demangle=no option to get the mangled names in your error messages.
Finally, the entire suppression must be between curly braces. Each brace must be the first character on its own line.
A suppression only suppresses an error when the error matches all the details in the suppression. Here's an example:
{
   __gconv_transform_ascii_internal/__mbrtowc/mbtowc
   Memcheck:Value4
   fun:__gconv_transform_ascii_internal
   fun:__mbr*toc
   fun:mbtowc
}
What it means is: for Memcheck only, suppress a use-of-uninitialised-value error, when the data size is 4, when it occurs in the function __gconv_transform_ascii_internal, when that is called from any function of name matching __mbr*toc, when that is called from mbtowc. It doesn't apply under any other circumstances. The string by which this suppression is identified to the user is __gconv_transform_ascii_internal/__mbrtowc/mbtowc.
(See Writing suppression files for more details on the specifics of Memcheck's suppression kinds.)
Another example, again for the Memcheck tool:
{
   libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
   Memcheck:Value4
   obj:/usr/X11R6/lib/libX11.so.6.2
   obj:/usr/X11R6/lib/libX11.so.6.2
   obj:/usr/X11R6/lib/libXaw.so.7.0
}
This suppresses any size-4 uninitialised-value error which occurs anywhere in libX11.so.6.2, when called from anywhere in the same library, when called from anywhere in libXaw.so.7.0. The inexact specification of locations is regrettable, but is about all you can hope for, given that the X11 libraries shipped with Red Hat 7.2 have had their symbol tables removed.
Note: since the above two examples did not make it clear, you can freely mix the obj: and fun: styles of description within a single suppression record.
As mentioned above, Valgrind's core accepts a common set of flags. The tools also accept tool-specific flags, which are documented separately for each tool.
You invoke Valgrind like this:
valgrind [valgrind-options] your-prog [your-prog-options]
Valgrind's default settings succeed in giving reasonable behaviour in most cases. We group the available options by rough categories.
The single most important option is --tool, which selects the tool you want to run. The following options work with all tools.
-h --help
Show help for all options, both for the core and for the selected tool.
--help-debug
Same as --help, but also lists debugging options which usually are only of use to Valgrind's developers.
--version
Show the version number of the Valgrind core. Tools can have their own version numbers. There is a scheme in place to ensure that tools only execute when the core version is one they are known to work with. This was done to minimise the chances of strange problems arising from tool-vs-core version incompatibilities.
-q --quiet
Run silently, and only print error messages. Useful if you are running regression tests or have some other automated test machinery.
-v --verbose
Be more verbose. Gives extra information on various aspects of your program, such as: the shared objects loaded, the suppressions used, the progress of the instrumentation and execution engines, and warnings about unusual behaviour. Repeating the flag increases the verbosity level.
-d
Emit information for debugging Valgrind itself. This is usually only of interest to the Valgrind developers. Repeating the flag produces more detailed output. If you want to send us a bug report, a log of the output generated by -v -v -d -d will make your report more useful.
--tool=<toolname> [default: memcheck]
Run the Valgrind tool called toolname, e.g. Memcheck, Addrcheck, Cachegrind, etc.
--trace-children=<yes|no> [default: no]
When enabled, Valgrind will trace into child processes. This can be confusing and isn't usually what you want, so it is disabled by default.
--track-fds=<yes|no> [default: no]
When enabled, Valgrind will print out a list of open file descriptors on exit. Along with each file descriptor is printed a stack backtrace of where the file was opened and any details relating to the file descriptor such as the file name or socket details.
--time-stamp=<yes|no> [default: no]
When enabled, each message is preceded with an indication of the elapsed wallclock time since startup, expressed as days, hours, minutes, seconds and milliseconds.
--log-fd=<number> [default: 2, stderr]
Specifies that Valgrind should send all of its messages to the specified file descriptor. The default, 2, is the standard error channel (stderr). Note that this may interfere with the client's own use of stderr, as Valgrind's output will be interleaved with any output that the client sends to stderr.
--log-file=<filename>
Specifies that Valgrind should send all of its messages to the specified file. In fact, the file name used is created by concatenating the text filename, "." and the process ID (ie. <filename>.<pid>), so as to create a file per process. The specified file name may not be the empty string.
--log-file-exactly=<filename>
Just like --log-file, but the suffix ".pid" is not added. If you trace multiple processes with Valgrind when using this option the log file may get all messed up.
--log-file-qualifier=<VAR>
When used in conjunction with --log-file, causes the log file name to be qualified using the contents of the environment variable $VAR. This is useful when running MPI programs. For further details, see Section 2.3 "The Commentary" in the manual.
--log-socket=<ip-address:port-number>
Specifies that Valgrind should send all of its messages to the specified port at the specified IP address. The port may be omitted, in which case port 1500 is used. If a connection cannot be made to the specified socket, Valgrind falls back to writing output to the standard error (stderr). This option is intended to be used in conjunction with the valgrind-listener program. For further details, see Section 2.3 "The Commentary" in the manual.
These options are used by all tools that can report errors, e.g. Memcheck, but not Cachegrind.
--xml=<yes|no> [default: no]
When enabled, output will be in XML format. This is aimed at making life easier for tools that consume Valgrind's output as input, such as GUI front ends. Currently this option only works with Memcheck.
--xml-user-comment=<string>
Embeds an extra user comment string at the start of the XML output. Only works when --xml=yes is specified; ignored otherwise.
--demangle=<yes|no> [default: yes]
Enable/disable automatic demangling (decoding) of C++ names. Enabled by default. When enabled, Valgrind will attempt to translate encoded C++ names back to something approaching the original. The demangler handles symbols mangled by g++ versions 2.X, 3.X and 4.X.
An important fact about demangling is that function names mentioned in suppressions files should be in their mangled form. Valgrind does not demangle function names when searching for applicable suppressions, because to do otherwise would make suppressions file contents dependent on the state of Valgrind's demangling machinery, and would also be slow and pointless.
--num-callers=<number> [default: 12]
By default, Valgrind shows twelve levels of function call names to help you identify program locations. You can change that number with this option. This can help in determining the program's location in deeply-nested call chains. Note that errors are commoned up using only the top four function locations (the place in the current function, and that of its three immediate callers). So this doesn't affect the total number of errors reported.
The maximum value for this is 50. Note that higher settings will make Valgrind run a bit more slowly and take a bit more memory, but can be useful when working with programs with deeply-nested call chains.
--error-limit=<yes|no> [default: yes]
When enabled, Valgrind stops reporting errors after 10,000,000 in total, or 1,000 different ones, have been seen. This is to stop the error tracking machinery from becoming a huge performance overhead in programs with many errors.
--error-exitcode=<number> [default: 0]
Specifies an alternative exit code to return if Valgrind reported any errors in the run. When set to the default value (zero), the return value from Valgrind will always be the return value of the process being simulated. When set to a nonzero value, that value is returned instead, if Valgrind detects any errors. This is useful for using Valgrind as part of an automated test suite, since it makes it easy to detect test cases for which Valgrind has reported errors, just by inspecting return codes.
--show-below-main=<yes|no> [default: no]
By default, stack traces for errors do not show any functions that appear beneath main() (or similar functions such as glibc's __libc_start_main(), if main() is not present in the stack trace); most of the time it's uninteresting C library stuff. If this option is enabled, those entries below main() will be shown.
--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp]
Specifies an extra file from which to read descriptions of errors to suppress. You may use as many extra suppressions files as you like.
--gen-suppressions=<yes|no|all> [default: no]
When set to yes, Valgrind will pause after every error shown and print the line:

---- Print suppression ? --- [Return/N/n/Y/y/C/c] ----

The prompt's behaviour is the same as for the --db-attach option (see below).
If you choose to, Valgrind will print out a suppression for this error. You can then cut and paste it into a suppression file if you don't want to hear about the error in the future.
When set to all, Valgrind will print a suppression for every reported error, without querying the user.
This option is particularly useful with C++ programs, as it prints out the suppressions with mangled names, as required.
Note that the suppressions printed are as specific as possible. You may want to common up similar ones, eg. by adding wildcards to function names. Also, sometimes two different errors are suppressed by the same suppression, in which case Valgrind will output the suppression more than once, but you only need to have one copy in your suppression file (but having more than one won't cause problems). Also, the suppression name is given as <insert a suppression name here>; the name doesn't really matter, it's only used with the -v option which prints out all used suppression records.
--db-attach=<yes|no> [default: no]
When enabled, Valgrind will pause after every error shown and print the line:
---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ----
Pressing Ret, or N Ret or n Ret, causes Valgrind not to start a debugger for this error.
Pressing Y Ret or y Ret causes Valgrind to start a debugger for the program at this point. When you have finished with the debugger, quit from it, and the program will continue. Trying to continue from inside the debugger doesn't work.
C Ret or c Ret causes Valgrind not to start a debugger, and not to ask again.
Note: --db-attach=yes conflicts with --trace-children=yes. You can't use them together. Valgrind refuses to start up in this situation.
May 2002: this is a historical relic which could be easily fixed if it gets in your way. Mail us and complain if this is a problem for you.
Nov 2002: if you're sending output to a logfile or to a network socket, I guess this option doesn't make any sense. Caveat emptor.
--db-command=<command> [default: gdb -nw %f %p]
Specify the debugger to use with the --db-attach command. The default debugger is gdb. This option is a template that is expanded by Valgrind at runtime. %f is replaced with the executable's file name and %p is replaced by the process ID of the executable.
This specifies how Valgrind will invoke the debugger. By default it will use whatever GDB is detected at build time, which is usually /usr/bin/gdb. Using this command, you can specify some alternative command to invoke the debugger you want to use.
The command string given can include one or more instances of the %p and %f expansions. Each instance of %p expands to the PID of the process to be debugged and each instance of %f expands to the path to the executable for the process to be debugged.
--input-fd=<number> [default: 0, stdin]
When using --db-attach=yes and --gen-suppressions=yes, Valgrind will stop so as to read keyboard input from you when each error occurs. By default it reads from the standard input (stdin), which is problematic for programs which close stdin. This option allows you to specify an alternative file descriptor from which to read input.
--max-stackframe=<number> [default: 2000000]
The maximum size of a stack frame - if the stack pointer moves by more than this amount then Valgrind will assume that the program is switching to a different stack.
You may need to use this option if your program has large stack-allocated arrays. Valgrind keeps track of your program's stack pointer. If it changes by more than the threshold amount, Valgrind assumes your program is switching to a different stack, and Memcheck behaves differently than it would for a stack pointer change smaller than the threshold. Usually this heuristic works well. However, if your program allocates large structures on the stack, this heuristic will be fooled, and Memcheck will subsequently report large numbers of invalid stack accesses. This option allows you to change the threshold to a different value.
You should only consider use of this flag if Valgrind's debug output directs you to do so. In that case it will tell you the new threshold you should specify.
In general, allocating large structures on the stack is a bad idea, because (1) you can easily run out of stack space, especially on systems with limited memory or which expect to support large numbers of threads each with a small stack, and (2) because the error checking performed by Memcheck is more effective for heap-allocated data than for stack-allocated data. If you have to use this flag, you may wish to consider rewriting your code to allocate on the heap rather than on the stack.
For tools that use their own version of malloc() (e.g. Memcheck and Massif), the following options apply.
--alignment=<number> [default: 8]
By default Valgrind's malloc(), realloc(), etc, return 8-byte aligned addresses. This is standard for most processors. However, some programs might assume that malloc() et al return 16-byte or more aligned memory. The supplied value must be between 8 and 4096 inclusive, and must be a power of two.
These options apply to all tools, as they affect certain obscure workings of the Valgrind core. Most people won't need to use these.
--run-libc-freeres=<yes|no> [default: yes]
The GNU C library (libc.so), which is used by all programs, may allocate memory for its own uses. Usually it doesn't bother to free that memory when the program ends - there would be no point, since the Linux kernel reclaims all process resources when a process exits anyway, so it would just slow things down.
The glibc authors realised that this behaviour causes leak checkers, such as Valgrind, to falsely report leaks in glibc, when a leak check is done at exit. In order to avoid this, they provided a routine called __libc_freeres specifically to make glibc release all memory it has allocated. Memcheck therefore tries to run __libc_freeres at exit.
Unfortunately, in some versions of glibc, __libc_freeres is sufficiently buggy to cause segmentation faults. This is particularly noticeable on Red Hat 7.1. So this flag is provided in order to inhibit the run of __libc_freeres. If your program seems to run fine on Valgrind, but segfaults at exit, you may find that --run-libc-freeres=no fixes that, although at the cost of possibly falsely reporting space leaks in libc.so.
--sim-hints=hint1,hint2,...
Pass miscellaneous hints to Valgrind which slightly modify the simulated behaviour in nonstandard or dangerous ways, possibly to help the simulation of strange features. By default no hints are enabled. Use with caution! Currently known hints are:
lax-ioctls: be very lax about ioctl handling; the only assumption is that the size is correct. Doesn't require the full buffer to be initialised when writing. Without this, using some device drivers with a large number of strange ioctl commands becomes very tiresome.
enable-inner: enable some special magic needed when the program being run is itself Valgrind.
--kernel-variant=variant1,variant2,...
Handle system calls and ioctls arising from minor variants of the default kernel for this platform. This is useful for running on hacked kernels or with kernel modules which support nonstandard ioctls, for example. Use with caution. If you don't understand what this option does then you almost certainly don't need it. Currently known variants are:
bproc: support the sys_bproc system call on x86. This is for running on BProc, which is a minor variant of standard Linux which is sometimes used for building clusters.
--show-emwarns=<yes|no> [default: no]
When enabled, Valgrind will emit warnings about its CPU emulation in certain cases. These are usually not interesting.
--smc-check=<none|stack|all> [default: stack]
This option controls Valgrind's detection of self-modifying code. Valgrind can do no detection, detect self-modifying code on the stack, or detect self-modifying code anywhere. Note that the default option will catch the vast majority of cases, as far as we know. Running with all will slow Valgrind down greatly (but running with none will rarely speed things up, since very little code gets put on the stack for most programs).
There are also some options for debugging Valgrind itself. You shouldn't need to use them in the normal run of things. If you wish to see the list, use the --help-debug option.
Note that Valgrind also reads options from three places:
The file ~/.valgrindrc
The environment variable $VALGRIND_OPTS
The file ./.valgrindrc
These are processed in the given order, before the command-line options. Options processed later override those processed earlier; for example, options in ./.valgrindrc will take precedence over those in ~/.valgrindrc. The first two are particularly useful for setting the default tool to use.
Any tool-specific options put in $VALGRIND_OPTS or the .valgrindrc files should be prefixed with the tool name and a colon. For example, if you want Memcheck to always do leak checking, you can put the following entry in ~/.valgrindrc:

--memcheck:leak-check=yes

This will be ignored if any tool other than Memcheck is run. Without the memcheck: part, this will cause problems if you select other tools that don't understand --leak-check=yes.
Valgrind has a trapdoor mechanism via which the client program can pass all manner of requests and queries to Valgrind and the current tool. Internally, this is used extensively to make malloc, free, etc, work, although you don't see that.
For your convenience, a subset of these so-called client requests is provided to allow you to tell Valgrind facts about the behaviour of your program, and also to make queries. In particular, your program can tell Valgrind about changes in memory range permissions that Valgrind would not otherwise know about, and so allows clients to get Valgrind to do arbitrary custom checks.
Clients need to include a header file to make this work. Which header file depends on which client requests you use. Some client requests are handled by the core, and are defined in the header file valgrind/valgrind.h. Tool-specific header files are named after the tool, e.g. valgrind/memcheck.h. All header files can be found in the include/valgrind directory of wherever Valgrind was installed.
The macros in these header files have the magical property that they generate code in-line which Valgrind can spot. However, the code does nothing when not run on Valgrind, so you are not forced to run your program under Valgrind just because you use the macros in this file. Also, you are not required to link your program with any extra supporting libraries.
The code left in your binary has negligible performance impact: on x86, amd64 and ppc32, the overhead is 6 simple integer instructions and is probably undetectable except in tight loops. However, if you really wish to compile out the client requests, you can compile with -DNVALGRIND (analogous to -DNDEBUG's effect on assert()).
You are encouraged to copy the valgrind/*.h headers into your project's include directory, so your program doesn't have a compile-time dependency on Valgrind being installed. The Valgrind headers, unlike most of the rest of the code, are under a BSD-style license so you may include them without worrying about license incompatibility.
Here is a brief description of the macros available in valgrind.h, which work with more than one tool (see the tool-specific documentation for explanations of the tool-specific macros).
RUNNING_ON_VALGRIND: returns 1 if running on Valgrind, 0 if running on the real CPU. If you are running Valgrind on itself, it will return the number of layers of Valgrind emulation we're running on.
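For instance, here is a minimal sketch (not from the manual) of how a program might use RUNNING_ON_VALGRIND to adjust its behaviour; the decision to skip a timing-sensitive self-test is purely an illustrative assumption:

#include <stdio.h>
#include "valgrind/valgrind.h"

int main(void)
{
   /* RUNNING_ON_VALGRIND is 0 natively, and >= 1 under Valgrind. */
   if (RUNNING_ON_VALGRIND) {
      printf("running under Valgrind; skipping timing-sensitive self-test\n");
   } else {
      printf("running natively\n");
   }
   return 0;
}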
VALGRIND_DISCARD_TRANSLATIONS: discards translations of code in the specified address range. Useful if you are debugging a JITter or some other dynamic code generation system. After this call, attempts to execute code in the invalidated address range will cause Valgrind to make new translations of that code, which is probably the semantics you want. Note that code invalidations are expensive because finding all the relevant translations quickly is very difficult, so try not to call it often. Note that you can be clever about this: you only need to call it when an area which previously contained code is overwritten with new code. You can choose to write code into fresh memory, and just call this occasionally to discard large chunks of old code all at once. Alternatively, for transparent self-modifying-code support, use --smc-check=all.
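As an illustration, here is a hedged sketch of how a JIT-style code generator might use the macro when it reuses a code buffer; codebuf and regenerate_code() are assumptions for the example, not part of any Valgrind API:

#include <string.h>
#include "valgrind/valgrind.h"

#define CODEBUF_SIZE 4096
static unsigned char codebuf[CODEBUF_SIZE];  /* assumed to be made executable elsewhere */

void regenerate_code(const unsigned char* newcode, size_t len)
{
   /* The old contents of codebuf may already have been translated by
      Valgrind; discard those translations before overwriting the code. */
   VALGRIND_DISCARD_TRANSLATIONS(codebuf, CODEBUF_SIZE);
   memcpy(codebuf, newcode, len);
   /* Fresh translations will be made when the new code is first executed. */
}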
VALGRIND_COUNT_ERRORS: returns the number of errors found so far by Valgrind. Can be useful in test harness code when combined with the --log-fd=-1 option; this runs Valgrind silently, but the client program can detect when errors occur. Only useful for tools that report errors, e.g. it's useful for Memcheck, but for Cachegrind it will always return zero because Cachegrind doesn't report errors.
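A possible test-harness pattern, sketched here under the assumption that run_one_test_case() is part of your own harness, is to snapshot the error count around each test:

#include <stdio.h>
#include "valgrind/valgrind.h"

extern void run_one_test_case(int n);   /* assumed harness function */

int test_case_is_clean(int n)
{
   unsigned before = VALGRIND_COUNT_ERRORS;
   run_one_test_case(n);
   unsigned after = VALGRIND_COUNT_ERRORS;
   if (after > before)
      printf("test %d: %u new error(s) reported\n", n, after - before);
   return after == before;
}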
VALGRIND_MALLOCLIKE_BLOCK: if your program manages its own memory instead of using the standard malloc() / new / new[], tools that track information about heap blocks will not do nearly as good a job. For example, Memcheck won't detect nearly as many errors, and the error messages won't be as informative. To improve this situation, use this macro just after your custom allocator allocates some new memory. See the comments in valgrind.h for information on how to use it.
VALGRIND_FREELIKE_BLOCK: this should be used in conjunction with VALGRIND_MALLOCLIKE_BLOCK. Again, see memcheck/memcheck.h for information on how to use it.
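To give a feel for the shape of such annotations, here is a hedged sketch of a custom allocator using both macros; arena_reserve() and arena_release() are assumed application helpers, and the redzone size of zero is simply a choice for the example (check the comments in valgrind.h for the exact parameter meanings):

#include "valgrind/valgrind.h"

extern void* arena_reserve(unsigned long nbytes);   /* assumed application helpers */
extern void  arena_release(void* p);

void* my_alloc(unsigned long nbytes)
{
   void* p = arena_reserve(nbytes);
   if (p != NULL)
      VALGRIND_MALLOCLIKE_BLOCK(p, nbytes, /*rzB*/0, /*is_zeroed*/0);
   return p;
}

void my_free(void* p)
{
   VALGRIND_FREELIKE_BLOCK(p, /*rzB*/0);
   arena_release(p);
}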
VALGRIND_CREATE_MEMPOOL: this is similar to VALGRIND_MALLOCLIKE_BLOCK, but is tailored towards code that uses memory pools. See the comments in valgrind.h for information on how to use it.
VALGRIND_DESTROY_MEMPOOL: this should be used in conjunction with VALGRIND_CREATE_MEMPOOL. Again, see the comments in valgrind.h for information on how to use it.

VALGRIND_MEMPOOL_ALLOC: this should be used in conjunction with VALGRIND_CREATE_MEMPOOL. Again, see the comments in valgrind.h for information on how to use it.

VALGRIND_MEMPOOL_FREE: this should be used in conjunction with VALGRIND_CREATE_MEMPOOL. Again, see the comments in valgrind.h for information on how to use it.
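Again purely as an illustration, a pool-based allocator might be annotated roughly as follows; pool_create(), pool_get(), pool_put() and pool_destroy() are assumed application helpers, and the parameter choices follow the valgrind.h comments rather than being prescriptive:

#include "valgrind/valgrind.h"

extern void* pool_create(unsigned long poolsize);   /* assumed application helpers */
extern void* pool_get(void* pool, unsigned long nbytes);
extern void  pool_put(void* pool, void* p);
extern void  pool_destroy(void* pool);

void* make_pool(unsigned long poolsize)
{
   void* pool = pool_create(poolsize);
   VALGRIND_CREATE_MEMPOOL(pool, /*rzB*/0, /*is_zeroed*/0);
   return pool;
}

void* pool_alloc(void* pool, unsigned long nbytes)
{
   void* p = pool_get(pool, nbytes);
   VALGRIND_MEMPOOL_ALLOC(pool, p, nbytes);
   return p;
}

void pool_release(void* pool, void* p)
{
   VALGRIND_MEMPOOL_FREE(pool, p);
   pool_put(pool, p);
}

void destroy_pool(void* pool)
{
   VALGRIND_DESTROY_MEMPOOL(pool);
   pool_destroy(pool);
}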
VALGRIND_NON_SIMD_CALL[0123]: executes a function of 0, 1, 2 or 3 args in the client program on the real CPU, not the virtual CPU that Valgrind normally runs code on. These are used in various ways internally to Valgrind. They might be useful to client programs. Warning: only use these if you really know what you are doing.

VALGRIND_PRINTF(format, ...): printf a message to the log file when running under Valgrind. Nothing is output if not running under Valgrind. Returns the number of characters output.

VALGRIND_PRINTF_BACKTRACE(format, ...): printf a message to the log file along with a stack backtrace when running under Valgrind. Nothing is output if not running under Valgrind. Returns the number of characters output.
VALGRIND_STACK_REGISTER(start, end): registers a new stack. Informs Valgrind that the memory range between start and end is a unique stack. Returns a stack identifier that can be used with other VALGRIND_STACK_* calls. Valgrind will use this information to determine if a change to the stack pointer is an item pushed onto the stack or a change over to a new stack. Use this if you're using a user-level thread package and are noticing spurious errors from Valgrind about uninitialised memory reads.
VALGRIND_STACK_DEREGISTER(id): deregisters a previously registered stack. Informs Valgrind that the previously registered memory range with stack id id is no longer a stack.
VALGRIND_STACK_CHANGE(id, start, end): changes a previously registered stack. Informs Valgrind that the previously registered stack with stack id id has changed its start and end values. Use this if your user-level thread package implements stack growth.
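For example, a user-level thread package might register and deregister each stack along the following lines; the green_stack structure and the stack size are illustrative assumptions only:

#include <stdlib.h>
#include "valgrind/valgrind.h"

#define STACK_SIZE (64 * 1024)

typedef struct { char* base; unsigned vg_stack_id; } green_stack;

int green_stack_init(green_stack* s)
{
   s->base = malloc(STACK_SIZE);
   if (s->base == NULL) return -1;
   /* Tell Valgrind this range is a stack, so switches onto it are understood. */
   s->vg_stack_id = VALGRIND_STACK_REGISTER(s->base, s->base + STACK_SIZE);
   return 0;
}

void green_stack_fini(green_stack* s)
{
   VALGRIND_STACK_DEREGISTER(s->vg_stack_id);
   free(s->base);
   s->base = NULL;
}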
Note that valgrind.h is included by all the tool-specific header files (such as memcheck.h), so you don't need to include it in your client if you include a tool-specific header.
Valgrind supports programs which use POSIX pthreads. Getting this to work was technically challenging but it all works well enough for significant threaded applications to work.
The main thing to point out is that although Valgrind works with the built-in threads system (eg. NPTL or LinuxThreads), it serialises execution so that only one thread is running at a time. This approach avoids the horrible implementation problems of implementing a truly multiprocessor version of Valgrind, but it does mean that threaded apps run only on one CPU, even if you have a multiprocessor machine.
Valgrind schedules your program's threads in a round-robin fashion, with all threads having equal priority. It switches threads every 50000 basic blocks (on x86, typically around 300000 instructions), which means you'll get a much finer interleaving of thread executions than when run natively. This in itself may cause your program to behave differently if you have some kind of concurrency, critical race, locking, or similar, bugs.
Your program will use the native libpthread, but not all of its facilities will work. In particular, synchronisation of processes via shared-memory segments will not work. This relies on special atomic instruction sequences which Valgrind does not emulate in a way which works between processes. Unfortunately there's no way for Valgrind to warn when this is happening, and such calls will mostly work; it's only when there's a race that it will fail.
Valgrind also supports direct use of the clone() system call, futex() and so on. clone() is supported where either everything is shared (a thread) or nothing is shared (fork-like); partial sharing will fail. Again, any use of atomic instruction sequences in shared memory between processes will not work reliably.
Valgrind has a fairly complete signal implementation. It should be able to cope with any valid use of signals.
If you're using signals in clever ways (for example, catching SIGSEGV, modifying page state and restarting the instruction), you're probably relying on precise exceptions. In this case, you will need to use --vex-iropt-precise-memory-exns=yes.
If your program dies as a result of a fatal core-dumping signal, Valgrind will generate its own core file (vgcore.NNNNN) containing your program's state. You may use this core file for post-mortem debugging with gdb or similar. (Note: it will not generate a core if your core dump size limit is 0.) At the time of writing the core dumps do not include all the floating point register information.
If Valgrind itself crashes (hopefully not) the operating system will create a core dump in the usual way.
Valgrind versions 3.2.0 and above can do function wrapping on all supported targets. In function wrapping, calls to some specified function are intercepted and rerouted to a different, user-supplied function. This can do whatever it likes, typically examining the arguments, calling onwards to the original, and possibly examining the result. Any number of different functions may be wrapped.
Function wrapping is useful for instrumenting an API in some way. For example, wrapping functions in the POSIX pthreads API makes it possible to notify Valgrind of thread status changes, and wrapping functions in the MPI (message-passing) API allows notifying Valgrind of memory status changes associated with message arrival/departure. Such information is usually passed to Valgrind by using client requests in the wrapper functions, although that is not of relevance here.
Supposing we want to wrap some function

int foo ( int x, int y ) { return x + y; }

A wrapper is a function of identical type, but with a special name which identifies it as the wrapper for foo. Wrappers need to include supporting macros from valgrind.h.
Here is a simple wrapper which prints the arguments and return value:
#include <stdio.h>
#include "valgrind.h"

int I_WRAP_SONAME_FNNAME_ZU(NONE,foo)( int x, int y )
{
   int    result;
   OrigFn fn;
   VALGRIND_GET_ORIG_FN(fn);
   printf("foo's wrapper: args %d %d\n", x, y);
   CALL_FN_W_WW(result, fn, x,y);
   printf("foo's wrapper: result %d\n", result);
   return result;
}
To become active, the wrapper merely needs to be present in a text section somewhere in the same process' address space as the function it wraps, and for its ELF symbol name to be visible to Valgrind. In practice, this means either compiling to a .o and linking it in, or compiling to a .so and LD_PRELOADing it in. The latter is more convenient in that it doesn't require relinking.
All wrappers have approximately the above form. There are three crucial macros:
I_WRAP_SONAME_FNNAME_ZU: this generates the real name of the wrapper. This is an encoded name which Valgrind notices when reading symbol table information. What it says is: I am the wrapper for any function named foo which is found in an ELF shared object with an empty ("NONE") soname field. The specification mechanism is powerful in that wildcards are allowed for both sonames and function names. The fine details are discussed below.
VALGRIND_GET_ORIG_FN: once in the wrapper, the first priority is to get hold of the address of the original (and any other supporting information needed). This is stored in a value of opaque type OrigFn. The information is acquired using VALGRIND_GET_ORIG_FN. It is crucial to make this macro call before calling any other wrapped function in the same thread.
CALL_FN_W_WW: eventually we will want to call the function being wrapped. Calling it directly does not work, since that just gets us back to the wrapper and tends to kill the program in short order by stack overflow. Instead, the result lvalue, OrigFn and arguments are handed to one of a family of macros of the form CALL_FN_*. These cause Valgrind to call the original and avoid recursion back to the wrapper.
This scheme has the advantage of being self-contained. A library of wrappers can be compiled to object code in the normal way, and does not rely on an external script telling Valgrind which wrappers pertain to which originals.
Each wrapper has a name which, in the most general case says: I am the wrapper for any function whose name matches FNPATT and whose ELF "soname" matches SOPATT. Both FNPATT and SOPATT may contain wildcards (asterisks) and other characters (spaces, dots, @, etc) which are not generally regarded as valid C identifier names.
This flexibility is needed to write robust wrappers for POSIX pthread functions, where typically we are not completely sure of either the function name or the soname, or alternatively we want to wrap a whole bunch of functions at once.
For example, pthread_create in GNU libpthread is usually a versioned symbol - one whose name ends in, eg, @GLIBC_2.3. Hence we are not sure what its real name is. We also want to cover any soname of the form libpthread.so*. So the header of the wrapper will be:

int I_WRAP_SONAME_FNNAME_ZZ(libpthreadZdsoZd0,pthreadZucreateZAZa)
         ( ... formals ... )
{ ... body ... }
In order to write unusual characters as valid C function names, a Z-encoding scheme is used. Names are written literally, except that a capital Z acts as an escape character, with the following encoding:
Za   encodes   *
Zp   encodes   +
Zc   encodes   :
Zd   encodes   .
Zu   encodes   _
Zh   encodes   -
Zs   encodes   (space)
ZA   encodes   @
ZZ   encodes   Z
Hence libpthreadZdsoZd0 is an encoding of the soname libpthread.so.0 and pthreadZucreateZAZa is an encoding of the function name pthread_create@*.
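To make the encoding rules concrete, here is a small illustrative helper (not part of valgrind.h or the Valgrind sources) which applies the table above; it assumes the output buffer is large enough:

#include <stdio.h>

static void z_encode(const char* in, char* out)
{
   /* Escape the special characters exactly as in the table above. */
   for (; *in != '\0'; in++) {
      switch (*in) {
         case '*': *out++ = 'Z'; *out++ = 'a'; break;
         case '+': *out++ = 'Z'; *out++ = 'p'; break;
         case ':': *out++ = 'Z'; *out++ = 'c'; break;
         case '.': *out++ = 'Z'; *out++ = 'd'; break;
         case '_': *out++ = 'Z'; *out++ = 'u'; break;
         case '-': *out++ = 'Z'; *out++ = 'h'; break;
         case ' ': *out++ = 'Z'; *out++ = 's'; break;
         case '@': *out++ = 'Z'; *out++ = 'A'; break;
         case 'Z': *out++ = 'Z'; *out++ = 'Z'; break;
         default:  *out++ = *in; break;
      }
   }
   *out = '\0';
}

int main(void)
{
   char buf[256];
   z_encode("pthread_create@*", buf);
   printf("%s\n", buf);   /* prints pthreadZucreateZAZa */
   return 0;
}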
The macro I_WRAP_SONAME_FNNAME_ZZ constructs a wrapper name in which both the soname (first component) and function name (second component) are Z-encoded. Encoding the function name can be tiresome and is often unnecessary, so a second macro, I_WRAP_SONAME_FNNAME_ZU, can be used instead. The _ZU variant is also useful for writing wrappers for C++ functions, in which the function name is usually already mangled using some other convention in which Z plays an important role; having to encode a second time quickly becomes confusing.
Since the function name field may contain wildcards, it can be anything, including just *. The same is true for the soname. However, some ELF objects - specifically, main executables - do not have sonames. Any object lacking a soname is treated as if its soname was NONE, which is why the original example above had the name I_WRAP_SONAME_FNNAME_ZU(NONE,foo).
The ability for a wrapper to replace an infinite family of functions is powerful but brings complications in situations where ELF objects appear and disappear (are dlopen'd and dlclose'd) on the fly. Valgrind tries to maintain sensible behaviour in such situations.
For example, suppose a process has dlopened (an ELF object with soname) object1.so, which contains function1. It starts to use function1 immediately. After a while it dlopens wrappers.so, which contains a wrapper for function1 in (soname) object1.so. All subsequent calls to function1 are rerouted to the wrapper. If wrappers.so is later dlclose'd, calls to function1 are naturally routed back to the original.
Alternatively, if object1.so is dlclose'd but wrappers.so remains, then the wrapper exported by wrappers.so becomes inactive, since there is no way to get to it - there is no original to call any more. However, Valgrind remembers that the wrapper is still present. If object1.so is eventually dlopen'd again, the wrapper will become active again.
In short, valgrind inspects all code loading/unloading events to ensure that the set of currently active wrappers remains consistent.
A second possible problem is that of conflicting wrappers. It is easily possible to load two or more wrappers, both of which claim to be wrappers for some third function. In such cases Valgrind will complain about conflicting wrappers when the second one appears, and will honour only the first one.
Figuring out what's going on given the dynamic nature of wrapping can be difficult. The --trace-redir=yes flag makes this possible by showing the complete state of the redirection subsystem after every mmap/munmap event affecting code (text).
There are two central concepts:
A "redirection specification" is a binding of a (soname pattern, fnname pattern) pair to a code address. These bindings are created by writing functions with names made with the I_WRAP_SONAME_FNNAME_{ZZ,_ZU} macros.
An "active redirection" is code-address to code-address binding currently in effect.
The state of the wrapping-and-redirection subsystem comprises a set of specifications and a set of active bindings. The specifications are acquired/discarded by watching all mmap/munmap events on code (text) sections. The active binding set is (conceptually) recomputed from the specifications, and all known symbol names, following any change to the specification set. --trace-redir=yes shows the contents of both sets following any such event. -v prints a line of text each time an active specification is used for the first time. Hence for maximum debugging effectiveness you will need to use both flags.
One final comment. The function-wrapping facility is closely tied to Valgrind's ability to replace (redirect) specified functions, for example to redirect calls to malloc to its own implementation. Indeed, a replacement function can be regarded as a wrapper function which does not call the original. However, to make the implementation more robust, the two kinds of interception (wrapping vs replacement) are treated differently. --trace-redir=yes shows specifications and bindings for both replacement and wrapper functions. To differentiate the two, replacement bindings are printed using R-> whereas wraps are printed using W->.
For the most part, the function wrapping implementation is robust. The only important caveat is: in a wrapper, get hold of the OrigFn information using VALGRIND_GET_ORIG_FN before calling any other wrapped function. Once you have the OrigFn, arbitrary intercalling, recursion between, and longjumping out of wrappers should work correctly. There is never any interaction between wrapped functions and merely replaced functions (eg malloc), so you can call malloc etc safely from within wrappers.
The above comments are true for {x86,amd64,ppc32}-linux. On ppc64-linux function wrapping is more fragile due to the (arguably poorly designed) ppc64-linux ABI. This mandates the use of a shadow stack which tracks entries/exits of both wrapper and replacement functions. This gives two limitations: firstly, longjumping out of wrappers will rapidly lead to disaster, since the shadow stack will not get correctly cleared. Secondly, since the shadow stack has finite size, recursion between wrapper/replacement functions is only possible to a limited depth, beyond which Valgrind has to abort the run. This depth is currently 16 calls.
For all platforms ({x86,amd64,ppc32,ppc64}-linux) all the above comments apply on a per-thread basis. In other words, wrapping is thread-safe: each thread must individually observe the above restrictions, but there is no need for any kind of inter-thread cooperation.
As shown in the above example, to call the original you must use a macro of the form CALL_FN_*. For technical reasons it is impossible to create a single macro to deal with all argument types and numbers, so a family of macros covering the most common cases is supplied. In what follows, 'W' denotes a machine-word-typed value (a pointer or a C long), and 'v' denotes C's void type.
The currently available macros are:
CALL_FN_v_v    -- call an original of type  void fn ( void )
CALL_FN_W_v    -- call an original of type  long fn ( void )
CALL_FN_v_W    -- void fn ( long )
CALL_FN_W_W    -- long fn ( long )
CALL_FN_v_WW   -- void fn ( long, long )
CALL_FN_W_WW   -- long fn ( long, long )
CALL_FN_v_WWW  -- void fn ( long, long, long )
CALL_FN_W_WWW  -- long fn ( long, long, long )
CALL_FN_W_WWWW -- long fn ( long, long, long, long )
CALL_FN_W_5W   -- long fn ( long, long, long, long, long )
CALL_FN_W_6W   -- long fn ( long, long, long, long, long, long )
and so on, up to CALL_FN_W_12W
The set of supported types can be expanded as needed. It is regrettable that this limitation exists. Function wrapping has proven difficult to implement, with a certain apparently unavoidable level of ickyness. After several implementation attempts, the present arrangement appears to be the least-worst tradeoff. At least it works reliably in the presence of dynamic linking and dynamic code loading/unloading.
You should not attempt to wrap a function of one type signature with a wrapper of a different type signature. Such trickery will surely lead to crashes or strange behaviour. This is not of course a limitation of the function wrapping implementation, merely a reflection of the fact that it gives you sweeping powers to shoot yourself in the foot if you are not careful. Imagine the instant havoc you could wreak by writing a wrapper which matched any function name in any soname - in effect, one which claimed to be a wrapper for all functions in the process.
We use the standard Unix ./configure, make, make install mechanism, and we have attempted to ensure that it works on machines with kernel 2.4 or 2.6 and glibc 2.2.X or 2.3.X. You may then want to run the regression tests with make regtest.
There are five options (in addition to the usual --prefix=) which affect how Valgrind is built:
--enable-inner
This builds Valgrind with some special magic hacks which make it possible to run it on a standard build of Valgrind (what the developers call "self-hosting"). Ordinarily you should not use this flag as various kinds of safety checks are disabled.
--enable-tls
TLS (Thread Local Storage) is a relatively new mechanism which requires compiler, linker and kernel support. Valgrind tries to automatically test if TLS is supported and if so enables this option. Sometimes it cannot test for TLS, so this option allows you to override the automatic test.
--with-vex=
Specifies the path to the underlying VEX dynamic-translation library. By default this is taken to be in the VEX directory off the root of the source tree.
--enable-only64bit
--enable-only32bit
On 64-bit platforms (amd64-linux, ppc64-linux), Valgrind is by default built in such a way that both 32-bit and 64-bit executables can be run. Sometimes this cleverness is a problem for a variety of reasons. These two flags allow for single-target builds in this situation. If you issue both, the configure script will complain. Note they are ignored on 32-bit-only platforms (x86-linux, ppc32-linux).
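For example, a 64-bit-only build installed under /usr/local (the prefix path is only an illustration) could be produced with:

./configure --prefix=/usr/local --enable-only64bit
make
make install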
The configure script tests the version of the X server currently indicated by $DISPLAY. This is a known bug. The intention was to detect the version of the current XFree86 client libraries, so that correct suppressions could be selected for them, but instead the test checks the server version. This is just plain wrong.
If you are building a binary package of Valgrind for distribution, please read README_PACKAGERS. It contains some important information.
Apart from that, there's not much excitement here. Let us know if you have build problems.
Contact us at http://www.valgrind.org/.
See Limitations for the known limitations of Valgrind, and for a list of programs which are known not to work on it.
All parts of the system make heavy use of assertions and internal self-checks. They are permanently enabled, and we have no plans to disable them. If one of them breaks, please mail us!
If you get an assertion failure on the expression blockSane(ch) in VG_(free)() in m_mallocfree.c, this may have happened because your program wrote off the end of a malloc'd block, or before its beginning. Valgrind hopefully will have emitted a proper message to that effect before dying in this way. This is a known problem which we should fix.
Read the Valgrind FAQ for more advice about common problems, crashes, etc.
The following list of limitations seems long. However, most programs actually work fine.
Valgrind will run Linux ELF binaries, on a kernel 2.4.X or 2.6.X system, on the x86, amd64, ppc32 and ppc64 architectures, subject to the following constraints:
On x86 and amd64, there is no support for 3DNow! instructions. If the translator encounters these, Valgrind will generate a SIGILL when the instruction is executed. Apart from that, on x86 and amd64, essentially all instructions are supported, up to and including SSE2. Version 3.1.0 includes limited support for SSE3 on x86. This could be improved if necessary.
On ppc32 and ppc64, almost all integer, floating point and Altivec instructions are supported. Specifically: integer and FP insns that are mandatory for PowerPC, the "General-purpose optional" group (fsqrt, fsqrts, stfiwx), the "Graphics optional" group (fre, fres, frsqrte, frsqrtes), and the Altivec (also known as VMX) SIMD instruction set, are supported.
Atomic instruction sequences are not properly supported, in the sense that their atomicity is not preserved. This will affect any use of synchronization via memory shared between processes. They will appear to work, but fail sporadically.
If your program does its own memory management, rather than using malloc/new/free/delete, it should still work, but Valgrind's error checking won't be so effective. If you describe your program's memory management scheme using "client requests" (see The Client Request mechanism), Memcheck can do better. Nevertheless, using malloc/new and free/delete is still the best approach.
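For instance, a custom pool allocator can describe its blocks to Memcheck using the client-request macros from valgrind.h. In this sketch, pool_alloc and pool_free are hypothetical stand-ins for your own allocator:

#include <stddef.h>
#include "valgrind.h"

extern void* pool_alloc ( size_t n );   /* hypothetical custom allocator */
extern void  pool_free  ( void* p );

void* my_alloc ( size_t n )
{
   void* p = pool_alloc(n);
   /* Tell Memcheck a malloc-like block of n bytes now lives at p
      (no red zone, contents not zeroed). */
   VALGRIND_MALLOCLIKE_BLOCK(p, n, /*rzB*/0, /*is_zeroed*/0);
   return p;
}

void my_free ( void* p )
{
   /* Tell Memcheck the block is being freed before returning it to the pool. */
   VALGRIND_FREELIKE_BLOCK(p, /*rzB*/0);
   pool_free(p);
}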
Valgrind's signal simulation is not as robust as it could be. Basic POSIX-compliant sigaction and sigprocmask functionality is supplied, but it's conceivable that things could go badly awry if you do weird things with signals. Workaround: don't. Programs that do non-POSIX signal tricks are in any case inherently unportable, so should be avoided if possible.
Machine instructions, and system calls, have been implemented on demand. So it's possible, although unlikely, that a program will fall over with a message to that effect. If this happens, please report ALL the details printed out, so we can try and implement the missing feature.
Memory consumption of your program is majorly increased whilst running under Valgrind. This is due to the large amount of administrative information maintained behind the scenes. Another cause is that Valgrind dynamically translates the original executable. Translated, instrumented code is 12-18 times larger than the original so you can easily end up with 50+ MB of translations when running (eg) a web browser.
Valgrind can handle dynamically-generated code just fine. If you regenerate code over the top of old code (i.e. at the same memory addresses) and the code is on the stack, Valgrind will realise the code has changed and work correctly. This is necessary to handle the trampolines GCC uses to implement nested functions. If you regenerate code somewhere other than the stack, you will need to use the --smc-check=all flag, and Valgrind will run more slowly than normal.
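For example (the program name here is just a placeholder):

valgrind --smc-check=all ./my-jit-program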
As of version 3.0.0, Valgrind has the following limitations in its implementation of x86/AMD64 floating point relative to IEEE754.
Precision: There is no support for 80 bit arithmetic. Internally, Valgrind represents all such "long double" numbers in 64 bits, and so there may be some differences in results. Whether or not this is critical remains to be seen. Note, the x86/amd64 fldt/fstpt instructions (read/write 80-bit numbers) are correctly simulated, using conversions to/from 64 bits, so that in-memory images of 80-bit numbers look correct if anyone wants to see.
The impression observed from many FP regression tests is that the accuracy differences aren't significant. Generally speaking, if a program relies on 80-bit precision, there may be difficulties porting it to non x86/amd64 platforms which only support 64-bit FP precision. Even on x86/amd64, the program may get different results depending on whether it is compiled to use SSE2 instructions (64-bits only), or x87 instructions (80-bit). The net effect is to make FP programs behave as if they had been run on a machine with 64-bit IEEE floats, for example PowerPC. On amd64 FP arithmetic is done by default on SSE2, so amd64 looks more like PowerPC than x86 from an FP perspective, and there are far fewer noticeable accuracy differences than with x86.
Rounding: Valgrind does observe the 4 IEEE-mandated rounding modes (to nearest, to +infinity, to -infinity, to zero) for the following conversions: float to integer, integer to float where there is a possibility of loss of precision, and float-to-float rounding. For all other FP operations, only the IEEE default mode (round to nearest) is supported.
Numeric exceptions in FP code: IEEE754 defines five types of numeric exception that can happen: invalid operation (sqrt of negative number, etc), division by zero, overflow, underflow, inexact (loss of precision).
For each exception, two courses of action are defined by 754: either (1) a user-defined exception handler may be called, or (2) a default action is defined, which "fixes things up" and allows the computation to proceed without throwing an exception.
Currently Valgrind only supports the default fixup actions. Again, feedback on the importance of exception support would be appreciated.
When Valgrind detects that the program is trying to exceed any of these limitations (setting exception handlers, rounding mode, or precision control), it can print a message giving a traceback of where this has happened, and continue execution. This behaviour used to be the default, but the messages are annoying and so showing them is now optional. Use --show-emwarns=yes to see them.
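For example (again, the program name is a placeholder):

valgrind --show-emwarns=yes ./my-fp-program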
The above limitations define precisely the IEEE754 'default' behaviour: default fixup on all exceptions, round-to-nearest operations, and 64-bit precision.
As of version 3.0.0, Valgrind has the following limitations in its implementation of x86/AMD64 SSE2 FP arithmetic, relative to IEEE754.
Essentially the same: no exceptions, and limited observance of rounding mode. Also, SSE2 has control bits which make it treat denormalised numbers as zero (DAZ) and a related action, flush denormals to zero (FTZ). Both of these cause SSE2 arithmetic to be less accurate than IEEE requires. Valgrind detects, ignores, and can warn about, attempts to enable either mode.
As of version 3.2.0, Valgrind has the following limitations in its implementation of PPC32 and PPC64 floating point arithmetic, relative to IEEE754.
Scalar (non-Altivec): Valgrind provides a bit-exact emulation of all floating point instructions, except for "fre" and "fres", which are done more precisely than required by the PowerPC architecture specification. All floating point operations observe the current rounding mode.
However, fpscr[FPRF] is not set after each operation. That could be done but would give measurable performance overheads, and so far no need for it has been found.
As on x86/AMD64, IEEE754 exceptions are not supported: all floating point exceptions are handled using the default IEEE fixup actions. Valgrind detects, ignores, and can warn about, attempts to unmask the 5 IEEE FP exception kinds by writing to the floating-point status and control register (fpscr).
Vector (Altivec, VMX): essentially as with x86/AMD64 SSE/SSE2: no exceptions, and limited observance of rounding mode. For Altivec, FP arithmetic is done in IEEE/Java mode, which is more accurate than the Linux default setting. "More accurate" means that denormals are handled properly, rather than simply being flushed to zero.
Programs which are known not to work are:
Emacs starts up but immediately concludes it is out of memory and aborts. It may be that Memcheck does not provide a good enough emulation of the mallinfo function. Emacs works fine if you build it to use the standard malloc/free routines.
This is the log for a run of a small program using Memcheck. The program is in fact correct, and the reported error is the result of a potentially serious code generation bug in GNU g++ (snapshot 20010527).
sewardj@phoenix:~/newmat10$ ~/Valgrind-6/valgrind -v ./bogon
==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
==25832== Copyright (C) 2000-2001, and GNU GPL'd, by Julian Seward.
==25832== Startup, with flags:
==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
==25832== reading syms from /lib/ld-linux.so.2
==25832== reading syms from /lib/libc.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
==25832== reading syms from /lib/libm.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
==25832== reading syms from /proc/self/exe
==25832==
==25832== Invalid read of size 4
==25832==    at 0x8048724: _ZN10BandMatrix6ReSizeEiii (bogon.cpp:45)
==25832==    by 0x80487AF: main (bogon.cpp:66)
==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
==25832==
==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==25832== For a detailed leak analysis, rerun with: --leak-check=yes
==25832==
==25832== exiting, did 1881 basic blocks, 0 misses.
==25832== 223 translations, 3626 bytes in, 56801 bytes out.
The GCC folks fixed this about a week before gcc-3.0 shipped.
Most of these only appear if you run in verbose mode (enabled by -v):
More than 100 errors detected. Subsequent
errors will still be recorded, but in less detail than
before.
After 100 different errors have been shown, Valgrind becomes more conservative about collecting them. It then requires only the program counters in the top two stack frames to match when deciding whether or not two errors are really the same one. Prior to this point, the PCs in the top four frames are required to match. This hack has the effect of slowing down the appearance of new errors after the first 100. The 100 constant can be changed by recompiling Valgrind.
More than 1000 errors detected. I'm not
reporting any more. Final error counts may be inaccurate. Go fix
your program!
After 1000 different errors have been detected, Valgrind ignores any more. It seems unlikely that collecting even more different ones would be of practical help to anybody, and it avoids the danger that Valgrind spends more and more of its time comparing new errors against an ever-growing collection. As above, the 1000 number is a compile-time constant.
Warning: client switching stacks?
Valgrind spotted such a large change in the stack pointer, %esp, that it guesses the client is switching to a different stack. At this point it makes a kludgey guess where the base of the new stack is, and sets memory permissions accordingly. You may get many bogus error messages following this, if Valgrind guesses wrong. At the moment "large change" is defined as a change of more than 2000000 in the value of the %esp (stack pointer) register.
Warning: client attempted to close Valgrind's logfile fd <number>

Valgrind doesn't allow the client to close the logfile, because you'd never see any diagnostic information after that point. If you see this message, you may want to use the --log-fd=<number> option to specify a different logfile file-descriptor number.
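For example, under a Bourne-style shell you could open descriptor 9 on a log file and tell Valgrind to write there (the descriptor number and file name are arbitrary):

valgrind --log-fd=9 ./myprog 9>valgrind.log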
Warning: noted but unhandled ioctl <number>

Valgrind observed a call to one of the vast family of ioctl system calls, but did not modify its memory status info (because I have not yet got round to it). The call will still have gone through, but you may get spurious errors after this as a result of the non-update of the memory info.
Warning: set address range perms: large range
<number>
Diagnostic message, mostly for benefit of the Valgrind developers, to do with memory permissions.
Valgrind supports debugging of distributed-memory applications which use the MPI message passing standard. This support consists of a library of wrapper functions for the PMPI_* interface. When incorporated into the application's address space, either by direct linking or by LD_PRELOAD, the wrappers intercept calls to PMPI_Send, PMPI_Recv, etc. They then use client requests to inform Valgrind of memory state changes caused by the function being wrapped. This reduces the number of false positives that Memcheck otherwise typically reports for MPI applications.
The wrappers also take the opportunity to carefully check the size and definedness of buffers passed as arguments to MPI functions, hence detecting errors such as passing undefined data to PMPI_Send, or receiving data into a buffer which is too small.
Unlike most of the rest of Valgrind, the wrapper library is subject to a BSD-style license, so you can link it into any code base you like. See the top of auxprogs/libmpiwrap.c for details.
The wrapper library will be built automatically if possible. Valgrind's configure script will look for a suitable mpicc to build it with. This must be the same mpicc you use to build the MPI application you want to debug. By default, Valgrind tries mpicc, but you can specify a different one by using the configure-time flag --with-mpicc=. Currently the wrappers are only buildable with mpiccs which are based on GNU gcc or Intel's icc.
Check that the configure script prints a line like this:
checking for usable MPI2-compliant mpicc and mpi.h... yes, mpicc
If it says ... no, your mpicc has failed to compile and link a test MPI2 program.
If the configure test succeeds, continue in the usual way with make and make install. The final install tree should then contain libmpiwrap.so.
Compile up a test MPI program (eg, MPI hello-world) and try this:
LD_PRELOAD=$prefix/lib/valgrind/<platform>/libmpiwrap.so \
   mpirun [args] $prefix/bin/valgrind ./hello
You should see something similar to the following:
valgrind MPI wrappers 31901: Active for pid 31901
valgrind MPI wrappers 31901: Try MPIWRAP_DEBUG=help for possible options
repeated for every process in the group. If you do not see these, there is a build/installation problem of some kind.
The MPI functions to be wrapped are assumed to be in an ELF shared object with soname matching libmpi.so*. This is known to be correct at least for Open MPI and Quadrics MPI, and can easily be changed if required.
Compile your MPI application as usual, taking care to link it using the same mpicc that your Valgrind build was configured with.
Use the following basic scheme to run your application on Valgrind with the wrappers engaged:
MPIWRAP_DEBUG=[wrapper-args] \
   LD_PRELOAD=$prefix/lib/valgrind/<platform>/libmpiwrap.so \
   mpirun [mpirun-args] \
   $prefix/bin/valgrind [valgrind-args] \
   [application] [app-args]
As an alternative to LD_PRELOADing libmpiwrap.so, you can simply link it to your application if desired. This should not disturb native behaviour of your application in any way.
Environment variable MPIWRAP_DEBUG is consulted at startup. The default behaviour is to print a starting banner

valgrind MPI wrappers 16386: Active for pid 16386
valgrind MPI wrappers 16386: Try MPIWRAP_DEBUG=help for possible options

and then be relatively quiet.
You can give a list of comma-separated options in MPIWRAP_DEBUG; a combined example is shown after this list. The options are:

verbose: show entries/exits of all wrappers. Also show extra debugging info, such as the status of outstanding MPI_Requests resulting from uncompleted MPI_Irecvs.

quiet: opposite of verbose; only print anything when the wrappers want to report a detected programming error, or in case of catastrophic failure of the wrappers.

warn: by default, functions which lack proper wrappers are not commented on, just silently ignored. This option causes a warning to be printed for each unwrapped function used, up to a maximum of three warnings per function.

strict: print an error message and abort the program if a function lacking a wrapper is used.
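For example, the following run (the application name is a hypothetical placeholder) enables both unwrapped-function warnings and verbose tracing:

MPIWRAP_DEBUG=warn,verbose \
   LD_PRELOAD=$prefix/lib/valgrind/<platform>/libmpiwrap.so \
   mpirun -np 4 $prefix/bin/valgrind ./my_mpi_app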
If you want to use Valgrind's XML output facility (--xml=yes), you should pass quiet in MPIWRAP_DEBUG so as to get rid of any extraneous printing from the wrappers.
All MPI2 functions except MPI_Wtick, MPI_Wtime and MPI_Pcontrol have wrappers. The first two are not wrapped because they return a double, and Valgrind's function-wrap mechanism cannot handle that (it could easily enough be extended to). MPI_Pcontrol cannot be wrapped as it has variable arity:

int MPI_Pcontrol(const int level, ...)
Most functions are wrapped with a default wrapper which does nothing except complain or abort if it is called, depending on settings in MPIWRAP_DEBUG listed above. The following functions have "real", do-something-useful wrappers:
PMPI_Send         PMPI_Bsend        PMPI_Ssend        PMPI_Rsend
PMPI_Recv         PMPI_Get_count
PMPI_Isend        PMPI_Ibsend       PMPI_Issend       PMPI_Irsend
PMPI_Irecv
PMPI_Wait         PMPI_Waitall      PMPI_Test         PMPI_Testall
PMPI_Iprobe       PMPI_Probe
PMPI_Cancel
PMPI_Sendrecv
PMPI_Type_commit  PMPI_Type_free
PMPI_Bcast        PMPI_Gather       PMPI_Scatter      PMPI_Alltoall
PMPI_Reduce       PMPI_Allreduce    PMPI_Op_create
PMPI_Comm_create  PMPI_Comm_dup     PMPI_Comm_free    PMPI_Comm_rank
PMPI_Comm_size
PMPI_Error_string
PMPI_Init         PMPI_Initialized  PMPI_Finalize
A few functions such as PMPI_Address are listed as HAS_NO_WRAPPER. They have no wrapper at all as there is nothing worth checking, and giving a no-op wrapper would reduce performance for no reason.
Note that the wrapper library itself can generate large numbers of calls to the MPI implementation, especially when walking complex types. The most common functions called are PMPI_Extent, PMPI_Type_get_envelope, PMPI_Type_get_contents, and PMPI_Type_free.
MPI-1.1 structured types are supported, and walked exactly. The currently supported combiners are MPI_COMBINER_NAMED, MPI_COMBINER_CONTIGUOUS, MPI_COMBINER_VECTOR, MPI_COMBINER_HVECTOR, MPI_COMBINER_INDEXED, MPI_COMBINER_HINDEXED and MPI_COMBINER_STRUCT. This should cover all MPI-1.1 types. The mechanism (function walk_type) should extend easily to cover MPI2 combiners.
MPI defines some named structured types (MPI_FLOAT_INT, MPI_DOUBLE_INT, MPI_LONG_INT, MPI_2INT, MPI_SHORT_INT, MPI_LONG_DOUBLE_INT) which are pairs of some basic type and a C int. Unfortunately the MPI specification makes it impossible to look inside these types and see where the fields are. Therefore these wrappers assume the types are laid out as struct { float val; int loc; } (for MPI_FLOAT_INT), etc, and act accordingly. This appears to be correct at least for Open MPI 1.0.2 and for Quadrics MPI.
If strict is an option specified in MPIWRAP_DEBUG, the application will abort if an unhandled type is encountered. Otherwise, the application will print a warning message and continue.
Some effort is made to mark/check memory ranges corresponding to arrays of values in a single pass. This is important for performance since asking Valgrind to mark/check any range, no matter how small, carries quite a large constant cost. This optimisation is applied to arrays of primitive types (double, float, int, long, long long, short, char, and long double on platforms where sizeof(long double) == 8). For arrays of all other types, the wrappers handle each element individually and so there can be a very large performance cost.
For the most part the wrappers are straightforward. The only significant complexity arises with nonblocking receives.
The issue is that MPI_Irecv specifies the recv buffer and returns immediately, giving a handle (MPI_Request) for the transaction. Later the user will have to poll for completion with MPI_Wait etc, and when the transaction completes successfully, the wrappers have to paint the recv buffer. But the recv buffer details are not presented to MPI_Wait -- only the handle is. The library therefore maintains a shadow table which associates uncompleted MPI_Requests with the corresponding buffer address/count/type. When an operation completes, the table is searched for the associated address/count/type info, and memory is marked accordingly.
Access to the table is guarded by a (POSIX pthreads) lock, so as to make the library thread-safe.
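The following sketch shows roughly what such a table entry and its locking might look like; the type and field names here are illustrative and are not the actual ones used in auxprogs/libmpiwrap.c.

#include <mpi.h>
#include <pthread.h>

/* Illustrative only: the real names and layout in libmpiwrap.c differ. */
typedef struct {
   MPI_Request  req;    /* handle returned by PMPI_Irecv       */
   void*        buf;    /* receive buffer given to PMPI_Irecv  */
   int          count;  /* element count                       */
   MPI_Datatype ty;     /* element type                        */
   int          inUse;  /* is this slot occupied?              */
} ShadowEntry;

static pthread_mutex_t shadow_lock = PTHREAD_MUTEX_INITIALIZER;

/* On PMPI_Irecv: take shadow_lock and record (req, buf, count, ty).
   On PMPI_Wait/PMPI_Test completion: take shadow_lock, find the entry
   for the completed request, mark the buffer as defined for Memcheck,
   then release the slot. */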
The table is allocated with malloc and never freed, so it will show up in leak checks.
Writing new wrappers should be fairly easy. The source file is auxprogs/libmpiwrap.c. If possible, find an existing wrapper for a function of similar behaviour to the one you want to wrap, and use it as a starting point. The wrappers are organised in sections in the same order as the MPI 1.1 spec, to aid navigation. When adding a wrapper, remember to comment out the definition of the default wrapper in the long list of defaults at the bottom of the file (do not remove it, just comment it out).
The wrappers should reduce Memcheck's false-error rate on MPI applications. Because the wrapping is done at the MPI interface, there will still potentially be a large number of errors reported in the MPI implementation below the interface. The best you can do is try to suppress them.
You may also find that the input-side (buffer length/definedness) checks find errors in your MPI use, for example passing too short a buffer to MPI_Recv.
Functions which are not wrapped may increase the false error rate. A possible approach is to run with MPIWRAP_DEBUG containing warn. This will show you functions which lack proper wrappers but which are nevertheless used. You can then write wrappers for them.