To use this tool, you must specify
--tool=helgrind on the Valgrind
command line.
7.1. Overview
Helgrind is a Valgrind tool for detecting synchronisation errors
in C, C++ and Fortran programs that use the POSIX pthreads
threading primitives.
The main abstractions in POSIX pthreads are: a set of threads
sharing a common address space, thread creation, thread joining,
thread exit, mutexes (locks), condition variables (inter-thread event
notifications), reader-writer locks, spinlocks, semaphores and
barriers.
Helgrind can detect three classes of errors, which are discussed
in detail in the next three sections:
misuses of the POSIX pthreads API
potential deadlocks arising from lock ordering problems
data races -- accessing memory without adequate locking or
synchronisation
Problems like these often result in unreproducible,
timing-dependent crashes, deadlocks and other misbehaviour, and
can be difficult to find by other means.
Helgrind is aware of all the pthread abstractions and tracks
their effects as accurately as it can. On x86 and amd64 platforms, it
understands and partially handles implicit locking arising from the
use of the LOCK instruction prefix. On PowerPC/POWER and ARM
platforms, it partially handles implicit locking arising from
load-linked and store-conditional instruction pairs.
Helgrind works best when your application uses only the POSIX
pthreads API. However, if you want to use custom threading
primitives, you can describe their behaviour to Helgrind using the
ANNOTATE_* macros defined
in helgrind.h.
Helgrind also provides Execution Trees memory
profiling using the command line
option --xtree-memory and the monitor command
xtmemory.
7.2. Detected errors: Misuses of the POSIX pthreads API
Helgrind intercepts calls to many POSIX pthreads functions, and
is therefore able to report on various common problems. Although
these are unglamorous errors, their presence can lead to undefined
program behaviour and hard-to-find bugs later on. The detected errors
are:
unlocking an invalid mutex
unlocking a not-locked mutex
unlocking a mutex held by a different
thread
destroying an invalid or a locked mutex
recursively locking a non-recursive mutex
deallocation of memory that contains a
locked mutex
passing mutex arguments to functions expecting
reader-writer lock arguments, and vice
versa
when a POSIX pthread function fails with an
error code that must be handled
when a thread exits whilst still holding locked
locks
calling pthread_cond_wait
with a not-locked mutex, an invalid mutex,
or one locked by a different
thread
inconsistent bindings between condition
variables and their associated mutexes
invalid or duplicate initialisation of a pthread
barrier
initialisation of a pthread barrier on which threads
are still waiting
destruction of a pthread barrier object which was
never initialised, or on which threads are still
waiting
waiting on an uninitialised pthread
barrier
for all of the pthreads functions that Helgrind
intercepts, an error is reported, along with a stack
trace, if the system threading library routine returns
an error code, even if Helgrind itself detected no
error
Checks pertaining to the validity of mutexes are generally also
performed for reader-writer locks.
Various kinds of this-can't-possibly-happen events are also
reported. These usually indicate bugs in the system threading
library.
Reported errors always contain a primary stack trace indicating
where the error was detected. They may also contain auxiliary stack
traces giving additional information. In particular, most errors
relating to mutexes will also tell you where that mutex first came to
Helgrind's attention (the "was first observed
at" part), so you have a chance of figuring out which
mutex it is referring to. For example:
Thread #1 unlocked a not-locked lock at 0x7FEFFFA90
   at 0x4C2408D: pthread_mutex_unlock (hg_intercepts.c:492)
   by 0x40073A: nearly_main (tc09_bad_unlock.c:27)
   by 0x40079B: main (tc09_bad_unlock.c:50)
  Lock at 0x7FEFFFA90 was first observed
   at 0x4C25D01: pthread_mutex_init (hg_intercepts.c:326)
   by 0x40071F: nearly_main (tc09_bad_unlock.c:23)
   by 0x40079B: main (tc09_bad_unlock.c:50)
Helgrind has a way of summarising thread identities, as
you see here with the text "Thread
#1". This is so that it can speak about threads and
sets of threads without overwhelming you with details. See
below
for more information on interpreting error messages.
7.3. Detected errors: Inconsistent Lock Orderings
In this section, and in general, to "acquire" a lock simply
means to lock that lock, and to "release" a lock means to unlock
it.
Helgrind monitors the order in which threads acquire locks.
This allows it to detect potential deadlocks which could arise from
the formation of cycles of locks. Detecting such inconsistencies is
useful because, whilst actual deadlocks are fairly obvious, potential
deadlocks may never be discovered during testing and could later lead
to hard-to-diagnose in-service failures.
The simplest example of such a problem is as
follows.
Imagine some shared resource R, which, for whatever
reason, is guarded by two locks, L1 and L2, which must both be held
when R is accessed.
Suppose a thread acquires L1, then L2, and proceeds
to access R. The implication of this is that all threads in the
program must acquire the two locks in the order first L1 then L2.
Not doing so risks deadlock.
The deadlock could happen if two threads -- call them
T1 and T2 -- both want to access R. Suppose T1 acquires L1 first,
and T2 acquires L2 first. Then T1 tries to acquire L2, and T2 tries
to acquire L1, but those locks are both already held. So T1 and T2
become deadlocked.
Helgrind builds a directed graph indicating the order in which
locks have been acquired in the past. When a thread acquires a new
lock, the graph is updated, and then checked to see if it now contains
a cycle. The presence of a cycle indicates a potential deadlock involving
the locks in the cycle.
In general, Helgrind will choose two locks involved in the cycle
and show you how their acquisition ordering has become inconsistent.
It does this by showing the program points that first defined the
ordering, and the program points which later violated it. Here is a
simple example involving just two locks:
Thread #1: lock order "0x7FF0006D0 before 0x7FF0006A0" violated
Observed (incorrect) order is: acquisition of lock at 0x7FF0006A0
   at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
   by 0x400825: main (tc13_laog1.c:23)
 followed by a later acquisition of lock at 0x7FF0006D0
   at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
   by 0x400853: main (tc13_laog1.c:24)
Required order was established by acquisition of lock at 0x7FF0006D0
   at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
   by 0x40076D: main (tc13_laog1.c:17)
 followed by a later acquisition of lock at 0x7FF0006A0
   at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
   by 0x40079B: main (tc13_laog1.c:18)
When there are more than two locks in the cycle, the error is
equally serious. However, at present Helgrind does not show the locks
involved, sometimes because that information is not available, but
also so as to avoid flooding you with information. For example, a
naive implementation of the famous Dining Philosophers problem
involves a cycle of five locks
(see helgrind/tests/tc14_laog_dinphils.c).
In this case Helgrind has detected that all 5 philosophers could
simultaneously pick up their left fork and then deadlock whilst
waiting to pick up their right forks.
Thread #6: lock order "0x80499A0 before 0x8049A00" violated
Observed (incorrect) order is: acquisition of lock at 0x8049A00
   at 0x40085BC: pthread_mutex_lock (hg_intercepts.c:495)
   by 0x80485B4: dine (tc14_laog_dinphils.c:18)
   by 0x400BDA4: mythread_wrapper (hg_intercepts.c:219)
   by 0x39B924: start_thread (pthread_create.c:297)
   by 0x2F107D: clone (clone.S:130)
 followed by a later acquisition of lock at 0x80499A0
   at 0x40085BC: pthread_mutex_lock (hg_intercepts.c:495)
   by 0x80485CD: dine (tc14_laog_dinphils.c:19)
   by 0x400BDA4: mythread_wrapper (hg_intercepts.c:219)
   by 0x39B924: start_thread (pthread_create.c:297)
   by 0x2F107D: clone (clone.S:130)
7.4. Detected errors: Data Races
A data race happens, or could happen, when two threads access a
shared memory location without using suitable locks or other
synchronisation to ensure single-threaded access. Such missing
locking can cause obscure timing dependent bugs. Ensuring programs
are race-free is one of the central difficulties of threaded
programming.
Reliably detecting races is a difficult problem, and most
of Helgrind's internals are devoted to dealing with it.
We begin with a simple example.
7.4.1. A Simple Data Race
About the simplest possible example of a race is as follows. In
this program, it is impossible to know what the value
of var is at the end of the program.
Is it 2 ? Or 1 ?
#include <pthread.h>

int var = 0;

void* child_fn ( void* arg ) {
   var++; /* Unprotected relative to parent */ /* this is line 6 */
   return NULL;
}

int main ( void ) {
   pthread_t child;
   pthread_create(&child, NULL, child_fn, NULL);
   var++; /* Unprotected relative to child */ /* this is line 13 */
   pthread_join(child, NULL);
   return 0;
}
The problem is there is nothing to
stop var being updated simultaneously
by both threads. A correct program would
protect var with a lock of type
pthread_mutex_t, which is acquired
before each access and released afterwards. Helgrind's output for
this program is:
Thread #1 is the program's root thread

Thread #2 was created
   at 0x511C08E: clone (in /lib64/libc-2.8.so)
   by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
   by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
   by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
   by 0x400605: main (simple_race.c:12)

Possible data race during read of size 4 at 0x601038 by thread #1
   Locks held: none
   at 0x400606: main (simple_race.c:13)
 This conflicts with a previous write of size 4 by thread #2
   Locks held: none
   at 0x4005DC: child_fn (simple_race.c:6)
   by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
   by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
   by 0x511C0CC: clone (in /lib64/libc-2.8.so)
 Location 0x601038 is 0 bytes inside global var "var"
 declared at simple_race.c:3
This is quite a lot of detail for an apparently simple error.
The last clause is the main error message. It says there is a race as
a result of a read of size 4 (bytes), at 0x601038, which is the
address of var, happening in
function main at line 13 in the
program.
Two important parts of the message are:
Helgrind shows two stack traces for the error, not one. By
definition, a race involves two different threads accessing the
same location in such a way that the result depends on the relative
speeds of the two threads.
The first stack trace follows the text "Possible
data race during read of size 4 ..." and the
second trace follows the text "This conflicts with
a previous write of size 4 ...". Helgrind is
usually able to show both accesses involved in a race. At least
one of these will be a write (since two concurrent, unsynchronised
reads are harmless), and they will of course be from different
threads.
By examining your program at the two locations, you should be
able to get at least some idea of what the root cause of the
problem is. For each location, Helgrind shows the set of locks
held at the time of the access. This often makes it clear which
thread, if any, failed to take a required lock. In this example
neither thread holds a lock during the access.
For races which occur on global or stack variables, Helgrind
tries to identify the name and defining point of the variable.
Hence the text "Location 0x601038 is 0 bytes inside
global var "var" declared at simple_race.c:3".
Showing names of stack and global variables carries no
run-time overhead once Helgrind has your program up and running.
However, it does require Helgrind to spend considerable extra time
and memory at program startup to read the relevant debug info.
Hence this facility is disabled by default. To enable it, you need
to give the --read-var-info=yes option to
Helgrind.
The following section explains Helgrind's race detection
algorithm in more detail.
7.4.2. Helgrind's Race Detection Algorithm
Most programmers think about threaded programming in terms of
the basic functionality provided by the threading library (POSIX
Pthreads): thread creation, thread joining, locks, condition
variables, semaphores and barriers.
The effect of using these functions is to impose
constraints upon the order in which memory accesses can
happen. This implied ordering is generally known as the
"happens-before relation". Once you understand the happens-before
relation, it is easy to see how Helgrind finds races in your code.
Fortunately, the happens-before relation is itself easy to understand,
and is by itself a useful tool for reasoning about the behaviour of
parallel programs. We now introduce it using a simple example.
Consider first the following buggy program:
Parent thread:                     Child thread:

int var;
// create child thread
pthread_create(...)
var = 20;                          var = 10;
                                   exit
// wait for child
pthread_join(...)
printf("%d\n", var);
The parent thread creates a child. Both then write different
values to some variable var, and the
parent then waits for the child to exit.
What is the value of var at the
end of the program, 10 or 20? We don't know. The program is
considered buggy (it has a race) because the final value
of var depends on the relative rates
of progress of the parent and child threads. If the parent is fast
and the child is slow, then the child's assignment may happen later,
so the final value will be 10; and vice versa if the child is faster
than the parent.
The relative rates of progress of parent vs child are not something
the programmer can control, and will often change from run to run.
It depends on factors such as the load on the machine, what else is
running, the kernel's scheduling strategy, and many other factors.
The obvious fix is to use a lock to
protect var. It is however
instructive to consider a somewhat more abstract solution, which is to
send a message from one thread to the other:
Parent thread:                     Child thread:

int var;
// create child thread
pthread_create(...)
                                   // wait for message to arrive
var = 20;
// send message to child
                                   var = 10;
                                   exit
// wait for child
pthread_join(...)
printf("%d\n", var);
Now the program reliably prints "10", regardless of the speed of
the threads. Why? Because the child's assignment cannot happen until
after it receives the message. And the message is not sent until
after the parent's assignment is done.
The message transmission creates a "happens-before" dependency
between the two assignments: var = 20;
must now happen-before var = 10;.
And so there is no longer a race
on var.
Note that it's not significant that the parent sends a message
to the child. Sending a message from the child (after its assignment)
to the parent (before its assignment) would also fix the problem, causing
the program to reliably print "20".
Helgrind's algorithm is (conceptually) very simple. It monitors all
accesses to memory locations. If a location -- in this example,
var --
is accessed by two different threads, Helgrind checks to see if the
two accesses are ordered by the happens-before relation. If so,
that's fine; if not, it reports a race.
It is important to understand that the happens-before relation
creates only a partial ordering, not a total ordering. An example of
a total ordering is comparison of numbers: for any two numbers
x and
y, either
x is less than, equal to, or greater
than
y. A partial ordering is like a
total ordering, but it can also express the concept that two elements
are neither equal, less nor greater, but merely unordered with respect
to each other.
In the fixed example above, we say that
var = 20; "happens-before"
var = 10;. But in the original
version, they are unordered: we cannot say that either happens-before
the other.
What does it mean to say that two accesses from different
threads are ordered by the happens-before relation? It means that
there is some chain of inter-thread synchronisation operations which
cause those accesses to happen in a particular order, irrespective of
the actual rates of progress of the individual threads. This is a
required property for a reliable threaded program, which is why
Helgrind checks for it.
The happens-before relations created by standard threading
primitives are as follows:
When a mutex is unlocked by thread T1 and later (or
immediately) locked by thread T2, then the memory accesses in T1
prior to the unlock must happen-before those in T2 after it acquires
the lock.
The same idea applies to reader-writer locks,
although with some complication so as to allow correct handling of
reads vs writes.
When a condition variable (CV) is signalled on by
thread T1 and some other thread T2 is thereby released from a wait
on the same CV, then the memory accesses in T1 prior to the
signalling must happen-before those in T2 after it returns from the
wait. If no thread was waiting on the CV then there is no
effect.
If instead T1 broadcasts on a CV, then all of the
waiting threads, rather than just one of them, acquire a
happens-before dependency on the broadcasting thread at the point it
did the broadcast.
A thread T2 that continues after completing sem_wait
on a semaphore that thread T1 posts on, acquires a happens-before
dependence on the posting thread, a bit like the dependencies caused
by mutex unlock-lock pairs. However, since a semaphore can be posted
on many times, it is unspecified from which of the post calls the
wait call gets its happens-before dependency.
For a group of threads T1 .. Tn which arrive at a
barrier and then move on, each thread after the call has a
happens-after dependency from all threads before the
barrier.
A newly-created child thread acquires an initial
happens-after dependency on the point where its parent created it.
That is, all memory accesses performed by the parent prior to
creating the child are regarded as happening-before all the accesses
of the child.
Similarly, when an exiting thread is reaped via a
call to pthread_join, once the call returns, the
reaping thread acquires a happens-after dependency relative to all memory
accesses made by the exiting thread.
In summary: Helgrind intercepts the above listed events, and builds a
directed acyclic graph representing the collective happens-before
dependencies. It also monitors all memory accesses.
If a location is accessed by two different threads, but Helgrind
cannot find any path through the happens-before graph from one access
to the other, then it reports a race.
There are a couple of caveats:
Helgrind doesn't check for a race in the case where
both accesses are reads. That would be silly, since concurrent
reads are harmless.
Two accesses are considered to be ordered by the
happens-before dependency even through arbitrarily long chains of
synchronisation events. For example, if T1 accesses some location
L, and then pthread_cond_signals T2, which later
pthread_cond_signals T3, which then accesses L, then
a suitable happens-before dependency exists between the first and second
accesses, even though it involves two different inter-thread
synchronisation events.
7.4.3. Interpreting Race Error Messages
Helgrind's race detection algorithm collects a lot of
information, and tries to present it in a helpful way when a race is
detected. Here's an example:
Thread #2 was created
   at 0x511C08E: clone (in /lib64/libc-2.8.so)
   by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
   by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
   by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
   by 0x4008F2: main (tc21_pthonce.c:86)

Thread #3 was created
   at 0x511C08E: clone (in /lib64/libc-2.8.so)
   by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
   by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
   by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
   by 0x4008F2: main (tc21_pthonce.c:86)

Possible data race during read of size 4 at 0x601070 by thread #3
   Locks held: none
   at 0x40087A: child (tc21_pthonce.c:74)
   by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
   by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
   by 0x511C0CC: clone (in /lib64/libc-2.8.so)
 This conflicts with a previous write of size 4 by thread #2
   Locks held: none
   at 0x400883: child (tc21_pthonce.c:74)
   by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
   by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
   by 0x511C0CC: clone (in /lib64/libc-2.8.so)
 Location 0x601070 is 0 bytes inside local var "unprotected2"
 declared at tc21_pthonce.c:51, in frame #0 of thread 3
Helgrind first announces the creation points of any threads
referenced in the error message. This is so it can speak concisely
about threads without repeatedly printing their creation point call
stacks. Each thread is only ever announced once, the first time it
appears in any Helgrind error message.
The main error message begins at the text
"Possible data race during read". At
the start is information you would expect to see -- address and size
of the racing access, whether a read or a write, and the call stack at
the point it was detected.
A second call stack is presented starting at the text
"This conflicts with a previous
write". This shows a previous access which also
accessed the stated address, and which is believed to be racing
against the access in the first call stack. Note that this second call
stack is limited to a maximum of --history-backtrace-size
entries (8 by default), so as to limit memory usage.
Finally, Helgrind may attempt to give a description of the
raced-on address in source level terms. In this example, it
identifies it as a local variable, shows its name, declaration point,
and in which frame (of the first call stack) it lives. Note that this
information is only shown when --read-var-info=yes
is specified on the command line. That's because reading the DWARF3
debug information in enough detail to capture variable type and
location information makes Helgrind much slower at startup, and also
requires considerable amounts of memory, for large programs.
Once you have your two call stacks, how do you find the root
cause of the race?
The first thing to do is examine the source locations referred
to by each call stack. They should both show an access to the same
location, or variable.
Now figure out how that location should have been made
thread-safe:
Perhaps the location was intended to be protected by
a mutex? If so, you need to lock and unlock the mutex at both
access points, even if one of the accesses is reported to be a read.
Did you perhaps forget the locking at one or other of the accesses?
To help you do this, Helgrind shows the set of locks held by each
thread at the time it accessed the raced-on location.
Alternatively, perhaps you intended to use some
other scheme to make it safe, such as signalling on a condition
variable. In all such cases, try to find a synchronisation event
(or a chain thereof) which separates the earlier-observed access (as
shown in the second call stack) from the later-observed access (as
shown in the first call stack). In other words, try to find
evidence that the earlier access "happens-before" the later access.
See the previous subsection for an explanation of the happens-before
relation.
The fact that Helgrind is reporting a race means it did not observe
any happens-before relation between the two accesses. If
Helgrind is working correctly, it should also be the case that you
also cannot find any such relation, even on detailed inspection
of the source code. Hopefully, though, your inspection of the code
will show where the missing synchronisation operation(s) should have
been.
7.5. Hints and Tips for Effective Use of Helgrind
Helgrind can be very helpful in finding and resolving
threading-related problems. Like all sophisticated tools, it is most
effective when you understand how to play to its strengths.
Helgrind will be less effective when you merely throw an
existing threaded program at it and try to make sense of any reported
errors. It will be more effective if you design threaded programs
from the start in a way that helps Helgrind verify correctness. The
same is true for finding memory errors with Memcheck, but applies more
here, because thread checking is a harder problem. Consequently it is
much easier to write a correct program for which Helgrind falsely
reports (threading) errors than it is to write a correct program for
which Memcheck falsely reports (memory) errors.
With that in mind, here are some tips, listed most important first,
for getting reliable results and avoiding false errors. The first two
are critical. Any violations of them will swamp you with huge numbers
of false data-race errors.
Make sure your application, and all the libraries it uses,
use the POSIX threading primitives. Helgrind needs to be able to
see all events pertaining to thread creation, exit, locking and
other synchronisation events. To do so it intercepts many POSIX
pthreads functions.
Do not roll your own threading primitives (mutexes, etc)
from combinations of the Linux futex syscall, atomic counters, etc.
These throw Helgrind's internal what's-going-on models
way off course and will give bogus results.
Also, do not reimplement existing POSIX abstractions using
other POSIX abstractions. For example, don't build your own
semaphore routines or reader-writer locks from POSIX mutexes and
condition variables. Instead use POSIX reader-writer locks and
semaphores directly, since Helgrind supports them directly.
Helgrind directly supports the following POSIX threading
abstractions: mutexes, reader-writer locks, condition variables
(but see below), semaphores and barriers. Currently spinlocks
are not supported, although they could be in future.
At the time of writing, the following popular Linux packages
are known to implement their own threading primitives:
Qt version 4.X. Qt 3.X is harmless in that it
only uses POSIX pthreads primitives. Unfortunately Qt 4.X
has its own implementation of mutexes (QMutex) and thread reaping.
Helgrind 3.4.x contains direct support
for Qt 4.X threading, which is experimental but is believed to
work fairly well. A side effect of supporting Qt 4 directly is
that Helgrind can be used to debug KDE4 applications. As this
is an experimental feature, we would particularly appreciate
feedback from folks who have used Helgrind to successfully debug
Qt 4 and/or KDE4 applications.
Runtime support library for GNU OpenMP (part of
GCC), at least for GCC versions 4.2 and 4.3. The GNU OpenMP runtime
library (libgomp.so) constructs its own
synchronisation primitives using combinations of atomic memory
instructions and the futex syscall, which causes total chaos in
Helgrind since it cannot "see" those.
Fortunately, this can be solved using a configuration-time
option (for GCC). Rebuild GCC from source, and configure using
--disable-linux-futex.
This makes libgomp.so use the standard
POSIX threading primitives instead. Note that this was tested
using GCC 4.2.3 and has not been re-tested using more recent GCC
versions. We would appreciate hearing about any successes or
failures with more recent versions.
If you must implement your own threading primitives, there
are a set of client request macros
in helgrind.h to help you
describe your primitives to Helgrind. You should be able to
mark up mutexes, condition variables, etc, without difficulty.
It is also possible to mark up the effects of thread-safe
reference counting using the
ANNOTATE_HAPPENS_BEFORE,
ANNOTATE_HAPPENS_AFTER and
ANNOTATE_HAPPENS_BEFORE_FORGET_ALL
macros. Thread-safe reference counting using an atomically
incremented/decremented refcount variable causes Helgrind
problems because a one-to-zero transition of the reference count
means the accessing thread has exclusive ownership of the
associated resource (normally, a C++ object) and can therefore
access it (normally, to run its destructor) without locking.
Helgrind doesn't understand this, and markup is essential to
avoid false positives.
Here are recommended guidelines for marking up thread safe
reference counting in C++. You only need to mark up your
release methods -- the ones which decrement the reference count.
Given a class like this:
class MyClass {
   unsigned int mRefCount;

   void Release ( void ) {
      unsigned int newCount = atomic_decrement(&mRefCount);
      if (newCount == 0) {
         delete this;
      }
   }
};
the release method should be marked up as follows:
There are a number of complex, mostly-theoretical objections to
this scheme. From a theoretical standpoint it appears to be
impossible to devise a markup scheme which is completely correct
in the sense of guaranteeing to remove all false races. The
proposed scheme however works well in practice.
Avoid memory recycling. If you can't avoid it, you must
tell Helgrind what is going on via the
VALGRIND_HG_CLEAN_MEMORY client request (in
helgrind.h).
Helgrind is aware of standard heap memory allocation and
deallocation that occurs via
malloc/free/new/delete
and from entry and exit of stack frames. In particular, when memory is
deallocated via free, delete,
or function exit, Helgrind considers that memory clean, so when it is
eventually reallocated, its history is irrelevant.
However, it is common practice to implement memory recycling
schemes. In these, memory to be freed is not handed to
free/delete, but instead put
into a pool of free buffers to be handed out again as required. The
problem is that Helgrind has no
way to know that such memory is logically no longer in use, and
its history is irrelevant. Hence you must make that explicit,
using the VALGRIND_HG_CLEAN_MEMORY client request
to specify the relevant address ranges. It's easiest to put these
requests into the pool manager code, and use them either when memory is
returned to the pool, or is allocated from it.
Avoid POSIX condition variables. If you can, use POSIX
semaphores (sem_t, sem_post,
sem_wait) to do inter-thread event signalling.
Semaphores with an initial value of zero are particularly useful for
this.
Helgrind only partially correctly handles POSIX condition
variables. This is because Helgrind can see inter-thread
dependencies between a pthread_cond_wait call and a
pthread_cond_signal/pthread_cond_broadcast
call only if the waiting thread actually gets to the rendezvous first
(so that it actually calls
pthread_cond_wait). It can't see dependencies
between the threads if the signaller arrives first. In the latter case,
POSIX guidelines imply that the associated boolean condition still
provides an inter-thread synchronisation event, but one which is
invisible to Helgrind.
The result of Helgrind missing some inter-thread
synchronisation events is to cause it to report false positives.
The root cause of this synchronisation lossage is
particularly hard to understand, so an example is helpful. It was
discussed at length by Arndt Muehlenfeld ("Runtime Race Detection
in Multi-Threaded Programs", Dissertation, TU Graz, Austria). The
canonical POSIX-recommended usage scheme for condition variables
is as follows:
b is a Boolean condition, which is False most of the time
cv is a condition variable
mx is its associated mutex
Signaller:                    Waiter:

lock(mx)                      lock(mx)
b = True                      while (b == False)
signal(cv)                       wait(cv,mx)
unlock(mx)                    unlock(mx)
Assume b is False most of
the time. If the waiter arrives at the rendezvous first, it
enters its while-loop, waits for the signaller to signal, and
eventually proceeds. Helgrind sees the signal, notes the
dependency, and all is well.
If the signaller arrives
first, b is set to true, and the
signal disappears into nowhere. When the waiter later arrives, it
does not enter its while-loop and simply carries on. But even in
this case, the waiter code following the while-loop cannot execute
until the signaller sets b to
True. Hence there is still the same inter-thread dependency, but
this time it is through an arbitrary in-memory condition, and
Helgrind cannot see it.
By comparison, Helgrind's detection of inter-thread
dependencies caused by semaphore operations is believed to be
exactly correct.
As far as I know, a solution to this problem that does not
require source-level annotation of condition-variable wait loops
is beyond the current state of the art.
Make sure you are using a supported Linux distribution. At
present, Helgrind only properly supports glibc-2.3 or later. This
in turn means we only support glibc's NPTL threading
implementation. The old LinuxThreads implementation is not
supported.
If your application uses thread-local variables, Helgrind
might report false-positive race conditions on these
variables, even though they are most probably race-free. On Linux, you can
use --sim-hints=deactivate-pthread-stack-cache-via-hack
to avoid such false positive error messages
(see --sim-hints).
Round up all finished threads using
pthread_join. Avoid
detaching threads: don't create threads in the detached state, and
don't call pthread_detach on existing threads.
Using pthread_join to round up finished
threads provides a clear synchronisation point that both Helgrind and
programmers can see. If you don't call
pthread_join on a thread, Helgrind has no way to
know when it finishes, relative to any
significant synchronisation points for other threads in the program. So
it assumes that the thread lingers indefinitely and can potentially
interfere indefinitely with the memory state of the program. It
has every right to assume that -- after all, it might really be
the case that, for scheduling reasons, the exiting thread did run
very slowly in the last stages of its life.
Helgrind tracks the state of memory in detail, and memory
management bugs in the application are liable to cause confusion.
In extreme cases, applications which do many invalid reads and
writes (particularly to freed memory) have been known to crash
Helgrind. So, ideally, you should make your application
Memcheck-clean before using Helgrind.
It may be impossible to make your application Memcheck-clean
unless you first remove threading bugs. In particular, it may be
difficult to remove all reads and writes to freed memory in
multithreaded C++ destructor sequences at program termination.
So, ideally, you should make your application Helgrind-clean
before using Memcheck.
Since this circularity is obviously unresolvable, at least
bear in mind that Memcheck and Helgrind are to some extent
complementary, and you may need to use them together.
POSIX requires that implementations of standard I/O
(printf, fprintf,
fwrite, fread, etc) are thread
safe. Unfortunately GNU libc implements this by using internal locking
primitives that Helgrind is unable to intercept. Consequently Helgrind
generates many false race reports when you use these functions.
Helgrind attempts to hide these errors using the standard
Valgrind error-suppression mechanism. So, at least for simple
test cases, you don't see any. Nevertheless, some may slip
through. Just something to be aware of.
Helgrind's error checks do not work properly inside the
system threading library itself
(libpthread.so), and it usually
observes large numbers of (false) errors in there. Valgrind's
suppression system then filters these out, so you should not see
them.
If you see any race errors reported
where libpthread.so or
ld.so is the object associated
with the innermost stack frame, please file a bug report at
http://www.valgrind.org/.
7.6. Helgrind Command-line Options
The following end-user options are available:
--free-is-write=no|yes
[default: no]
When enabled (not the default), Helgrind treats freeing of
heap memory as if the memory was written immediately before
the free. This exposes races where memory is referenced by
one thread, and freed by another, but there is no observable
synchronisation event to ensure that the reference happens
before the free.
This functionality is new in Valgrind 3.7.0, and is
regarded as experimental. It is not enabled by default
because its interaction with custom memory allocators is not
well understood at present. User feedback is welcomed.
--track-lockorders=no|yes
[default: yes]
When enabled (the default), Helgrind performs lock order
consistency checking. For some buggy programs, the large number
of lock order errors reported can become annoying, particularly
if you're only interested in race errors. You may therefore find
it helpful to disable lock order checking.
--history-level=none|approx|full
[default: full]
--history-level=full (the default) causes Helgrind to
collect enough information about "old" accesses that it can produce two
stack traces in a race report -- both the stack trace for the current
access, and the trace for the older, conflicting access. To limit memory
usage, stack traces for "old" accesses are limited to a maximum
of --history-backtrace-size entries (default 8), or
to the --num-callers value if that value is smaller.
Collecting such information is expensive in both speed and
memory, particularly for programs that do many inter-thread
synchronisation events (locks, unlocks, etc). Without such
information, it is more difficult to track down the root
causes of races. Nonetheless, you may not need it in
situations where you just want to check for the presence or
absence of races, for example, when doing regression testing
of a previously race-free program.
--history-level=none is the opposite
extreme. It causes Helgrind not to collect any information
about previous accesses. This can be dramatically faster
than --history-level=full.
--history-level=approx provides a
compromise between these two extremes. It causes Helgrind to
show a full trace for the later access, and approximate
information regarding the earlier access. This approximate
information consists of two stacks, and the earlier access is
guaranteed to have occurred somewhere between program points
denoted by the two stacks. This is not as useful as showing
the exact stack for the previous access
(as --history-level=full does), but it is
better than nothing, and it is almost as fast as
--history-level=none.
--history-backtrace-size=<number>
[default: 8]
When --history-level=full is selected,
--history-backtrace-size=number indicates how many
entries to record in the stack traces of "old" accesses.
--delta-stacktrace=no|yes
[default: yes on linux amd64/x86]
This flag only has any effect
at --history-level=full.
--delta-stacktrace configures the way Helgrind
captures the stacktraces for the
option --history-level=full. Such a stacktrace is
typically needed each time a new piece of memory is read or written in a
basic block of instructions.
--delta-stacktrace=no causes
Helgrind to compute a full history stacktrace from the unwind info
each time a stacktrace is needed.
--delta-stacktrace=yes tells Helgrind to
derive a new stacktrace from the previous one, as long as no call
instruction, return instruction, or other instruction changing the call
stack has executed since the previous stacktrace was captured. If
no such instruction was executed, the new stacktrace can be derived from
the previous one by just replacing the top frame with the current
program counter. This option can speed up Helgrind by 25% when
using --history-level=full.
The following aspects have to be considered when
using --delta-stacktrace=yes:
In some cases (for example in a function
prologue), the Valgrind unwinder might not properly unwind
the stack, due to limitations and/or wrong
unwind info. When using --delta-stacktrace=yes, a wrong
stack trace captured in a function prologue will be kept
until the next call or return.
On the other hand, --delta-stacktrace=yes
sometimes helps to obtain a correct stacktrace, for
example when the unwind info allows a correct stacktrace
to be done in the beginning of the sequence, but not later
on in the instruction sequence.
Determining which instructions change the
callstack is partially based on platform-dependent
heuristics, which have to be tuned/validated specifically
for the platform. Also, unwinding in a function prologue
must be good enough to allow using
--delta-stacktrace=yes. Currently, the option
--delta-stacktrace=yes has been reasonably validated only
on Linux x86 (32-bit) and Linux amd64 (64-bit). For more
details about how to validate --delta-stacktrace=yes, see
the debug option --hg-sanity-flags and the function
check_cached_rcec_ok in libhb_core.c.
--conflict-cache-size=N
[default: 1000000]
This flag only has any effect
at --history-level=full.
Information about "old" conflicting accesses is stored in
a cache of limited size, with LRU-style management. This is
necessary because it isn't practical to store a stack trace
for every single memory access made by the program.
Historical information on not recently accessed locations is
periodically discarded, to free up space in the cache.
This option controls the size of the cache, in terms of the
number of different memory addresses for which
conflicting access information is stored. If you find that
Helgrind is showing race errors with only one stack instead of
the expected two stacks, try increasing this value.
The minimum value is 10,000 and the maximum is 30,000,000
(thirty times the default value). Increasing the value by 1
increases Helgrind's memory requirement by very roughly 100
bytes, so the maximum value will easily eat up three extra
gigabytes or so of memory.
--check-stack-refs=no|yes
[default: yes]
By default Helgrind checks all data memory accesses made by your
program. This flag enables you to skip checking for accesses
to thread stacks (local variables). This can improve
performance, but comes at the cost of missing races on
stack-allocated data.
--ignore-thread-creation=<yes|no>
[default: no]
Controls whether all activity during thread creation should be
ignored. Enabled by default only on Solaris.
Solaris provides higher throughput, parallelism and scalability than
other operating systems, at the cost of more fine-grained locking
activity. This means for example that when a thread is created under
glibc, just one big lock is used for all thread setup. Solaris libc
uses several fine-grained locks and the creator thread resumes its
activities as soon as possible, leaving for example stack and TLS setup
sequence to the created thread.
This situation confuses Helgrind: it assumes there is some false
ordering in place between the creator and the created thread, and
therefore many types of race conditions in the application would not
be reported. To prevent such false ordering, this command-line option
is set to yes by default on Solaris.
All activity (loads, stores, client requests) is therefore ignored
during:
pthread_create() call in the creator thread
thread creation phase (stack and TLS setup) in the created thread
Also, new memory allocated during thread creation is untracked;
that is, race reporting is suppressed there. DRD does the same thing
implicitly. This is necessary because Solaris libc caches many objects
and reuses them for different threads, and that confuses
Helgrind.
7.7. Helgrind Monitor Commands
The Helgrind tool provides monitor commands handled by Valgrind's built-in
gdbserver (see Monitor command handling by the Valgrind gdbserver).
Valgrind's Python code provides GDB front-end commands that make the
Helgrind monitor commands easier to use (see
GDB front end commands for Valgrind gdbserver monitor commands). To launch a
Helgrind monitor command via its GDB front-end command, instead of prefixing
the command with "monitor", you must use the GDB helgrind
command (or the shorter alias hg). Using the helgrind
GDB front-end command provides more flexible usage, such as evaluation of
address and length arguments by GDB. In GDB, you can use help
helgrind to get help about the helgrind front-end monitor commands,
and apropos helgrind to list all the commands
mentioning the word "helgrind" in their name or online help.
info locks [lock_addr] shows the list of locks
and their status. If lock_addr is given, only shows
the lock located at this address.
In the following example, Helgrind knows about one lock. This
lock is located at the guest address ga
0x8049a20. The lock kind is rdwr,
indicating a reader-writer lock. Other possible lock kinds
are nonRec (simple mutex, non-recursive)
and mbRec (simple mutex, possibly recursive).
The lock kind is then followed by the list of threads holding the
lock. In the example below, R1:thread #6 tid 3
indicates that Helgrind thread #6 has acquired the lock in read mode,
once (the counter following the letter R is one). The
Helgrind thread number is incremented for each started thread. The
presence of 'tid 3' indicates that thread #6 has not exited
yet and is Valgrind tid 3. If a thread has terminated, then
this is indicated with 'tid (exited)'.
(gdb) monitor info locks
Lock ga 0x8049a20 {
kind rdwr
{ R1:thread #6 tid 3 }
}
(gdb)
If you give the option --read-var-info=yes,
then more information will be provided about the lock location, such as
the global variable or the heap block that contains the lock:
Lock ga 0x8049a20 {
Location 0x8049a20 is 0 bytes inside global var "s_rwlock"
declared at rwlock_race.c:17
kind rdwr
{ R1:thread #3 tid 3 }
}
The GDB equivalent helgrind front end command helgrind info locks
[ADDR] accepts any address expression for its first ADDR
argument.
accesshistory <addr> [<len>]
shows the access history recorded for <len> (default 1) bytes
starting at <addr>. For each recorded access that overlaps
with the given range, accesshistory shows the operation
type (read or write), the address and size read or written, the helgrind
thread nr/valgrind tid number that did the operation and the locks held
by the thread at the time of the operation.
The oldest access is shown first; the most recent access is shown last.
In the following example, we first see a recorded write of 4 bytes by
thread #7 that modified the given 2-byte range.
The second recorded write is the most recent one: thread #9
modified the same 2 bytes as part of a 4-byte write operation.
The locks held by each thread at the time of its write operation
are also shown.
(gdb) monitor accesshistory 0x8049D8A 2
write of size 4 at 0x8049D88 by thread #7 tid 3
==6319== Locks held: 2, at address 0x8049D8C (and 1 that can't be shown)
==6319== at 0x804865F: child_fn1 (locked_vs_unlocked2.c:29)
==6319== by 0x400AE61: mythread_wrapper (hg_intercepts.c:234)
==6319== by 0x39B924: start_thread (pthread_create.c:297)
==6319== by 0x2F107D: clone (clone.S:130)
write of size 4 at 0x8049D88 by thread #9 tid 2
==6319== Locks held: 2, at addresses 0x8049DA4 0x8049DD4
==6319== at 0x804877B: child_fn2 (locked_vs_unlocked2.c:45)
==6319== by 0x400AE61: mythread_wrapper (hg_intercepts.c:234)
==6319== by 0x39B924: start_thread (pthread_create.c:297)
==6319== by 0x2F107D: clone (clone.S:130)
The GDB equivalent helgrind front end command helgrind
accesshistory ADDR [LEN] accepts any address expression for its
first ADDR argument. The second optional argument is any integer
expression. Note that these two arguments must be separated by a space,
as in the following example:
(gdb) hg accesshistory &mx sizeof(mx)
read of size 4 at 0x1130A8 by thread #2 tid (exited)
==302== Locks held: none
==302== at 0x1094AC: child8 (tc19_shadowmem.c:37)
==302== by 0x10A0DF: steer (tc19_shadowmem.c:288)
==302== by 0x48448A3: mythread_wrapper (hg_intercepts.c:406)
==302== by 0x4879EA6: start_thread (pthread_create.c:477)
==302== by 0x4990A2E: clone (clone.S:95)
xtmemory [<filename> default xtmemory.kcg.%p.%n]
requests the Helgrind tool to produce an xtree heap memory report.
See Execution Trees for
a detailed explanation about execution trees.
7.8. Helgrind Client Requests
The following client requests are defined in
helgrind.h. See that file for exact details of their
arguments.
VALGRIND_HG_CLEAN_MEMORY
This makes Helgrind forget everything it knows about a
specified memory range. This is particularly useful for memory
allocators that wish to recycle memory.
ANNOTATE_HAPPENS_BEFORE
ANNOTATE_HAPPENS_AFTER
ANNOTATE_NEW_MEMORY
ANNOTATE_RWLOCK_CREATE
ANNOTATE_RWLOCK_DESTROY
ANNOTATE_RWLOCK_ACQUIRED
ANNOTATE_RWLOCK_RELEASED
These are used to describe to Helgrind the behaviour of
custom (non-POSIX) synchronisation primitives, which it otherwise
has no way to understand. See the comments
in helgrind.h for further
documentation.
7.9. A To-Do List for Helgrind
The following is a list of loose ends which should be tidied up
some time.
For lock order errors, print the complete lock
cycle, rather than doing so only for size-2 cycles as at
present.
The conflicting access mechanism sometimes
mysteriously fails to show the conflicting access' stack, even
when provided with unbounded storage for conflicting access info.
This should be investigated.
Document races caused by GCC's thread-unsafe code
generation for speculative stores. In the interim see
http://gcc.gnu.org/ml/gcc/2007-10/msg00266.html
and http://lkml.org/lkml/2007/10/24/673.
Don't update the lock-order graph, and don't check
for errors, when a "try"-style lock operation happens (e.g.
pthread_mutex_trylock). Such calls do not add any real
restrictions to the locking order, since they can always fail to
acquire the lock, resulting in the caller going off and doing Plan
B (presumably it will have a Plan B). Doing such checks could
generate false lock-order errors and confuse users.
Performance can be very poor. Slowdowns on the
order of 100:1 are not unusual. There is limited scope for
performance improvements.