My question is related to multithreaded lock-free synchronization. I wanted to know the following:
What are general approaches to achieve this? I read somewhere…
Here are some general approaches that can minimize the use of locks, assuming your algorithm has some particular exploitable features:
When updating a single numeric variable, you can use non-blocking primitives such as CAS, atomic_increment, etc. They are usually much faster than a classic blocking critical section (lock, mutex).
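As a sketch of this in Java (class name is my own for illustration), java.util.concurrent.atomic exposes exactly these primitives: incrementAndGet is implemented with a hardware CAS loop, and compareAndSet is a raw CAS, so no lock is ever taken:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong counter = new AtomicLong(0);

        // Increment from several threads without any lock; the JVM
        // implements incrementAndGet as a CAS retry loop.
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    counter.incrementAndGet();
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();

        System.out.println(counter.get()); // prints 40000

        // An explicit CAS: succeeds only if the current value matches.
        boolean swapped = counter.compareAndSet(40_000, 0);
        System.out.println(swapped); // prints true
    }
}
```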
When a data structure is read by multiple threads, but only written by one or few threads, an obvious solution would be a read-write lock, instead of a full lock.
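A minimal sketch of the read-write-lock pattern in Java, using ReentrantReadWriteLock (the wrapper class here is hypothetical): many readers may hold the read lock at once, while a writer takes the write lock exclusively.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockedMap {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();      // shared: does not block other readers
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwLockedMap m = new RwLockedMap();
        m.put("k", "v");
        System.out.println(m.get("k")); // prints v
    }
}
```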
Try to exploit fine-grained locking. For example, instead of locking an entire data structure with a single lock, see if you can use multiple different locks to protect distinct sections of the data structure.
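One common form of this is lock striping (the technique ConcurrentHashMap historically used). The sketch below is a hypothetical StripedCounters class of my own: the counter array is split into stripes, each guarded by its own lock, so threads updating different stripes never contend.

```java
public class StripedCounters {
    private final long[] counts;
    private final Object[] locks;

    public StripedCounters(int stripes) {
        counts = new long[stripes];
        locks = new Object[stripes];
        for (int i = 0; i < stripes; i++) locks[i] = new Object();
    }

    public void increment(int key) {
        int stripe = Math.floorMod(key, counts.length);
        synchronized (locks[stripe]) { // lock only this stripe
            counts[stripe]++;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < counts.length; i++) {
            synchronized (locks[i]) { // take each stripe lock in turn
                sum += counts[i];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        StripedCounters c = new StripedCounters(8);
        c.increment(3);
        c.increment(11);
        System.out.println(c.total()); // prints 2
    }
}
```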
If you're relying on the implicit memory-fence effect of locks to ensure visibility of a single variable across threads, just use volatile¹, if available.
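A small Java sketch of that point (the class and method are my own): the volatile write to the flag is guaranteed to become visible to the spinning thread, so no lock is needed just for visibility.

```java
public class VolatileFlagDemo {
    // Without volatile, the spinning thread might never observe the write.
    private static volatile boolean done = false;

    static boolean waitForFlag() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!done) {
                // spin until the writer's update becomes visible
            }
        });
        reader.start();
        done = true;  // volatile write: guaranteed visible to the reader
        reader.join();
        return true;  // reaching here means the reader saw the flag
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitForFlag()); // prints true
    }
}
```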
Sometimes, using a condition variable (and its associated lock) is too slow in practice. In this case, a volatile busy spin is much more efficient.
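A hedged sketch of a volatile busy spin in Java (names are my own, not from any library): the consumer spins on a volatile flag instead of blocking on a condition variable, avoiding the lock and the context switch at the cost of burning a core while it waits. Since Java 9, Thread.onSpinWait() hints to the CPU that this is a spin loop.

```java
public class BusySpinHandoff {
    private static volatile int value = 0;
    private static volatile boolean ready = false;

    static int handoff() throws InterruptedException {
        final int[] received = new int[1];
        Thread consumer = new Thread(() -> {
            while (!ready) {
                Thread.onSpinWait(); // spin-loop hint to the CPU (Java 9+)
            }
            // The volatile read of ready=true makes the earlier
            // write to value visible here as well.
            received[0] = value;
        });
        consumer.start();
        value = 42;    // publish the payload first...
        ready = true;  // ...then flip the volatile flag
        consumer.join();
        return received[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received " + handoff()); // prints: received 42
    }
}
```

Note the ordering in the producer: the payload is written before the volatile flag, so the flag's happens-before edge publishes the payload safely.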
More good advice on this topic here: http://software.intel.com/en-us/articles/intel-guide-for-developing-multithreaded-applications/
A nice read in another SO question: Lock-free multi-threading is for real threading experts (don't be scared by the title).
And a recently discussed lock-free Java implementation of atomic_decrement: Starvation in non-blocking approaches
¹ The use of volatile here applies to languages such as Java, where volatile has defined semantics in the memory model, but not to C or C++, where volatile preceded the introduction of the cross-thread memory model and doesn't integrate with it. Similar constructs are available in those languages, such as the various std::memory_order specifiers in C++.