(Note: Much of this is redundant with commentary on Massive CPU load using std::lock (c++11), but I think this topic deserves its own question and answers.)
Your confusion with the standardese seems to be due to this statement:
5 Effects: All arguments are locked via a sequence of calls to lock(), try_lock(), or unlock() on each argument.
That does not imply that std::lock will recursively call itself with each argument to the original call.
Objects that satisfy the Lockable concept (§30.2.5.4 [thread.req.lockable.req]) must implement all three of those member functions. std::lock will invoke these member functions on each argument, in an unspecified order, to attempt to acquire a lock on all objects, while doing something implementation-defined to avoid deadlock.
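For illustration only (this type is hypothetical, not something from the standard library), a minimal sketch of a type satisfying the Lockable requirements; these three members are all std::lock needs:

```cpp
#include <atomic>
#include <cassert>
#include <mutex>   // for std::lock

// Hypothetical minimal Lockable: a spinlock exposing exactly the three
// member functions std::lock may call: lock(), try_lock(), unlock().
class SpinLockable {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock()     { while (flag_.test_and_set(std::memory_order_acquire)) {} }
    bool try_lock() { return !flag_.test_and_set(std::memory_order_acquire); }
    void unlock()   { flag_.clear(std::memory_order_release); }
};

// std::lock works with any types meeting Lockable, not just std::mutex.
void lock_both(SpinLockable& a, SpinLockable& b) {
    std::lock(a, b);
}
```

After lock_both returns, the caller owns both spinlocks and is responsible for unlocking them.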
Your example 3 has a potential for deadlock because you're not issuing a single call to std::lock with all the objects you want to acquire a lock on.
Example 2 will not cause a deadlock; Howard's answer explains why.
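A sketch of the single-call pattern (mutex names hypothetical): pass every mutex the critical section needs to one std::lock call, then adopt them for RAII release:

```cpp
#include <cassert>
#include <mutex>

std::mutex m1, m2;

// One std::lock call acquires both mutexes deadlock-free; the guards
// constructed with std::adopt_lock take ownership so both are released
// automatically when the function exits.
void locked_work() {
    std::lock(m1, m2);
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    // ... critical section touching both resources ...
}
```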
Did C++11 adopt this function from Boost?
If so, Boost's description is instructive (emphasis mine):
Effects: Locks the Lockable objects supplied as arguments in an unspecified and indeterminate order in a way that avoids deadlock. It is safe to call this function concurrently from multiple threads with the same mutexes (or other lockable objects) in different orders without risk of deadlock. If any of the lock() or try_lock() operations on the supplied Lockable objects throws an exception, any locks acquired by the function will be released before the function exits.
I think you are misunderstanding the scope of the deadlock avoidance. That's understandable, since the text seems to mention lock in two different contexts: the "multi-lock" std::lock and the individual locks carried out by that "multi-lock" (however the lockables implement it). The text for std::lock states:
All arguments are locked via a sequence of calls to lock(), try_lock(), or unlock() on each argument. The sequence of calls shall not result in deadlock.
If you call std::lock passing ten different lockables, the standard guarantees no deadlock for that call. It's not guaranteed that deadlock is avoided if you lock the lockables outside the control of std::lock. That means thread 1 locking A then B can deadlock against thread 2 locking B then A. That was the case in your original third example, which had (pseudo-code):
Thread 1 Thread 2
lock A lock B
lock B lock A
As that couldn't have been std::lock (it only locked one resource), it must have been something like unique_lock.
The deadlock avoidance will occur if both threads attempt to lock A/B and B/A in a single call to std::lock, as per your first example. Your second example won't deadlock either, since thread 1 will back off if the second lock is needed by a thread 2 that already holds the first lock. Your updated third example:
Thread 1 Thread 2
std::lock(lock1,lock2); std::lock(lock3,lock4);
std::lock(lock3,lock4); std::lock(lock1,lock2);
still has the possibility of deadlock, since the atomic unit is a single call to std::lock. For example, if thread 1 successfully locks lock1 and lock2, then thread 2 successfully locks lock3 and lock4, deadlock will ensue as both threads attempt to lock a resource held by the other.
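A sketch of the repair (lock names hypothetical, assuming the lockN objects are mutexes): each thread acquires all four in one call, so std::lock's atomic unit covers the whole contended set:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex lock1, lock2, lock3, lock4;

// Each thread asks std::lock for all four mutexes at once, so the
// deadlock-avoidance algorithm sees the complete set it must acquire.
void worker() {
    std::lock(lock1, lock2, lock3, lock4);
    // ... use all four resources ...
    lock1.unlock(); lock2.unlock(); lock3.unlock(); lock4.unlock();
}

void run_both() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}
```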
So, in answer to your specific questions:
1/ Yes, I think you've misunderstood what the standard is saying. The sequence it talks about is clearly the sequence of locks carried out on the individual lockables passed to a single std::lock.
2/ As to what they were thinking, it's sometimes hard to tell :-) But I would posit that they wanted to give us capabilities that we would otherwise have to write ourselves. Yes, back-off-and-retry may not be an ideal strategy but, if you need the deadlock avoidance functionality, you may have to pay the price. Better for the implementation to provide it rather than it having to be written over and over again by developers.
3/ No, there's no need to avoid it. I don't think I've ever found myself in a situation where simple manual ordering of locks wasn't possible but I don't discount the possibility. If you do find yourself in that situation, this can assist (so you don't have to code up your own deadlock avoidance stuff).
In regard to the comments that back-off-and-retry is a problematic strategy: yes, that's correct. But you may be missing the point that it may be necessary if, for example, you cannot enforce the ordering of the locks beforehand.
And it doesn't have to be as bad as you think. Because the locks can be done in any order by std::lock, there's nothing stopping the implementation from re-ordering after each back-off to bring the "failing" lockable to the front of the list. That would mean those that were locked would tend to gather at the front, so that std::lock would be less likely to claim resources unnecessarily.
Consider the call std::lock(a, b, c, d, e, f) in which f was the only lockable that was already locked. In the first lock attempt, that call would lock a through e, then "fail" on f.
Following the back-off (unlocking a through e), the list to lock would be changed to f, a, b, c, d, e, so that subsequent iterations would be less likely to lock unnecessarily. That's not fool-proof, since other resources may be locked or unlocked between iterations, but it tends toward success.
In fact, it may even order the list initially by checking the states of all lockables, so that those currently locked are at the front. That would start the "tending toward success" operation earlier in the process.
That's just one strategy; there may well be others, even better. That's why the standard didn't mandate how it was to be done, on the off-chance there's some genius out there who comes up with a better way.
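A sketch of one such strategy, shrunk to two lockables (real implementations are variadic, and nothing here is mandated by the standard): lock the first, try the second, and on failure back off and retry with the previously failing lockable first:

```cpp
#include <cassert>
#include <mutex>

// Hedged sketch of back-off-and-retry for two lockables. On success the
// caller owns both locks; on contention everything is released (the
// back-off) and the retry starts from the lockable that failed.
template <class L0, class L1>
void lock_two_backoff(L0& l0, L1& l1) {
    while (true) {
        {
            std::unique_lock<L0> u0(l0);
            if (l1.try_lock()) {
                u0.release();            // keep l0 locked; caller owns both
                return;
            }
        }                                // u0 destructor unlocks l0: the back-off
        {
            std::unique_lock<L1> u1(l1); // retry, "failing" lockable first
            if (l0.try_lock()) {
                u1.release();
                return;
            }
        }
    }
}
```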
Perhaps it would help if you thought of each individual call to std::lock(x, y, ...) as atomic. It will block until it can lock all of its arguments. If you don't know all of the mutexes you need to lock a priori, do not use this function. If you do know, then you can safely use this function without having to order your locks.
But by all means order your locks if that is what you prefer to do.
Thread 1 Thread 2
std::lock(lock1, lock2); std::lock(lock2, lock1);
The above will not deadlock. One of the threads will get both locks, and the other thread will block until the first one has released the locks.
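That claim can be exercised directly (a sketch; the loop and counter are mine, not the questioner's code): two threads lock the same pair in opposite argument orders many times, and both always complete:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex lockA, lockB;
int counter = 0;  // protected by holding both mutexes

// Opposite argument orders are safe: std::lock's deadlock avoidance
// guarantees each call eventually acquires both mutexes.
void bump(bool a_first, int n) {
    for (int i = 0; i < n; ++i) {
        if (a_first) std::lock(lockA, lockB);
        else         std::lock(lockB, lockA);
        ++counter;
        lockA.unlock();
        lockB.unlock();
    }
}

int run_demo(int n) {
    counter = 0;
    std::thread t1(bump, true, n), t2(bump, false, n);
    t1.join();
    t2.join();
    return counter;
}
```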
Thread 1 Thread 2
std::lock(lock1, lock2, lock3, lock4); std::lock(lock3, lock4);
std::lock(lock1, lock2);
The above will not deadlock, though it is tricky. If Thread 2 gets lock3 and lock4 before Thread 1 does, then Thread 1 will block until Thread 2 releases all 4 locks. If Thread 1 gets the four locks first, then Thread 2 will block at the point of locking lock3 and lock4 until Thread 1 releases all 4 locks.
Thread 1 Thread 2
std::lock(lock1,lock2); std::lock(lock3,lock4);
std::lock(lock3,lock4); std::lock(lock1,lock2);
Yes, the above can deadlock. You can view the above as exactly equivalent to:
Thread 1 Thread 2
lock12.lock(); lock34.lock();
lock34.lock(); lock12.lock();
Update
I believe a misunderstanding is that dead-lock and live-lock are both correctness issues.
In actual practice, dead-lock is a correctness issue, as it causes the process to freeze. And live-lock is a performance issue, as it causes the process to slow down, but it still completes its task correctly. The reason is that live-lock will not (in practice) sustain itself indefinitely.
<disclaimer>
There are forms of live-lock that can be created which are permanent, and thus equivalent to dead-lock. This answer does not address such code, and such code is not relevant to this issue.
</disclaimer>
The yield shown in this answer is an important optimization: it significantly decreases live-lock, and thus significantly increases the performance of std::lock(x, y, ...).
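To show where that yield sits, here is a two-lockable sketch of the loop (simplified from the variadic version; an illustration, not the answer's exact code): the yield runs on the back-off path, giving the thread that holds the contended lock time to finish and release it:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Back-off-and-retry with a yield: after releasing everything, yield
// the CPU so the other thread can complete its critical section,
// which sharply reduces live-lock churn.
template <class L0, class L1>
void lock_two_yield(L0& l0, L1& l1) {
    while (true) {
        l0.lock();
        if (l1.try_lock())
            return;                    // caller now owns both
        l0.unlock();
        std::this_thread::yield();     // the optimization in question
        l1.lock();
        if (l0.try_lock())
            return;
        l1.unlock();
        std::this_thread::yield();
    }
}
```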
Update 2
After a long delay, I have written a first draft of a paper on this subject. The paper compares 4 different ways of getting this job done. It contains software you can copy and paste into your own code and test yourself:
http://howardhinnant.github.io/dining_philosophers.html