Threads and a simple deadlock cure

余生分开走 2021-01-04 22:48

When dealing with threads (specifically in C++) using mutex locks and semaphores, is there a simple rule of thumb to avoid deadlocks and have nice, clean synchronization?

9 answers
  • 2021-01-04 23:07

    There is no simple deadlock cure.

    Acquire locks in an agreed order: if all calls acquire A->B->C then no deadlock can occur. Deadlocks can occur only if the locking order differs between two threads (one acquires A->B, the second B->A).

    In practice it is hard to choose an order between arbitrary objects in memory. On a simple, trivial project it is possible, but on large projects with many individual contributors it is very hard. A partial solution is to create hierarchies by ranking the locks: all locks in module A have rank 1, all locks in module B have rank 2. One can acquire a lock of rank 2 while holding locks of rank 1, but not vice versa. Of course you need a framework around the locking primitives that tracks and validates the ranking; a sketch follows.
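
    One possible shape for such a framework, assuming a hypothetical RankedMutex wrapper around std::mutex (the rank-tracking scheme and names here are illustrative, not a production design):

        #include <cassert>
        #include <mutex>

        class RankedMutex {
        public:
            explicit RankedMutex(int rank) : rank_(rank) {}

            void lock() {
                // A thread may only acquire a lock whose rank is strictly higher
                // than the rank of any lock it already holds.
                assert(rank_ > current_rank_ && "rank violation: potential deadlock");
                mtx_.lock();
                previous_rank_ = current_rank_;
                current_rank_ = rank_;
            }

            void unlock() {
                current_rank_ = previous_rank_;   // assumes locks are released in LIFO order
                mtx_.unlock();
            }

        private:
            std::mutex mtx_;
            const int rank_;
            int previous_rank_ = 0;
            static thread_local int current_rank_;
        };

        thread_local int RankedMutex::current_rank_ = 0;

        RankedMutex module_a_lock(1);   // rank 1
        RankedMutex module_b_lock(2);   // rank 2: may be taken while holding rank 1, never the reverse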

  • 2021-01-04 23:11

    Read Deadlock: the Problem and a Solution.

    "The common advice for avoiding deadlock is to always lock the two mutexes in the same order: if you always lock mutex A before mutex B, then you'll never deadlock. Sometimes this is straightforward, as the mutexes are serving different purposes, but other times it is not so simple, such as when the mutexes are each protecting a separate instance of the same class".

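    A sketch of the harder case the quote mentions, where two mutexes each protect a separate instance of the same class (the Account class and its members are illustrative only); std::lock acquires both mutexes without deadlocking no matter which object is passed first:

        #include <mutex>
        #include <utility>

        class Account {
        public:
            friend void swap_balances(Account& lhs, Account& rhs) {
                if (&lhs == &rhs) return;                    // never lock the same mutex twice
                std::lock(lhs.m_, rhs.m_);                   // deadlock-free acquisition of both
                std::lock_guard<std::mutex> la(lhs.m_, std::adopt_lock);
                std::lock_guard<std::mutex> lb(rhs.m_, std::adopt_lock);
                std::swap(lhs.balance_, rhs.balance_);
            }

        private:
            std::mutex m_;
            long balance_ = 0;
        };

    In C++17 the std::lock call and the two std::lock_guard lines can be collapsed into a single std::scoped_lock taking both mutexes.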
  • 2021-01-04 23:12
    1. If at all possible, design your code so that you never have to lock more than a single mutex/semaphore at a time.
    2. If that's not possible, make sure to always lock multiple mutexes/semaphores in the same order. So if one part of the code locks mutex A and then takes semaphore B, make sure that no other part of the code takes semaphore B and then locks mutex A (see the sketch after this list).
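
    A minimal sketch of rule 2, assuming C++20's std::counting_semaphore stands in for the semaphore (the names are illustrative): every code path agrees that mutex A is taken before semaphore B.

        #include <mutex>
        #include <semaphore>

        std::mutex a;
        std::counting_semaphore<1> b(1);

        void worker_one() {
            std::lock_guard<std::mutex> lock(a);   // A first...
            b.acquire();                           // ...then B
            // ... work needing both ...
            b.release();
        }

        void worker_two() {
            // Also A before B. Taking B first here could deadlock against worker_one.
            std::lock_guard<std::mutex> lock(a);
            b.acquire();
            // ... work needing both ...
            b.release();
        }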
  • 2021-01-04 23:18

    If you want to attack the possibility of a deadlock, you must attack one of the four conditions that must all hold for a deadlock to exist.

    The four conditions for a deadlock are:

    1. Mutual exclusion - only one thread can hold a given resource (enter the critical section) at a time.
    2. Hold and wait - a thread keeps the resources it has already acquired while it waits for additional resources that are unavailable.
    3. No preemption - a resource cannot be forcibly taken away from a thread; it is only released voluntarily once the thread is done with it.
    4. Resource cycle (circular wait) - there is a cycle of threads, each waiting for a resource held by the next.

    The easiest condition to attack is the resource cycle: impose an order on resource acquisition so that no cycle can form.

  • 2021-01-04 23:21

    A good simple rule of thumb is to always obtain your locks in a consistent, predictable order from everywhere in your application. For example, if your resources have names, always lock them in alphabetical order. If they have numeric ids, always lock from lowest to highest. The exact order or criteria is arbitrary; the key is to be consistent. That way you'll never end up in a deadlock situation such as the following:

    1. Thread 1 locks resource A
    2. Thread 2 locks resource B
    3. Thread 1 waits to obtain a lock on B
    4. Thread 2 waits to obtain a lock on A
    5. Deadlock

    The above can never happen if you follow the rule of thumb outlined above. For a more detailed discussion, see the Wikipedia entry on the Dining Philosophers problem.
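
    A sketch of the "lowest first" rule for resources that have no natural names, ordering by address through std::less so the comparison is a total order (the Resource type is illustrative, and it assumes the two arguments are distinct objects):

        #include <functional>
        #include <mutex>

        struct Resource {
            std::mutex m;
            // ... data ...
        };

        void use_both(Resource& x, Resource& y) {
            bool x_first = std::less<Resource*>{}(&x, &y);  // consistent order for any pair
            Resource& first  = x_first ? x : y;
            Resource& second = x_first ? y : x;
            std::lock_guard<std::mutex> l1(first.m);
            std::lock_guard<std::mutex> l2(second.m);
            // ... work with x and y ...
        }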

  • 2021-01-04 23:21

    There are plenty of simple "deadlock cures". But none that are easy to apply and work universally.

    The simplest of all, of course, is "never have more than one thread".

    Assuming you have a multithreaded application though, there are still a number of solutions:

    You can try to minimize shared state and synchronization. Two threads that just run in parallel and never interact can never deadlock. Deadlocks only occur when multiple threads try to access the same resource. Why do they do that? Can that be avoided? Can the resource be restructured or divided so that for example, one thread can write to it, and other threads are asynchronously passed the data they need?

    Perhaps the resource can be copied, giving each thread its own private copy to work with?
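
    A sketch of the copy idea: each worker thread captures its own copy of the input, so there is no shared mutable state and nothing to lock (the names are illustrative).

        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            std::vector<int> input{1, 2, 3, 4};
            long sum1 = 0, sum2 = 0;

            // Each lambda captures `input` by value, giving its thread a private copy.
            std::thread t1([input, &sum1] { sum1 = std::accumulate(input.begin(), input.end(), 0L); });
            std::thread t2([input, &sum2] { sum2 = std::accumulate(input.begin(), input.end(), 0L); });
            t1.join();
            t2.join();
            // sum1 and sum2 were produced concurrently without a single lock.
        }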

    And as already mentioned by every other answer, if and when you try to acquire locks, do so in a globally consistent order. To simplify this, you should try to ensure that all the locks a thread is going to need are acquired as a single operation. If a thread needs to acquire locks A, B and C, it should not make three lock() calls at different times and from different places. You'll get confused, you won't be able to keep track of which locks are held by the thread and which ones it has yet to acquire, and then you'll mess up the order. If you can acquire all the locks you need at once, then you can factor that out into a separate function call which acquires the N locks and does so in the correct order to avoid deadlocks.
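
    One way to express "acquire everything in one operation", assuming C++17's std::scoped_lock (the mutex names are illustrative):

        #include <mutex>

        std::mutex a, b, c;

        void do_work_needing_all_three() {
            std::scoped_lock all(a, b, c);   // one call acquires A, B and C deadlock-free
            // ... the critical work ...
        }                                    // all three released together here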

    Then there are the more ambitious approaches: techniques like CSP make threading extremely simple and easy to prove correct, even with thousands of concurrent threads. But they require you to structure your program very differently from what you're used to.

    Transactional Memory is another promising option, and one that may be easier to integrate into conventional programs. But production-quality implementations are still very rare.
