WaitForSingleObject - do threads waiting form a queue?

庸人自扰 2021-01-18 06:04

If I set 3 threads to wait for a mutex to be released, do they form a queue based on the order in which they requested it, or is it undefined behaviour (i.e. we don't know which one will acquire the mutex next)?

5 Answers
  • 2021-01-18 06:22

    It is explicitly documented in the SDK article:

    If more than one thread is waiting on a mutex, a waiting thread is selected. Do not assume a first-in, first-out (FIFO) order. External events such as kernel-mode APCs can change the wait order.

    These kinds of events are entirely outside of your control, so "undefined behavior" is an appropriate way to describe it; a short demonstration follows.
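    A minimal sketch (my own illustration, not from the SDK article) of the situation: three worker threads block on the same mutex, and the order in which they acquire it is not guaranteed to match the order in which they started waiting. The `Worker` function and handle names are invented for the example.

        // Three threads wait on one kernel mutex; the acquisition order is unspecified.
        #include <windows.h>
        #include <cstdio>

        HANDLE g_mutex;  // kernel mutex shared by all workers

        DWORD WINAPI Worker(LPVOID param)
        {
            int id = (int)(INT_PTR)param;
            // Block until the mutex can be acquired; which waiter wins is not FIFO.
            WaitForSingleObject(g_mutex, INFINITE);
            printf("thread %d acquired the mutex\n", id);
            Sleep(10);               // hold it briefly
            ReleaseMutex(g_mutex);
            return 0;
        }

        int main()
        {
            g_mutex = CreateMutex(NULL, TRUE, NULL);  // created already owned, so the workers queue up
            HANDLE threads[3];
            for (int i = 0; i < 3; ++i) {
                threads[i] = CreateThread(NULL, 0, Worker, (LPVOID)(INT_PTR)i, 0, NULL);
                Sleep(50);           // stagger the waits so the arrival order is obvious
            }
            ReleaseMutex(g_mutex);   // release ownership; one of the waiters is selected
            WaitForMultipleObjects(3, threads, TRUE, INFINITE);
            for (int i = 0; i < 3; ++i) CloseHandle(threads[i]);
            CloseHandle(g_mutex);
            return 0;
        }

    Even with the staggered start, nothing about the output order is guaranteed; as the documentation says, external events such as kernel-mode APCs can change the wait order at any time.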

  • 2021-01-18 06:30

    There seem to be very mixed opinions about this and no clear information anywhere. In this thread: http://us.generation-nt.com/answer/are-events-fair-help-38447612.html some people suggest that fairness of events is implemented with a simple FIFO queue that ignores priorities, while others say that fairness should not be assumed.

    Bottom line: I think you're better off either not basing your logic on fairness, or wrapping the event with your own implementation that guarantees fairness (a sketch of one such wrapper follows).
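    For the second option, here is a hypothetical sketch of a strictly FIFO lock built from a critical section plus one auto-reset event per waiter; ownership is handed directly to the oldest waiter on unlock, so the wake order matches the arrival order. The `FifoLock` class and its member names are invented for illustration and are not a standard Windows primitive.

        #include <windows.h>
        #include <deque>

        class FifoLock {
            CRITICAL_SECTION cs_;          // protects the queue and the 'held' flag
            std::deque<HANDLE> waiters_;   // per-waiter auto-reset events, in arrival order
            bool held_ = false;
        public:
            FifoLock()  { InitializeCriticalSection(&cs_); }
            ~FifoLock() { DeleteCriticalSection(&cs_); }

            void Lock() {
                HANDLE myEvent = NULL;
                EnterCriticalSection(&cs_);
                if (!held_) {
                    held_ = true;          // uncontended: take the lock immediately
                } else {
                    myEvent = CreateEvent(NULL, FALSE, FALSE, NULL);  // auto-reset, initially nonsignaled
                    waiters_.push_back(myEvent);
                }
                LeaveCriticalSection(&cs_);
                if (myEvent) {
                    WaitForSingleObject(myEvent, INFINITE);  // woken strictly in FIFO order
                    CloseHandle(myEvent);
                }
            }

            void Unlock() {
                HANDLE next = NULL;
                EnterCriticalSection(&cs_);
                if (waiters_.empty()) {
                    held_ = false;
                } else {
                    next = waiters_.front();   // hand ownership to the oldest waiter
                    waiters_.pop_front();      // 'held_' stays true: ownership transfers directly
                }
                LeaveCriticalSection(&cs_);
                if (next) SetEvent(next);
            }
        };

    Note that strict FIFO fairness forces every contended acquire through a block/wake cycle, which can hurt throughput under load, so it is only worth doing where ordering genuinely matters.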

  • 2021-01-18 06:36

    Yes, only one waiting thread will be woken up and will acquire the mutex, but the order is undefined.

  • 2021-01-18 06:41

    The wake-up order is undefined, see

    Can a single SetEvent() trigger multiple WaitForSingleObject()

  • 2021-01-18 06:43

    The mutex object is mostly fair. The APC case can occur, but it is not that common, especially if the thread is not doing I/O, or does its I/O through completion ports or synchronously.

    Most of the Windows user-mode locks (SRWLock, CriticalSection) are unfair if you can acquire them without blocking but fair if you have to block in the kernel. The reason it is done this way is to avoid lock convoys. The moment a fair lock becomes contended, every acquirer has to go through the scheduler and the context switch path before getting the lock. No one can 'skip ahead' and just take the lock because they happen to be running. Thus the lock acquire time for the last thread in the queue increases by the scheduling and context switch time for each prior thread in the queue. The system does not recover from this state until external load is mostly removed because this is a stable condition.

    For performance, I would recommend one of the aforementioned user-mode locks, since they are much faster than a kernel mutex, if they fit your scenario; a minimal SRWLock sketch follows.
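    As an illustration, here is a minimal SRWLock sketch in exclusive mode (available since Windows Vista); the function and variable names are invented for the example.

        #include <windows.h>

        SRWLOCK g_lock = SRWLOCK_INIT;   // statically initialized, no handle to create or close
        int g_sharedCounter = 0;

        void BumpCounter()
        {
            AcquireSRWLockExclusive(&g_lock);   // no kernel transition on the uncontended path
            ++g_sharedCounter;
            ReleaseSRWLockExclusive(&g_lock);
        }

    Unlike a kernel mutex, an SRWLock cannot be shared across processes and cannot be passed to WaitForSingleObject, so it only fits intra-process locking.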
