Do mutexes guarantee ordering of acquisition?

隐瞒了意图╮ 2020-12-09 15:35

A coworker had an issue recently that boiled down to what we believe was the following sequence of events in a C++ application with two threads:

  • Thread A holds the mutex.
  • While thread A is holding the mutex, thread B attempts to lock it. Since it is held, thread B is suspended.
  • Thread A finishes the work that it was holding the mutex for, thus releasing the mutex.
  • Very shortly thereafter, thread A needs to touch a resource that is protected by the mutex, so it locks it again.
  • Thread A appears to reacquire the mutex immediately, and thread B remains blocked.

Is thread B guaranteed to acquire the mutex before thread A can lock it again? In other words, do mutexes guarantee any ordering of acquisition among waiting threads?

6 Answers
  • 2020-12-09 16:14

    The logic here is very simple: releasing a mutex does not cause the releasing thread to be preempted, because that would impose a cost on every mutex operation, which is definitely not what you want. The cost of grabbing a mutex is high enough without also forcing the scheduler to look for other threads to run.

    If you want to work around this you can always yield the current thread. You can use std::this_thread::yield() - http://en.cppreference.com/w/cpp/thread/yield - which might give thread B the chance to take over the mutex. But before you do that, allow me to tell you that this is a very fragile way of doing things and offers no guarantee (see the sketch after this list). You could, alternatively, investigate the issue more deeply:

    1. Why is it a problem that thread B is not scheduled as soon as A releases the resource? Your code should not depend on such ordering.

    2. Consider using alternative thread synchronization objects like barriers (boost::barrier or http://linux.die.net/man/3/pthread_barrier_wait ) instead, if you really need this sort of logic.

    3. Investigate whether you really need to release the mutex from A at that point - I consider repeatedly locking and releasing a mutex in quick succession a code smell; it usually hurts performance badly. See if you can group the extracted data into immutable structures that you can then work with outside the lock.

    4. Ambitious, but try to work without mutexes - use lock-free structures and a more functional approach instead, including plenty of immutable structures. I have often seen quite a performance gain from updating code to avoid mutexes (while still being correct from the multithreading point of view).
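
    As promised above, a minimal sketch of the yield approach; the names mtx, shared_data and thread_a_work are assumptions for illustration, and the yield is only a hint to the scheduler - there is still no guarantee that thread B wins the subsequent race for the mutex:

    #include <mutex>
    #include <thread>

    std::mutex mtx;               // hypothetical mutex shared with thread B
    int shared_data = 0;          // hypothetical resource it protects

    void thread_a_work()
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            shared_data += 1;                      // work done under the lock
        }                                          // mutex released here
        std::this_thread::yield();                 // hint: give thread B a chance to run
        {
            std::lock_guard<std::mutex> lock(mtx); // A may still win this race
            shared_data += 1;
        }
    }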

  • 2020-12-09 16:15

    The guarantee of a std::mutex is to enable exclusive access to shared resources. Its sole purpose is to eliminate the race condition when multiple threads attempt to access shared resources.

    The implementer of a mutex may choose to favor the current thread acquiring a mutex (over another thread) for performance reasons. Allowing the current thread to acquire the mutex and make forward progress without requiring a context switch is often a preferred implementation choice supported by profiling/measurements.

    Alternatively, the mutex could be constructed to prefer another (blocked) thread for acquisition (perhaps chosen according to FIFO order). This likely requires a thread context switch (on the same or another processor core), increasing latency/overhead. NOTE: FIFO mutexes can behave in surprising ways, e.g. thread priorities must be considered in FIFO support, so acquisition won't be strictly FIFO unless all competing threads have the same priority.

    Adding a FIFO requirement to a mutex's definition constrains implementers to provide suboptimal performance in nominal workloads. (see above)
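
    To make that trade-off concrete, here is a minimal sketch of a hypothetical FIFO "ticket" mutex (the class name and the spin-and-yield waiting strategy are assumptions for illustration; this is not how std::mutex is specified or typically implemented). Each thread takes a ticket and waits until its number is served, which buys strict ordering at the cost of extra waiting and context switches:

    #include <atomic>
    #include <thread>

    class ticket_mutex {
        std::atomic<unsigned> next_ticket{0};   // next ticket to hand out
        std::atomic<unsigned> now_serving{0};   // ticket currently allowed to proceed
    public:
        void lock() {
            const unsigned my = next_ticket.fetch_add(1, std::memory_order_relaxed);
            while (now_serving.load(std::memory_order_acquire) != my)
                std::this_thread::yield();      // wait for our turn, in strict FIFO order
        }
        void unlock() {
            now_serving.fetch_add(1, std::memory_order_release);  // pass the turn on
        }
    };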

    Protecting a queue of callable objects (std::function) with a mutex would enable sequenced execution. Multiple threads can acquire the mutex, enqueue a callable object, and release the mutex. The callable objects can be executed by a single thread (or a pool of threads if synchrony is not required).
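
    A minimal sketch of that idea, with hypothetical names (task_queue, enqueue, worker) and no shutdown handling for brevity: producers hold the mutex only long enough to enqueue, and a single worker thread runs the tasks in the order they were enqueued.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>

    std::mutex qm;
    std::condition_variable qcv;
    std::queue<std::function<void()>> task_queue;

    void enqueue(std::function<void()> task)   // called from any thread
    {
        {
            std::lock_guard<std::mutex> lock(qm);
            task_queue.push(std::move(task));
        }
        qcv.notify_one();
    }

    void worker()                              // single consumer thread
    {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(qm);
                qcv.wait(lock, [] { return !task_queue.empty(); });
                task = std::move(task_queue.front());
                task_queue.pop();
            }
            task();                            // execute outside the lock
        }
    }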

  • 2020-12-09 16:18

    How do you know this:

    While thread A is holding the mutex, thread B attempts to lock it. Since it is held, thread B is suspended.

    How do you know thread B is suspended? How do you know that it has not merely finished the line of code before the lock call, but not yet tried to grab the lock:

    Thread B:

    x = 17; // is the thread here?
    // or here? ('between' lines of code)
    mtx.lock();  // or suspended in here?
    // how can you tell?
    

    You can't tell. At least not in theory.

    Thus, as far as the abstract machine (i.e. the language) is concerned, the order in which the lock is acquired is not defined.

  • 2020-12-09 16:30

    Known problem. C++ mutexes are a thin layer on top of OS-provided mutexes, and OS-provided mutexes are often not fair: they make no attempt at FIFO ordering.

    The other side of the same coin is that threads are usually not preempted until they run out of their time slice. As a result, thread A in this scenario most likely kept running, and grabbed the mutex again right away because of that.
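
    A small test that often, though by no means always (the outcome is entirely up to the OS scheduler and the mutex implementation), shows this effect; all names here are made up for the illustration. The "greedy" thread keeps unlocking and immediately relocking, and the waiter measures how long it was kept out:

    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex mtx;

    void greedy()      // plays the role of thread A
    {
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> lock(mtx);
            // unlocks here and immediately relocks on the next iteration
        }
    }

    void waiter()      // plays the role of thread B
    {
        auto start = std::chrono::steady_clock::now();
        std::lock_guard<std::mutex> lock(mtx);
        auto waited = std::chrono::steady_clock::now() - start;
        std::cout << "B waited "
                  << std::chrono::duration_cast<std::chrono::microseconds>(waited).count()
                  << " us for the mutex\n";
    }

    int main()
    {
        std::thread a(greedy);
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // let A start looping
        std::thread b(waiter);
        a.join();
        b.join();
    }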

  • 2020-12-09 16:33

    You are inadvertently suggesting that threads should synchronise access to the synchronisation primitive itself. Mutexes are, as the name suggests, about mutual exclusion. They are not designed for control flow. If you want one thread to signal another thread to run, you need a synchronisation primitive designed for control flow, i.e. a signal.
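
    For illustration, a minimal sketch of signalling thread B with a condition variable instead of relying on mutex acquisition order (the flag name b_may_run and the function names are assumptions):

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool b_may_run = false;                        // the condition thread B waits for

    void thread_a_hand_off()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            b_may_run = true;                      // A explicitly signals that B should proceed
        }
        cv.notify_one();
    }

    void thread_b_wait()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return b_may_run; });   // wakes only after A has signalled
        // ... B proceeds here, currently holding the lock ...
    }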

  • 2020-12-09 16:36

    • Thread A finishes the work that it was holding the mutex for, thus releasing the mutex.
    • Very shortly thereafter, thread A needs to touch a resource that is protected by the mutex, so it locks it again.

    In the real world, while the program is running, there is no such guarantee provided by any threading library or the OS. Here "shortly thereafter" can mean a lot to the OS and the hardware. If you mean two minutes, thread B would almost certainly get it; if you mean 200 ms or less, there is no promise about whether A or B gets it.

    The number of cores, the load on different processors/cores/threading units, contention, thread switching, kernel/user switches, preemption, priorities, deadlock detection schemes etc. all make a big difference. Seeing a traffic light turn green from far away does not guarantee it will still be green when you reach it.

    If you need thread B to get the resource, use an explicit signalling/IPC mechanism to instruct thread B when it may take it.
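
    One way to make that instruction explicit is a binary semaphore used as a hand-off token. This sketch assumes C++20 std::binary_semaphore and made-up function names; older code could use a condition variable or a POSIX semaphore instead:

    #include <semaphore>   // C++20

    std::binary_semaphore resource_for_b{0};   // starts unavailable

    void thread_a()
    {
        // ... finish the work that needed the resource ...
        resource_for_b.release();              // explicitly hand the resource to B
        // A does not touch the resource again until B hands it back
    }

    void thread_b()
    {
        resource_for_b.acquire();              // blocks until A has released it to B
        // ... B now owns the resource ...
    }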
