Is std::mutex sequentially consistent?

情书的邮戳 2021-01-04 05:48

Say, I have two threads A and B writing to global Boolean variables fA and fB respectively, which are initially set to false …

2 Answers
  • 2021-01-04 06:25

    Is it possible to observe the modifications on fA and fB in different orders in different threads C and D?

    The basic idea of a lock "acquiring" the "released" state (and side-effect history) of an unlock makes that impossible: you promise to only access a shared object while holding the corresponding lock, and that lock will "synchronize with" all past modifications seen by the thread that did the unlock. So only one history can exist, not just of lock/unlock operations but of all accesses to the shared objects.
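
    A minimal sketch of that guarantee (the mutex and variable names here are mine, not from the question), with one mutex protecting one shared int:

        #include <cassert>
        #include <mutex>
        #include <thread>

        std::mutex m;
        int shared = 0;   // only ever accessed while holding m

        int main() {
            std::thread writer([] {
                std::lock_guard<std::mutex> g(m);
                shared = 42;   // happens before the unlock() at scope exit
            });
            std::thread reader([] {
                std::lock_guard<std::mutex> g(m);
                // This lock() synchronizes with the prior unlock(), so if the
                // writer ran first, its write to `shared` must be visible here.
                assert(shared == 0 || shared == 42);
            });
            writer.join();
            reader.join();
        }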

  • 2021-01-04 06:31

    That output isn't possible, but std::mutex is not necessarily sequentially consistent; acquire/release ordering is enough to rule that behaviour out.

    std::mutex is not defined in the standard to be sequentially consistent, only that

    30.4.1.2 Mutex types [thread.mutex.requirements.mutex]

    11 Synchronization: Prior unlock() operations on the same object shall synchronize with (1.10) this operation [lock()].

    Synchronize-with seems to be defined in the same way as std::memory_order::release/acquire (see this question).
    As far as I can see, an acquire/release spinlock would satisfy the standard's requirements for std::mutex.
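
    A minimal sketch of such a spinlock (my own illustration, not anything from the standard): the only ordering it uses is an acquire exchange in lock() and a release store in unlock().

        #include <atomic>

        class SpinLock {
            std::atomic<bool> locked{false};
        public:
            void lock() {
                // Acquire: synchronizes with the release in unlock(), so this
                // thread sees everything published by the previous lock holder.
                while (locked.exchange(true, std::memory_order_acquire)) {
                    // spin until the previous holder clears the flag
                }
            }
            void unlock() {
                // Release: publishes this thread's writes to the next successful lock().
                locked.store(false, std::memory_order_release);
            }
        };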

    Big edit:

    However, I don't think that means what you think (or what I thought). The output is still not possible, since acquire/release semantics are enough to rule it out. This is a subtle point that is better explained here. It seems obviously impossible at first, but I think it's right to be cautious with things like this.

    From the standard, unlock() synchronises with lock(), which means anything that happened before the unlock() is visible after the lock(). Happens-before (henceforth ->) is a slightly weird relation, explained better in the above link, but because there are mutexes around everything in this example it works the way you expect: const auto _1 = fA; happens before const auto _2 = fB;, and any changes visible to a thread when it unlock()s a mutex are visible to the next thread that lock()s that mutex. It also has the properties you'd expect, e.g. if X happens before Y and Y happens before Z, then X -> Z; and if X happens before Y, then Y does not happen before X.
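
    Since the question body is cut off above, here is a rough reconstruction of the kind of program being discussed; the mutex names mA/mB, the thread bodies, and the output format are my assumptions, inferred from the reasoning below.

        #include <cstdio>
        #include <mutex>
        #include <thread>

        bool fA = false, fB = false;   // the two flags, initially false
        std::mutex mA, mB;             // one mutex per flag

        int main() {
            std::thread A([] { std::lock_guard<std::mutex> l(mA); fA = true; });
            std::thread B([] { std::lock_guard<std::mutex> l(mB); fB = true; });
            std::thread C([] {   // reads fA first, then fB
                bool a, b;
                { std::lock_guard<std::mutex> l(mA); a = fA; }
                { std::lock_guard<std::mutex> l(mB); b = fB; }
                std::printf("C: fA=%d fB=%d\n", a, b);
            });
            std::thread D([] {   // reads fB first, then fA
                bool a, b;
                { std::lock_guard<std::mutex> l(mB); b = fB; }
                { std::lock_guard<std::mutex> l(mA); a = fA; }
                std::printf("D: fA=%d fB=%d\n", a, b);
            });
            A.join(); B.join(); C.join(); D.join();
        }

    The output being ruled out is C printing fA=1 fB=0 while D prints fA=0 fB=1, i.e. the two readers disagreeing about which writer "went first".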

    From here it's not hard to derive the contradiction that intuition suggests.

    In short, there is a well-defined order of operations for each mutex: e.g. for mutex mA, threads A, C and D hold the lock in some sequence. For thread D to print fA=0, it must lock mA before thread A does; conversely, for thread C to print fA=1, it must lock mA after thread A. So the lock sequence for mA is D(mA) -> A(mA) -> C(mA).

    For mutex B the sequence must be C(mB) -> B(mB) -> D(mB).

    But from the program we know C(mA) -> C(mB), so that lets us put both together to get D(mA) -> A(mA) -> C(mA) -> C(mB) -> B(mB) -> D(mB), which means D(mA) -> D(mB). But the code also gives us D(mB) -> D(mA), which is a contradiction, meaning your observed output is not possible.

    This outcome is no different for an acquire/release spinlock. I think everyone was confusing a regular acquire/release memory access on a variable with access to a variable protected by a spinlock. The difference is that with a spinlock the reading threads also perform a compare/exchange and a release write, which is a completely different scenario from a single release write and acquire read.
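
    For contrast, a sketch of that "single release write and acquire read" scenario (again my own example, not from the question): the same four threads, but with plain acquire/release atomics and no lock. With acquire/release alone the two readers are allowed to disagree about the order of the two independent writes (the classic IRIW litmus test); making the accesses seq_cst, or guarding them with mutexes as above, rules that out.

        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<bool> fA{false}, fB{false};

        int main() {
            std::thread A([] { fA.store(true, std::memory_order_release); });
            std::thread B([] { fB.store(true, std::memory_order_release); });
            std::thread C([] {
                bool a = fA.load(std::memory_order_acquire);   // may see 1 ...
                bool b = fB.load(std::memory_order_acquire);   // ... while fB is still 0
                std::printf("C: fA=%d fB=%d\n", a, b);
            });
            std::thread D([] {
                bool b = fB.load(std::memory_order_acquire);   // may see 1 ...
                bool a = fA.load(std::memory_order_acquire);   // ... while fA is still 0
                std::printf("D: fA=%d fB=%d\n", a, b);
            });
            A.join(); B.join(); C.join(); D.join();
        }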

    Using a sequentially consistent spinlock wouldn't change this outcome either. The only difference is that you could then always categorically answer questions like "was mutex A locked before mutex B?" from a separate thread that didn't acquire either lock. But for this example, and most others, that kind of statement isn't useful, which is why acquire/release is the standard choice.
