Interlocked and Memory Barriers

情书的邮戳 2021-02-02 18:04

I have a question about the following code sample (m_value isn't volatile, and every thread runs on a separate processor):

void Foo() // executed by thread #1
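
A minimal sketch of the scenario being asked about, assuming (as the title and the answer below suggest) that Foo() publishes m_value with Interlocked.Exchange and that Bar() then reads it on a second thread; the field and method names come from the question, the method bodies are assumptions:

    using System.Threading;

    class Sample
    {
        private int m_value; // deliberately not volatile

        public void Foo() // executed by thread #1
        {
            // assumption: the write goes through Interlocked.Exchange, per the title
            Interlocked.Exchange(ref m_value, 1);
        }

        public bool Bar() // executed by thread #2, after Foo() has finished
        {
            return m_value == 1; // is this guaranteed to observe the 1?
        }
    }
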
7 Answers
  •  执念已碎
    2021-02-02 18:54

    If you don't tell the compiler or runtime that m_value should not be read ahead of Bar(), it can cache the value of m_value ahead of Bar() and simply reuse the cached value. If you want to ensure that Bar() sees the "latest" value of m_value, either shove in a Thread.MemoryBarrier() before the read or use Thread.VolatileRead(ref m_value). The latter is less expensive than a full memory barrier.
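
    For concreteness, a sketch of those two options on the reader side (m_value is assumed to be an int field, as the question implies; the class wrapper and the second method name are just for the example):

        using System.Threading;

        class Reader
        {
            private int m_value; // not volatile, as in the question

            // Option 1: Thread.MemoryBarrier() is a full fence; memory accesses
            // cannot move across it in either direction, so the read below cannot
            // be satisfied by a value cached before the barrier.
            public bool Bar()
            {
                Thread.MemoryBarrier();
                return m_value == 1;
            }

            // Option 2: a volatile read of just this one field.
            public bool BarWithVolatileRead()
            {
                return Thread.VolatileRead(ref m_value) == 1;
            }
        }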

    Ideally you could shove in a ReadBarrier, but the CLR doesn't seem to support that directly.
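
    As an aside, .NET 4.5 and later come closer to this with System.Threading.Volatile.Read, which performs a load with acquire semantics (a half fence rather than the full fence of Thread.MemoryBarrier()). A minimal sketch, assuming the same int field:

        using System.Threading;

        class ReaderWithAcquire
        {
            private int m_value;

            public bool Bar()
            {
                // Volatile.Read is an acquire read: it performs a real load (the
                // JIT cannot reuse an earlier cached value for it), and later
                // reads and writes cannot be reordered in front of it.
                return Volatile.Read(ref m_value) == 1;
            }
        }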

    EDIT: Another way to think about it is that there are really two kinds of memory barriers: compiler memory barriers, which tell the compiler how to sequence reads and writes, and CPU memory barriers, which tell the CPU how to sequence reads and writes. The Interlocked functions use CPU memory barriers. Even if the compiler treated them as compiler memory barriers too, it still wouldn't matter: in this specific case, Bar() could have been compiled separately, with no knowledge of the other uses of m_value that would require a compiler memory barrier.
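
    The compiler-barrier half of this is easiest to see in a polling loop; the following sketch is an illustration rather than code from the question:

        using System.Threading;

        class Polling
        {
            private int m_value;

            // Broken: nothing tells the compiler that m_value can change behind
            // its back, so the JIT may hoist the load out of the loop and spin
            // forever on a stale cached value.
            public void WaitForFlagBroken()
            {
                while (m_value == 0) { }
            }

            // Fixed: a volatile read forces a fresh load on every iteration.
            public void WaitForFlagFixed()
            {
                while (Volatile.Read(ref m_value) == 0) { }
            }
        }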
