I have a question about the following code sample (m_value isn't volatile, and every thread runs on a separate processor)
void Foo() // executed by thread
If you don't tell the compiler or runtime that m_value must be re-read, the generated code can read m_value once ahead of Bar(), cache the value in a register, and simply reuse the cached copy. If you want to ensure that the code sees the "latest" version of m_value, either shove in a Thread.MemoryBarrier() or use Thread.VolatileRead(ref m_value). The latter is, in principle, less expensive than a full memory barrier (although note that the CLR actually implements Thread.VolatileRead as an ordinary read followed by a full Thread.MemoryBarrier()). Ideally you could shove in a ReadBarrier, but the CLR doesn't seem to support that directly.
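Here is a minimal sketch of the situation being discussed. Since the original snippet is elided, the loop shape, the Poll/Set method names, and the use of a zero/non-zero convention for m_value are my assumptions; the point is only where the volatile read goes:

```csharp
using System;
using System.Threading;

class Foo
{
    private int m_value; // deliberately not volatile, as in the question

    // Executed by thread #1. Without Thread.VolatileRead (or a
    // Thread.MemoryBarrier() inside the loop), the JIT is free to read
    // m_value once, keep it in a register, and spin forever on the
    // stale cached value.
    public void Poll()
    {
        while (Thread.VolatileRead(ref m_value) == 0)
        {
            Bar();
        }
        Console.WriteLine("observed non-zero m_value");
    }

    // Executed by thread #2; publishes the value with full-fence semantics.
    public void Set() => Interlocked.Exchange(ref m_value, 1);

    private void Bar()
    {
        // Stands in for separately compiled work that, as far as this
        // compilation unit knows, never touches m_value.
    }
}
```

Replacing Thread.VolatileRead with a plain read of m_value is exactly the case where the loop may never observe the other thread's write.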
EDIT: Another way to think about it is that there are really two kinds of memory barriers: compiler memory barriers, which tell the compiler how to sequence reads and writes, and CPU memory barriers, which tell the CPU how to sequence reads and writes. The Interlocked functions use CPU memory barriers. Even if the compiler treated them only as compiler memory barriers, it still wouldn't matter in this specific case: Bar() could have been compiled separately, without knowing about the other uses of m_value that would require a compiler memory barrier.
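To make that distinction concrete, here is a hedged sketch (the class and field names are mine) of using Interlocked for both sides of the exchange, which gives you the CPU fence and keeps the JIT from caching the field across the call:

```csharp
using System.Threading;

class Publisher
{
    private int m_value;

    // Writer thread: Interlocked.Exchange is a full fence on the CPU,
    // and the JIT will not move memory operations across it either.
    public void Publish(int v) => Interlocked.Exchange(ref m_value, v);

    // Reader thread: CompareExchange(ref x, 0, 0) is a common idiom for
    // an atomic, fully fenced read of a field that was not declared
    // volatile (it swaps 0 for 0, i.e. writes nothing, but returns the
    // current value with full-fence semantics).
    public int Read() => Interlocked.CompareExchange(ref m_value, 0, 0);
}
```

This is the sense in which the Interlocked family sidesteps the separately-compiled-Bar() problem: the barrier travels with the access itself rather than relying on the compiler seeing all uses of m_value.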