C# 4 in a Nutshell (highly recommended btw) uses the following code to demonstrate the concept of MemoryBarrier (assuming A and B were run on different threads):
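For reference, the example is essentially this (the Foo class from the book, with the four barriers numbered as the rest of this answer refers to them):

```csharp
using System;
using System.Threading;

class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier();    // Barrier 1
        _complete = true;
        Thread.MemoryBarrier();    // Barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier();    // Barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier();    // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}
```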
Barrier #2 guarantees that the write to _complete gets committed immediately. Otherwise it could remain in a queued state, meaning that the read of _complete in B would not see the change caused by A, even though B effectively used a volatile read.
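To illustrate what "effectively used a volatile read" means, here is a rough equivalent of the same pattern written with the volatile keyword. This is my own sketch, not the book's code, and volatile only gives the write release semantics and the read acquire semantics; it does not promise the immediate commit that Barrier #2 is described as providing.

```csharp
using System;

class FooVolatile
{
    int _answer;
    volatile bool _complete;    // volatile: writes get release semantics, reads get acquire semantics

    void A()
    {
        _answer = 123;
        _complete = true;       // release: the write to _answer cannot be reordered after this write
    }

    void B()
    {
        if (_complete)          // acquire: the read of _answer below cannot be reordered before this read
        {
            Console.WriteLine(_answer);
        }
    }
}
```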
Of course, this example does not quite do justice to the problem because A does nothing more after writing to _complete, which means that the write will be committed promptly anyway since the thread terminates shortly afterward.
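If A kept running after setting the flag, Barrier #2 would carry more weight. A hypothetical variant of A (mine, not the book's) makes the point:

```csharp
void A()
{
    _answer = 123;
    Thread.MemoryBarrier();    // Barrier 1
    _complete = true;
    // Without Barrier 2 here, on a sufficiently weak memory model the store to _complete could,
    // in principle, sit in a write buffer while A continues with the unrelated work below,
    // delaying the moment at which B observes it. The fence forces the store to be committed
    // before A moves on.
    for (int i = 0; i < 1000000; i++)
    {
        // unrelated work that keeps the thread busy after the write
    }
}
```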
The answer to your question of whether the if could still evaluate to false is yes, for exactly the reasons you stated. But notice what the author says regarding this point.
> Barriers 1 and 4 prevent this example from writing “0”. Barriers 2 and 3 provide a freshness guarantee: they ensure that *if B ran after A*, reading _complete would evaluate to true.
The emphasis on "if B ran after A" is mine. It certainly could be the case that the two threads interleave. But the author was ignoring this scenario, presumably to keep his point about how Thread.MemoryBarrier works simple.
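For what it's worth, a tiny harness (mine, not the book's; it assumes A and B are made public on Foo) shows the interleaving: if the scheduler runs B before A has set _complete, the if evaluates to false and nothing is printed.

```csharp
using System.Threading;

class Program
{
    static void Main()
    {
        var foo = new Foo();
        var a = new Thread(foo.A);    // assumes A and B were made public
        var b = new Thread(foo.B);
        a.Start();
        b.Start();
        a.Join();
        b.Join();
    }
}
```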
By the way, I had a hard time contriving an example on my machine where barriers #1 and #2 would have altered the behavior of the program. This is because the memory model regarding writes was strong in my environment. Perhaps if I had a multiprocessor machine, were using Mono, or had some other different setup, I could have demonstrated it. Of course, it was easy to demonstrate that removing barriers #3 and #4 had an impact.
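The read-side effect is the easier one to reproduce. Here is a sketch of the kind of test that shows it (my code, not the book's; run it as an optimized release build without a debugger attached, and the behavior still depends on the JIT):

```csharp
using System;
using System.Threading;

class Program
{
    static bool _complete;          // deliberately not volatile, and no fence on the reading side

    static void Main()
    {
        new Thread(() =>
        {
            Thread.Sleep(1000);
            _complete = true;       // writer thread sets the flag after a delay
        }).Start();

        bool toggle = false;
        while (!_complete)          // without Thread.MemoryBarrier() (or volatile) in this loop,
            toggle = !toggle;       // the JIT may cache _complete in a register and spin forever

        Console.WriteLine(toggle);  // only reached if the loop actually observes the write
    }
}
```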
P.S. This article explains the inner workings of x86 nicely.