My question is about order of execution guarantees in C# (and presumably .Net in general). I give Java examples I know something about to compare with.
For Java (from \"
Without having read anything about the .NET memory model, I can assure you that .NET gives you at least those guarantees (i.e., a lock behaves like an acquire and an unlock like a release), since they are the weakest guarantees that are useful.
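As a minimal sketch of what relying on those acquire/release semantics looks like (my own illustration, with invented names, not taken from any standard):

class LockPublicationExample
{
    private readonly object _gate = new object();
    private int _value;
    private bool _ready;

    public void Publish(int value)
    {
        lock (_gate)        // entering the lock: acquire
        {
            _value = value;
            _ready = true;
        }                   // leaving the lock: release; the writes above become visible
                            // to the next thread that acquires _gate
    }

    public bool TryRead(out int value)
    {
        lock (_gate)        // acquire: observes everything published before the prior release
        {
            value = _value;
            return _ready;
        }
    }
}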
ISO 23270:2006 — Information technology—Programming languages—C#, §10.10 says (and I quote):
10.10 Execution order

Execution shall proceed such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects shall be preserved are references to volatile fields (§17.4.3), lock statements (§15.12), and thread creation and termination. An implementation is free to change the order of execution of a C# program, subject to the following constraints:

- Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order. (emphasis mine)
- Initialization ordering rules are preserved (§17.4.4, §17.4.5).
- The ordering of side effects is preserved with respect to volatile reads and writes (§17.4.3). Additionally, an implementation need not evaluate part of an expression if it can deduce that that expression's value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field). When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order.
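To make the role of volatile fields as critical execution points concrete, here is a small illustration of my own (names are invented, not taken from the standard):

class VolatilePublicationExample
{
    private int _data;              // ordinary, non-volatile field
    private volatile bool _ready;   // volatile field: its reads and writes are critical execution points

    public void Writer()
    {
        _data = 42;      // this side effect must stay ordered before the volatile write below
        _ready = true;   // volatile write (release semantics)
    }

    public void Reader()
    {
        if (_ready)                              // volatile read (acquire semantics)
        {
            System.Console.WriteLine(_data);     // if _ready was seen as true, 42 is visible here
        }
    }
}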
The other CLI standards are likewise available gratis from the ISO website.
But if you are worried about multi-threading issues, you'll need to dig deeper into the standards and understand the rules about atomicity. Not every operation is guaranteed to be atomic. If you are multi-threaded and invoke methods that reference anything but local variables (e.g., instance or class (static) members) without serializing access via lock, a mutex, a semaphore, or some other serialization technique, you are leaving yourself open to race conditions.
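For example, here is a sketch (my own, with invented names) of the classic lost-update race on a shared counter and two ways to serialize it:

using System.Threading;

class Counter
{
    private int _count;
    private readonly object _gate = new object();

    // Not atomic: the read-modify-write can interleave across threads and lose updates.
    public void IncrementUnsafe() => _count++;

    // Serialized: only one thread at a time performs the read-modify-write.
    public void IncrementSafe()
    {
        lock (_gate)
        {
            _count++;
        }
    }

    // Alternative: a single atomic operation, no lock needed.
    public void IncrementInterlocked() => Interlocked.Increment(ref _count);

    public int Count => Volatile.Read(ref _count);
}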
What you are looking for is Thread.MemoryBarrier. However, explicit barriers may not be necessary on Microsoft's current implementation of .NET; see this SO question for more details.
I'm worried that you're even asking this, but since you asked:
y = 10;
Thread.MemoryBarrier();  // the write to y cannot be reordered past this point
x = 5;
Thread.MemoryBarrier();  // likewise for the write to x
a = b + 10;
Thread.MemoryBarrier();  // and for the read of b and the write to a
// ...
From MSDN:
Synchronizes memory access as follows: The processor executing the current thread cannot reorder instructions in such a way that memory accesses prior to the call to MemoryBarrier execute after memory accesses that follow the call to MemoryBarrier.
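To make that guarantee concrete, here is a sketch of my own (not from MSDN) of the classic store-then-load pattern where a full fence matters:

using System;
using System.Threading;

class StoreLoadExample
{
    static int x, y;
    static int r1, r2;

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            x = 1;
            Thread.MemoryBarrier();  // the write to x cannot move below the read of y
            r1 = y;
        });
        var t2 = new Thread(() =>
        {
            y = 1;
            Thread.MemoryBarrier();  // the write to y cannot move below the read of x
            r2 = x;
        });

        t1.Start(); t2.Start();
        t1.Join();  t2.Join();

        // Without the barriers, r1 == 0 && r2 == 0 is a legal (and observable) outcome;
        // with them, at least one thread must see the other's write.
        Console.WriteLine($"r1 = {r1}, r2 = {r2}");
    }
}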