Question
Interlocked.Increment seems like one of the most standard/simple operations one would need to perform in multithreaded code. I assume that the functionality of the method is some sort of pattern that anyone with threading experience would be able to replicate. So basically what I am wondering is whether someone could provide an exact duplicate (with an explanation of how it works) of what the Interlocked.Increment method is actually doing internally. (I have looked for the source of the actual method but have been unable to find it.)
Answer 1:
According to Mr Albahari it does two things:
- makes the atomicity of the operation known to the OS and VM, so that e.g. operations on 64-bit values on a 32-bit system will be atomic
- generates a full fence, restricting reordering and caching of the Interlocked variables
Have a look at that link - it gives some nice examples.
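To make the atomicity point concrete, here is a small sketch (my own illustration, not from the answer; the thread count and iteration count are arbitrary) contrasting a plain counter++ with Interlocked.Increment under contention:

using System;
using System.Threading;
using System.Threading.Tasks;

class AtomicityDemo
{
    static int plainCounter;
    static int atomicCounter;

    static void Main()
    {
        var tasks = new Task[4];
        for (int t = 0; t < tasks.Length; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                for (int i = 0; i < 1000000; i++)
                {
                    plainCounter++;                           // plain read-modify-write: updates can be lost
                    Interlocked.Increment(ref atomicCounter); // atomic read-modify-write with a full fence
                }
            });
        }
        Task.WaitAll(tasks);

        Console.WriteLine("plain:       " + plainCounter);  // typically less than 4000000
        Console.WriteLine("interlocked: " + atomicCounter); // always 4000000
    }
}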
Answer 2:
I assume that it is an implementation detail, but one way to look at it is to inspect the JIT-compiled code. Consider the following sample.
private static int Value = 42;

public static void Foo() {
    Interlocked.Increment(ref Value);
}
On x86 it generates the following
lock inc dword <LOCATION>
The lock prefix locks the bus to prevent multiple CPUs from updating the same data location at the same time.
On x64 it generates
lock xadd dword ptr <LOCATION>,eax
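For completeness, here is one way such an inspection could be reproduced (an assumption on my part, not part of the answer): build in Release and run a small harness like the one below, then look at the JIT-compiled body of Foo in the Visual Studio Disassembly window (with "Suppress JIT optimization on module load" turned off so the optimized code is shown).

using System.Diagnostics;
using System.Runtime.CompilerServices;
using System.Threading;

static class JitInspection
{
    private static int Value = 42;

    [MethodImpl(MethodImplOptions.NoInlining)] // keep Foo as a separate method in the JIT output
    public static void Foo()
    {
        Interlocked.Increment(ref Value);
    }

    static void Main()
    {
        Foo();            // force the JIT to compile Foo before breaking
        Debugger.Break(); // break here, then step into Foo in the Disassembly window
        Foo();
    }
}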
Answer 3:
I would expect it to be a wrapper around the InterlockedIncrement64 Win32 API call.
EDIT: I see that it was a very short response. Building a little on it: it is easy to replicate the functionality of the method, but not its performance. Most CPUs provide a native atomic "interlocked exchange and add" instruction, so you want that instruction to be used to implement your function, and I expect that the easiest way to achieve that from within C# would be to make the Win32 API call. For more background on the subject, have a look at this whitepaper.
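Along those lines, a minimal sketch of how the functionality (though certainly not the performance) could be replicated in pure C# is a compare-and-swap loop built on Interlocked.CompareExchange. The method name MyIncrement is hypothetical and this is not the actual implementation:

using System.Threading;

static class InterlockedSketch
{
    // Behaves like Interlocked.Increment(ref int): atomically adds 1 and returns the new value.
    public static int MyIncrement(ref int location)
    {
        int original, incremented;
        do
        {
            original = location;        // snapshot the current value
            incremented = original + 1;
            // Publish the new value only if nobody else changed `location` in the meantime;
            // otherwise another thread won the race, so retry with a fresh snapshot.
        }
        while (Interlocked.CompareExchange(ref location, incremented, original) != original);
        return incremented;
    }
}

The CompareExchange call supplies the full fence mentioned in Answer 1, which is why the plain read of location inside the loop is acceptable in this sketch: a stale snapshot simply causes another iteration.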
Source: https://stackoverflow.com/questions/5700100/what-is-interlocked-increment-actually-doing