Why is the standard C# event invocation pattern thread-safe without a memory barrier or cache invalidation? What about similar code?

余生分开走 · asked 2021-02-19 03:37

In C#, this is the standard code for invoking an event in a thread-safe way:

var handler = SomethingHappened;
if(handler != null)
    handler(this, e);
5 Answers
  •  不知归路
    2021-02-19 04:10

    I think I have figured out what the answer is. But I'm not a hardware guy, so I'm open to being corrected by someone more familiar with how CPUs work.


    The .NET 2.0 memory model guarantees:

    Writes cannot move past other writes from the same thread.

    This means that the writing CPU (A in the example) will never write the reference to an object into memory (to Q) until after it has written out the contents of the object being constructed (to R). So far, so good. This pair of writes cannot be reordered:

    R = <contents of the object>
    Q = &R
    
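    The two writes above can be sketched in C#. This is a minimal sketch, not the answerer's code: the names `Payload`, `_shared`, `Writer`, and `Reader` are ours, and `Volatile.Write`/`Volatile.Read` are used only to make the intended ordering explicit in today's API (the answer's point is that the .NET 2.0 model already guarantees the write ordering):

    ```csharp
    using System.Threading;

    class Payload { public int Value; }

    static class Publication
    {
        // _shared plays the role of Q; the Payload object's fields are R.
        static Payload _shared;

        public static void Writer()
        {
            var p = new Payload { Value = 42 };   // writes to R
            // Writes from one thread cannot pass each other, so Value
            // reaches memory before the reference becomes visible.
            Volatile.Write(ref _shared, p);       // write to Q (Q = &R)
        }

        public static int? Reader()
        {
            var p = Volatile.Read(ref _shared);   // read Q first...
            return p?.Value;                      // ...then the dependent read of R
        }
    }
    ```

    The question the rest of the answer tackles is why the reader side needs no barrier between those two dependent reads.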

    Let's consider the reading CPU (B). What is to stop it reading from R before it reads from Q?

    On a sufficiently naïve CPU, one would expect it to be impossible to read from R without first reading from Q. We must first read Q to get the address of R. (Note: it is safe to assume that the C# compiler and JIT behave this way.)

    But, if the reading CPU has a cache, couldn't it have stale memory for R in its cache, but receive the updated Q?

    The answer seems to be no. Under any sane cache-coherency protocol, invalidation requests are processed in order (hence the term "invalidation queue"). So the cache line holding R will always be invalidated before the line holding Q is invalidated, and a reader that sees the new Q cannot still be holding a stale R.

    Apparently the only hardware where this is not the case is the DEC Alpha (according to Table 1, here). It is the only listed architecture where dependent reads can be re-ordered. (Further reading.)
