Why is the standard C# event invocation pattern thread-safe without a memory barrier or cache invalidation? What about similar code?

余生分开走 2021-02-19 03:37

In C#, this is the standard code for invoking an event in a thread-safe way:

var handler = SomethingHappened;
if(handler != null)
    handler(this, e);
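Since C# 6 the same pattern can be written with the null-conditional operator: the compiler copies the delegate into a hidden temporary before the null test, so the semantics match the explicit local-variable version. A minimal sketch (the `Publisher` class and `Raise` method are made-up names for illustration):

```csharp
using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise(EventArgs e)
    {
        // Equivalent to: var handler = SomethingHappened;
        //                if (handler != null) handler(this, e);
        SomethingHappened?.Invoke(this, e);
    }
}
```

Like the explicit pattern, this only protects against the null race; it does not change anything about memory visibility between threads.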
5 Answers
  •  抹茶落季
    2021-02-19 04:18

    This is a really good question. Let us consider your first example.

    var handler = SomethingHappened;
    if(handler != null)
        handler(this, e);
    

    Why is this safe? To answer that question you first have to define what you mean by "safe". Is it safe from a NullReferenceException? Yes, it is pretty trivial to see that caching the delegate reference locally eliminates that pesky race between the null check and the invocation. Is it safe to have more than one thread touching the delegate? Yes, delegates are immutable so there is no way that one thread can cause the delegate to get into a half-baked state. The first two are obvious. But, what about a scenario where thread A is doing this invocation in a loop and thread B at some later point in time assigns the first event handler? Is that safe in the sense that thread A will eventually see a non-null value for the delegate? The somewhat surprising answer to this is probably. The reason is that the default implementations of the add and remove accessors for the event create memory barriers. I believe the early version of the CLR took an explicit lock and later versions used Interlocked.CompareExchange. If you implemented your own accessors and omitted a memory barrier then the answer could be no. I think in reality it highly depends on whether Microsoft added memory barriers to the construction of the multicast delegate itself.

    On to the second and more interesting example.

    var localFoo = this.memberFoo;
    if(localFoo != null)
        localFoo.Bar(localFoo.baz);
    

    Nope. Sorry, this actually is not safe. Let us assume memberFoo is of type Foo which is defined like the following.

    public class Foo
    {
      public int baz = 0;
      public int daz = 0;
    
      public Foo()
      {
        baz = 5;
        daz = 10;
      }
    
      public int Bar(int x)
      {
        return x / daz;
      }
    }
    

    And then let us assume another thread does the following.

    this.memberFoo = new Foo();
    

    Despite what some may think, nothing mandates that instructions execute in the order they appear in the code, as long as the intent of the programmer is logically preserved. The C# compiler or the JIT could actually formulate the following sequence of instructions.

    /* 1 */ set register = alloc-memory-and-return-reference(typeof(Foo));
    /* 2 */ set register.baz = 0;
    /* 3 */ set register.daz = 0;
    /* 4 */ set this.memberFoo = register;
    /* 5 */ set register.baz = 5;  // Foo.ctor
    /* 6 */ set register.daz = 10; // Foo.ctor
    

    Notice how the assignment to memberFoo occurs before the constructor runs. That is valid because it has no unintended side effects from the perspective of the thread executing it. It could, however, have a major impact on other threads. What happens if the reading thread's null check of memberFoo runs just after the writing thread has finished instruction #4? The reader sees a non-null value and attempts to invoke Bar before daz has been set to 10. daz still holds its default value of 0, leading to a DivideByZeroException. Of course, this is mostly theoretical because Microsoft's implementation of the CLR creates a release fence on writes, which prevents this reordering. But the specification would technically allow it. See this question for related content.
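If you need the publication to be safe even under the weaker memory model the specification allows, an explicit volatile write on the publishing side (paired with a volatile read on the consuming side) closes the gap. A minimal sketch, with hypothetical `Publish`/`ReadBar` helpers and the Foo constructor collapsed into field initializers:

```csharp
using System.Threading;

public class Foo
{
    public int baz = 5;
    public int daz = 10;
    public int Bar(int x) => x / daz;
}

public class Container
{
    private Foo _memberFoo;

    // Volatile.Write has release semantics: every write performed while
    // constructing the Foo is made visible to other threads before the
    // reference itself becomes visible.
    public void Publish() => Volatile.Write(ref _memberFoo, new Foo());

    // Volatile.Read supplies the matching acquire semantics on the reader,
    // so a non-null reference implies a fully initialized object.
    public int ReadBar()
    {
        var localFoo = Volatile.Read(ref _memberFoo);
        return localFoo != null ? localFoo.Bar(localFoo.baz) : -1;
    }
}
```

Marking the field `volatile` would achieve the same effect; the method calls just make the fence explicit at the two sites that need it.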
