Events and multithreading once again

情话喂你 2021-02-04 08:41

I'm worried about the correctness of the seemingly-standard pre-C#6 pattern for firing an event:

EventHandler localCopy = SomeEvent;
if (localCopy != null)
    localCopy(this, args);
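For context, the hazard this pattern guards against is another thread unsubscribing the last handler between the null check and the invocation. A minimal sketch of the pattern in a complete class (the `Publisher` class and `Raise` method are hypothetical names, for illustration only):

```csharp
using System;

public class Publisher
{
    public event EventHandler SomeEvent;

    public void Raise()
    {
        // Copy the delegate reference once; delegates are immutable,
        // so the copy cannot become null after the check, even if
        // another thread runs `SomeEvent -= handler;` concurrently.
        EventHandler localCopy = SomeEvent;
        if (localCopy != null)
            localCopy(this, EventArgs.Empty);
    }
}
```

The whole question is whether a compiler is allowed to undo this protection by reading the `SomeEvent` field a second time (read introduction).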


        
2 Answers
  •  逝去的感伤
    2021-02-04 09:05

    According to the sources you provided and a few others in the past, it breaks down to this:

    • With the Microsoft implementation, you can rely on not having read introduction [1] [2] [3]

    • For any other implementation, it may have read introduction unless it states otherwise

    EDIT: Having re-read the ECMA CLI specification carefully, read introductions are possible, but constrained. From Partition I, 12.6.4 Optimization:

    Conforming implementations of the CLI are free to execute programs using any technology that guarantees, within a single thread of execution, that side-effects and exceptions generated by a thread are visible in the order specified by the CIL. For this purpose only volatile operations (including volatile reads) constitute visible side-effects. (Note that while only volatile operations constitute visible side-effects, volatile operations also affect the visibility of non-volatile references.)

    A very important part of this paragraph is in parentheses:

    Note that while only volatile operations constitute visible side-effects, volatile operations also affect the visibility of non-volatile references.

    So, if the generated CIL reads a field only once, the implementation must behave the same. If it introduces reads, it's because it can prove that the subsequent reads will yield the same result, even facing side effects from other threads. If it cannot prove that and it still introduces reads, it's a bug.

    In the same manner, the C# language also constrains read introduction at the C#-to-CIL level. From the C# Language Specification Version 5.0, 3.10 Execution Order:

    Execution of a C# program proceeds such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects must be preserved are references to volatile fields (§10.5.3), lock statements (§8.12), and thread creation and termination. The execution environment is free to change the order of execution of a C# program, subject to the following constraints:

    • Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order.

    • Initialization ordering rules are preserved (§10.5.4 and §10.5.5).

    • The ordering of side effects is preserved with respect to volatile reads and writes (§10.5.3). Additionally, the execution environment need not evaluate part of an expression if it can deduce that that expression’s value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field). When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order.

    The point about data dependence is the one I want to emphasize:

    Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order.

    As such, looking at your example (similar to the one given by Igor Ostrovsky [4]):

    EventHandler localCopy = SomeEvent;
    if (localCopy != null)
        localCopy(this, args);
    

    The C# compiler should not perform read introduction, ever. Even if it can prove that there are no interfering accesses, there's no guarantee from the underlying CLI that two sequential non-volatile reads on SomeEvent will have the same result.

    Or, using the equivalent null-conditional operator available since C# 6.0:

    SomeEvent?.Invoke(this, args);
    

    The C# compiler should always expand to the previous code (guaranteeing a unique, non-conflicting variable name) without performing read introduction, as that would reintroduce the race condition.
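Conceptually, the null-conditional form lowers to the same local-copy pattern. A sketch of what the compiler effectively emits (the surrounding `Publisher` class and the temporary's name are invented; the real compiler-generated temporary has no source-level name):

```csharp
using System;

public class Publisher
{
    public event EventHandler SomeEvent;

    public void Raise(EventArgs args)
    {
        // What `SomeEvent?.Invoke(this, args);` effectively compiles to:
        EventHandler tmp = SomeEvent; // exactly one read of the field
        if (tmp != null)
            tmp.Invoke(this, args);   // tmp cannot become null after the check
    }
}
```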

    The JIT compiler should only perform the read introduction if it can prove that there are no interfering accesses, depending on the underlying hardware platform, such that the two sequential non-volatile reads on SomeEvent will in fact have the same result. This may not be the case if, for instance, the value is not kept in a register and if the cache may be flushed between reads.

    Such an optimization, if local, can only be performed on plain (non-ref and non-out) parameters and non-captured local variables. With inter-method or whole-program optimizations, it can also be performed on shared fields, ref or out parameters, and captured local variables that can be proven never to be visibly affected by other threads.

    So, there's a big difference between you writing the following code (or the C# compiler generating it) and the JIT compiler generating machine code equivalent to it: the JIT compiler is the only one capable of proving that the introduced read is consistent with single-threaded execution, even in the face of potential side effects caused by other threads:

    if (SomeEvent != null)
        SomeEvent(this, args);
    

    An introduced read that may yield a different result is a bug, even according to the standard, as there would be an observable difference compared with executing the code in program order without the introduced read.

    As such, if the comment in Igor Ostrovsky's example [4] is true, I say it's a bug.


    [1]: A comment by Eric Lippert; quoting:

    To address your point about the ECMA CLI spec and the C# spec: the stronger memory model promises made by CLR 2.0 are promises made by Microsoft. A third party that decided to make their own implementation of C# that generates code that runs on their own implementation of CLI could choose a weaker memory model and still be compliant with the specifications. Whether the Mono team has done so, I do not know; you'll have to ask them.

    [2]: CLR 2.0 memory model by Joe Duffy, reiterating the next link; quoting the relevant part:

    • Rule 1: Data dependence among loads and stores is never violated.
    • Rule 2: All stores have release semantics, i.e. no load or store may move after one.
    • Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
    • Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
    • Rule 5: Loads and stores to the heap may never be introduced.
    • Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.
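On an implementation where rule 5 cannot be relied on (i.e. outside the Microsoft implementation), a volatile read pins down the single-read semantics, since the ECMA text quoted earlier counts volatile reads as visible side effects that must be preserved. A defensive sketch (assuming a field-like event, whose backing field is accessible by name inside the declaring class; the `Publisher` class name is invented):

```csharp
using System;
using System.Threading;

public class Publisher
{
    public event EventHandler SomeEvent;

    public void Raise(EventArgs args)
    {
        // Volatile.Read is a visible side effect under the ECMA wording,
        // so a conforming implementation may not introduce a second read.
        // Inside the declaring class, `SomeEvent` refers to the backing field.
        EventHandler handler = Volatile.Read(ref SomeEvent);
        if (handler != null)
            handler(this, args);
    }
}
```

This trades a small cost (an acquire fence on weakly ordered hardware) for portability to any conforming CLI implementation, not just Microsoft's.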

    [3]: Understand the Impact of Low-Lock Techniques in Multithreaded Apps by Vance Morrison, the latest snapshot I could get on the Internet Archive; quoting the relevant portion:

    Strong Model 2: .NET Framework 2.0

    (...)

    1. All the rules that are contained in the ECMA model, in particular the three fundamental memory model rules as well as the ECMA rules for volatile.
    2. Reads and writes cannot be introduced.
    3. A read can only be removed if it is adjacent to another read to the same location from the same thread. A write can only be removed if it is adjacent to another write to the same location from the same thread. Rule 5 can be used to make reads or writes adjacent before applying this rule.
    4. Writes cannot move past other writes from the same thread.
    5. Reads can only move earlier in time, but never past a write to the same memory location from the same thread.

    [4]: C# - The C# Memory Model in Theory and Practice, Part 2 by Igor Ostrovsky, where he shows a read introduction example in which, according to him, the JIT may split a read such that two consecutive reads may yield different results; quoting the relevant part:

    Read Introduction As I just explained, the compiler sometimes fuses multiple reads into one. The compiler can also split a single read into multiple reads. In the .NET Framework 4.5, read introduction is much less common than read elimination and occurs only in very rare, specific circumstances. However, it does sometimes happen.

    To understand read introduction, consider the following example:

    public class ReadIntro {
      private Object _obj = new Object();
      void PrintObj() {
        Object obj = _obj;
        if (obj != null) {
          Console.WriteLine(obj.ToString()); // May throw a NullReferenceException
        }
      }
      void Uninitialize() {
        _obj = null;
      }
    }
    

    If you examine the PrintObj method, it looks like the obj value will never be null in the obj.ToString expression. However, that line of code could in fact throw a NullReferenceException. The CLR JIT might compile the PrintObj method as if it were written like this:

    void PrintObj() {
      if (_obj != null) {
        Console.WriteLine(_obj.ToString());
      }
    }
    

    Because the read of the _obj field has been split into two reads of the field, the ToString method may now be called on a null target.

    Note that you won’t be able to reproduce the NullReferenceException using this code sample in the .NET Framework 4.5 on x86-x64. Read introduction is very difficult to reproduce in the .NET Framework 4.5, but it does nevertheless occur in certain special circumstances.
