Is there a .Net class to do what ManualResetEvent.PulseAll() would do (if it existed)?

Asked by 栀梦 on 2021-02-14 23:00 · 2 answers · 1523 views


I have a need to atomically release a set of threads that are waiting.

2 Answers
  • version 1
    Maximum clarity: a new ManualResetEvent is eagerly installed at the beginning of each PulseAll cycle.

    public class PulseEvent
    {
        public PulseEvent()
        {
            mre = new ManualResetEvent(false);
        }
    
        ManualResetEvent mre;
    
        // Swap in a fresh (unsignalled) event, then signal the detached one,
        // releasing exactly the waiters that were blocked on it.
        public void PulseAll() => Interlocked.Exchange(ref mre, new ManualResetEvent(false)).Set();
    
        // Wait on whichever event is current at the instant of the call.
        public bool Wait(int ms) => Volatile.Read(ref mre).WaitOne(ms);
    
        public void Wait() => Wait(Timeout.Infinite);
    };
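
    To make the semantics concrete, here is a hypothetical usage sketch (not part of the original answer; the version-1 class is repeated so the snippet is self-contained): four threads park in Wait, a single PulseAll releases exactly that set, and any thread arriving afterwards blocks on the fresh event.

```csharp
// Hypothetical demo of the version-1 PulseEvent.
using System;
using System.Threading;

public class PulseEvent
{
    ManualResetEvent mre = new ManualResetEvent(false);
    public void PulseAll() => Interlocked.Exchange(ref mre, new ManualResetEvent(false)).Set();
    public bool Wait(int ms) => Volatile.Read(ref mre).WaitOne(ms);
    public void Wait() => Wait(Timeout.Infinite);
}

static class Demo
{
    static void Main()
    {
        var pe = new PulseEvent();
        int released = 0;
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                pe.Wait();                           // park until the next pulse
                Interlocked.Increment(ref released);
            });
            threads[i].Start();
        }
        Thread.Sleep(200);   // crude, demo-only: give all four waiters time to park
        pe.PulseAll();       // atomically releases the parked set
        foreach (var t in threads) t.Join();
        Console.WriteLine(released);   // prints 4
    }
}
```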
    

    version 2
    This version avoids creating the internal event for any PulseAll cycles that happen to complete without waiters. The first waiter(s), per cycle, enter an optimistic lock-free race to create and atomically install a single shared event.

    public class PulseEvent
    {
        ManualResetEvent mre;   // lazily created; null while the current cycle has no waiters
    
        // Detach the current event (if any) and signal it; if 'mre' is null,
        // nobody waited during this cycle and no event was ever allocated.
        public void PulseAll() => Interlocked.Exchange(ref mre, null)?.Set();
    
        public bool Wait(int ms)
        {
            // Optimistic lock-free install: reuse the published event, or race
            // to publish a new one; a CAS loser adopts the winner's event.
            ManualResetEvent tmp =
               mre ??
               Interlocked.CompareExchange(ref mre, tmp = new ManualResetEvent(false), null) ??
               tmp;
            return tmp.WaitOne(ms);
        }
    
        public void Wait() => Wait(Timeout.Infinite);
    };
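
    The self-assigning initializer in Wait is legal C# but dense; as a sketch (not from the original answer, and renamed PulseEventVerbose here to avoid confusion), the same optimistic install can be written more conventionally:

```csharp
// A more conventional (but equivalent) spelling of version 2's Wait.
using System.Threading;

public class PulseEventVerbose
{
    ManualResetEvent mre;   // null while no waiter has installed an event

    public void PulseAll() => Interlocked.Exchange(ref mre, null)?.Set();

    public bool Wait(int ms)
    {
        ManualResetEvent tmp = Volatile.Read(ref mre);
        if (tmp == null)
        {
            var fresh = new ManualResetEvent(false);
            // Race to publish 'fresh'; if another waiter won, adopt theirs.
            tmp = Interlocked.CompareExchange(ref mre, fresh, null) ?? fresh;
        }
        return tmp.WaitOne(ms);
    }

    public void Wait() => Wait(Timeout.Infinite);
}
```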
    

    version 3
    This version eliminates per-cycle allocations by allocating two persistent ManualResetEvent objects and flipping between them. This slightly alters the semantics relative to the versions above, as follows:

    • First, recycling the same two locks means that your PulseAll cycles must be long enough to allow all of the waiters to clear the previous lock. Otherwise, when you call PulseAll twice in quick succession, any waiting threads that were putatively released by the previous PulseAll call--but which the OS hasn't had a chance to schedule yet--may end up getting re-blocked for the new cycle as well. I mention this mostly as a theoretical consideration, because it's a moot issue unless you block an extreme number of threads on sub-microsecond pulse cycles. You can decide whether this condition is relevant for your situation or not. If so, or if you're unsure or cautious, you can always use version 1 or version 2 above, which don't have this limitation.

    • Also "arguably" different (but see paragraph below for why this second point may be provably irrelevant) in this version, calls to PulseAll that are deemed essentially simultaneous are merged, meaning all but one of those multiple "simultaneous" callers become NOPs. Such behavior is not without precedent (see "Remarks" here) and may be desirable, depending on the application.

    Note that the latter point must be considered a legitimate design choice, as opposed to a bug, theoretical flaw or concurrency error. This is because Pulse locks are inherently ambiguous in situations of multiple simultaneous PulseAll: specifically, there's no way to prove that any waiter who doesn't get released by the single, designated pulser would necessarily be released by one of the other merged/elided pulses either.

    Saying it a different way, this type of lock isn't designed to atomically serialize the PulseAll callers, and in fact it truly can't be, because it will always be possible for a skipped "simultaneous" pulse to independently come and go, even if entirely after the time of the merged pulse, and yet still "pulsing" before the arrival of the waiter (who wouldn't get pulsed).

    public class PulseEvent
    {
        public PulseEvent()
        {
            cur = new ManualResetEvent(false);
            alt = new ManualResetEvent(true);
        }
    
        ManualResetEvent cur, alt;
    
        public void PulseAll()
        {
            ManualResetEvent tmp;
            if ((tmp = Interlocked.Exchange(ref alt, null)) != null) // try claiming 'pulser'
            {
                tmp.Reset();                     // prepare for re-use, ending previous cycle
                (tmp = Interlocked.Exchange(ref cur, tmp)).Set();    // atomic swap & pulse
                Volatile.Write(ref alt, tmp);    // release claim; re-allow 'pulser' claims
            }
        }
    
        public bool Wait(int ms) => cur.WaitOne(ms);  // 'cur' is never null (unlike 'alt')
    
        public void Wait() => Wait(Timeout.Infinite);
    };
    

     


    Finally, a couple of general observations. An important recurring theme, here and in this type of code generally, is that the ManualResetEvent must not be changed to the signalled state (i.e. by calling Set) while it is still publicly visible. In the above code, we use Interlocked.Exchange to atomically change the identity of the active lock in 'cur' (in this case, by instantaneously swapping in the alternate); doing this before the Set is crucial for guaranteeing that no new waiters can be added to that ManualResetEvent beyond those that were already blocked at the moment of the swap.

    Only after this swap is it safe to release those waiting threads by calling Set on our (now-)private copy. If we were to call Set on the ManualResetEvent while it was still published, it would be possible for a late-arriving waiter who had actually missed the instantaneous pulse to nevertheless see the open lock and sail through without waiting for the next one, as required by definition.

    Interestingly, this means that even though it might intuitively feel like the exact moment that the "pulsing" occurs should coincide with Set being called, in fact it is more correctly said to be right before that, at the moment of the Interlocked.Exchange, because that's the action that strictly establishes the before/after cut-off time and seals the definitive set of waiters (if any) who are to be released.

    So waiters who miss the cut-off and arrive immediately after must only be able to see--and will block on--the event now designated for the next cycle, and this is true even if the current cycle hasn't been signalled yet, nor any of its waiting threads released, all as required for correctness of "instantaneous" pulsing.
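
    The ordering argument above can be condensed into a sketch (illustrative only, not from the original answer; the class and method names are hypothetical):

```csharp
// Correct vs. broken ordering of the swap and the Set.
using System.Threading;

public class Ordering
{
    ManualResetEvent mre = new ManualResetEvent(false);

    public bool Wait(int ms) => Volatile.Read(ref mre).WaitOne(ms);

    public void PulseAllCorrect()
    {
        // 1. The Exchange is the pulse instant: it seals the set of waiters,
        //    because new arrivals can only see (and block on) the fresh event.
        var old = Interlocked.Exchange(ref mre, new ManualResetEvent(false));
        // 2. Only now is Set safe: 'old' is no longer publicly visible, so no
        //    late arrival can slip through on the signalled event.
        old.Set();
    }

    public void PulseAllBroken()
    {
        mre.Set();   // BUG: still published; a waiter arriving right here sees
                     // the open event and sails through without ever waiting
        Volatile.Write(ref mre, new ManualResetEvent(false));   // too late
    }
}
```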

  • answered 2021-02-15 00:02

    You can use a Barrier object. It lets a group of threads or tasks run independently, with each participant waiting at the barrier until all of the others have reached that point, after which they are all released together.

    And you can use it in a way similar to Go's sync.WaitGroup if you do not know in advance which tasks, from which code blocks, will take part in a given unit of work, since participants can be added and removed dynamically.
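
    A minimal sketch of this approach (illustrative; the three-participant count and the post-phase callback are arbitrary choices, while Barrier, SignalAndWait, AddParticipant, and RemoveParticipant are the actual System.Threading API):

```csharp
// Minimal Barrier sketch: three tasks each reach SignalAndWait, and none
// proceeds past it until all three participants have arrived.
using System;
using System.Threading;
using System.Threading.Tasks;

class BarrierDemo
{
    static void Main()
    {
        using var barrier = new Barrier(3,
            b => Console.WriteLine("all 3 arrived; phase complete"));

        var tasks = new Task[3];
        for (int i = 0; i < tasks.Length; i++)
        {
            int id = i;
            tasks[id] = Task.Run(() =>
            {
                // ... per-task work for this phase ...
                barrier.SignalAndWait();   // blocks until all participants signal
            });
        }
        Task.WaitAll(tasks);
        // AddParticipant()/RemoveParticipant() provide the WaitGroup-like
        // dynamic membership mentioned above.
    }
}
```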
