Using dispatch_once_t per object and not per class

孤城傲影 2020-12-10 14:50

There are multiple sources calling a particular method, but I would like to ensure that it is called exactly once per object.

I would like to use syntax like:

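(Presumably something along these lines, with a dispatch_once_t stored per instance rather than in a static; the class name and the -finish selector are placeholders, and _finishOnce is the method mentioned in the first answer.)

    #import <Foundation/Foundation.h>

    @interface AVObject : NSObject
    - (void)finish;
    @end

    @implementation AVObject {
        dispatch_once_t _finishToken;   // per-instance predicate -- the questionable part
    }

    - (void)_finishOnce {
        // work that must happen exactly once per object
        NSLog(@"finishing %@", self);
    }

    - (void)finish {
        // called from several places; _finishOnce should still run only once per object
        dispatch_once(&_finishToken, ^{
            [self _finishOnce];
        });
    }

    @end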
3 Answers
  • 2020-12-10 15:12

    dispatch_once() executes its block once and only once for the lifetime of an application. Here's the GCD reference link. Since you mention that you want [self _finishOnce] to happen once per object, you should not be using dispatch_once().
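
    To illustrate the difference with a minimal sketch (the Widget class and -finish method are invented for the example): with a static predicate, the token is shared by every instance, so the block runs once for the whole process, not once per object.

    #import <Foundation/Foundation.h>

    @interface Widget : NSObject
    - (void)finish;
    @end

    @implementation Widget

    - (void)finish {
        static dispatch_once_t onceToken;   // one token shared by *all* Widget instances
        dispatch_once(&onceToken, ^{
            NSLog(@"runs once per application, not once per object");
        });
    }

    @end

    // [[Widget new] finish];   // block runs
    // [[Widget new] finish];   // block does NOT run again, even for a brand-new object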

  • 2020-12-10 15:14

    Avner, you're probably regretting you asked by now ;-)

    Regarding your edit to the question, and taking into account other issues, you've more-or-less recreated the "old school" way of doing this, and maybe that is just what you should do (code typed in directly, expect typos):

    @implementation RACDisposable
    {
       BOOL ranExactlyOnceMethod;
    }
    
    - (id) init
    {
       ...
       ranExactlyOnceMethod = NO;
       ...
    }
    
    - (void) runExactlyOnceMethod
    {
       @synchronized(self)     // lock
       {
          if (!ranExactlyOnceMethod) // not run yet?
          {
              // do stuff once
              ranExactlyOnceMethod = YES;
          }
       }
    }
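
    A quick (hypothetical) usage sketch for the class above: even with many concurrent callers, the @synchronized guard ensures the body runs only once per instance.

    RACDisposable *object = [RACDisposable new];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(10, queue, ^(size_t i) {
        [object runExactlyOnceMethod];   // only the first caller finds ranExactlyOnceMethod == NO
    });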
    

    There is a common optimization to this, but given the other discussion let's skip that.

    Is this "cheap"? Well, probably not, but all things are relative; its expense is probably not significant - but YMMV!

    HTH

  • 2020-12-10 15:22

    As mentioned in the linked answer to a similar question, the reference documentation says:

    The predicate must point to a variable stored in global or static scope. The result of using a predicate with automatic or dynamic storage is undefined.

    The overall concerns are well enumerated in that answer. That said, it is possible to make this work. To elaborate: the concern is that the storage for the predicate must be reliably zeroed out on initialization. With static/global storage this is strongly guaranteed. Now I know what you're thinking, "...but Objective-C objects are also zeroed out on init!", and you'd be generally right.

    Where the problem comes in is read/write re-ordering. Certain architectures (e.g. ARM) have weakly consistent memory models, which means that reads and writes can be re-ordered as long as the consistency seen by the primary thread of execution is preserved. Here, re-ordering could leave you open to a situation where the "zeroing" operation is delayed until after another thread tries to read the token: -init returns, the object pointer becomes visible to another thread, that thread tries to access the token, but the token is still garbage because the zeroing has not happened yet.

    To avoid this, you can add a call to OSMemoryBarrier() at the end of your -init method, and you should be OK. (Note that there is a non-zero performance cost to adding a memory barrier here, and to memory barriers in general.) The details of memory barriers are left as "further reading", but if you're going to rely on them, you'd be well advised to understand them, at least conceptually.
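
    To make that concrete, here is a sketch of the ivar-predicate approach with the barrier described above (the Finisher class name is invented; OSMemoryBarrier() comes from <libkern/OSAtomic.h> and is deprecated in current SDKs in favour of C11 atomic fences):

    #import <Foundation/Foundation.h>
    #import <libkern/OSAtomic.h>            // OSMemoryBarrier()

    @interface Finisher : NSObject
    - (void)finish;
    @end

    @implementation Finisher {
        dispatch_once_t _onceToken;         // per-object predicate: exactly what the docs warn about
    }

    - (instancetype)init {
        self = [super init];
        if (self == nil) return nil;

        // The ivar is already zero-filled by +alloc; the barrier ensures that zero is
        // visible to other threads before the object pointer can escape from -init.
        OSMemoryBarrier();
        return self;
    }

    - (void)finish {
        dispatch_once(&_onceToken, ^{
            NSLog(@"finished exactly once for %@", self);
        });
    }

    @end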

    My guess is that the "prohibition" on using dispatch_once with non-global/static storage stems from the fact that out-of-order execution and memory barriers are complex topics: getting barriers right is hard, and getting them wrong tends to lead to extremely subtle, hard-to-nail-down bugs. Perhaps most importantly (although I haven't measured it empirically), introducing the memory barrier required to make a dispatch_once_t ivar safe almost certainly negates some (all?) of the performance benefit dispatch_once has over "classic" locking patterns.

    Also note that there are two kinds of re-ordering. There is re-ordering performed as a compiler optimization (this is the re-ordering that the volatile keyword affects), and there is re-ordering done at the hardware level, in different ways on different architectures. The hardware-level re-ordering is what a memory barrier controls; the volatile keyword alone is not sufficient.

    The OP was asking specifically about a way to "finish once." One example of such a pattern (one that, to my eyes, appears safe and correct) can be seen in ReactiveCocoa's RACDisposable class, which keeps zero or one blocks to run at disposal time and guarantees that the disposable is only ever disposed once, and that the block, if there is one, is only ever called once. It looks like this:

    @interface RACDisposable ()
    {
            void * volatile _disposeBlock;
    }
    @end
    
    ...
    
    @implementation RACDisposable
    
    // <snip>
    
    - (id)init {
            self = [super init];
            if (self == nil) return nil;
    
            _disposeBlock = (__bridge void *)self;
            OSMemoryBarrier();
    
            return self;
    }
    
    // <snip>
    
    - (void)dispose {
            void (^disposeBlock)(void) = NULL;
    
            while (YES) {
                    void *blockPtr = _disposeBlock;
                    if (OSAtomicCompareAndSwapPtrBarrier(blockPtr, NULL, &_disposeBlock)) {
                            if (blockPtr != (__bridge void *)self) {
                                    disposeBlock = CFBridgingRelease(blockPtr);
                            }
    
                            break;
                    }
            }
    
            if (disposeBlock != nil) disposeBlock();
    }
    
    // <snip>
    
    @end
    

    It uses OSMemoryBarrier() in -init, just as you would have to for a per-object dispatch_once, and then uses OSAtomicCompareAndSwapPtrBarrier, which (as the name suggests) implies a memory barrier, to atomically "flip the switch". In case it's not clear, what's going on here is that at -init time the ivar is set to self. That value serves as a marker to distinguish "there is no block, but we have not disposed yet" from "there was a block, but we have already disposed."
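
    For completeness, a small usage sketch (assuming ReactiveCocoa is linked; +disposableWithBlock: is its convenience constructor): however many threads race into -dispose, the block fires at most once.

    RACDisposable *disposable = [RACDisposable disposableWithBlock:^{
        NSLog(@"cleanup runs at most once");
    }];

    dispatch_apply(4, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
        [disposable dispose];   // safe to race; only one caller wins the compare-and-swap
    });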

    In practical terms, if memory barriers seem opaque and mysterious to you, my advice would be to just use classic locking patterns until you've measured that those classic locking patterns are causing real, measurable performance issues for your application.
