Mutex alternatives in Swift

予麋鹿 2020-12-29 04:53

I have shared memory accessed by multiple threads, and I want to prevent these threads from accessing that memory at the same time (like the producer-consumer problem).

3 Answers
  • 2020-12-29 05:03

    There are many solutions for this, but I use a serial queue for this kind of task:

    let serialQueue = DispatchQueue(label: "queuename")
    serialQueue.sync {
        // code that touches the shared memory goes here (I pass in a closure from a method)
    }
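
    For shared state specifically, the same idea can be wrapped in a type so every access goes through the queue. Here is a minimal sketch of that pattern; the SharedCounter type and the queue label are my own illustration, not part of the original answer:

    import Foundation

    // All reads and writes of `storage` are funnelled through one serial queue,
    // so only one closure ever touches it at a time.
    final class SharedCounter {
        private let queue = DispatchQueue(label: "com.example.sharedCounter") // hypothetical label
        private var storage = 0

        func increment() {
            queue.async { self.storage += 1 }   // serialized write
        }

        var value: Int {
            queue.sync { storage }              // serialized read; sync returns the closure's result
        }
    }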
    

    Edit/Update: You can also use a semaphore:

    import Foundation

    let higherPriority = DispatchQueue.global(qos: .userInitiated)
    let lowerPriority = DispatchQueue.global(qos: .utility)
    
    let semaphore = DispatchSemaphore(value: 1)
    
    func letUsPrint(queue: DispatchQueue, symbol: String) {
        queue.async {
            debugPrint("\(symbol) -- waiting")
            semaphore.wait()  // requesting the resource
    
            for i in 0...10 {
                print(symbol, i)
            }
    
            debugPrint("\(symbol) -- signal")
            semaphore.signal() // releasing the resource
        }
    }
    
    letUsPrint(queue: lowerPriority, symbol: "Low Priority Queue Work")
    letUsPrint(queue: higherPriority, symbol: "High Priority Queue Work")
    
    RunLoop.main.run()
    
  • 2020-12-29 05:07

    Thanks to beshio's comment, you can use a semaphore like this:

    let semaphore = DispatchSemaphore(value: 1)
    

    Call wait() before using the resource:

    semaphore.wait()
    // use the resource
    

    and after using it, release it:

    semaphore.signal()
    

    Do this in each thread.
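
    Putting those pieces together, here is a minimal sketch of the whole pattern; the shared array, the worker count, and the crude sleep at the end are my own illustration, not from the answer:

    import Foundation

    let semaphore = DispatchSemaphore(value: 1)   // value 1 = at most one thread inside the critical section
    var sharedBuffer: [Int] = []                  // the shared memory to protect

    for worker in 0..<4 {
        Thread.detachNewThread {
            for i in 0..<1000 {
                semaphore.wait()                  // request the resource
                sharedBuffer.append(worker * 1_000 + i)
                semaphore.signal()                // release it again
            }
        }
    }

    Thread.sleep(forTimeInterval: 1)              // crude wait so the script doesn't exit before the threads finish (sketch only)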

  • 2020-12-29 05:14

    As people commented (including me), there are several ways to achieve this kind of lock, but I think a dispatch semaphore is better than the others because it appears to have the least overhead. As noted in Apple's documentation ("Replacing Semaphore Code"), the call does not go down into kernel space unless the semaphore is already locked (i.e. its count is zero); only in that case does the code trap into the kernel to switch threads. Since the semaphore is non-zero most of the time (which is of course an app-specific matter), most of that overhead is avoided.

    One more comment on dispatch semaphores, covering the opposite scenario. If your threads have different execution priorities, and the higher-priority threads hold the semaphore for a long time, a dispatch semaphore may not be the right solution, because there is no queue among the waiting threads. What happens in this case is that the higher-priority threads acquire the semaphore most of the time, while the lower-priority threads get it only occasionally and spend most of their time waiting. If that behavior is not acceptable for your application, consider a dispatch queue instead, as in the sketch below.
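
    Here is a minimal sketch of that alternative; the queue label and symbols are my own, and the structure mirrors the letUsPrint example above. Both workloads go onto a single serial queue, which runs blocks strictly in the order they were enqueued, so the low-priority work cannot be starved.

    import Foundation

    // One serial queue: blocks execute one at a time, in FIFO order,
    // so neither priority level can monopolize the shared resource.
    let serialQueue = DispatchQueue(label: "shared.resource.queue")

    func letUsPrint(qos: DispatchQoS, symbol: String) {
        serialQueue.async(qos: qos) {
            for i in 0...10 {
                print(symbol, i)
            }
        }
    }

    letUsPrint(qos: .utility, symbol: "Low Priority Work")
    letUsPrint(qos: .userInitiated, symbol: "High Priority Work")

    RunLoop.main.run()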
