I'm getting into concurrency programming and running into a semaphore issue. My function first loads data from the server, analyzes the received info, and then, if necessary, makes a second request.
This is deadlocking because you are waiting for a semaphore on the URLSession's delegateQueue. The default delegate queue is not the main queue, but it is a serial background queue (i.e. an OperationQueue with a maxConcurrentOperationCount of 1). So your code is waiting for a semaphore on the same serial queue that is supposed to be signaling the semaphore.
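To make the failure mode concrete, here is a minimal sketch of the deadlocking pattern, assuming roughly the structure you describe (the URLs and the single nested child request are illustrative, not taken from your code):

import Foundation

let semaphore = DispatchSemaphore(value: 0)
let mainUrl = URL(string: "https://example.com/main")!   // illustrative URL

URLSession.shared.dataTask(with: mainUrl) { data, response, error in
    // This closure runs on URLSession.shared's delegateQueue, which is serial.
    let childUrl = URL(string: "https://example.com/child/1")!
    URLSession.shared.dataTask(with: childUrl) { data, response, error in
        // This closure is enqueued on that SAME serial queue...
        semaphore.signal()
    }.resume()

    // ...but that queue is blocked right here, so signal() never gets a chance to run.
    semaphore.wait()
}.resume()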
The tactical fix is to make sure you're not calling wait on the same serial queue that the session's completion handlers are running on. There are two obvious fixes:
Do not use the shared session (whose delegateQueue is a serial queue), but rather instantiate your own URLSession and specify its delegateQueue to be a concurrent OperationQueue that you create:
let queue = OperationQueue()                 // concurrent by default
queue.name = "com.domain.app.networkqueue"

let configuration = URLSessionConfiguration.default
let session = URLSession(configuration: configuration, delegate: nil, delegateQueue: queue)
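Because that delegate queue is concurrent (an OperationQueue's default maxConcurrentOperationCount allows multiple operations at once), a completion handler that blocks on the semaphore no longer prevents the child requests' completion handlers from running and signaling it.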
Alternatively, you can solve this by dispatching the code with the semaphore off to some other queue, e.g.
let mainRequest = session.dataTask(with: mainUrl) { data, response, error in
    // ... analyze the main response and determine n ...

    // Move the blocking loop off the session's delegate queue.
    DispatchQueue.global(qos: .userInitiated).async {
        let semaphore = DispatchSemaphore(value: 0)
        for i in 1 ... n {
            let childUrl = URL(string: "https://blabla/\(i)")!
            let childRequest = session.dataTask(with: childUrl) { data, response, error in
                // ... process the child response ...
                semaphore.signal()
            }
            childRequest.resume()
            _ = semaphore.wait(timeout: .distantFuture)
        }
    }
}
mainRequest.resume()
For the sake of completeness, I'll note that you probably shouldn't be using semaphores to issue these requests at all, because you'll end up paying a material performance penalty for issuing a series of consecutive requests (plus you're blocking a thread, which is generally discouraged).
The refactoring of this code to do that is a bit more involved. It basically entails issuing a series of concurrent requests, perhaps using "download" tasks rather than "data" tasks to minimize the memory impact, and then, when all of the requests are done, piecing it all together as needed at the end (triggered by either an Operation "completion" operation or a dispatch group notification).