My program is a server which handles incoming requests. Each valid request is wrapped in an `NSOperation` and passed to a normal `NSOperationQueue`.
I can think of two possible ways to avoid this problem (I'm going to test them and will update later):
- Use a `dispatch_semaphore` to limit submitting blocks to this concurrent queue.
- Limit `maxConcurrentOperationCount` of the `NSOperationQueue`.
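Both patterns can be sketched in Swift roughly like this (a minimal sketch, assuming a concurrency limit of 4; the request count and helper names are illustrative, not from the question):

```swift
import Dispatch
import Foundation

// Illustrative stand-in for real request handling; `completed`
// just lets us observe that every submitted block ran.
let lock = NSLock()
var completed = 0
func handleRequest(_ i: Int) {
    _ = i * i
    lock.lock(); completed += 1; lock.unlock()
}

// Option 1: a counting semaphore gates how many blocks are in
// flight on a concurrent dispatch queue at any moment.
let queue = DispatchQueue(label: "requests", attributes: .concurrent)
let gate = DispatchSemaphore(value: 4)   // at most 4 blocks running
let group = DispatchGroup()
for i in 0..<20 {
    gate.wait()                          // submitter blocks while 4 are in flight
    queue.async(group: group) {
        defer { gate.signal() }          // free a slot when the block finishes
        handleRequest(i)
    }
}
group.wait()

// Option 2: OperationQueue enforces the same bound declaratively.
let opQueue = OperationQueue()
opQueue.maxConcurrentOperationCount = 4  // never more than 4 operations at once
for i in 20..<40 {
    opQueue.addOperation { handleRequest(i) }
}
opQueue.waitUntilAllOperationsAreFinished()
```

Note that in the semaphore variant the *submitting* thread blocks in `gate.wait()`, whereas `maxConcurrentOperationCount` throttles without blocking the submitter.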
Yes, those are two common patterns. For the sake of future readers, the other solution to this “exhausting worker threads” problem is GCD's `dispatch_apply` (known as `concurrentPerform` in Swift), which performs concurrent operations in a manner that won't exhaust your pool of worker threads. But it is really only applicable when launching a whole series of tasks up front (e.g., when you want to parallelize a `for` loop), not the scenario you outline in your question. Still, for the record, `dispatch_apply`/`concurrentPerform` is the third common solution to this general problem.
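A minimal sketch of the `concurrentPerform` pattern (the array size and the squaring work are illustrative): GCD runs the iterations concurrently but cooperates with its thread pool, so a large iteration count cannot exhaust the worker threads.

```swift
import Dispatch

// Parallelize a for loop: compute 1,000 squares concurrently.
var squares = [Int](repeating: 0, count: 1_000)
squares.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: buffer.count) { i in
        buffer[i] = i * i   // each index is written by exactly one iteration
    }
}
```

The call blocks until all iterations finish, which is exactly why it suits up-front batch work rather than a server handling requests as they arrive.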
I cannot find information about this limit.
This used to be covered really nicely in WWDC 2012 video Asynchronous Design Patterns with Blocks, GCD, and XPC, but that video is no longer available (other WWDC 2012 videos are, but not that one, curiously). But they do walk through the limited worker thread issue in WWDC 2015 video Building Responsive and Efficient Apps with GCD.