I am using the Executors framework in Java to create thread pools for a multi-threaded application, and I have a question related to performance. I have an ...
Looking at the source code you'll realize that:
Executors.newFixedThreadPool(threadPoolSize);
is equivalent to:
return new ThreadPoolExecutor(threadPoolSize, threadPoolSize, 0L, TimeUnit.MILLISECONDS,
                              new LinkedBlockingQueue<Runnable>());
Since it doesn't provide an explicit RejectedExecutionHandler, the default AbortPolicy is used. It basically throws RejectedExecutionException once the queue is full. But the queue is unbounded, so it will never be full. Thus this executor accepts an infinite¹ number of tasks.
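You can see this for yourself with a quick sketch (the pool size and task count below are arbitrary, not taken from your code): no matter how many tasks you throw at it, nothing is ever rejected, the queue just grows.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UnboundedQueueDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Every single submission is accepted immediately; the surplus
        // simply piles up in the unbounded queue, the handler never fires.
        for (int i = 0; i < 100000; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(10); // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        System.out.println("all tasks accepted without rejection");
        pool.shutdown(); // already submitted tasks still get executed
    }
}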
Your declaration is much more complex and quite different:
new LinkedBlockingQueue<Runnable>(10000)
will cause the thread pool to reject tasks if more than 10000 are already waiting.
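For contrast, here is what a bounded queue does when you keep the default AbortPolicy (a sketch, not your exact declaration; the worker count and workload are made up): once the workers are busy and all 10000 slots are taken, the next submission throws.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        // 4 worker threads, at most 10000 queued tasks, default AbortPolicy
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(10000));
        try {
            for (int i = 0; i < 20000; i++) {
                pool.execute(() -> {
                    try {
                        Thread.sleep(100); // keep the workers busy
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } catch (RejectedExecutionException e) {
            // Thrown as soon as the 4 workers are busy and all 10000 slots are full
            System.out.println("task rejected: " + e);
        } finally {
            pool.shutdownNow();
        }
    }
}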
I don't understand what your RejectedExecutionHandler is doing. When the pool discovers it cannot put any more runnables into the queue, it calls your handler. In this handler you... try to put that Runnable into the queue again (which will fail or block in 99% of the cases). Finally, you swallow the exception. It seems like ThreadPoolExecutor.DiscardPolicy is what you are after.
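If silently dropping the overflow really is what you want, the built-in handler does it in one extra constructor argument (a sketch; only the queue bound of 10000 comes from your description, the rest is assumed):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DiscardingPool {
    static ExecutorService create(int threadPoolSize) {
        return new ThreadPoolExecutor(
                threadPoolSize, threadPoolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(10000),
                new ThreadPoolExecutor.DiscardPolicy()); // quietly drops tasks that no longer fit
    }
}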
Looking at your comments below, it seems like you are trying to block or somehow throttle clients if the task queue grows too large. I don't think blocking inside a RejectedExecutionHandler is a good idea. Instead, consider the CallerRunsPolicy rejection policy. It is not exactly the same, but close enough.
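With CallerRunsPolicy a rejected task is executed on the thread that called execute(), so a saturated pool pushes back on its producers instead of dropping work. A quick sketch with made-up sizes, small enough that the effect is easy to observe:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100; i++) {
            final int id = i;
            // Once both workers are busy and the 10 queue slots are taken,
            // execute() runs the task on the submitting thread itself,
            // which naturally slows the producer down (built-in throttling).
            pool.execute(() -> {
                try {
                    Thread.sleep(50); // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("task " + id + " ran on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}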
To wrap up: if you want to limit the number of pending tasks, your approach is almost right. If you only want to limit the number of concurrent threads, the first one-liner is enough.
¹ assuming Integer.MAX_VALUE (2^31 - 1) is infinity