ThreadPoolExecutor Block When Queue Is Full?

Asked by 无人及你 on 2020-11-29 17:42

I am trying to execute lots of tasks using a ThreadPoolExecutor. Below is a hypothetical example:

def workQueue = new ArrayBlockingQueue(3, false)
def threadPoolExecutor = new ThreadPoolExecutor(3, 3, 1L, TimeUnit.HOURS, workQueue)
for (int i = 0; i < 100000; i++)
    threadPoolExecutor.execute(runnable)

The problem is that I quickly get a RejectedExecutionException, since the number of tasks exceeds the size of the work queue. The behavior I want instead is for the main thread to block until there is room in the queue. What is the best way to accomplish this?
8 Answers
  • 2020-11-29 18:04

    I solved this problem using a custom RejectedExecutionHandler, which simply blocks the calling thread for a little while and then tries to submit the task again:

    public class BlockWhenQueueFull implements RejectedExecutionHandler {
    
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
    
            // The pool is full. Wait, then try again.
            try {
                long waitMs = 250;
                Thread.sleep(waitMs);
            } catch (InterruptedException interruptedException) {}
    
            executor.execute(r);
        }
    }
    

    This class can just be used in the thread-pool executor as a RejectedExecutionHandler like any other. In this example:

    def threadPoolExecutor = new ThreadPoolExecutor(3, 3, 1L, TimeUnit.HOURS, workQueue, new BlockWhenQueueFull())
    

    The only downside I see is that the calling thread might be blocked slightly longer than strictly necessary (up to 250ms). For many short-running tasks, the wait time could be decreased to 10ms or so. Moreover, since this executor is effectively being called recursively, very long waits for a thread to become available (hours) might result in a stack overflow.

    Nevertheless, I personally like this method. It's compact, easy to understand, and works well. Am I missing anything important?
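    Both downsides noted above (the fixed sleep and the recursion) can be avoided by blocking on the executor's work queue directly inside the handler. This is a sketch of that variant; the pool sizes and the demo in `main` are assumptions, not part of the original answer:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BlockOnQueueHandler implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            if (executor.isShutdown()) {
                throw new RejectedExecutionException("Executor has been shut down");
            }
            try {
                // put() blocks until the queue has capacity; no sleep, no recursion.
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while enqueuing", e);
            }
        }

        public static void main(String[] args) throws Exception {
            AtomicInteger done = new AtomicInteger();
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    3, 3, 1L, TimeUnit.HOURS,
                    new ArrayBlockingQueue<>(3), new BlockOnQueueHandler());
            for (int i = 0; i < 100; i++) {
                pool.execute(done::incrementAndGet);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(done.get());
        }
    }
    ```

    One caveat of this variant: enqueuing directly bypasses the executor's shutdown check, so a task put on the queue during a concurrent shutdown may never run.
    
    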

  • 2020-11-29 18:05

    Here is my code snippet in this case:

    public void executeBlocking( Runnable command ) {
        if ( threadPool == null ) {
            logger.error( "Thread pool '{}' not initialized.", threadPoolName );
            return;
        }
        ThreadPool threadPoolMonitor = this;
        boolean accepted = false;
        do {
            try {
                threadPool.execute( new Runnable() {
                    @Override
                    public void run() {
                        try {
                            command.run();
                        }
                        // to make sure that the monitor is freed on exit
                        finally {
                            // Notify all the threads waiting for the resource, if any.
                            synchronized ( threadPoolMonitor ) {
                                threadPoolMonitor.notifyAll();
                            }
                        }
                    }
                } );
                accepted = true;
            }
            catch ( RejectedExecutionException e ) {
                // Thread pool is full
                try {
                    // Block until one of the threads finishes its job and exits.
                    synchronized ( threadPoolMonitor ) {
                        threadPoolMonitor.wait();
                    }
                }
                catch ( InterruptedException ignored ) {
                    // return immediately
                    break;
                }
            }
        } while ( !accepted );
    }
    

    threadPool is a local instance of java.util.concurrent.ExecutorService which has been initialized already.
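    For reference, here is a self-contained sketch of the same wait/notify idea; the pool sizes, the bounded 100ms wait, and the demo in `main` are assumptions added here, not part of the original snippet:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BlockingSubmitDemo {
        private final ExecutorService threadPool =
                new ThreadPoolExecutor(2, 2, 1L, TimeUnit.HOURS, new ArrayBlockingQueue<>(2));
        private final Object monitor = new Object();

        public void executeBlocking(Runnable command) {
            boolean accepted = false;
            do {
                try {
                    threadPool.execute(() -> {
                        try {
                            command.run();
                        } finally {
                            // Wake any submitter waiting for queue space.
                            synchronized (monitor) {
                                monitor.notifyAll();
                            }
                        }
                    });
                    accepted = true;
                } catch (RejectedExecutionException e) {
                    try {
                        // Bounded wait guards against a notify that fires
                        // between the rejection and this wait().
                        synchronized (monitor) {
                            monitor.wait(100);
                        }
                    } catch (InterruptedException interrupted) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            } while (!accepted);
        }

        public static void main(String[] args) throws Exception {
            BlockingSubmitDemo demo = new BlockingSubmitDemo();
            AtomicInteger done = new AtomicInteger();
            for (int i = 0; i < 50; i++) {
                demo.executeBlocking(done::incrementAndGet);
            }
            demo.threadPool.shutdown();
            demo.threadPool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(done.get());
        }
    }
    ```

    The bounded `wait(100)` is a deliberate change from the original unbounded `wait()`: a notify can fire in the window between catching the rejection and entering the synchronized block, and the timeout prevents that race from hanging the submitter.
    
    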

  • 2020-11-29 18:12

    A fairly straightforward option is to wrap your BlockingQueue in an implementation that calls put(..) whenever offer(..) is invoked:

    public class BlockOnOfferAdapter<E> implements BlockingQueue<E> {
    
    (..)
    
      public boolean offer(E o) {
            try {
                delegate.put(o);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            return true;
      }
    
    (.. implement all other methods simply by delegating ..)
    
    }
    

    This works because put(..), unlike offer(..), blocks until space becomes available when the queue is full; see its contract:

        /**
         * Inserts the specified element into this queue, waiting if necessary
         * for space to become available.
         *
         * @param e the element to add
         * @throws InterruptedException if interrupted while waiting
         * @throws ClassCastException if the class of the specified element
         *         prevents it from being added to this queue
         * @throws NullPointerException if the specified element is null
         * @throws IllegalArgumentException if some property of the specified
         *         element prevents it from being added to this queue
         */
        void put(E e) throws InterruptedException;
    

    No catching of RejectedExecutionException or complicated locking necessary.
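    A minimal runnable sketch of the same idea, using an anonymous ArrayBlockingQueue subclass instead of a full delegating wrapper; the pool sizes and the demo are assumptions:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BlockingQueueOfferDemo {
        public static void main(String[] args) throws Exception {
            // offer() normally returns false when the queue is full; here it
            // delegates to put(), which blocks until space becomes available.
            BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(3) {
                @Override
                public boolean offer(Runnable r) {
                    try {
                        put(r);
                        return true;
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return false;
                    }
                }
            };
            AtomicInteger done = new AtomicInteger();
            ThreadPoolExecutor pool = new ThreadPoolExecutor(3, 3, 1L, TimeUnit.HOURS, queue);
            for (int i = 0; i < 100; i++) {
                pool.execute(done::incrementAndGet);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(done.get());
        }
    }
    ```

    Note that core and maximum pool size are equal here on purpose: ThreadPoolExecutor only grows past the core size when offer() reports a full queue, which this override never does.
    
    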

  • 2020-11-29 18:15

    You could use a semaphore to block threads from going into the pool.

    ExecutorService service = new ThreadPoolExecutor(
        3, 
        3, 
        1, 
        TimeUnit.HOURS, 
        new ArrayBlockingQueue<>(6, false)
    );
    
    Semaphore lock = new Semaphore(6); // equal to queue capacity
    
    for (int i = 0; i < 100000; i++ ) {
        try {
            lock.acquire();
            service.submit(() -> {
                try {
                  task.run();
                } finally {
                  lock.release();
                }
            });
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
    

    Some gotchas:

    • Only use this pattern with a fixed thread pool. Because the semaphore keeps the queue from filling up, the executor never sees a full queue and therefore never creates threads beyond the core pool size. Check out the java docs on ThreadPoolExecutor for more details: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html There is a way around this, but it is out of scope of this answer.
    • Queue size should be higher than the number of core threads. If we were to make the queue size 3, what would end up happening is:

      • T0: all three threads are doing work, the queue is empty, no permits are available.
      • T1: Thread 1 finishes, releases a permit.
      • T2: Thread 1 polls the queue for new work, finds none, and waits.
      • T3: Main thread submits work into the pool, thread 1 starts work.

      In the example above, the main thread ends up blocking thread 1 between T1 and T3. It may seem like a short period, but multiply the frequency by days and months and those short periods add up to a large amount of wasted time.

  • 2020-11-29 18:22

    This is what I ended up doing:

    int NUM_THREADS = 6;
    Semaphore lock = new Semaphore(NUM_THREADS);
    ExecutorService pool = Executors.newCachedThreadPool();
    
    for (int i = 0; i < 100000; i++) {
        try {
            lock.acquire();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        pool.execute(() -> {
            try {
                // Task logic
            } finally {
                lock.release();
            }
        });
    }
    
  • 2020-11-29 18:24

    What you need to do is wrap your ThreadPoolExecutor in an Executor that explicitly limits the number of concurrently executing operations inside it:

     private static class BlockingExecutor implements Executor {
    
        final Semaphore semaphore;
        final Executor delegate;
    
        private BlockingExecutor(final int concurrentTasksLimit, final Executor delegate) {
            semaphore = new Semaphore(concurrentTasksLimit);
            this.delegate = delegate;
        }
    
        @Override
        public void execute(final Runnable command) {
            try {
                semaphore.acquire();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
                return;
            }
    
            final Runnable wrapped = () -> {
                try {
                    command.run();
                } finally {
                    semaphore.release();
                }
            };
    
            delegate.execute(wrapped);
    
        }
    }
    

    You can set concurrentTasksLimit to the threadPoolSize + queueSize of your delegate executor, and it will pretty much solve your problem.
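    A sketch of how this wrapper might be wired up and exercised; the permit count of 9 (3 threads + 6 queue slots), the pool sizes, and the demo loop are assumptions:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BlockingExecutorDemo {
        public static void main(String[] args) throws Exception {
            ThreadPoolExecutor delegate = new ThreadPoolExecutor(
                    3, 3, 1L, TimeUnit.HOURS, new ArrayBlockingQueue<>(6));
            // Permit count = pool size (3) + queue capacity (6).
            Semaphore semaphore = new Semaphore(9);
            AtomicInteger done = new AtomicInteger();

            // Inlined version of the BlockingExecutor wrapper above.
            Executor blocking = command -> {
                try {
                    semaphore.acquire(); // blocks once 9 tasks are in flight
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                delegate.execute(() -> {
                    try {
                        command.run();
                    } finally {
                        semaphore.release();
                    }
                });
            };

            for (int i = 0; i < 100; i++) {
                blocking.execute(done::incrementAndGet);
            }
            delegate.shutdown();
            delegate.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(done.get());
        }
    }
    ```

    Since at most 9 tasks can be in flight (3 running plus 6 queued), the delegate's queue never overflows and no RejectedExecutionException is ever thrown.
    
    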
