What determines the number of threads a Java ForkJoinPool creates?

Asked by 北恋 on 2020-12-02 11:00

As far as I understand ForkJoinPool, the pool creates a fixed number of threads (default: the number of cores) and never creates more threads (unless the application indicates a need for them, e.g. via managedBlock). However, I observe the pool growing to far more threads than that. What determines the number of threads a ForkJoinPool creates?

5 Answers
  • 2020-12-02 11:32

    Strict, fully strict, and terminally strict have to do with processing a directed acyclic graph (DAG). You can google those terms to get a full understanding of them. That is the type of processing the framework was designed for. Look at the code in the API for Recursive...: the framework relies on your compute() code to fork other compute() tasks and then do a join(). Each task does a single join(), just like processing a DAG (see the sketch at the end of this answer).

    You are not doing DAG processing. You are forking many new tasks and waiting (join()) on each one. Have a read of the source code. It is horrendously complex, but you may be able to figure it out. The framework does not do proper task management. Where is it going to put the waiting task when it does a join()? There is no suspended queue; that would require a monitor thread to constantly look at the queue to see what is finished. This is why the framework uses "continuation threads". When one task does a join(), the framework assumes it is waiting for a single lower task to finish. When many join() calls are pending, the thread cannot continue, so a helper or continuation thread needs to exist.

    As noted above, you need a scatter-gather type fork-join process. There you can fork as many tasks as you need and gather them all once they finish.
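
    Not part of the original answer, but to make the "single join() per task" DAG pattern concrete, here is a minimal sketch (class and field names such as SumTask and THRESHOLD are mine): each task splits its range in two, forks one half, computes the other half itself, and performs exactly one join().

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {

        private static final int THRESHOLD = 1_000; // below this, sum sequentially

        private final long[] data;
        private final int from, to;

        public SumTask(long[] data, int from, int to) {
            this.data = data;
            this.from = from;
            this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }
            int mid = (from + to) >>> 1;
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                          // scatter one half
            long rightResult = right.compute();   // work on the other half directly
            return rightResult + left.join();     // the single join() of this task
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            Arrays.fill(data, 1L);
            long total = new ForkJoinPool(2).invoke(new SumTask(data, 0, data.length));
            System.out.println(total); // prints 1000000
        }
    }

    With this shape a joining worker can usually execute its own forked child (or steal other subtasks) instead of blocking, so the pool typically stays at its configured parallelism.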

  • 2020-12-02 11:39

    From the source comments:

    Compensating: Unless there are already enough live threads, method tryPreBlock() may create or re-activate a spare thread to compensate for blocked joiners until they unblock.

    I think what's happening is that you're not finishing any of the tasks very quickly, and since there aren't any available worker threads when a task blocks in a join, a compensating thread gets created.
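
    Not part of the original answer, but the same compensation mechanism is exposed through the public ForkJoinPool.managedBlock(...) API: a worker announces that it is about to block, and the pool may add a spare thread to preserve parallelism. A minimal sketch (class names SleepBlocker and ManagedBlockDemo are mine; the pool size you observe will vary):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.LockSupport;

    public class ManagedBlockDemo {

        // A blocker that simply parks the calling worker for one second.
        static class SleepBlocker implements ForkJoinPool.ManagedBlocker {
            private boolean done;

            @Override
            public boolean block() {
                LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(1));
                done = true;
                return true; // no further blocking is necessary
            }

            @Override
            public boolean isReleasable() {
                return done;
            }
        }

        public static void main(String[] args) {
            ForkJoinPool pool = new ForkJoinPool(2); // parallelism of 2
            for (int i = 0; i < 4; i++) {
                pool.submit(() -> {
                    try {
                        // Tell the pool this worker is about to block, so it
                        // may create a compensating thread for the queued work.
                        ForkJoinPool.managedBlock(new SleepBlocker());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            // Give the tasks a moment to start blocking, then inspect the pool;
            // poolSize is often larger than the parallelism at this point.
            LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500));
            System.out.println("parallelism=" + pool.getParallelism()
                    + " poolSize=" + pool.getPoolSize());
        }
    }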

  • 2020-12-02 11:48

    There are related questions on Stack Overflow:

    ForkJoinPool stalls during invokeAll/join

    ForkJoinPool seems to waste a thread

    I made a runnable, stripped-down version of what is happening (JVM arguments I used: -Xms256m -Xmx1024m -Xss8m):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;
    
    public class Test1 {
    
        private static ForkJoinPool pool = new ForkJoinPool(2);
    
        private static class SomeAction extends RecursiveAction {
    
            private int counter;          // recursion depth remaining
            private int childrenCount=80; // number of children to spawn at each level
            private int idx;              // child index, just for displaying
    
            private SomeAction(int counter, int idx) {
                this.counter = counter;
                this.idx = idx;
            }
    
            @Override
            protected void compute() {
    
                System.out.println(
                    "counter=" + counter + "." + idx +
                    " activeThreads=" + pool.getActiveThreadCount() +
                    " runningThreads=" + pool.getRunningThreadCount() +
                    " poolSize=" + pool.getPoolSize() +
                    " queuedTasks=" + pool.getQueuedTaskCount() +
                    " queuedSubmissions=" + pool.getQueuedSubmissionCount() +
                    " parallelism=" + pool.getParallelism() +
                    " stealCount=" + pool.getStealCount());
                if (counter <= 0) return;
    
                // Fork all the children first...
                List<SomeAction> list = new ArrayList<>(childrenCount);
                for (int i = 0; i < childrenCount; i++) {
                    SomeAction next = new SomeAction(counter - 1, i);
                    list.add(next);
                    next.fork();
                }

                // ...then join them in the same (oldest-first) order.
                for (SomeAction action : list) {
                    action.join();
                }
            }
        }
    
        public static void main(String[] args) throws Exception{
            pool.invoke(new SomeAction(2,0));
        }
    }
    

    Apparently, when you perform a join, the current thread sees that the required task is not yet completed and takes another task for itself to do.

    It happens in java.util.concurrent.ForkJoinWorkerThread#joinTask.

    However, this new task spawns more of the same tasks, and they cannot find free threads in the pool, because those threads are blocked in join(). And since the pool has no way to know how long it will take for them to be released (a thread could be in an infinite loop or deadlocked forever), new threads are spawned (compensating for the blocked joiners, as Louis Wasserman mentioned): java.util.concurrent.ForkJoinPool#signalWork

    So to prevent such a scenario you need to avoid spawning and joining tasks recursively like this (one alternative is sketched at the end of this answer).

    For example, if in the above code you set the initial counter parameter to 1, the active thread count will be 2, even if you increase childrenCount tenfold.

    Also note that while the number of active threads increases, the number of running threads stays less than or equal to the parallelism.
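
    Not part of the original answer: one way to avoid the blocking join-all-in-fork-order loop is to hand each batch of children to ForkJoinTask.invokeAll(...), which forks and joins the batch for you. The sketch below is my own variant of the Test1 example above (call it Test2); whether the pool stays at its configured parallelism can still depend on the JDK version, so treat it as an illustration rather than a guaranteed fix.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    public class Test2 {

        private static final ForkJoinPool pool = new ForkJoinPool(2);

        private static class SomeAction extends RecursiveAction {

            private final int counter;            // recursion depth remaining
            private final int childrenCount = 80; // children spawned per level
            private final int idx;                // child index, for display only

            private SomeAction(int counter, int idx) {
                this.counter = counter;
                this.idx = idx;
            }

            @Override
            protected void compute() {
                System.out.println("counter=" + counter + "." + idx
                        + " poolSize=" + pool.getPoolSize()
                        + " parallelism=" + pool.getParallelism());
                if (counter <= 0) return;

                List<SomeAction> children = new ArrayList<>(childrenCount);
                for (int i = 0; i < childrenCount; i++) {
                    children.add(new SomeAction(counter - 1, i));
                }
                // Let the framework fork and join the whole batch instead of a
                // manual fork-all loop followed by join-all in fork order.
                invokeAll(children);
            }
        }

        public static void main(String[] args) {
            pool.invoke(new SomeAction(2, 0));
        }
    }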

  • 2020-12-02 11:48

    Neither of the code snippets posted by Holger Peine and elusive-code actually follows the recommended practice that appeared in the javadoc for version 1.8:

    In the most typical usages, a fork-join pair act like a call (fork) and return (join) from a parallel recursive function. As is the case with other forms of recursive calls, returns (joins) should be performed innermost-first. For example, a.fork(); b.fork(); b.join(); a.join(); is likely to be substantially more efficient than joining a before b.

    In both cases the ForkJoinPool was instantiated without specifying asyncMode, so the pool is constructed with asyncMode=false, which is the default:

    @param asyncMode if true, establishes local first-in-first-out scheduling mode for forked tasks that are never joined. This mode may be more appropriate than default locally stack-based mode in applications in which worker threads only process event-style asynchronous tasks. For default value, use false.

    That way the work queue is effectively LIFO:
    head -> | t4 | t3 | t2 | t1 | ... | <- tail

    So in the snippets they fork() all tasks, pushing them onto the stack, and then join() them in the same order, that is from the deepest task (t1) up to the topmost (t4), effectively blocking until some other thread steals (t1), then (t2), and so on. Since there are enough tasks to block all pool threads (task_count >> pool.getParallelism()), compensation kicks in as Louis Wasserman described; a reversed join loop is sketched below.
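
    To make that concrete (my own illustration, not part of the answer): a hypothetical drop-in replacement for the join loop in the Test1 snippet above joins newest-first (t4, t3, t2, t1) instead of oldest-first, following the innermost-first advice from the javadoc:

                for (int i = list.size() - 1; i >= 0; i--) {
                    list.get(i).join(); // most recently forked task first
                }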

  • 2020-12-02 11:58

    It is worth noting that the output of the code posted by elusive-code depends on the version of Java. Running the code on Java 8, I see this output:

    ...
    counter=0.73 activeThreads=45 runningThreads=5 poolSize=49 queuedTasks=105 queuedSubmissions=0 parallelism=2 stealCount=3056
    counter=0.75 activeThreads=46 runningThreads=1 poolSize=51 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=3158
    counter=0.77 activeThreads=47 runningThreads=3 poolSize=51 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=3157
    counter=0.74 activeThreads=45 runningThreads=3 poolSize=51 queuedTasks=5 queuedSubmissions=0 parallelism=2 stealCount=3153
    

    But running the same code on Java 11, the output is different:

    ...
    counter=0.75 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=4 queuedSubmissions=0 parallelism=2 stealCount=0
    counter=0.76 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=3 queuedSubmissions=0 parallelism=2 stealCount=0
    counter=0.77 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=2 queuedSubmissions=0 parallelism=2 stealCount=0
    counter=0.78 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=1 queuedSubmissions=0 parallelism=2 stealCount=0
    counter=0.79 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=0
    