ForkJoinPool

Java: How to get finished threads to pick up tasks from running threads

六眼飞鱼酱① · Submitted 2019-12-04 18:06:17
I am working on a multithreaded application whose tasks have varying run times. When one thread finishes, is there a way for it to take over some tasks from a thread that is still running? Here is an example: I start my program with 5 threads, each with 50 tasks. When the quickest thread finishes, another thread still has 40 tasks to complete. How can I get the finished thread to take 20 tasks from the other thread, so that each continues working on 20 apiece, rather than waiting idle while the running thread completes the remaining 40?

Use a ForkJoinPool. A ForkJoinPool differs from other kinds …
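The suggested approach can be sketched as follows. Instead of pre-assigning 50 tasks to each of 5 threads, all 250 tasks are submitted to one pool and idle workers steal queued work from busy ones. The task counts and the squaring workload are illustrative, not from the original answer:

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class WorkStealingDemo {
    public static void main(String[] args) throws Exception {
        // One shared pool of 5 workers; no task is pinned to a thread,
        // so a worker that finishes early steals work that is still queued.
        ForkJoinPool pool = new ForkJoinPool(5);
        List<Integer> results = pool.submit(() ->
            IntStream.range(0, 250)
                     .parallel()
                     .map(i -> i * i)   // stand-in for a task of varying cost
                     .boxed()
                     .collect(Collectors.toList())
        ).get();
        System.out.println(results.size());
        pool.shutdown();
    }
}
```

Running a parallel stream from inside `pool.submit` makes the stream use that pool's workers rather than the common pool.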

ThreadPoolExecutor vs ForkJoinPool: stealing subtasks

 ̄綄美尐妖づ · Submitted 2019-12-04 03:50:42
From the Java docs:

A ForkJoinPool differs from other kinds of ExecutorService mainly by virtue of employing work-stealing: all threads in the pool attempt to find and execute subtasks created by other active tasks (eventually blocking waiting for work if none exist). This enables efficient processing when most tasks spawn other subtasks (as do most ForkJoinTasks). When setting asyncMode to true in constructors, ForkJoinPools may also be appropriate for use with event-style tasks that are never joined.

After going through the ForkJoinPool example below, unlike with ThreadPoolExecutor, I have not seen …
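To make the "tasks spawn other subtasks" point from the docs concrete, here is a minimal sketch; the splitting threshold and summation workload are my own illustration, not from the question:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Each task forks child subtasks. A plain ThreadPoolExecutor thread would
// block on join() while the children sit in the shared queue; ForkJoinPool
// workers instead execute (and steal) pending subtasks while waiting.
class SumTask extends RecursiveTask<Long> {
    private final long from, to;
    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override protected Long compute() {
        if (to - from <= 1_000) {              // small enough: compute directly
            long s = 0;
            for (long i = from; i < to; i++) s += i;
            return s;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        left.fork();                            // queued; may be stolen
        long right = new SumTask(mid, to).compute();
        return left.join() + right;
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long sum = new ForkJoinPool().invoke(new SumTask(0, 1_000_000));
        System.out.println(sum);
    }
}
```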

Java 8 parallel stream and ThreadLocal

守給你的承諾、 · Submitted 2019-12-03 06:57:32
Question: I am trying to figure out how I can propagate a ThreadLocal value into a Java 8 parallel stream. Consider this:

```java
public class ThreadLocalTest {
    public static void main(String[] args) {
        ThreadContext.set("MAIN");
        System.out.printf("Main Thread: %s\n", ThreadContext.get());
        IntStream.range(0, 8).boxed().parallel().forEach(n -> {
            System.out.printf("Parallel Consumer - %d: %s\n", n, ThreadContext.get());
        });
    }

    private static class ThreadContext {
        private static ThreadLocal<String> val = ThreadLocal…
```
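One common workaround is to capture the value into an effectively final local variable before the stream starts, since the common pool's worker threads have their own, unset ThreadLocal slots. A sketch, with the question's ThreadContext wrapper replaced by a plain ThreadLocal:

```java
import java.util.stream.IntStream;

public class ThreadLocalCapture {
    private static final ThreadLocal<String> CTX =
        ThreadLocal.withInitial(() -> "NONE");

    public static void main(String[] args) {
        CTX.set("MAIN");
        // Inside the parallel lambda, CTX.get() on a worker thread would
        // return the initial "NONE". Capturing the value first makes it
        // visible to every worker via the lambda's closure.
        final String captured = CTX.get();
        long hits = IntStream.range(0, 8).parallel()
                             .filter(n -> "MAIN".equals(captured))
                             .count();
        System.out.println(hits);
    }
}
```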

Detailed difference between Java8 ForkJoinPool and Executors.newWorkStealingPool?

♀尐吖头ヾ · Submitted 2019-12-03 05:08:31
Question: What is the low-level difference between using:

```java
ForkJoinPool fjp = new ForkJoinPool(X);
```

and

```java
ExecutorService ex = Executors.newWorkStealingPool(X);
```

where X is the desired level of parallelism, i.e. the number of threads running? According to the docs they look similar. Also, which one is more appropriate and safe under normal use? I have 130 million entries to write into a BufferedWriter and to sort using Unix sort by the 1st column. Also let me know how many threads to use, if possible. Note: My …

Java support for three different concurrency models

一笑奈何 · Submitted 2019-12-03 04:52:26
Question: I am going through different concurrency models in a multi-threading environment (http://tutorials.jenkov.com/java-concurrency/concurrency-models.html). The article highlights three concurrency models:

Parallel Workers. The first concurrency model is what I call the parallel worker model. Incoming jobs are assigned to different workers.

Assembly Line. The workers are organized like workers at an assembly line in a factory. Each worker performs only a part of the full job. When that part is …
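The parallel worker model from the article can be sketched with a plain ExecutorService; the pool size and the job are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWorkers {
    public static void main(String[] args) throws Exception {
        // Parallel-worker model: each incoming job is handed in full
        // to one worker thread from a shared pool.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        Future<Integer> job = workers.submit(() -> 6 * 7); // a complete job
        System.out.println(job.get());
        workers.shutdown();
    }
}
```

In the assembly-line model, by contrast, each worker would perform only one stage and pass partial results downstream (for example via queues).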

How to configure and tune Akka Dispatchers

心已入冬 · Submitted 2019-12-02 21:03:15
I'm looking over the documentation here: http://doc.akka.io/docs/akka/2.3.3/java/dispatchers.html We're using Akka in such a way that we have two separate dispatchers (default fork-join executors) for different actors. We're now running into some performance issues, and we're looking into how we can tune the dispatcher configuration parameters and see how they affect the performance of the application. I've looked over the documentation but don't really understand the configuration parameters. For example, just for the simple default fork-join-executor dispatcher: what are these, and how …
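For reference, the fork-join-executor section of a dispatcher looks like the sketch below. The key names follow the Akka 2.3 reference configuration; the values are placeholders to tune, not recommendations:

```hocon
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    # Minimum number of threads the pool may shrink to
    parallelism-min = 2
    # Desired threads = ceil(available processors * factor),
    # then clamped into [parallelism-min, parallelism-max]
    parallelism-factor = 2.0
    # Maximum number of threads the pool may grow to
    parallelism-max = 10
  }
  # Messages an actor processes per turn before yielding the thread
  throughput = 5
}
```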

Detailed difference between Java8 ForkJoinPool and Executors.newWorkStealingPool?

落爺英雄遲暮 · Submitted 2019-12-02 18:22:50
What is the low-level difference between using:

```java
ForkJoinPool fjp = new ForkJoinPool(X);
```

and

```java
ExecutorService ex = Executors.newWorkStealingPool(X);
```

where X is the desired level of parallelism, i.e. the number of threads running? According to the docs they look similar. Also, which one is more appropriate and safe under normal use? I have 130 million entries to write into a BufferedWriter and to sort using Unix sort by the 1st column. Also let me know how many threads to use, if possible. Note: My system has 8 core processors and 32 GB RAM.

Work stealing is a technique used by modern thread-pools in …
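A quick way to see the relationship in code: Executors.newWorkStealingPool is essentially a factory for a ForkJoinPool created in async (FIFO) mode, whereas the plain constructor defaults to LIFO queueing. A sketch using getAsyncMode to report the queueing mode:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;

public class PoolComparison {
    public static void main(String[] args) {
        ExecutorService stealing = Executors.newWorkStealingPool(4);
        ForkJoinPool plain = new ForkJoinPool(4);

        // newWorkStealingPool returns a ForkJoinPool under the hood...
        System.out.println(stealing.getClass().getSimpleName());
        // ...constructed with asyncMode = true (FIFO local queues,
        // suited to event-style tasks that are never joined)...
        System.out.println(((ForkJoinPool) stealing).getAsyncMode());
        // ...while the plain constructor defaults to asyncMode = false.
        System.out.println(plain.getAsyncMode());

        stealing.shutdown();
        plain.shutdown();
    }
}
```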

Deadlock happens if I use lambda in parallel stream but it doesn't happen if I use anonymous class instead? [duplicate]

拟墨画扇 · Submitted 2019-12-02 00:37:56
This question already has an answer here: Why does parallel stream with lambda in static initializer cause a deadlock? (3 answers)

The following code leads to a deadlock (on my PC):

```java
public class Test {
    static {
        final int SUM = IntStream.range(0, 100)
                                 .parallel()
                                 .reduce((n, m) -> n + m)
                                 .getAsInt();
    }

    public static void main(String[] args) {
        System.out.println("Finished");
    }
}
```

But if I replace the reduce lambda argument with an anonymous class, it does not deadlock:

```java
public class Test {
    static {
        final int SUM = IntStream.range(0, 100)
                                 .parallel()
                                 .reduce(new IntBinaryOperator() {
                                     @Override
                                     public int …
```
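As the linked duplicate explains, the lambda desugars to a static method of the class being initialized, so common-pool workers invoking it block on that class's initialization lock while main holds it. A sketch of the usual fix (my own illustration): run the reduction after class initialization instead of inside the static initializer:

```java
import java.util.stream.IntStream;

public class NoDeadlock {
    public static void main(String[] args) {
        // The lambda body becomes a private static method of NoDeadlock,
        // but by the time workers call it, <clinit> has long finished,
        // so nothing blocks on the class-initialization lock.
        int sum = IntStream.range(0, 100).parallel()
                           .reduce(0, Integer::sum);
        System.out.println(sum);
    }
}
```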

ForkJoinPool resets thread interrupted state

跟風遠走 · Submitted 2019-12-01 19:05:44
I just noticed the following phenomenon when cancelling a Future returned by ForkJoinPool. Given the following example code:

```java
ForkJoinPool pool = new ForkJoinPool();
Future<?> fut = pool.submit(new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        while (true) {
            if (Thread.currentThread().isInterrupted()) { // <-- never true
                System.out.println("interrupted");
                throw new InterruptedException();
            }
        }
    }
});
Thread.sleep(1000);
System.out.println("cancel");
fut.cancel(true);
```

The program never prints interrupted. The docs of ForkJoinTask#cancel(boolean) say: mayInterruptIfRunning - …
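Since ForkJoinTask.cancel(true) does not interrupt the running worker thread, one workaround is cooperative cancellation through an explicit flag instead of the interrupted status. A sketch (the flag-based protocol is my own illustration, not from the question):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicBoolean;

public class CooperativeCancel {
    public static void main(String[] args) throws Exception {
        AtomicBoolean cancelled = new AtomicBoolean(false);
        ForkJoinPool pool = new ForkJoinPool();
        Future<?> fut = pool.submit(() -> {
            // Poll the shared flag rather than Thread.isInterrupted(),
            // which a ForkJoinPool cancel never sets on the worker.
            while (!cancelled.get()) { /* busy work */ }
            System.out.println("stopped");
        });
        Thread.sleep(100);
        cancelled.set(true);   // request cancellation cooperatively
        fut.get();             // completes normally once the flag is seen
        pool.shutdown();
    }
}
```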

Calling sequential on parallel stream makes all previous operations sequential

天大地大妈咪最大 · Submitted 2019-11-30 12:59:33
I've got a significant set of data, and I want to call a slow but clean method and then call a fast method with side effects on the result of the first. I'm not interested in the intermediate results, so I would prefer not to collect them. The obvious solution is to create a parallel stream, make the slow call, make the stream sequential again, and make the fast call. The problem is that ALL the code then executes in a single thread; there is no actual parallelism. Example code:

```java
@Test
public void testParallelStream() throws ExecutionException, InterruptedException {
    ForkJoinPool forkJoinPool = new ForkJoinPool(Runtime.getRuntime()…
```
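Because sequential() switches the whole pipeline back to sequential execution, including the stages before it, one workaround is to split the work into two pipelines. A sketch with stand-in operations, at the cost of collecting the intermediate results the question hoped to avoid:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TwoPhasePipeline {
    public static void main(String[] args) {
        // Phase 1: the slow, clean stage runs in parallel and its
        // results are collected.
        List<Integer> slowResults = IntStream.range(0, 8).parallel()
                .map(n -> n * n)              // stand-in for the slow call
                .boxed()
                .collect(Collectors.toList());

        // Phase 2: a fresh stream is sequential by default, so the
        // fast, side-effecting stage runs on a single thread.
        int sum = slowResults.stream()
                .mapToInt(Integer::intValue)  // stand-in for the fast call
                .sum();
        System.out.println(sum);
    }
}
```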