In the current scenario, we have a set of APIs as listed below:
Consumer start();
Consumer performDailyAggregates();
Consumer …
As Andreas pointed out, Consumer::andThen is an associative function; while the resulting consumer may have a different internal structure, it is still equivalent.
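A quick sketch (not from the original question) makes the associativity visible: both groupings perform the same actions in the same order, even though the nesting differs.

```java
import java.util.function.Consumer;

public class AndThenAssociativity {
    public static void main(String[] args) {
        Consumer<StringBuilder> a = sb -> sb.append("a");
        Consumer<StringBuilder> b = sb -> sb.append("b");
        Consumer<StringBuilder> c = sb -> sb.append("c");

        StringBuilder left = new StringBuilder();
        StringBuilder right = new StringBuilder();

        // (a andThen b) andThen c — left-nested grouping
        a.andThen(b).andThen(c).accept(left);
        // a andThen (b andThen c) — right-nested grouping
        a.andThen(b.andThen(c)).accept(right);

        System.out.println(left + " == " + right); // prints "abc == abc"
    }
}
```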
But let's debug it:

public static void main(String[] args) {
    performAllTasks(IntStream.range(0, 10)
        .mapToObj(i -> new DebuggableConsumer("" + i)), new Object());
}
private static <T> void performAllTasks(Stream<Consumer<T>> consumerList, T data) {
    Consumer<T> reduced = consumerList.reduce(Consumer::andThen).orElse(x -> {});
    reduced.accept(data);
    System.out.println(reduced);
}
static class DebuggableConsumer implements Consumer<Object> { /* accept() prints this consumer's name; toString() renders the nesting produced by andThen; body elided */ }
will print
0
1
2
3
4
5
6
7
8
9
combined
├─combined
│ ├─combined
│ │ ├─combined
│ │ │ ├─combined
│ │ │ │ ├─combined
│ │ │ │ │ ├─combined
│ │ │ │ │ │ ├─combined
│ │ │ │ │ │ │ ├─combined
│ │ │ │ │ │ │ │ ├─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@378fd1ac
│ │ │ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@49097b5d
│ │ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@6e2c634b
│ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@37a71e93
│ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7e6cbb7a
│ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7c3df479
│ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7106e68e
│ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7eda2dbb
│ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@6576fe71
└─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@76fb509a
whereas changing the reduction code to
private static <T> void performAllTasks(Stream<Consumer<T>> consumerList, T data) {
    Consumer<T> reduced = consumerList.parallel().reduce(Consumer::andThen).orElse(x -> {});
    reduced.accept(data);
    System.out.println(reduced);
}
prints on my machine
0
1
2
3
4
5
6
7
8
9
combined
├─combined
│ ├─combined
│ │ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@49097b5d
│ │ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@6e2c634b
│ └─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@37a71e93
│ └─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7e6cbb7a
│ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7c3df479
└─combined
├─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7106e68e
│ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7eda2dbb
└─combined
├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@6576fe71
└─combined
├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@76fb509a
└─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@300ffa5d
illustrating the point of Andreas’ answer, but also highlighting an entirely different problem. You may magnify the effect by using, e.g., IntStream.range(0, 100) in the example code.
The result of the parallel evaluation is actually better than the sequential evaluation, as the sequential evaluation creates an unbalanced tree. When accepting an arbitrary stream of consumers, this can become an actual performance issue or even lead to a StackOverflowError when trying to evaluate the resulting consumer.
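To see that failure mode concretely, here is a sketch (the element count is chosen arbitrarily; the exact limit depends on the JVM's stack size). The sequential reduction produces a left-nested chain whose accept() recurses once per element:

```java
import java.util.function.Consumer;
import java.util.stream.IntStream;

public class DeepChain {
    public static void main(String[] args) {
        // Sequential reduce left-folds, yielding a deeply nested chain;
        // evaluating it recurses once per element.
        Consumer<Object> reduced = IntStream.range(0, 1_000_000)
            .<Consumer<Object>>mapToObj(i -> x -> {})
            .reduce(Consumer::andThen)
            .orElse(x -> {});
        try {
            reduced.accept(new Object());
            System.out.println("completed");
        } catch(StackOverflowError e) {
            System.out.println("StackOverflowError while evaluating the chain");
        }
    }
}
```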
For any nontrivial number of consumers, you actually want a balanced consumer tree, but using a parallel stream for that is not the right solution, as a) Consumer::andThen is a cheap operation with no real benefit from parallel evaluation, and b) the balancing would depend on unrelated properties, like the nature of the stream source and the number of CPU cores, which determine when the reduction falls back to the sequential algorithm.
Of course, the simplest solution would be
private static <T> void performAllTasks(Stream<Consumer<T>> consumers, T data) {
    consumers.forEachOrdered(c -> c.accept(data));
}
But when you want to construct a compound Consumer for reuse, you may use
private static final int ITERATION_THRESHOLD = 16; // tune yourself
public static <T> Consumer<T> combineAllTasks(Stream<Consumer<T>> consumers) {
    List<Consumer<T>> consumerList = consumers.collect(Collectors.toList());
    if(consumerList.isEmpty()) return t -> {};
    if(consumerList.size() == 1) return consumerList.get(0);
    if(consumerList.size() < ITERATION_THRESHOLD)
        return balancedReduce(consumerList, Consumer::andThen, 0, consumerList.size());
    return t -> consumerList.forEach(c -> c.accept(t));
}
private static <T> T balancedReduce(List<T> l, BinaryOperator<T> f, int start, int end) {
    if(end - start > 2) {
        int mid = (start + end) >>> 1;
        return f.apply(balancedReduce(l, f, start, mid), balancedReduce(l, f, mid, end));
    }
    T t = l.get(start++);
    if(start < end) t = f.apply(t, l.get(start));
    return t;
}
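To illustrate how balancedReduce groups its inputs, here is a small sketch using string concatenation as the combining function and parenthesizing each step to make the tree shape visible (the reduction is repeated here so the example is self-contained):

```java
import java.util.List;
import java.util.function.BinaryOperator;

public class BalancedReduceDemo {
    // Same divide-and-conquer reduction as above
    static <T> T balancedReduce(List<T> l, BinaryOperator<T> f, int start, int end) {
        if(end - start > 2) {
            int mid = (start + end) >>> 1;
            return f.apply(balancedReduce(l, f, start, mid), balancedReduce(l, f, mid, end));
        }
        T t = l.get(start++);
        if(start < end) t = f.apply(t, l.get(start));
        return t;
    }

    public static void main(String[] args) {
        // Each combination step is wrapped in parentheses to expose the grouping
        String tree = balancedReduce(List.of("a", "b", "c", "d", "e"),
                                     (x, y) -> "(" + x + y + ")", 0, 5);
        System.out.println(tree); // prints "((ab)(c(de)))" — a balanced grouping
    }
}
```

A sequential left fold would instead produce "((((ab)c)d)e)", whose depth grows linearly with the number of elements.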
The code will provide a single Consumer, just using a loop when the number of consumers exceeds a threshold. This is the simplest and most efficient solution for a larger number of consumers; in fact, you could drop all the other approaches for smaller numbers and still get reasonable performance…
Note that this still doesn’t hinder parallel processing of the stream of consumers, if their construction really benefits from it.