I was watching a presentation on Java, and at one point, the lecturer said:
"Mutability is OK, sharing is nice, shared mutability is devil's work."
In the first example, if you were to use parallel(), you'd have no guarantee about how the insertions happen (for example, multiple threads trying to insert an element into the same position).
collect(...), on the other hand, when run in parallel, splits the work, gathers the results into intermediate containers, and then merges them into the final list, ensuring both order and safety.
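To make that concrete, here is a minimal sketch (the class and method names are mine) showing that collect(toList()) on a parallel stream still produces the elements in encounter order:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ParallelCollectDemo {
    static List<Integer> doubleEvens(List<Integer> numbers) {
        // collect(toList()) gives each worker thread its own intermediate list
        // and merges them at the end, so the result is deterministic and in
        // encounter order even on a parallel stream.
        return numbers.parallelStream()
                .filter(e -> e % 2 == 0)
                .map(e -> e * 2)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubleEvens(Arrays.asList(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)));
        // always prints [4, 8, 4, 8]
    }
}
```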
The problem comes into play when performing parallel processing.
// Double the even values and put them into a list.
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 1, 2, 3, 4, 5);
List<Integer> doubleOfEven = new ArrayList<>();
numbers.stream()
       .filter(e -> e % 2 == 0)
       .map(e -> e * 2)
       .forEach(e -> doubleOfEven.add(e)); // <--- Unnecessary use of side-effects!
This unnecessarily uses side-effects. Not all side effects are bad when used correctly, but with streams you must provide behaviour that is safe to execute concurrently on different pieces of the input; that is, code that does not access shared mutable data to do its work.
The line:
.forEach(e -> doubleOfEven.add(e)); // Unnecessary use of side-effects!
unnecessarily uses side-effects, and when executed in parallel the non-thread-safety of ArrayList would cause incorrect results.
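A runnable sketch of the contrast (the helper names are mine; the exact failure mode of the racy version varies from run to run):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class UnsafeForEachDemo {
    static List<Integer> withSideEffects(int n) {
        List<Integer> target = new ArrayList<>(); // shared, not thread-safe
        try {
            IntStream.rangeClosed(1, n).boxed()
                    .parallel()
                    .filter(e -> e % 2 == 0)
                    .map(e -> e * 2)
                    .forEach(target::add); // racy: lost updates, or even exceptions
        } catch (RuntimeException e) {
            // ArrayList.add can fail outright under concurrent modification
        }
        return target;
    }

    static List<Integer> withCollect(int n) {
        return IntStream.rangeClosed(1, n).boxed()
                .parallel()
                .filter(e -> e % 2 == 0)
                .map(e -> e * 2)
                .collect(Collectors.toList()); // no shared state, always correct
    }

    public static void main(String[] args) {
        System.out.println("forEach size: " + withSideEffects(10_000).size()); // often < 5000
        System.out.println("collect size: " + withCollect(10_000).size());     // always 5000
    }
}
```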
A while back I read a blog by Henrik Eichenhardt explaining why shared mutable state is the root of all evil.
Here is a short summary, extracted from the blog, of why shared mutability is not good:
non-determinism = parallel processing + mutable state
This equation basically means that both parallel processing and mutable state combined result in non-deterministic program behaviour. If you just do parallel processing and have only immutable state everything is fine and it is easy to reason about programs. On the other hand if you want to do parallel processing with mutable data you need to synchronize the access to the mutable variables which essentially renders these sections of the program single threaded. This is not really new but I haven't seen this concept expressed so elegantly. A non-deterministic program is broken.
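The equation can be demonstrated in a few lines of Java (the helper names are mine; the lost-update count of the unsynchronized version varies by machine and run):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class RaceDemo {
    static int unsynchronizedCount(int n) {
        int[] counter = {0}; // shared mutable state
        // Parallel processing + mutable state: the read-modify-write of
        // counter[0]++ is not atomic, so increments can be lost.
        IntStream.range(0, n).parallel().forEach(i -> counter[0]++);
        return counter[0];
    }

    static int synchronizedCount(int n) {
        AtomicInteger counter = new AtomicInteger();
        // Synchronizing the access (here via an atomic) restores determinism,
        // at the cost of serializing every update to the shared variable.
        IntStream.range(0, n).parallel().forEach(i -> counter.incrementAndGet());
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println("unsynchronized: " + unsynchronizedCount(1_000_000)); // often < 1000000
        System.out.println("synchronized:   " + synchronizedCount(1_000_000));   // always 1000000
    }
}
```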
The blog goes on to explain in detail why parallel programs without proper synchronization are broken; you can find the full reasoning at the appended link.
List<Integer> doubleOfEven2 =
    numbers.stream()
           .filter(e -> e % 2 == 0)
           .map(e -> e * 2)
           .collect(toList()); // No side-effects!
This performs a reduction operation on the elements of the stream using a Collector.
This is much safer, more efficient, and more amenable to parallelization.
The thing is that the lecturer is slightly wrong at the same time. The example that he provided uses forEach, which is documented as:
The behavior of this operation is explicitly nondeterministic. For parallel stream pipelines, this operation does not guarantee to respect the encounter order of the stream, as doing so would sacrifice the benefit of parallelism...
You could use:
numbers.stream()
       .filter(e -> e % 2 == 0)
       .map(e -> e * 2)
       .parallel()
       .forEachOrdered(e -> doubleOfEven.add(e));
And you would always have the same guaranteed result.
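For completeness, here is a self-contained version of that snippet (the wrapper class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ForEachOrderedDemo {
    static List<Integer> doubleEvensOrdered(List<Integer> numbers) {
        List<Integer> result = new ArrayList<>();
        // forEachOrdered processes one element at a time, in encounter order,
        // with a happens-before edge between consecutive actions -- so even a
        // plain ArrayList is filled safely and deterministically here.
        numbers.stream()
                .filter(e -> e % 2 == 0)
                .map(e -> e * 2)
                .parallel()
                .forEachOrdered(result::add);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(doubleEvensOrdered(Arrays.asList(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)));
        // always prints [4, 8, 4, 8]
    }
}
```

Note that forcing encounter order this way gives up much of the benefit of parallelism for that step, which is one more reason collect is usually the better choice.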
On the other hand, the example that uses Collectors.toList is better, because Collectors respect encounter order, so it works just fine.
An interesting point is that Collectors.toList uses ArrayList underneath, which is not a thread-safe collection. It's just that it uses many of them (for parallel processing) and merges them at the end.
A last note: parallel and sequential do not influence the encounter order; it's the operations applied to the Stream that do. Excellent read here.
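A small illustration of that point (class and method names are mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class EncounterOrderDemo {
    static List<Integer> parallelOrdered(List<Integer> source) {
        // parallel() changes how elements are processed, not the encounter
        // order: an ordered source collected with toList() stays in order.
        return source.parallelStream()
                .map(e -> e * 10)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> source = Arrays.asList(5, 4, 3, 2, 1);
        System.out.println(parallelOrdered(source)); // always [50, 40, 30, 20, 10]

        // It is operations like unordered() that drop the ordering guarantee:
        List<Integer> maybeShuffled = source.parallelStream()
                .unordered()
                .map(e -> e * 10)
                .collect(Collectors.toList());
        System.out.println(maybeShuffled); // order no longer guaranteed
    }
}
```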
We also need to bear in mind that even a thread-safe collection is not completely safe with streams, especially when you are relying on side-effects.
List<Integer> numbers = Arrays.asList(1, 3, 3, 5);
Set<Integer> seen = Collections.synchronizedSet(new HashSet<>());

List<Integer> collected = numbers.stream()
        .parallel()
        .map(e -> {
            if (seen.add(e)) {
                return 0;
            } else {
                return e;
            }
        })
        .collect(Collectors.toList());

System.out.println(collected);
collected at this point could be [0, 3, 0, 0] or [0, 0, 3, 0], depending on which thread wins the race in seen.add.
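If determinism is what you want, the same pipeline run sequentially is stable, and distinct() expresses the intent without any shared state at all (the helper names are mine):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StatefulLambdaDemo {
    static List<Integer> markDuplicatesSequentially(List<Integer> numbers) {
        Set<Integer> seen = new HashSet<>();
        // Run sequentially, elements arrive in encounter order, so the first 3
        // always "wins" seen.add() and the output is stable. A stateful lambda
        // like this is still discouraged by the java.util.stream docs.
        return numbers.stream()
                .map(e -> seen.add(e) ? 0 : e)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 3, 3, 5);
        System.out.println(markDuplicatesSequentially(numbers)); // always [0, 0, 3, 0]

        // The stateless way to express "drop duplicates" is distinct(),
        // which stays deterministic even on a parallel stream:
        System.out.println(numbers.parallelStream()
                .distinct()
                .collect(Collectors.toList())); // [1, 3, 5]
    }
}
```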
Assume two threads perform this task at the same time, the second thread one instruction behind the first.
The first thread creates doubleOfEven. The second thread creates doubleOfEven, and the instance created by the first thread will be garbage collected. Then both threads add the doubles of all even numbers to doubleOfEven, so it will contain 0, 0, 4, 4, 8, 8, 12, 12, ... instead of 0, 4, 8, 12, ... (In reality these threads won't be perfectly in sync, so anything that can go wrong will go wrong.)
Not that the second solution is that much better: you would have two threads setting the same global. In this case they are setting it to logically equal values, but if they set it to two different values, you wouldn't know which value you'd end up with; one thread will not get the result it wants.