You can use MutableMapIterable.updateValueWith(K key, Function0&lt;? extends V&gt; factory, Function2&lt;? super V, ? super P, ? extends V&gt; function, P parameter) from Eclipse Collections. The factory argument creates an initial value if none is in the map. The function argument is applied to the map value, along with an additional parameter, to compute the new map value; that parameter is passed as the final argument to updateValueWith(). The function is called even when the key wasn't in the map, so the initial value is really the function applied to the output of factory and the parameter. The function must not mutate the value; it should return a new value. In your example, the map values are Strings, which are immutable, so we're fine.
In ConcurrentMaps like org.eclipse.collections.impl.map.mutable.ConcurrentHashMap, the implementation of updateValueWith() is also thread-safe and atomic. It's important that function does not mutate the map values, or it wouldn't be thread-safe; it should return new values instead.
If your method recalculateNewValue() just does String concatenation, here's how you might use updateValueWith():
Function0<String> factory = () -> "initial ";
Function2<String, String, String> recalculateNewValue = String::concat;
MutableMap<String, String> map = new ConcurrentHashMap<>();
map.updateValueWith("test", factory, recalculateNewValue, "append1 ");
Assert.assertEquals("initial append1 ", map.get("test"));
map.updateValueWith("test", factory, recalculateNewValue, "append2");
Assert.assertEquals("initial append1 append2", map.get("test"));
You can use Java 8's ConcurrentMap.compute(K key, BiFunction remappingFunction) to accomplish the same thing, but it has a few disadvantages.
ConcurrentMap<String, String> map = new ConcurrentHashMap<>();
map.compute("test", (key, oldValue) -> oldValue == null ? "initial append1 " : oldValue + "append1 ");
Assert.assertEquals("initial append1 ", map.get("test"));
map.compute("test", (key, oldValue) -> oldValue == null ? "initial append1 " : oldValue + "append2");
Assert.assertEquals("initial append1 append2", map.get("test"));
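One subtlety with compute() is that its remapping function has to capture the value being appended, and a capturing lambda typically allocates a fresh function object on every evaluation (unlike a non-capturing lambda, which the JVM can reuse). A minimal sketch of this pattern, where the helper name update() is my own:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ComputeSketch {
    // Hypothetical helper: the lambda captures `suffix`, so each call
    // allocates a new BiFunction instance before compute() runs.
    static String update(ConcurrentMap<String, String> map, String key, String suffix) {
        return map.compute(key, (k, old) -> old == null ? "initial " + suffix : old + suffix);
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<>();
        update(map, "test", "append1 ");
        String result = update(map, "test", "append2");
        System.out.println(result); // prints: initial append1 append2
    }
}
```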
updateValueWith() lets every call share the same lambda instances, but every call to compute() creates new garbage on the heap.
Note: I am a committer for Eclipse Collections.
I don't think that's correct. As I understand it, the merge() method would be the right tool for the job. I currently have the same problem and wrote a little test to see the results.
This test starts 100 workers. Each of them increments the value in the map 100 times, so the expected result would be 10000.
There are two types of workers: one that uses the replace algorithm and one that uses merge. The test is run twice with the different implementations.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ConcurrentMapTest
{
    private static ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

    private final class ReplaceWorker implements Runnable
    {
        public void run()
        {
            for (int i = 0; i < 100; i++)
            {
                Integer putIfAbsent = map.putIfAbsent("key", Integer.valueOf(1));
                if (putIfAbsent == null)
                    return;
                map.replace("key", putIfAbsent + 1);
            }
        }
    }

    private final class MergeWorker implements Runnable
    {
        public void run()
        {
            for (int i = 0; i < 100; i++)
            {
                map.merge("key", Integer.valueOf(1), (ov, nv) -> ov + 1);
            }
        }
    }

    public MergeWorker newMergeWorker()
    {
        return new MergeWorker();
    }

    public ReplaceWorker newReplaceWorker()
    {
        return new ReplaceWorker();
    }

    public static void main(String[] args)
    {
        map.put("key", 1);
        ConcurrentMapTest test = new ConcurrentMapTest();
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(10, 10, 100, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1000));
        for (int i = 0; i < 100; i++)
        {
            threadPool.submit(test.newMergeWorker());
        }
        awaitTermination(threadPool);
        System.out.println(map.get("key"));

        map.put("key", 1);
        threadPool = new ThreadPoolExecutor(10, 10, 100, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1000));
        for (int i = 0; i < 100; i++)
        {
            threadPool.submit(test.newReplaceWorker());
        }
        awaitTermination(threadPool);
        System.out.println(map.get("key"));
    }

    private static void awaitTermination(ExecutorService threadPool)
    {
        try
        {
            threadPool.shutdown();
            boolean awaitTermination = threadPool.awaitTermination(1, TimeUnit.SECONDS);
            System.out.println("terminated successfully: " + awaitTermination);
        }
        catch (InterruptedException e)
        {
            // restore the interrupt flag and report the failure
            Thread.currentThread().interrupt();
            e.printStackTrace();
        }
    }
}
Result:
terminated successfully: true
10000
terminated successfully: true
1743
The problem is that there is a gap between the get and the put in your case, so with concurrent access to the map, results get overwritten. With ConcurrentHashMap, merge() is an atomic operation: its documentation guarantees that the entire method invocation is performed atomically.
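To illustrate, the merge()-based increment can be reduced to a minimal, self-contained sketch using only java.util.concurrent (the class and method names are mine, and the thread and iteration counts are arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MergeCounterSketch {
    // Runs `threads` workers, each calling merge() `perThread` times,
    // and returns the final count; with merge() the result is exact.
    static int count(int threads, int perThread) throws InterruptedException {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    // Inserts 1 if absent, otherwise applies the remapping
                    // function to the old value; atomic in ConcurrentHashMap.
                    map.merge("key", 1, Integer::sum);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return map.get("key");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(10, 1000)); // prints 10000
    }
}
```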
You could make it a little shorter with the code below, which is equivalent to yours. I have stress-tested it a little with thousands of threads accessing it concurrently: it works as expected, with a number of retries (loops) being performed (obviously, you can never prove correctness with testing in the concurrent world).
public void insertOrReplace(String key, String value) {
    for (;;) {
        String oldValue = concurrentMap.putIfAbsent(key, value);
        if (oldValue == null)
            return;
        final String newValue = recalculateNewValue(oldValue, value);
        if (concurrentMap.replace(key, oldValue, newValue))
            return;
    }
}
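To watch the retry loop succeed under contention, here is a self-contained variant specialized to integer counting, so the expected result is easy to check (insertOrAdd stands in for insertOrReplace, and oldValue + delta stands in for recalculateNewValue()):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasLoopSketch {
    static final ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

    // Same putIfAbsent/replace retry loop, adapted to counting.
    static void insertOrAdd(String key, int delta) {
        for (;;) {
            Integer oldValue = map.putIfAbsent(key, delta);
            if (oldValue == null)
                return; // we inserted the initial value
            int newValue = oldValue + delta;
            // replace() succeeds only if nobody changed the entry in the
            // meantime; otherwise loop and retry with the fresh value.
            if (map.replace(key, oldValue, newValue))
                return;
        }
    }

    static int run(int threads, int perThread) throws InterruptedException {
        map.clear();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    insertOrAdd("key", 1);
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return map.get("key");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10, 1000)); // prints 10000
    }
}
```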
Your method seems thread-safe. If you do not require the performance benefits of ConcurrentHashMap, consider using a regular HashMap instead and synchronizing all access to it. Your method is essentially the same compare-and-set retry loop used by AtomicInteger.accumulateAndGet(int, IntBinaryOperator), so it should be fine. I doubt there is an easier way to do this unless you're looking for a library call to do the work for you.
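For completeness, a sketch of the synchronized-HashMap alternative mentioned above; recalculateNewValue() here is a hypothetical stand-in that concatenates:

```java
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapSketch {
    private final Map<String, String> map = new HashMap<>();

    // With a plain HashMap, the whole read-compute-write sequence is guarded
    // by one lock, so no retry loop is needed; this trades the scalability
    // of ConcurrentHashMap for simplicity.
    public synchronized void insertOrReplace(String key, String value) {
        String oldValue = map.get(key);
        map.put(key, oldValue == null ? value : recalculateNewValue(oldValue, value));
    }

    public synchronized String get(String key) {
        return map.get(key);
    }

    // Hypothetical stand-in for the question's recalculateNewValue().
    private String recalculateNewValue(String oldValue, String value) {
        return oldValue + value;
    }
}
```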