In our system, we have a method that will do some work when it's called with a certain ID:
public void doWork(long id) { /* ... */ }
Now, this work should be synchronized per ID: concurrent calls with the same ID must not run in parallel, while calls with different IDs should be free to proceed. How can this be done without keeping a lock object around for every ID forever?
I invented a thing like that for myself some time ago. I call it an equivalence-class lock, meaning it locks on all of the things that are equal to the given thing. You can get it from my GitHub, and use it subject to the Apache 2 license, if you like, or just read it and forget it!
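A minimal sketch of what such an equivalence-class lock might look like (the class and method names here are my own invention, not the actual GitHub code): equal keys are canonicalized to a single lock object via a `ConcurrentHashMap`. Note that this simple version never evicts entries, so it leaks memory if the key set is unbounded.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: canonicalizes each key to a single lock object, so any two
// equal keys synchronize on the same monitor.
class EquivalenceLock<K> {
    private final ConcurrentHashMap<K, Object> monitors = new ConcurrentHashMap<>();

    public Object monitorFor(K key) {
        // computeIfAbsent is atomic: concurrent callers with equal keys
        // always receive the same Object.
        return monitors.computeIfAbsent(key, k -> new Object());
    }
}
```

Callers would then write `synchronized (eqLock.monitorFor(id)) { /* work for id */ }`.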
Premature optimization is the root of all evil.
Try it with a (synchronized) map.
Maybe if it grows too big, you can clear its content at regular intervals.
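For illustration, here is one way that idea could be sketched (all names are illustrative, not a definitive implementation): a synchronized map from id to lock object, with an optional wholesale clear.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of the synchronized-map idea: one lock object per id.
class IdLocks {
    private final Map<Long, Object> locks =
            Collections.synchronizedMap(new HashMap<>());

    public Object lockFor(long id) {
        // The synchronizedMap wrapper makes computeIfAbsent atomic with
        // respect to other calls through the wrapper.
        return locks.computeIfAbsent(id, k -> new Object());
    }

    // Clearing at intervals keeps the map small, but is only safe at a moment
    // when no thread holds (or is about to take) one of these locks; otherwise
    // two threads could end up locking different objects for the same id.
    public void clearAll() {
        locks.clear();
    }
}
```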
You can try the following little 'hack':
String str = UNIQUE_METHOD_PREFIX + Long.toString(id);
synchronized (str.intern()) { /* ... */ }
String.intern() is guaranteed to return the same instance for any two equal strings.
The UNIQUE_METHOD_PREFIX may be a hardcoded constant, or may be obtained using:
// index [0] is Thread.getStackTrace itself; [1] is the calling method
StackTraceElement ste = Thread.currentThread().getStackTrace()[1];
String uniquePrefix = ste.getClassName() + ":" + ste.getMethodName();
which will guarantee that the lock applies only to this precise method. That's in order to avoid deadlocks with other code that might also synchronize on interned strings.
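Putting the pieces together, the whole hack might look like this (the class name and prefix value are illustrative):

```java
// The prefix keeps these interned strings from colliding with strings
// interned elsewhere in the JVM.
class InternLockExample {
    private static final String UNIQUE_METHOD_PREFIX = "InternLockExample:doWork:";

    public void doWork(long id) {
        String str = UNIQUE_METHOD_PREFIX + id;
        // intern() returns the one canonical instance for equal strings,
        // so all callers with the same id lock the same monitor.
        synchronized (str.intern()) {
            // work for this id runs exclusively here
        }
    }
}
```

One caveat worth knowing: the interned-string pool is global, which is exactly why the prefix matters.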
To start with:
You're talking here about a lock-striping setup. One end of the continuum is a single giant lock for all ids, which is easy and safe but not concurrent. The other end is a lock per id, which is easy (to some degree) and safe and very concurrent but might require a large number of "lock-able objects" in memory (if you don't already have them). Somewhere in the middle is the idea of creating a lock for a range of ids - this lets you adjust concurrency based on your environment and make choices about tradeoffs between memory and concurrency.
ConcurrentHashMap can be used to achieve this as CHM is made up internally of segments (sub-maps) and there is one lock per segment. This gives you concurrency equal to the number of segments (which defaults to 16 but is configurable).
There are a bunch of other possible solutions for partitioning your ID space and creating sets of locks but you are right to be sensitive to the clean up and memory leak issues - taking care of that while maintaining concurrency is a tricky business. You'll need to use some kind of reference counting on each lock and manage the eviction of old locks carefully to avoid evicting a lock that's in the process of being locked. If you go this route, use ReentrantLock or ReentrantReadWriteLock (and not synchronized on objects) as that lets you explicitly manage the lock as an object and use the extra methods available on it.
There is also some stuff on this and a StripedMap example in Java Concurrency in Practice, section 11.4.3.
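As a concrete sketch of the middle of that continuum (names and stripe count are illustrative, not from JCiP's StripedMap): a fixed array of ReentrantLocks, with each id hashed to a stripe. Concurrency is bounded by the stripe count, and no per-id cleanup is needed because the set of locks never grows.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal lock-striping sketch: every id deterministically maps to one of
// a fixed number of locks.
class StripedIdLocks {
    private final ReentrantLock[] stripes;

    StripedIdLocks(int stripeCount) {
        stripes = new ReentrantLock[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    ReentrantLock lockFor(long id) {
        // Mask to non-negative before taking the modulus.
        return stripes[(Long.hashCode(id) & 0x7fffffff) % stripes.length];
    }
}
```

Callers would then do `lock.lock(); try { /* work */ } finally { lock.unlock(); }` on the returned stripe.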
Wouldn't it be enough to use Collections.synchronizedMap(Map m) from java.util instead of a plain HashMap, whose calls for retrieving and inserting are not synchronized?
something like:
Map<Long, Object> myMap = new HashMap<Long, Object>();
Map<Long, Object> mySyncedMap = Collections.synchronizedMap(myMap);
I'd say you're already pretty far along to having a solution. Make a LockManager that lazily creates and reference-counts those locks for you. Then use it in doWork:
public void doWork(long id) {
    LockObject lock = lockManager.getMonitor(id);
    try {
        synchronized (lock) {
            // ...
        }
    } finally {
        lock.release();
    }
}