I am currently using a ReentrantReadWriteLock in my code to synchronize access over a tree-like structure. This structure is large and read by many threads at once, with occasional modifications.
I have made a little progress on this. By declaring the lock variable explicitly as a ReentrantReadWriteLock instead of simply a ReadWriteLock (less than ideal, but probably a necessary evil in this case), I can call the getReadHoldCount() method. This gives me the number of holds the current thread has on the read lock, so I can release the read lock that many times (and reacquire it the same number of times afterwards). This works, as shown by a quick-and-dirty test:
final int holdCount = lock.getReadHoldCount();
for (int i = 0; i < holdCount; i++) {
    lock.readLock().unlock();
}
lock.writeLock().lock();
try {
    // Perform modifications
} finally {
    // Downgrade by reacquiring the read lock before releasing the write lock
    for (int i = 0; i < holdCount; i++) {
        lock.readLock().lock();
    }
    lock.writeLock().unlock();
}
Still, is this going to be the best I can do? It doesn't feel very elegant, and I'm still hoping that there's a way to handle this in a less "manual" fashion.
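If nothing else, the release-and-reacquire dance can be pulled into a small reusable wrapper so the call sites stay clean. This is only a sketch of that idea; the `UpgradableLock` class and `writeLocked` method are invented here, not part of the original code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class UpgradableLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    ReentrantReadWriteLock.ReadLock readLock() {
        return lock.readLock();
    }

    // Runs the modification under the write lock, releasing this thread's
    // read holds first and restoring them afterwards. Note the gap between
    // the last read unlock and the write lock: another writer may slip in,
    // so the modification must re-validate anything observed under the read lock.
    void writeLocked(Runnable modification) {
        int holds = lock.getReadHoldCount();
        for (int i = 0; i < holds; i++) {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();
        try {
            modification.run();
        } finally {
            // Reacquire the read holds before releasing the write lock
            // (downgrading is permitted; upgrading is not).
            for (int i = 0; i < holds; i++) {
                lock.readLock().lock();
            }
            lock.writeLock().unlock();
        }
    }
}
```

This doesn't remove the fundamental gap between unlocking the reads and taking the write lock; it only hides the bookkeeping.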
Java 8 now has a java.util.concurrent.locks.StampedLock with a tryConvertToWriteLock(long) API. More info at http://www.javaspecialists.eu/archive/Issue215.html
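For illustration, a minimal sketch of the conversion loop; the `Counter` class and `incrementIfPositive` method are invented for this example:

```java
import java.util.concurrent.locks.StampedLock;

class Counter {
    private final StampedLock sl = new StampedLock();
    private int value = 1;

    // Starts under a read lock and tries to convert it to a write lock;
    // if conversion fails (other readers are active), falls back to
    // releasing the read lock, acquiring the write lock, and re-checking.
    int incrementIfPositive() {
        long stamp = sl.readLock();
        try {
            while (value > 0) {
                long ws = sl.tryConvertToWriteLock(stamp);
                if (ws != 0L) {             // conversion succeeded
                    stamp = ws;
                    return ++value;
                }
                sl.unlockRead(stamp);       // give up the read lock ...
                stamp = sl.writeLock();     // ... take the write lock; loop re-checks
            }
            return value;
        } finally {
            sl.unlock(stamp);               // releases whichever mode the stamp holds
        }
    }
}
```

Note that StampedLock is not reentrant, so this pattern only works when the thread holds no other stamps on the same lock.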
To the OP: just unlock as many times as you have entered the lock; it's as simple as that:
boolean needWrite = false;
lock.readLock().lock();
try {
    needWrite = checkState();
} finally {
    lock.readLock().unlock();
}
// The state is free to change right here, but that is unlikely;
// if need be, see who has handled it under the write lock.
if (needWrite) {
    lock.writeLock().lock();
    try {
        if (checkState()) { // check again under the exclusive write lock
            // modify state
        }
    } finally {
        lock.writeLock().unlock();
    }
}
Under the write lock, check the needed state again, as any self-respecting concurrent program should. getReadHoldCount() shouldn't be used for anything beyond debugging, monitoring, or fail-fast detection.
What you are trying to do is simply not possible this way.
You cannot have a read/write lock that you can upgrade from read to write without problems. Example:
void test() {
    lock.readLock().lock();
    ...
    if ( ... ) {
        lock.writeLock().lock();
        ...
        lock.writeLock().unlock();
    }
    lock.readLock().unlock();
}
Now suppose two threads enter that function. (And you are assuming concurrency, right? Otherwise you would not care about locks in the first place....)
Assume both threads start at the same time and run equally fast. Both would acquire the read lock, which is perfectly legal. However, both would then eventually try to acquire the write lock, which NEITHER of them will ever get: the respective other thread holds a read lock!
Locks that allow upgrading of read locks to write locks are prone to deadlocks by definition. Sorry, but you need to modify your approach.
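You can observe this refusal even without a second thread: while a thread holds the read lock, an attempt to take the write lock can never succeed, whereas the reverse direction (downgrading) is allowed. A minimal demonstration, assuming a fresh lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

lock.readLock().lock();
// tryLock() fails immediately: the write lock cannot be acquired while any
// read lock is held, even by the same thread (lock() here would block forever).
boolean upgraded = lock.writeLock().tryLock();
lock.readLock().unlock();

// Downgrading, by contrast, is supported:
lock.writeLock().lock();
lock.readLock().lock();          // permitted while holding the write lock
lock.writeLock().unlock();       // now only the read lock remains held
boolean stillReading = lock.getReadLockCount() > 0;
lock.readLock().unlock();
```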
What about something like this?
class CachedData
{
    Object data;
    volatile boolean cacheValid;

    private class MyRWLock
    {
        private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        public synchronized void getReadLock()         { rwl.readLock().lock(); }
        public synchronized void upgradeToWriteLock()  { rwl.readLock().unlock(); rwl.writeLock().lock(); }
        public synchronized void downgradeToReadLock() { rwl.writeLock().unlock(); rwl.readLock().lock(); }
        public synchronized void dropReadLock()        { rwl.readLock().unlock(); }
    }

    private MyRWLock myRWLock = new MyRWLock();

    void processCachedData()
    {
        myRWLock.getReadLock();
        try
        {
            if (!cacheValid)
            {
                myRWLock.upgradeToWriteLock();
                try
                {
                    // Recheck state because another thread might have acquired
                    // the write lock and changed state before we did.
                    if (!cacheValid)
                    {
                        data = ...
                        cacheValid = true;
                    }
                }
                finally
                {
                    myRWLock.downgradeToReadLock();
                }
            }
            use(data);
        }
        finally
        {
            myRWLock.dropReadLock();
        }
    }
}
Use the "fair" flag on the ReentrantReadWriteLock. "Fair" means that lock requests are served first come, first served. You may see performance degradation, because once you issue a "write" request, all subsequent "read" requests are blocked behind it, even though they could have been served while the pre-existing read locks were still held.
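The flag in question is the boolean constructor argument; a minimal sketch:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// true selects the fair ordering policy: threads contend for entry roughly
// in arrival order, so a waiting writer blocks readers that arrive after it.
ReentrantReadWriteLock fairLock = new ReentrantReadWriteLock(true);
boolean fair = fairLock.isFair();
```

The default (no-argument) constructor gives the non-fair policy, which generally has higher throughput but no ordering guarantees.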