(question revised): So far, the answers all describe a single thread re-entering the lock region linearly, through things like recursion, where you can trace the steps by which the single thread entered the lock region twice.
Suppose you have a queue that contains actions:
public static Queue<Action> q = whatever;
Suppose Queue<T> has a method Dequeue that returns a bool indicating whether the queue could be successfully dequeued.
And suppose you have a loop:
static void Main()
{
    q.Add(M);
    q.Add(M);
    Action action;
    while (q.Dequeue(out action))
        action();
}
static object lockObject = new object();

static void M()
{
    Action action;
    lock (lockObject)
    {
        // If the dequeued action is M itself, the same thread re-enters this lock.
        if (q.Dequeue(out action))
            action();
    }
}
Clearly the main thread enters the lock in M twice; this code is re-entrant. That is, it enters itself, through an indirect recursion.
Does this code look implausible to you? It should not. This is how Windows works. Every window has a message queue, and when a message queue is "pumped", methods are called corresponding to those messages. When you click a button, a message goes in the message queue; when the queue is pumped, the click handler corresponding to that message gets invoked.
It is therefore extremely common, and extremely dangerous, to write Windows programs where a lock contains a call to a method which pumps a message loop. If you got into that lock as a result of handling a message in the first place, and if the message is in the queue twice, then the code will enter itself indirectly, and that can cause all manner of craziness.
The way to eliminate this is (1) never do anything even slightly complicated inside a lock, and (2) when you are handling a message, disable the handler until the message is handled.
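For illustration only (this sketch is not from the example above; MainForm, saveButton, and DoWorkThatMightPumpMessages are invented names), here is roughly how advice (2) can look in Windows Forms: the click handler disables its own button before doing anything that might pump the message loop, so a second queued click cannot re-enter the handler.
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    // Hypothetical button; in a real form this would come from the designer.
    private readonly Button saveButton = new Button { Text = "Save" };

    public MainForm()
    {
        saveButton.Click += SaveButton_Click;
        Controls.Add(saveButton);
    }

    private void SaveButton_Click(object sender, EventArgs e)
    {
        saveButton.Enabled = false;        // (2) disable the handler until the message is handled
        try
        {
            DoWorkThatMightPumpMessages(); // stand-in for work that may pump the message loop
        }
        finally
        {
            saveButton.Enabled = true;     // re-enable once handling is complete
        }
    }

    private void DoWorkThatMightPumpMessages()
    {
        Application.DoEvents();            // for example, explicitly pumping the message queue
    }
}
Re-enabling the button in a finally block keeps the handler usable even if the work throws.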
One of the more subtle ways you can recurse into a lock block is in GUI frameworks. For example, in Windows Forms you can asynchronously invoke code on the single UI thread from within a Form class:
private object locker = new object();

public void Method(int a)
{
    lock (locker)
    {
        // BeginInvoke posts Method(a) back to the UI thread's message queue.
        this.BeginInvoke((MethodInvoker)(() => Method(a)));
    }
}
Of course, this also results in an infinite loop; in practice you would have some condition controlling the recursion, at which point you would no longer have an infinite loop.
Using lock is not a good way to sleep/awaken threads. I would simply use an existing framework such as the Task Parallel Library (TPL) to create abstract tasks (see Task); the underlying framework handles creating new threads and sleeping them when needed.
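As a minimal sketch of that suggestion (assuming .NET Framework 4.5 or later for Task.Run; the work items here are just placeholders), you let the TPL schedule the work and let the framework decide how the underlying threads are created and parked:
using System;
using System.Threading.Tasks;

class TplSketch
{
    static void Main()
    {
        // The framework chooses which pool thread runs each task and when threads sleep.
        Task first = Task.Run(() => Console.WriteLine("work item 1"));
        Task second = Task.Run(() => Console.WriteLine("work item 2"));

        Task.WaitAll(first, second); // block only where the results are actually needed
    }
}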
Let's think about something other than recursion. In some business logic, you may want to control the synchronization behaviour yourself. One such pattern is to invoke Monitor.Enter in one place and Monitor.Exit somewhere else later. Here is code to illustrate the idea:
public partial class Infinity : IEnumerable<int> {
    IEnumerator IEnumerable.GetEnumerator() {
        return this.GetEnumerator();
    }

    public IEnumerator<int> GetEnumerator() {
        for (; ; )
            yield return ~0;
    }

    public static readonly Infinity Enumerable = new Infinity();
}

public partial class YourClass {
    void ReleaseLock() {
        for (; lockCount-- > 0; Monitor.Exit(yourLockObject))
            ;
    }

    void GetLocked() {
        Monitor.Enter(yourLockObject);
        ++lockCount;
    }

    void YourParallelMethod(int x) {
        GetLocked();
        Debug.Print("lockCount={0}", lockCount);
    }

    public static void PerformTest() {
        new Thread(
            () => {
                var threadCurrent = Thread.CurrentThread;
                Debug.Print("ThreadId {0} starting...", threadCurrent.ManagedThreadId);
                var instanceOfYourClass = new YourClass();

                // Parallel.ForEach(Infinity.Enumerable, instanceOfYourClass.YourParallelMethod);
                foreach (var i in Enumerable.Range(0, 123))
                    instanceOfYourClass.YourParallelMethod(i);

                instanceOfYourClass.ReleaseLock();
                Monitor.Exit(instanceOfYourClass.yourLockObject); // here a SynchronizationLockException is thrown
                Debug.Print("ThreadId {0} finished. ", threadCurrent.ManagedThreadId);
            }
        ).Start();
    }

    object yourLockObject = new object();
    int lockCount;
}
If you invoke YourClass.PerformTest() and get a lockCount greater than 1, the thread has re-entered the lock; re-entrance does not necessarily imply concurrency. If Monitor were not re-entrant, the thread would block itself and get stuck in the foreach loop.
The line Monitor.Exit(instanceOfYourClass.yourLockObject) throws a SynchronizationLockException because it attempts to invoke Exit more times than Enter was called. If you use the lock keyword instead, you would not normally encounter this situation except through direct or indirect recursive calls. I guess that's why the lock keyword was provided: it prevents Monitor.Exit from being omitted in a careless manner.
I commented out the call to Parallel.ForEach; if you are interested, you can uncomment it and test it for fun.
To test the code, .NET Framework 4.0 is the minimum requirement, and the following namespaces are required as well:
using System.Threading.Tasks;
using System.Diagnostics;
using System.Threading;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
Have fun.
ThreadPool threads cannot be reused elsewhere just because they went to sleep; they need to finish before they're reused. A thread that is taking a long time inside a lock region does not become eligible to run more code at some other independent point of control. The only way to experience lock re-entry is through recursion, or by executing methods or delegates inside a lock that re-enter the lock.
IMHO, re-entering a lock is not something you need to take care to avoid (though given many people's mental model of locking, this advice is, at best, dangerous; see the Edit below). The point of the documentation is to explain that a thread cannot block itself using Monitor.Enter. This is not the case with all synchronization mechanisms, frameworks, and languages: some have non-re-entrant synchronization, in which case you have to be careful that a thread doesn't block itself. What you do need to be careful about is always calling Monitor.Exit for every Monitor.Enter call. The lock keyword does this for you automatically.
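To make that last point concrete, a lock statement is roughly equivalent to the following Monitor pattern (this is a sketch of the C# 4.0-and-later shape with a lockTaken flag, not the exact compiler output):
using System;
using System.Threading;

class LockExpansionSketch
{
    private readonly object locker = new object();

    public void Method()
    {
        // lock (locker) { /* body */ } expands to approximately this:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(locker, ref lockTaken);
            // ... body of the lock block ...
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(locker); // Exit is paired with every successful Enter
        }
    }
}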
A trivial example with re-entrance:
private object locker = new object();

public void Method()
{
    lock (locker)
    {
        lock (locker) { Console.WriteLine("Re-entered the lock."); }
    }
}
The thread has entered the lock on the same object twice so it must be released twice. Usually it is not so obvious and there are various methods calling each other that synchronize on the same object. The point is that you don't have to worry about a thread blocking itself.
That said, you should generally try to minimize the amount of time you need to hold a lock. Acquiring a lock is not computationally expensive, contrary to what you may hear (it is on the order of a few nanoseconds). Lock contention is what is expensive.
Edit
Please read Eric's comments below for additional details, but the summary is that when you see a lock
your interpretation of it should be that "all activations of this code block are associated with a single thread", and not, as it is commonly interpreted, "all activations of this code block execute as a single atomic unit".
For example:
public static void Main()
{
    Method();
}

private static int i = 0;
private static object locker = new object();

public static void Method()
{
    lock (locker)
    {
        int j = ++i;
        if (i < 2)
        {
            Method();
        }
        if (i != j)
        {
            throw new Exception("Boom!");
        }
    }
}
Obviously, this program blows up. Without the lock, the result is the same. The danger is that the lock leads you into a false sense of security that nothing could modify state on you between initializing j and evaluating the if. The problem is that you (perhaps unintentionally) have Method recursing into itself, and the lock won't stop that. As Eric points out in his answer, you might not realize the problem until one day someone queues up too many actions simultaneously.
Re-entrance is possible if you have a structure like this:
object lockObject = new object();

void Foo(bool recurse)
{
    lock (lockObject)
    {
        Console.WriteLine("In Lock");
        if (recurse) { Foo(false); } // the same thread re-enters the lock via recursion
    }
}
While this is a pretty simplistic example, it's possible in many scenarios where you have interdependent or recursive behaviour.
For example, same-thread re-entry on the same lock is needed to ensure you don't get deadlocks occurring within your own code, as in the sketch below.
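A rough sketch of that kind of indirect re-entry (Counter, Increment, and IncrementTwice are made-up names): a public method takes the lock and then calls another method on the same object that takes the same lock. Because Monitor is re-entrant for the owning thread, the inner lock succeeds instead of deadlocking the thread against itself.
class Counter
{
    private readonly object sync = new object();
    private int value;

    public void Increment()
    {
        lock (sync)
        {
            value++;
        }
    }

    public void IncrementTwice()
    {
        lock (sync)
        {
            // Indirect re-entry: Increment locks `sync` again on this same thread.
            // A non-re-entrant lock would deadlock here; Monitor simply tracks the
            // nested acquisition and releases it when the inner lock block exits.
            Increment();
            Increment();
        }
    }
}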