I've had this question for quite a while now, trying to read lots of resources and understand what is going on, but I've still failed to get a good understanding of why th
The main misconception is the assumption that you are comparing “`CAS` vs. `synchronized`”. Given how sophisticated JVMs implement `synchronized`, you are actually comparing the performance of a `CAS`-based algorithm using `AtomicLong` with the performance of the `CAS`-based algorithm used to implement `synchronized`.
Similar to `Lock`, the internal information for an object monitor basically consists of an `int` status telling whether it is owned and how often it is nested, a reference to the current owner thread, and a queue of threads waiting to acquire it. The expensive aspect is the waiting queue: putting a thread into the queue, removing it from thread scheduling, and eventually waking it up when the current owner releases the monitor are operations that can take significant time.
However, in the uncontended case, the waiting queue is, of course, not involved. Acquiring the monitor consists of a single `CAS` to change the status from “unowned” (usually zero) to “owned, acquired once” (guess the typical value). If successful, the thread can proceed with the critical action, followed by a release, which implies just writing the “unowned” state with the necessary memory visibility and waking up another blocked thread, if there is one.
Since the wait queue is the significantly more expensive part, implementations usually try to avoid it even in the contended case by performing some amount of spinning, making several repeated `CAS` attempts before falling back to enqueuing the thread. If the critical action of the owner is as simple as a single multiplication, chances are high that the monitor will already be released during the spinning phase. Note that `synchronized` is “unfair”, allowing a spinning thread to proceed immediately, even if there are already enqueued threads that have been waiting far longer.
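The spin-then-enqueue strategy can be illustrated with a toy lock that makes a bounded number of `CAS` attempts before parking the thread. This is a sketch under invented names and a made-up spin count; real implementations use adaptive heuristics. Note that it is “unfair” in exactly the way described: a freshly spinning thread can win the `CAS` ahead of an already parked one.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

class SpinThenParkLock {
    private static final int SPIN_LIMIT = 64; // invented constant
    private final AtomicInteger status = new AtomicInteger(0);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    void lock() {
        // Spin phase: repeated CAS attempts, hoping the owner releases soon.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (status.compareAndSet(0, 1)) return;
            Thread.onSpinWait();
        }
        // Fall back to enqueuing: far more expensive, involves the scheduler.
        Thread current = Thread.currentThread();
        waiters.add(current);
        while (!status.compareAndSet(0, 1)) {
            LockSupport.park(this); // safe against lost wakeups via the park permit
        }
        waiters.remove(current);
    }

    void unlock() {
        status.set(0); // publish the "unowned" state
        Thread next = waiters.peek();
        if (next != null) LockSupport.unpark(next); // wake one queued thread
    }
}
```

The `LockSupport.unpark` permit makes the check-then-park loop race-free: if the owner releases between the failed `CAS` and the `park`, the pending permit makes `park` return immediately.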
If you compare the fundamental operations performed by `synchronized(lock){ n = n * 123; }` when no queuing is involved and by `al.updateAndGet(x -> x * 123);`, you'll notice that they are roughly on par. The main difference is that the `AtomicLong` approach will repeat the multiplication on contention, while for the `synchronized` approach there is a risk of being put into the queue if no progress is made during spinning.
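For reference, here are the two variants side by side as runnable methods. The field and variable names (`n`, `lock`, `al`) follow the snippets above; the surrounding class is an assumption.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CompareApproaches {
    private final Object lock = new Object();
    private long n = 1;
    private final AtomicLong al = new AtomicLong(1);

    // Monitor-based: one CAS to acquire (fast path), multiply once,
    // one write to release.
    long syncUpdate() {
        synchronized (lock) {
            n = n * 123;
            return n;
        }
    }

    // CAS-based: read, multiply, CAS; on contention the read-multiply-CAS
    // cycle repeats until the CAS succeeds.
    long atomicUpdate() {
        return al.updateAndGet(x -> x * 123);
    }

    public static void main(String[] args) {
        CompareApproaches c = new CompareApproaches();
        System.out.println(c.syncUpdate());   // 123
        System.out.println(c.atomicUpdate()); // 123
    }
}
```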
But `synchronized` allows lock coarsening for code repeatedly synchronizing on the same object, which might be relevant for a benchmark loop calling the `syncShared` method. Unless there's also a way to fuse multiple `CAS` updates of an `AtomicLong`, this can give `synchronized` a dramatic advantage. (See also this article covering several aspects discussed above.)
Note that due to the “unfair” nature of `synchronized`, creating far more threads than CPU cores doesn't have to be a problem. In the best case, “number of threads minus number of cores” threads end up on the queue, never waking up, while the remaining threads succeed in the spinning phase, one thread per core. But likewise, threads not running on a CPU core can't reduce the performance of the `AtomicLong` update, as they can neither invalidate the current value for other threads nor make a failed `CAS` attempt.
In either case, when `CAS`ing on the member variable of an unshared object or when performing `synchronized` on an unshared object, the JVM may detect the thread-local nature of the operation and elide most of the associated costs. But this may depend on several subtle environmental aspects.
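A minimal sketch of the `synchronized` case: the monitor object below never escapes the method, so escape analysis may prove the synchronization thread-local and elide it. In HotSpot this is governed by `-XX:+DoEscapeAnalysis` and `-XX:+EliminateLocks` (both on by default), but whether elision actually happens depends on the JIT and environment, which is the “subtle environmental aspects” point above.

```java
public class ElisionSketch {
    // The monitor object is created locally and never published, so the
    // JIT may prove no other thread can ever synchronize on it and
    // remove the lock/unlock operations entirely.
    static long unsharedSync() {
        Object localLock = new Object(); // unshared, never escapes
        long n = 1;
        synchronized (localLock) {
            n = n * 123;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(unsharedSync()); // 123, with or without elision
    }
}
```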
The bottom line is that there is no easy decision between atomic updates and `synchronized` blocks. Things get far more interesting with more expensive operations, which may raise the likelihood of threads getting enqueued in the contended case of `synchronized`, and which may make it acceptable that the operation has to be repeated in the contended case of an atomic update.