I have a question regarding synchronization of code that is executed by several threads:
As far as I know each thread has its own stack; hence, non-static variables exist in different locations in memory for each thread. So if the code that the threads execute includes some class variable v1, then each thread has its own "instance" of v1 (different memory address), and no other thread can "touch" it... isn't it so?
Only primitive types, such as int, are guaranteed to be allocated on the stack, and then only when they are local variables. Objects and arrays are typically stored on the heap, unless escape analysis determines that the scope of the object is 'restricted to the scope of the procedure'.
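As a rough sketch (method names invented; whether the JIT actually applies the optimization is never observable from the source itself): an object whose reference never leaves the method is a candidate for escape analysis, while one stored in a field "escapes" and stays a normal heap object.

    public class EscapeDemo {
        private StringBuilder escaped;   // anything stored here outlives the method call

        // The builder never leaves this method, so the JIT *may* decide it does not
        // escape and avoid a real heap allocation (scalar replacement). Nothing in
        // the source guarantees it; it is purely a runtime optimization.
        int localOnly() {
            StringBuilder sb = new StringBuilder();
            sb.append("local");
            return sb.length();
        }

        // Here the object is published to a field, so it escapes the method and is
        // allocated on the heap like any ordinary object.
        void published() {
            escaped = new StringBuilder("published");
        }
    }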
Stack, yes (think of a call stack: local variables), but class variables live on the heap, and you have to synchronize access to them :)
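For instance, here is a minimal sketch (class and field names are made up for illustration) of synchronizing access to an instance field that several threads share:

    // Hypothetical example: a counter whose field lives on the heap and is
    // shared by every thread that holds a reference to the same Counter object.
    class Counter {
        private int count = 0; // one copy per Counter instance, NOT per thread

        // Both threads enter this method on the same object, so the lock on
        // "this" makes the read-modify-write of count atomic and visible.
        synchronized void increment() {
            count++;
        }

        synchronized int get() {
            return count;
        }
    }

    public class CounterDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter shared = new Counter();
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    shared.increment();
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Prints 200000 because access is synchronized; without the
            // synchronized keyword, updates could be lost.
            System.out.println(shared.get());
        }
    }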
non-static variables exist in different locations in the memory for each thread
This is not true, so the answer to
if the code that the threads execute includes some class variable v1, then each thread has its own "instance" of v1 (different memory address), and no other thread can "touch" it... isn't it so
is no. Threads can touch object instances allocated and modified by other threads and the burden is on the programmer to ensure this does not affect program correctness.
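A small sketch (names invented for illustration) of one thread modifying an object that another thread allocated:

    // Hypothetical example: the main thread allocates the object, a worker
    // thread mutates its field, and the main thread sees the change after join().
    class Message {
        String text = "initial";
    }

    public class SharedObjectDemo {
        public static void main(String[] args) throws InterruptedException {
            Message m = new Message();          // allocated by the main thread, lives on the heap

            Thread worker = new Thread(() -> {
                m.text = "written by worker";   // same heap object, different thread
            });
            worker.start();
            worker.join();                      // join() establishes a happens-before edge

            System.out.println(m.text);         // prints "written by worker"
        }
    }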
Class member variables exist in a single place in memory per class instance, not per thread. It is true that between memory barriers (think the start { and end } of a synchronized block) a thread may have a cached view of the state of an object, but that is not the same as the language mandating per-thread storage. The "memory for each thread" is its stack, which does not contain object members* -- only references to objects.
The best way to think of it is that there is one location on the heap for each object, but that there might be multiple reads and/or writes involving that memory location happening at the same time.
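As a concrete illustration of that caching effect (a sketch with invented names; the exact behaviour without volatile depends on the JIT and the hardware), a reader thread is allowed to keep using a stale value of a plain field unless something like volatile or synchronized forces the update to become visible:

    public class VisibilityDemo {
        // Without "volatile", the reader thread may keep using a cached value of
        // "running" and spin forever. Declaring the field volatile (or guarding it
        // with synchronized) guarantees the write becomes visible to the reader.
        static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(() -> {
                while (running) {
                    // busy-wait until another thread clears the flag
                }
                System.out.println("reader saw the update and stopped");
            });
            reader.start();

            Thread.sleep(100);   // let the reader start spinning
            running = false;     // write made visible because the field is volatile
        }
    }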
I can see how you would come to the conclusions you did if you heard that threads allocate objects in different parts of the heap. Some JVMs have an optimization whereby they do thread-local allocation, but that does not prevent other threads from accessing those objects.
Thread-local allocation
If the allocator were truly implemented as shown in Listing 1, the shared heapStart field would quickly become a significant concurrency bottleneck, as every allocation would involve acquiring the lock that guards this field. To avoid this problem, most JVMs use thread-local allocation blocks, where each thread allocates a larger chunk of memory from the heap and services small allocation requests sequentially out of that thread-local block. As a result, the number of times a thread has to acquire the shared heap lock is greatly reduced, improving concurrency.
* - it's possible that JVM optimizations allow some objects to be allocated on the stack.
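To make the quoted idea concrete, here is a toy sketch (class names like NaiveAllocator and TlabAllocator are made up; this is not the article's Listing 1 and not how a real JVM allocator is implemented) of why a single shared heapStart pointer guarded by one lock becomes a bottleneck, and how handing each thread its own allocation block reduces contention:

    // Purely illustrative: a naive allocator where every allocation contends on one lock.
    class NaiveAllocator {
        private final Object heapLock = new Object();
        private long heapStart;                 // next free address in the shared heap

        long allocate(int size) {
            synchronized (heapLock) {           // every thread serializes here
                long address = heapStart;
                heapStart += size;
                return address;
            }
        }
    }

    // With thread-local allocation blocks, each thread grabs a large chunk under the
    // shared lock once, then serves many small allocations from it without locking.
    class TlabAllocator {
        private static final int BLOCK_SIZE = 1 << 20;   // 1 MB chunks, illustrative
        private final NaiveAllocator sharedHeap = new NaiveAllocator();

        private final ThreadLocal<long[]> block = ThreadLocal.withInitial(
                () -> new long[] { 0, 0 });     // [current position, end of block]

        long allocate(int size) {               // assumes size <= BLOCK_SIZE
            long[] b = block.get();
            if (b[0] + size > b[1]) {           // block exhausted: refill from shared heap
                b[0] = sharedHeap.allocate(BLOCK_SIZE);
                b[1] = b[0] + BLOCK_SIZE;
            }
            long address = b[0];
            b[0] += size;
            return address;
        }
    }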
Some key points which can help clarify your doubts (see the sketch after this list):
Objects are always allocated on the heap.
Class-level variables are shared across threads (threads working on the same object).
Local variables are always thread-safe (as long as they are not exposed to the outside world in a non-thread-safe manner).
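A short sketch (class and field names are invented) contrasting the two kinds of variables from the list above:

    // Hypothetical example contrasting a shared instance field with per-thread locals.
    class Worker implements Runnable {
        private int sharedField = 0;            // one copy per Worker object, visible to
                                                // every thread running this same instance

        @Override
        public void run() {
            int localCopy = 0;                  // lives in this thread's stack frame;
                                                // each thread gets its own copy, so it
                                                // is inherently thread-safe
            for (int i = 0; i < 1_000; i++) {
                localCopy++;                    // safe without synchronization
                sharedField++;                  // NOT safe: both threads update the same
                                                // heap location, so updates can be lost
            }
            System.out.println("local=" + localCopy + " shared=" + sharedField);
        }

        public static void main(String[] args) {
            Worker sameInstance = new Worker();
            new Thread(sameInstance).start();   // both threads run the SAME object,
            new Thread(sameInstance).start();   // so they share sharedField
        }
    }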
"non-static variables exist in different locations" could not possibly be correct. In Java, you never directly get to know anything of "the stack". All of your class variables, static or instance, come from the heap. As a java developer, however, you don't really care about that.
The only time you don't care about thread-safety is when your classes are immutable (don't change after construction) OR you aren't ever doing anything in threads. If your classes don't fall into these two categories, you need to think about making them thread-safe.
The more immutability you can get into your designs, the easier the threading issues are to reason about and overcome.
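For example, a class along these lines (a sketch; the name and fields are illustrative) can be handed to any number of threads without synchronization, because nothing can change after construction:

    // Immutable value class: final class, final fields, no setters, state fixed in
    // the constructor. Instances can be shared freely between threads.
    public final class Point {
        private final int x;
        private final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }

        // "Mutation" returns a new object instead of changing this one.
        public Point translate(int dx, int dy) {
            return new Point(x + dx, y + dy);
        }
    }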
Nrj has got the right idea.
The stack is thread-safe, whereas the heap is not thread-safe unless you synchronize the code. The stack contains local variables and method parameters (both primitive and reference types), whereas the heap contains objects.
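To make that last distinction concrete (a sketch with made-up names): each thread gets its own copy of a reference parameter in its stack frame, but both copies point at the same heap object, so changes made through one reference are seen through the other and need synchronization.

    import java.util.ArrayList;
    import java.util.List;

    public class ReferenceVsObjectDemo {
        // The parameter "target" is a local reference stored in the calling thread's
        // stack frame; the List it points to lives on the heap and is shared.
        static void addItems(List<String> target, String prefix) {
            for (int i = 0; i < 3; i++) {
                synchronized (target) {         // guard the shared heap object
                    target.add(prefix + i);
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            List<String> shared = new ArrayList<>();
            Thread a = new Thread(() -> addItems(shared, "a"));
            Thread b = new Thread(() -> addItems(shared, "b"));
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(shared);         // contains entries from both threads
        }
    }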