Does Java volatile prevent caching or enforce write-through caching?

暖寄归人 2021-02-05 15:17

I'm trying to understand Java's volatile keyword with respect to writing to a volatile atomic variable in a multithreaded program, on hardware with CPU caches.

2 Answers
  • 2021-02-05 15:21

    The guarantees are only what the language specification says. In theory, writing a volatile variable might force a cache flush to main memory, or it might not; perhaps a subsequent read forces the flush instead, or the data moves between caches without any flush to main memory at all. This vagueness is deliberate: it leaves room for future optimizations that might not be possible if the mechanics of volatile variables were spelled out in more detail.

    In practice, with current hardware, it probably means that, absent a coherent cache, writing a volatile variable forces a cache flush to main memory. With a coherent cache, of course, such a flush isn't needed.
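
    To make this concrete, here is a minimal sketch (the class and field names are mine, not from the answer) whose correctness relies only on the specification's visibility guarantee, however the JVM and hardware choose to deliver it:

    class StopFlag {
        //volatile: per the JLS, once requestStop() runs, the worker's next
        //read of stopRequested sees true, whether that happens via a cache
        //flush, a coherence protocol, or some other mechanism
        private volatile boolean stopRequested = false;

        void runWorker() {
            while (!stopRequested) {
                //do work; each iteration performs a fresh volatile read
            }
        }

        void requestStop() {
            stopRequested = true; //volatile write
        }
    }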

  • 2021-02-05 15:35

    Within Java, it's most accurate to say that all threads will see the most recent write to a volatile field, along with any writes that preceded that volatile write.

    Within the Java abstraction, this is functionally equivalent to volatile fields being read from and written to shared memory directly (though this isn't strictly accurate at a lower level).
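
    Conversely, ordinary fields give you no such equivalence. As a sketch (the names are hypothetical), the following loop is permitted to spin forever, because nothing obliges the reading thread to refetch a non-volatile field:

    class BrokenStopFlag {
        private boolean stopRequested = false; //NOT volatile

        void runWorker() {
            //the JIT may hoist this read out of the loop, so the worker
            //might never observe another thread's write to stopRequested
            while (!stopRequested) {
                //do work
            }
        }

        void requestStop() {
            stopRequested = true; //may never become visible to the worker
        }
    }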


    At a much lower level than is relevant to Java: in modern hardware, all reads and writes to any memory address go through registers and the L1 cache first. That said, Java is designed to hide this kind of low-level behavior from the programmer, so it is only conceptually relevant to the discussion.

    When we use the volatile keyword on a field in Java, this simply tells the compiler to emit something known as a memory barrier around reads and writes of that field. A memory barrier effectively ensures two things:

    1. Any thread reading this address will use the most up-to-date value (the barrier makes it wait until the most recent write has made it back to shared memory, and it cannot continue until that updated value reaches its L1 cache).

    2. No reads/writes to ANY fields can cross over the barrier (i.e., they are always written back before the other thread can continue, and neither the compiler nor the CPU's out-of-order execution may move them past the barrier).

    To give a simple Java example:

    //shared state
    int counter = 0;               //normal int field
    volatile boolean flag = false; //flag is volatile

    //on one thread
    counter += 1;
    flag = true;

    //on another thread
    if (flag) foo(counter); //will see the incremented value


    Essentially, when we set flag to true, we create a memory barrier. When the second thread tries to read this field, it runs into our barrier and waits for the new value to arrive. At the same time, the CPU ensures that counter += 1 is written back before that new value arrives. As a result, if flag == true, then counter will have been incremented.
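
    Wired into a complete, runnable program (a sketch with my own scaffolding; foo is just a placeholder), the snippet behaves like this:

    public class VolatileFlagDemo {
        static int counter = 0;            //normal int field
        static volatile boolean flag = false;

        static void foo(int value) {
            System.out.println("counter = " + value); //prints 1 whenever it runs
        }

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                counter += 1; //ordinary write
                flag = true;  //volatile write publishes the counter update
            });
            Thread reader = new Thread(() -> {
                //the reader may see flag as false and simply do nothing, but
                //if it sees true, happens-before guarantees counter == 1
                if (flag) foo(counter);
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }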


    So, to sum up:

    1. All threads see the most up-to-date values of volatile fields (which can be loosely described as "reads/writes go through shared memory").

    2. Reads/writes to volatile fields establish happens-before relationships: everything a thread did before a volatile write, including its reads/writes of ordinary fields, becomes visible to any thread that subsequently reads that volatile field.
