The volatile keyword and memory consistency errors

借酒劲吻你 2021-02-09 11:20

In the Oracle Java documentation located here, the following is said:

Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.

Two things are unclear to me:

1. What do they mean by "reduces the risk"? How is a memory consistency error still possible when using volatile?
2. Would it be true to say that the only effect of placing volatile on a non-double, non-long primitive is to enable the "happens-before" relationship with subsequent reads from other threads?

6 Answers
  • 2021-02-09 11:48

    What do they mean by "reduces the risk"?

    Atomicity is one issue addressed by the Java Memory Model. However, more important than atomicity are the following issues:

    • memory architecture, e.g. impact of CPU caches on read and write operations
    • CPU optimizations, e.g. reordering of loads and stores
    • compiler optimizations, e.g. added and removed loads and stores

    The following listing contains a frequently used example. The operations on x and y are atomic. Still, the program can print both lines.

    int x = 0, y = 0;
    
    // thread 1
    x = 1;
    if (y == 0) System.out.println("foo");
    
    // thread 2
    y = 1;
    if (x == 0) System.out.println("bar");
    

    However, if you declare x and y as volatile, only one of the two lines can be printed.
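
    One way to turn this listing into a runnable harness (an illustrative sketch, not part of the original answer): with the fields declared volatile, "foo" and "bar" can never both appear in the same run, while removing volatile makes that outcome legal (though you may need many runs, or a tool such as jcstress, to actually observe it).

    // Illustrative harness for the listing above.
    public class StoreBufferingDemo {
        static volatile int x = 0;
        static volatile int y = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> {
                x = 1;
                if (y == 0) System.out.println("foo");
            });
            Thread t2 = new Thread(() -> {
                y = 1;
                if (x == 0) System.out.println("bar");
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }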


    How is a memory consistency error still possible when using volatile?

    The following example uses volatile. However, updates might still get lost.

    volatile int x = 0;
    
    // thread 1
    x += 1;
    
    // thread 2
    x += 1;
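
    The usual fix for this lost-update pattern is to make the read-modify-write itself atomic, for example with java.util.concurrent.atomic.AtomicInteger (a sketch in the same fragment style as above, not part of the original answer):

    import java.util.concurrent.atomic.AtomicInteger;

    AtomicInteger x = new AtomicInteger(0);

    // thread 1
    x.incrementAndGet();   // atomic read-modify-write: no update can be lost

    // thread 2
    x.incrementAndGet();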
    

    Would it be true to say that the only effect of placing volatile on a non-double, non-long primitive is to enable the "happens-before" relationship with subsequent reads from other threads?

    Happens-before is often misunderstood. The consistency model defined by happens-before is weak and difficult to use correctly. This can be demonstrated with the following example, which is known as Independent Reads of Independent Writes (IRIW):

    volatile int x = 0, y = 0;
    
    // thread 1
    x = 1;
    
    // thread 2
    y = 1;
    
    // thread 3
    if (x == 1) System.out.println(y);
    
    // thread 4
    if (y == 1) System.out.println(x);
    

    With happens-before alone, two 0s (thread 3 and thread 4 each printing 0) would be a valid result. However, that is rather counter-intuitive. For that reason, Java provides a stricter consistency model that forbids this relativity issue, known as sequential consistency. You can find it in §17.4.3 and §17.4.5 of the Java Language Specification. The most important part is:

    A program is correctly synchronized if and only if all sequentially consistent executions are free of data races. If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3).

    That means volatile gives you more than happens-before: it gives you sequential consistency, provided it is used for all conflicting accesses (§17.4.3).

  • 2021-02-09 11:49

    When it comes to concurrency, you want to ensure two things:

    • atomicity: a set of operations executes as a single, indivisible unit. This is usually achieved with "synchronized" (or higher-level constructs). volatile also makes single reads and writes of long and double atomic.

    • visibility: a thread B sees a modification made by a thread A. Even if an operation is atomic, like a write to an int variable, a second thread can still see a stale value of that variable because of processor caches. Declaring the variable volatile ensures that the second thread sees the up-to-date value. More than that, it ensures that the second thread sees an up-to-date value of ALL the variables written by the first thread before the write to the volatile variable (see the sketch below).
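
    That last guarantee is sometimes called piggybacking on a volatile write: everything the writer stored before the volatile write becomes visible to a reader that subsequently reads that volatile field. A minimal sketch with illustrative names (not from the original answer):

    // Thread A calls write(42); Thread B calls read().
    public class Publisher {
        private int payload;             // plain field, written before the flag
        private volatile boolean ready;  // the volatile "flag"

        public void write(int value) {   // thread A
            payload = value;             // plain write...
            ready = true;                // ...published by this volatile write
        }

        public Integer read() {          // thread B
            if (ready) {                 // volatile read establishes happens-before
                return payload;          // sees the value written before the flag, not a stale one
            }
            return null;                 // flag not set yet
        }
    }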

  • 2021-02-09 11:52

    The usual example:

    while(!condition)
        sleep(10);
    

    If condition is volatile, this behaves as expected. If it is not, the JIT compiler is allowed to hoist the read of condition out of the loop and effectively optimize this to

    if(!condition)
        for(;;)
            sleep(10);
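
    A runnable version of this stop-flag pattern, as an illustrative sketch (not part of the original answer); remove volatile and the worker below is allowed to spin forever, which on a typical HotSpot JVM it frequently does once the loop is JIT-compiled:

    public class StopFlagDemo {
        static volatile boolean running = true;   // without 'volatile' the read may be hoisted out of the loop

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (running) {
                    // busy work; deliberately no sleep, so the loop gets JIT-compiled quickly
                }
                System.out.println("worker stopped");
            });
            worker.start();
            Thread.sleep(1000);   // let the worker run for a while
            running = false;      // this volatile write is guaranteed to become visible
            worker.join();
        }
    }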
    

    This is completely orthogonal to atomicity: if condition is of a hypothetical integer type that is not atomic, then the sequence

    thread 1 writes upper half to 0
    thread 2 reads upper half (0)
    thread 2 reads lower half (0)
    thread 1 writes lower half (1)
    

    can happen while the variable is updated from a nonzero value that just happens to have a lower half of zero to a nonzero value that has an upper half of zero; in this case, thread 2 reads the variable as zero. The volatile keyword in this case makes sure that thread 2 really reads the variable instead of using its local copy, but it does not affect timing.
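
    In Java specifically, reads and writes of volatile long and double are guaranteed to be atomic (JLS §17.7), so declaring the field volatile does also rule out the torn read described above. The following sketch (illustrative, not from the original answer) probes for tearing on a plain, non-volatile long; whether a torn value is ever observed depends on the JVM and hardware, since the specification permits but does not require the split:

    public class TearingDemo {
        static long value = 0xFFFFFFFF00000000L;        // deliberately NOT volatile
        static final long A = 0xFFFFFFFF00000000L;      // lower half zero
        static final long B = 0x00000000FFFFFFFFL;      // upper half zero

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                for (long i = 0; i < 100_000_000L; i++) {
                    value = (i & 1) == 0 ? A : B;        // alternate between the two bit patterns
                }
            });
            Thread reader = new Thread(() -> {
                for (long i = 0; i < 100_000_000L; i++) {
                    long v = value;
                    if (v != A && v != B) {              // a mix of halves means the read was torn
                        System.out.println("torn read: " + Long.toHexString(v));
                    }
                }
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }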

    Third, atomicity does not protect against

    thread 1 reads value (0)
    thread 2 reads value (0)
    thread 1 writes incremented value (1)
    thread 2 writes incremented value (1)
    

    One of the best ways to use atomic volatile variables is for the read and write pointers of a ring buffer:

    thread 1 looks at read pointer, calculates free space
    thread 1 fills free space with data
    thread 1 updates write pointer (which is `volatile`, so the side effects of filling the free space are also committed before)
    thread 2 looks at write pointer, calculates amount of data received
    ...
    

    Here, no lock is needed to synchronize the threads: atomicity guarantees that the read and write pointers are always accessed consistently, and volatile enforces the necessary ordering.
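
    A minimal single-producer/single-consumer sketch of that idea (illustrative names, not a library API): each index is written by exactly one thread, so the non-atomic increment is safe, and the volatile write of an index publishes the element stored just before it.

    public class SpscRingBuffer {
        private final int[] buffer;
        private final int capacity;
        private volatile long readIndex = 0;    // advanced only by the consumer
        private volatile long writeIndex = 0;   // advanced only by the producer

        public SpscRingBuffer(int capacity) {
            this.capacity = capacity;
            this.buffer = new int[capacity];
        }

        // Called only by the producer thread.
        public boolean offer(int value) {
            if (writeIndex - readIndex == capacity) {
                return false;                                   // buffer full
            }
            buffer[(int) (writeIndex % capacity)] = value;      // plain store...
            writeIndex = writeIndex + 1;                        // ...published by this volatile store
            return true;
        }

        // Called only by the consumer thread; returns null if the buffer is empty.
        public Integer poll() {
            if (readIndex == writeIndex) {
                return null;                                    // buffer empty
            }
            int value = buffer[(int) (readIndex % capacity)];   // visible: producer's volatile write happened-before our volatile read
            readIndex = readIndex + 1;                          // frees the slot for reuse by the producer
            return value;
        }
    }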

  • 2021-02-09 12:06

    For question 1, the risk is only reduced (and not eliminated) because volatile only applies to a single read/write operation and not more complex operations such as increment, decrement, etc.

    For question 2, the effect of volatile is to make changes immediately visible to other threads. As the quoted passage states, "this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible." Simply because reads are atomic does not mean that they are thread safe. So establishing a happens-before relationship is almost a (necessary) side-effect of guaranteeing memory consistency across threads.

  • 2021-02-09 12:09

    Ad 1: With a volatile variable, every access is checked against the master copy, so all threads see a consistent state. But if you use that volatile variable in a non-atomic operation that writes back a result (say a = f(a)), you might still create a memory inconsistency. That's how I would understand the remark "reduces the risk". A volatile variable is consistent at the time of the read, but you might still need to synchronize.
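
    In other words, a compound action such as a = f(a) needs the read and the write-back to happen as one atomic step, e.g. under a lock. An illustrative sketch (f here is just a placeholder for whatever the computation is):

    public class Holder {
        private int a = 0;             // protected by the intrinsic lock, no volatile needed

        public synchronized void update() {
            a = f(a);                  // read and write-back happen atomically under the lock
        }

        public synchronized int get() {
            return a;                  // the same lock also guarantees visibility of the latest value
        }

        private int f(int x) {         // placeholder computation
            return x + 1;
        }
    }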

    Ad 2: I don't know. But if your definition of "happens-before" includes the remark

    This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up the change.

    I would not dare to rely on any other property except that volatile ensures this. What else do you expect from it?!

  • 2021-02-09 12:09

    Assume that you have a CPU with caches and registers. Regardless of how many cores that CPU has, volatile does NOT guarantee you perfect consistency. The only way to achieve that is to use synchronized or atomic references, at a performance price.

    For example, you have multiple threads (Thread A and Thread B) working on shared data. Assume that Thread A wants to update the shared data and starts running. For performance reasons, Thread A's working copy of the data is kept in a CPU cache or register. Thread A then updates the shared data, but the problem with those places is that the updated value is not flushed back to main memory immediately. This is where the inconsistency arises: until that flush happens, Thread B might want to work with the same data and will take it from main memory, getting the not-yet-updated value.

    If you use volatile, all reads and writes go to main memory, so there is no flush-back latency. But this time you may suffer from interleaving: in the middle of a compound operation (composed of several atomic steps), Thread B may be scheduled by the OS to perform a read, and that's it! Thread B will read the not-yet-updated value again. That's why it is said that volatile only reduces the risk.

    Hope you got it.
