Question
Using a simplified version of a basic seqlock, gcc reorders a nonatomic load up across an atomic load(memory_order_seq_cst) when compiling the code with -O3. This reordering isn't observed when compiling with other optimization levels or when compiling with clang (even on O3). This reordering seems to violate a synchronizes-with relationship that should be established, and I'm curious to know why gcc reorders this particular load and if this is even allowed by the standard.
Consider the following load function:
auto load()
{
    std::size_t copy;
    std::size_t seq0 = 0, seq1 = 0;
    do
    {
        seq0 = seq_.load();
        copy = value;
        seq1 = seq_.load();
    } while (seq0 & 1 || seq0 != seq1);

    std::cout << "Observed: " << seq0 << '\n';
    return copy;
}
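For context, load() is a member function; a minimal sketch of the enclosing type it is assumed to live in (only the member types below are stated in this question, the wrapper itself is illustrative):

#include <atomic>
#include <cstddef>
#include <iostream>

// Illustrative sketch of the assumed enclosing type: seq_ is the sequence
// counter and value is the non-atomic payload it protects (see text below).
struct SeqlockDemo
{
    auto load();                       // the reader function shown above
    std::atomic<std::size_t> seq_{0};  // sequence counter; even = stable
    std::size_t value{0};              // plain, non-atomic payload
};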
Following seqlock procedure, this reader spins until it is able to load two instances of seq_, which is defined to be a std::atomic<std::size_t>, that are even (to indicate that a writer is not currently writing) and equal (to indicate that a writer has not written to value in between the two loads of seq_). Furthermore, because these loads are tagged with memory_order_seq_cst (as a default argument), I would imagine that the instruction copy = value; would be executed on each iteration, as it cannot be reordered up across the initial load, nor can it be reordered down below the latter.
However, the generated assembly issues the load from value before the first load from seq_, and it is even performed outside of the loop. This could lead to improper synchronization or torn reads of value that do not get resolved by the seqlock algorithm. Additionally, I've noticed that this only occurs when sizeof(value) is below 123 bytes. Modifying value to be of some type >= 123 bytes yields the correct assembly: value is loaded upon each loop iteration, in between the two loads of seq_. Is there any reason why this seemingly arbitrary threshold dictates which assembly is generated?
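For concreteness, the size experiment above amounts to swapping the payload type for something like the hypothetical struct below and comparing the generated assembly at 122 vs. 123 bytes (the struct is purely illustrative, not my real type):

// Hypothetical payload used only to probe the size threshold described above:
// at sizeof(Payload) >= 123 the load of value stays between the two loads of
// seq_; below that, gcc hoists it out of the loop.
struct Payload
{
    unsigned char bytes[123];  // try 122 vs. 123 and compare the codegen
};

static_assert(sizeof(Payload) == 123, "at the observed threshold");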
This test harness exposes the behavior on my Xeon E3-1505M, in which "Observed: 2" will be printed from the reader and the value 65535 will be returned. This combination of the observed value of seq_ and the returned load from value seems to violate the synchronizes-with relationship that should be established by the writer thread publishing seq_.store(2) with memory_order_release and the reader thread reading seq_ with memory_order_seq_cst.
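The full harness isn't reproduced above, but from the description its writer side amounts to something like this (the stored value 65535 and the release store of 2 come from the text; the function name and the rest are my own sketch):

// Sketch of the writer thread described above: write the payload once,
// then publish by storing 2 to seq_ with a release store. The reader's
// seq_cst load that observes 2 should synchronize-with this store.
void writer()
{
    value = 65535;                             // plain, non-atomic write
    seq_.store(2, std::memory_order_release);  // publish
}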
Is it valid for gcc to reorder the load, and if so, why does it only do so when sizeof(value) is < 123? Clang, no matter the optimization level or the sizeof(value), will not reorder the load. Clang's codegen, I believe, is the appropriate and correct approach.
Answer 1:
Congratulations, I think you've hit a bug in gcc!
Now I think you can make a reasonable argument, as the other answer does, that the original code you showed could perhaps have been correctly optimized that way by gcc, by relying on a fairly obscure argument about the unconditional access to value: essentially, you can't have been relying on a synchronizes-with relationship between the load seq0 = seq_.load(); and the subsequent read of value, so reading it "somewhere else" shouldn't change the semantics of a race-free program. I'm not actually sure of this argument, but here's a "simpler" case I got from reducing your code:
#include <atomic>
#include <iostream>

std::atomic<std::size_t> seq_;
std::size_t value;

auto load()
{
    std::size_t copy;
    std::size_t seq0;
    do
    {
        seq0 = seq_.load();
        if (!seq0) continue;
        copy = value;
        seq0 = seq_.load();
    } while (!seq0);

    return copy;
}
This isn't a seqlock or anything - it just waits for seq0 to change from zero to non-zero, and then reads value. The second read of seq_ is superfluous, as is the while condition, but without them the bug goes away.
This is now the read-side of the well known idiom which does work and is race-free: one thread writes to value, then sets seq_ non-zero with a release store. The threads calling load see the non-zero store, synchronize with it, and so can safely read value. Of course, you can't keep writing to value - it's a "one time" initialization - but this is a common pattern (a minimal sketch of it follows below).
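For reference, a minimal sketch of that one-time publication idiom (the names here are illustrative, not taken from the question):

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> flag{0};  // plays the role of seq_
std::size_t data;                  // plays the role of value

// Writer: initialize the payload once, then publish with a release store.
void publish(std::size_t v)
{
    data = v;
    flag.store(1, std::memory_order_release);
}

// Reader: spin until the flag becomes non-zero. The acquire load that sees 1
// synchronizes-with the release store above, so reading data is race-free.
std::size_t consume()
{
    while (flag.load(std::memory_order_acquire) == 0)
        ;  // spin
    return data;
}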
Coming back to the reduced code above, gcc is still hoisting the read of value:
load():
        mov     rax, QWORD PTR value[rip]
.L2:
        mov     rdx, QWORD PTR seq_[rip]
        test    rdx, rdx
        je      .L2
        mov     rdx, QWORD PTR seq_[rip]
        test    rdx, rdx
        je      .L2
        rep ret
Oops!
This behavior occurs up to gcc 7.3, but not in 8.1. Your code also compiles as you wanted in 8.1:
        mov     rbx, QWORD PTR seq_[rip]
        mov     rbp, QWORD PTR value[rip]
        mov     rax, QWORD PTR seq_[rip]
Answer 2:
Reordering such operations is not allowed in general, but it is allowed in this case because you invoked undefined behavior: the non-atomic read of value in the reader races with a non-atomic write performed by another thread.
The C++11 standard says:
Two expression evaluations conflict if one of them modifies a memory location (1.7) and the other one accesses or modifies the same memory location.
And also that:
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.
This applies even to things that occur before the undefined behavior:
A conforming implementation executing a well-formed program shall produce the same observable behavior as one of the possible executions of the corresponding instance of the abstract machine with the same program and the same input. However, if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation).
Because reading value while it is being written non-atomically in another thread creates undefined behavior (even if you overwrite and ignore the value you read), GCC is allowed to assume the race does not occur and thus optimize out the seqlock.
Based on another answer, it seems this is actually caused by a bug in GCC which persists when you fix the UB, but that optimization wasn't technically invalid for your code since you invoked UB.
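Neither answer spells it out, but the usual way to remove the data race described here is to make value itself an atomic and read it with relaxed loads inside the retry loop; a sketch of that fence-based reader (not code from the question):

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> seq_{0};
std::atomic<std::size_t> value{0};  // payload made atomic: no data race

// Race-free reader sketch: the payload load is relaxed but atomic, and the
// acquire fence keeps it from sinking below the second load of seq_, so a
// concurrent write is detected and retried instead of being undefined.
std::size_t load()
{
    std::size_t copy, seq0, seq1;
    do
    {
        seq0 = seq_.load(std::memory_order_acquire);
        copy = value.load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire);
        seq1 = seq_.load(std::memory_order_relaxed);
    } while (seq0 & 1 || seq0 != seq1);
    return copy;
}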
Source: https://stackoverflow.com/questions/36958372/gcc-reordering-up-across-load-with-memory-order-seq-cst-is-this-allowed