Question
The "Treiber Stack" is generally one of the simplest lock-free data structures, and so it is often used when teaching introductions to lock-free algorithms.
I've seen many implementations of Treiber Stacks using C++ atomics. The algorithm itself is trivial, so the real challenge is handling all the other incidental details of lock-free data-structures, such as providing some way of performing safe memory reclamation, avoiding the ABA problem, and allocating nodes in a lock-free manner. This can be solved in various ways, such as using atomic reference counting, hazard pointers, counted/tagged pointers to avoid ABA, and using a lock-free memory pool.
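As an aside on the ABA point: the usual counted/tagged-pointer trick packs a version counter next to the head pointer, so a recycled node address no longer makes the CAS succeed by accident. A minimal sketch (illustrative names, not from any particular library; it assumes a lock-free double-width CAS such as cmpxchg16b is available, and safe memory reclamation is still a separate concern):
#include <atomic>
#include <cstdint>

struct Node { int value; Node* next; };

// Head pointer packed with a version counter ("tag"); the tag is bumped on
// every successful CAS, so a reused node address still makes the CAS fail.
struct TaggedHead
{
    Node*    ptr;
    uint64_t tag;
};

std::atomic<TaggedHead> m_head{ TaggedHead{nullptr, 0} };

Node* pop()
{
    TaggedHead old_head = m_head.load();
    TaggedHead new_head;
    do {
        if (old_head.ptr == nullptr)
            return nullptr;   // stack is empty
        // Even if old_head.ptr was popped, freed and reallocated by another
        // thread, the tag mismatch makes this CAS fail instead of silently
        // succeeding with a stale next pointer.  (Reading ->next from a
        // possibly-freed node still needs safe reclamation, ignored here.)
        new_head = TaggedHead{ old_head.ptr->next, old_head.tag + 1 };
    } while (!m_head.compare_exchange_weak(old_head, new_head));
    return old_head.ptr;
}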
But ignoring all those details and focusing on the simple algorithm itself, one question that occurred to me is the fact that every implementation of Treiber Stacks that I can recall implements the node class using atomic next pointers. For example:
struct Node
{
T value;
std::atomic<Node*> next;
};
But after thinking about the algorithm, I'm not sure why the next pointer needs to be atomic.
The general PUSH algorithm (ignoring lock-free allocation, safe memory reclamation, backoff, ABA avoidance, etc.) is:
Node* n = new Node();
Node* front = m_front.load();
n->next.store(front);
while (!m_front.compare_exchange_weak(front, n))
{
    n->next.store(front); // CAS failure reloaded 'front' with the current head; re-link and retry
}
The general POP algorithm (again, ignoring all details except the actual algorithmic logic) is:
Node* front = m_front.load();
Node* next = front->next.load();
while (!m_front.compare_exchange_weak(front, next))
{
    next = front->next.load(); // CAS failure reloaded 'front'; reload its next and retry
}
And here is a real-world example implementation of the PUSH algorithm:
https://github.com/khizmax/libcds/blob/master/cds/intrusive/treiber_stack.h#L736
So I don't understand why the next pointer even needs to be atomic. Most C++ implementations use relaxed loads/stores with the next pointer, so we don't need any memory fences when reading/writing to the next pointer, but my thinking is that it doesn't need to be atomic at all.
From what I can see, at no time is the next pointer of any node concurrently written to. Rather, the next pointer may be concurrently loaded, but I never see any opportunity for the algorithm to concurrently load+store or concurrently store+store. In fact, in the PUSH algorithm, the next pointer is never accessed concurrently at all.
So it seems to me that next pointers are effectively "read-only" when accessed concurrently, so I'm not sure why it would even be necessary to make them atomic at all.
Yet, every C++ implementation of a Treiber Stack I've seen makes the next pointers atomic. So am I correct, or is there some reason the next pointer must be made atomic?
Answer 1:
If it were as simple as the code you showed, you'd be right. A Node is never modified after a pointer to it is published. But you left out the part where Nodes are cleaned up so they can be garbage-collected. (You can't just delete after popping; another thread could still have a pointer to it and not yet have read it. This is a tricky problem for RCU as well.)
This is the function you left out, called after a CAS in pop succeeds:
protected:
void clear_links( node_type * pNode ) CDS_NOEXCEPT
{
pNode->m_pNext.store( nullptr, memory_model::memory_order_relaxed );
}
Here's an ordering where a reader reads a next while it's being written:
A: Node* front = m_front.load();
B: Node* front = m_front.load(); // same value
A: Node* next = front->next.load();
A: m_front.compare_exchange_weak(front, next) // succeeds, no loop
A: clear_links(front); // i.e. front->next.store(nullptr);
B: front->next.load();
Thus, C++ Undefined Behaviour, end of story as far as standards compliance is concerned.
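To make that interleaving concrete, here is a hypothetical pop() (just a sketch assuming the Node and m_front from the question, not the libcds code) showing where the cleanup store sits relative to the racing load:
Node* pop()
{
    Node* front = m_front.load();
    Node* next;
    do {
        if (front == nullptr)
            return nullptr;                // stack is empty
        next = front->next.load();         // thread B may be doing this load...
    } while (!m_front.compare_exchange_weak(front, next));

    // ...at the same time thread A, whose CAS already succeeded, performs
    // this cleanup store (the clear_links() step), which is why the next
    // pointer has to be atomic.  Safe memory reclamation is still omitted.
    front->next.store(nullptr);
    return front;
}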
In practice, a non-atomic load will happen to be atomic anyway on most CPU architectures, or at worst experience tearing. (IDK of any ISA where it results in anything unpredictable other than the value, but C++ leaves this option open).
I'm not sure there's any scenario where a torn value could actually get used (put into m_front), because clear_links() can't run until after a successful CAS. And if the CAS succeeded in one thread, it will fail in the other thread, because the other thread will only try the torn next value with the old front as the expected arg to CAS.
In practice, pretty much every implementation anyone cares about has no extra cost for relaxed-atomic loads/stores vs. regular for pointer-sized objects. In fact, this stack pretty much sucks if atomicity isn't "free" for a pointer.
e.g. on AVR (8-bit RISC microcontrollers which use 16-bit pointers), it would be cheaper to just take a lock on the data structure, instead of letting std::atomic use locks for every load/store in this algo. (Especially since there aren't multi-core AVR CPUs, so locks are probably very cheap to implement.)
atomic<> also gets the compiler to assume that a value could be asynchronously modified by another thread, so it stops it from ever optimizing away a load or store, somewhat like volatile. (But see also Why don't compilers merge redundant std::atomic writes?.) I don't think there's anything here where that's needed and wouldn't already happen.
Non-atomic ops are ordered by atomic acquire and release operations, similar to relaxed atomic operations, and the CAS updates front on failure, so each retry loads front->next through a new front; a non-atomic load couldn't be optimized away there anyway.
It might be an interesting experiment to see if you get identical asm output from a compiler after replacing std::atomic<Node*> next with Node* next. (Or with a non_atomic wrapper class that still has load/store member functions so you don't have to modify much code.)
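A minimal sketch of such a wrapper (illustrative only, not from any library):
#include <atomic>   // only for the std::memory_order parameters

// Drop-in stand-in for std::atomic<T> that does plain (non-atomic) accesses,
// so the rest of the stack code compiles unchanged for an asm comparison.
template <typename T>
class non_atomic
{
    T m_value;
public:
    non_atomic() noexcept = default;
    constexpr non_atomic(T v) noexcept : m_value(v) {}

    T load(std::memory_order = std::memory_order_seq_cst) const noexcept
    {
        return m_value;        // plain load
    }
    void store(T v, std::memory_order = std::memory_order_seq_cst) noexcept
    {
        m_value = v;           // plain store
    }
};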
Using relaxed atomic stores looks good to me. You definitely don't want to implement it the way you showed, with seq_cst stores as part of initializing a new object that hasn't had any pointers to it published yet. At that point atomicity is not needed, but it's free (on normal CPUs), so there's no benefit to avoiding it. None of the stores or loads could be optimized away.
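For illustration, a push along those lines might look like this (a sketch assuming the Node and m_front from the question, not the libcds implementation; allocation, backoff and reclamation are still ignored):
void push(T value)
{
    Node* n = new Node();
    n->value = std::move(value);

    Node* front = m_front.load(std::memory_order_relaxed);
    do {
        // No other thread can see n yet, so a relaxed store is sufficient
        // (and atomicity here is free on normal CPUs anyway).
        n->next.store(front, std::memory_order_relaxed);
        // The release CAS is what publishes n and its initialized fields;
        // on failure it reloads front with the current head and we retry.
    } while (!m_front.compare_exchange_weak(front, n,
                                            std::memory_order_release,
                                            std::memory_order_relaxed));
}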
Source: https://stackoverflow.com/questions/46415027/c-treiber-stack-and-atomic-next-pointers