Say I have two threads that manipulate the global variable x. Each thread (or each core, I suppose) will have a cached copy of x.
Now say thread A executes x = 5 and then issues a memory barrier. Does the memory barrier ensure that cache coherence has completed, i.e. that the other cores' cached copies of x now reflect the new value?
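For concreteness, a minimal C++ sketch of the scenario I have in mind (the use of std::atomic and this particular fence are just for illustration):

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0};   // the shared global from the question

    void thread_a() {
        x.store(5, std::memory_order_relaxed);                // x = 5
        std::atomic_thread_fence(std::memory_order_seq_cst);  // the memory barrier
    }

    void thread_b() {
        int seen = x.load(std::memory_order_relaxed);  // does this see 5, or may it still see 0?
        (void)seen;
    }

    int main() {
        std::thread a(thread_a), b(thread_b);
        a.join();
        b.join();
    }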
No, a memory barrier does not ensure that cache coherence has been "completed". It often involves no coherence operation at all and can be performed speculatively or as a no-op.
It only enforces the ordering semantics described in the barrier. For example, an implementation might just put a marker in the store queue such that store-to-load forwarding doesn't occur for stores older than the marker.
Intel, in particular, already has a strong memory model for normal loads and stores (the kind that compilers generate and that you'd use in assembly), where the only possible re-ordering is later loads passing earlier stores. In the terminology of SPARC memory barriers, every barrier other than StoreLoad is already a no-op.
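That one remaining reordering is easy to see with the classic store-buffer litmus test. A hedged C++ sketch (the variable names are made up for illustration):

    #include <atomic>
    #include <thread>

    std::atomic<int> X{0}, Y{0};
    int r1, r2;

    void thread1() {
        X.store(1, std::memory_order_relaxed);   // plain store (a mov on x86)
        // A StoreLoad barrier here would forbid the reordering noted below.
        r1 = Y.load(std::memory_order_relaxed);  // later load may pass the earlier store
    }

    void thread2() {
        Y.store(1, std::memory_order_relaxed);
        r2 = X.load(std::memory_order_relaxed);
    }

    int main() {
        std::thread a(thread1), b(thread2);
        a.join(); b.join();
        // Without a StoreLoad barrier, r1 == 0 && r2 == 0 is a legal outcome on x86:
        // each load can execute before the other thread's store leaves its store buffer.
        return 0;
    }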
In practice, the interesting barriers on x86 are attached to LOCKed instructions, and the execution of such an instruction doesn't necessarily involve any cache coherence at all. If the line is already in an exclusive state, the CPU may simply execute the instruction, making sure not to release the exclusive state of the line while the operation is in progress (i.e., between the read of the argument and the writeback of the result), and then only deal with preventing store-to-load forwarding from breaking the total ordering that LOCK instructions come with. Currently they do that by draining the store queue, but in future processors even that could be speculative.
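For illustration (an assumption about typical code generation, not a guarantee): an atomic read-modify-write in C++ usually compiles to exactly such a LOCKed instruction and carries full-barrier semantics:

    #include <atomic>

    std::atomic<long> counter{0};

    long bump() {
        // On x86 this typically compiles to a LOCKed read-modify-write
        // (e.g. lock xadd). The cache line is held exclusively for the duration
        // of the operation, and the instruction acts as a full memory barrier.
        return counter.fetch_add(1, std::memory_order_seq_cst);
    }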
What a memory barrier or barrier+op does is ensure that the operation is seen by other agents in a relative order that obeys all the restrictions of the barrier. That certainly doesn't usually involve pushing the result to other CPUs as a coherence operation, as your question implies.
If no other processor has x in its cache, doing x=5 on processor A will not update the caches in any other processor. If processor B reads variable x, processor A will detect the read (this is called snooping) and will provide the data, 5, on the bus for processor B. Now processor B will have the value 5 in its cache. If no other processor reads variable x then their caches will never be updated with the new value 5.
Now, when x = 5 is executed, the cached value of x will be set to 5; this will cause the cache coherence protocol to act and update the caches of the other cores with the new value of x.
There are multiple different x86 CPUs with different cache coherency protocols (none, MESI, MOESI), plus different types of caching (uncached, write-combining, write-protect, write-through, write-back).
In general when a write is being done (when setting x to 5) the CPU determines the type of caching being done (from MTRRs or TLBs), and if the cache line could be cached it checks its own cache to determine what state that cache line is in (from its own perspective).
Then the type of caching and the state of the cache line are used to determine if the data is written directly to the physical address space (bypassing caches), or if it has to fetch the cache line from elsewhere while simultaneously telling other CPUs to invalidate old copies, or if it has exclusive access in its own caches and can modify it in the cache without telling anything.
A CPU never "injects" data into another CPU's cache (and only tells other CPUs to invalidate/discard their copy of a cache line). Telling other CPUs to invalidate/discard their copy of a cache line causes them to fetch the current copy of it if/when they want it again.
Note that none of this has anything to do with memory barriers.
There are 3 types of memory barriers (sfence, lfence and mfence), which tell the CPU to complete stores, loads or both before allowing later stores, loads or both to occur. Because the CPU is normally cache coherent anyway, these memory barriers/fences are normally pointless/unnecessary. However, there are situations where the CPU is not cache coherent (including "store forwarding", when the write-combining caching type is being used, when non-temporal stores are being used, etc.). Memory barriers/fences are needed to enforce ordering (if necessary) for these special/rare cases.
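As an illustration of one of those special cases (a hedged sketch; the buffer and flag names are made up): non-temporal stores are weakly ordered even on x86, so an sfence is needed before publishing a flag that tells another core the data is ready:

    #include <immintrin.h>
    #include <atomic>

    int buffer[1024];
    std::atomic<int> ready{0};

    void producer() {
        for (int i = 0; i < 1024; ++i) {
            // Non-temporal store: goes through write-combining buffers and is
            // NOT strongly ordered like a normal x86 store.
            _mm_stream_si32(&buffer[i], i);
        }
        _mm_sfence();  // make the streaming stores globally visible
                       // before the flag store below
        ready.store(1, std::memory_order_release);
    }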
The memory barriers present on the x86 architecture - but this is true in general - not only guarantee that all the previous1 loads, or stores, are completed before any subsequent load or store is executed - they also guarantee that the stores have become globally visible.
By globally visible it is meant that other cache-aware agents - like other CPUs - can see the store.
Other agents not aware of the caches - like a DMA-capable device - will not usually see the store if the target memory has been marked with a cache type that doesn't enforce an immediate write into memory.
This has nothing to do with the barrier itself; it is a simple fact of the x86 architecture: caches are visible to the programmer, and when dealing with hardware they are usually disabled.
Intel is purposely generic in its description of the barriers because it doesn't want to tie itself to a specific implementation.
You need to think in the abstract: globally visible implies that the hardware will take all the necessary steps to make the store globally visible. Period.
To understand the barriers however it is worth taking a look at the current implementations.
Note that Intel is free to turn the modern implementation upside down at will, as long as it keeps the visible behaviour correct.
A store in an x86 CPU is executed in the core, then placed in the store buffer.
For example mov DWORD [eax+ebx*2+4], ecx, once decoded, is stalled until eax, ebx and ecx are ready2, then it is dispatched to an execution unit capable of computing its address.
When the execution is done the store has become a pair (address, value) that is moved into the store buffer.
The store is said to be completed locally (in the core).
The store buffer allows the OoO part of the CPU to forget about the store and consider it completed, even if no attempt to write it has been made yet.
Upon specific events, like a serialization event, an exception, the execution of a barrier or the exhaustion of the buffer, the CPU flushes the store buffer.
The flush is always in order - First In, First written.
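A toy model of that bookkeeping, purely to fix the idea (this is not how real hardware is implemented): completed stores sit in a FIFO as (address, value) pairs and leave oldest-first:

    #include <cstdint>
    #include <deque>

    // Toy model of a store-buffer entry: the (address, value) pair described above.
    struct StoreEntry {
        std::uintptr_t address;
        std::uint32_t  value;
    };

    std::deque<StoreEntry> store_buffer;

    // A store that has completed locally is appended at the tail.
    void complete_locally(std::uintptr_t addr, std::uint32_t val) {
        store_buffer.push_back({addr, val});
    }

    // On a flush (serialization event, barrier, full buffer, ...) entries leave
    // strictly in program order: first in, first written towards the cache.
    StoreEntry drain_oldest() {
        StoreEntry e = store_buffer.front();
        store_buffer.pop_front();
        return e;
    }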
From the store buffer the store enters the realm of the cache.
If the target address is marked with the WC cache type, it can be combined into yet another buffer, the Write Combining buffer (and later written to memory, bypassing the caches); if the cache type is WB or WT, it can be written into the L1D cache, or into the L2, L3 or LLC if it is not present in one of the previous levels.
It can also be written directly to memory if the cache type is UC or WT.
As of today, that's what it means to become globally visible: leave the store buffer.
Beware of two very important things:
sfence does exactly that: it waits for all the previous stores to complete locally and then drains the store buffer.
Since each store in the store buffer can potentially miss, you see how heavy such an instruction is. (But out-of-order execution, including later loads, can continue. Only mfence would block later loads from being globally visible (reading from the L1d cache) until after the store buffer finishes committing to cache.)
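As a rough software-side illustration of that cost (how compilers typically map C++ fences onto x86; the exact code generation is an assumption and varies):

    #include <atomic>

    void release_fence() {
        // Typically compiles to no instruction at all on x86 (just a compiler
        // barrier): ordinary stores are already ordered, so nothing has to
        // wait on the store buffer.
        std::atomic_thread_fence(std::memory_order_release);
    }

    void full_fence() {
        // Typically compiles to mfence (or a dummy LOCKed instruction): later
        // loads can't read from L1d until all earlier stores have left the
        // store buffer - this draining is the expensive part.
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }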
But does sfence wait for the store to propagate into other caches?
Well, no.
Because there is no propagation - let's see what a write into the cache implies from a high-level perspective.
The cache is kept coherent among all the processors with the MESI protocol (MESIF for multi-socket Intel systems, MOESI for AMD ones).
We will only see MESI.
Suppose the write indexes the cache line L, and suppose all the processors have this line L in their caches with the same value.
The state of this line is Shared, in every CPU.
When our store lands in the cache, L is marked as Modified and a special transaction is made on the internal bus (or QPI for multi-socket Intel systems) to invalidate line L in other processors.
If L was not initially in the S state, the protocol is changed accordingly (e.g. if L is in state Exclusive no transactions on the bus are done[1]).
At this point the write is complete and sfence completes.
This is enough to keep the cache coherent.
When another CPU requests line L, our CPU snoops that request and L is flushed to memory or onto the internal bus so the other CPU will read the updated version.
The state of L is set to S again.
So basically L is read on-demand - this makes sense since propagating the write to other CPUs is expensive and some architectures do it by writing L back into memory (this works because the other CPU has L in state Invalid, so it must read it from memory).
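To fix the ideas, here is a toy sketch of just these transitions (a didactic model with made-up function names, not how real hardware is implemented):

    enum class State { Modified, Exclusive, Shared, Invalid };

    // Our core writes a line it already holds: if it was Shared, an invalidate
    // transaction goes out on the bus/QPI; either way the line becomes Modified.
    State on_local_write(State line, bool& send_invalidate) {
        send_invalidate = (line == State::Shared);
        return State::Modified;
    }

    // Another core asks for a line we hold in Modified: we snoop the request,
    // supply (or write back) the data, and fall back to Shared.
    State on_remote_read(State line, bool& supply_data) {
        supply_data = (line == State::Modified);
        return supply_data ? State::Shared : line;
    }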
Finally, it is not true that sfence et al. are normally useless; on the contrary, they are extremely useful.
It is just that normally we don't care how other CPUs see us making our stores - but acquiring a lock without the acquire semantics as defined, for example, in C++, and implemented with the fences, is totally nuts.
You should think of the barriers as Intel says: they enforce the order of global visibility of memory accesses.
You can help yourself understand this by thinking of the barriers as enforcing the order of writes into the cache. Cache coherence will then take care of assuring that a write to a cache is globally visible.
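For example, this is exactly what a release/acquire pair buys you in C++ (a hedged sketch; the payload/ready names are illustrative): the barrier orders the two writes, and coherence makes sure a consumer that sees the flag also sees the data.

    #include <atomic>

    int payload = 0;                 // ordinary data
    std::atomic<bool> ready{false};  // the flag whose visibility order matters

    void producer() {
        payload = 123;                                 // first write
        ready.store(true, std::memory_order_release);  // barrier: the flag cannot
                                                       // become visible before payload
    }

    void consumer() {
        if (ready.load(std::memory_order_acquire)) {   // if we see the flag...
            int v = payload;                           // ...we are guaranteed to see 123
            (void)v;
        }
    }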
I can't help but stress one more time that cache coherency, global visibility and memory ordering are three different concepts.
The first guarantees the second, which is enforced by the third.
Memory ordering -- enforces --> Global visibility -- needs --> Cache coherency
'.____________________________________________.'                      '
                Architectural                  '                       '
                                               '._____________________.'
                                                  micro-architectural
Footnotes:
That was a simplification. On Intel CPUs, mov [eax+ebx*2+4], ecx decodes into two separate uops: store-address and store-data. The store-address uop has to wait until eax and ebx are ready, then it is dispatched to an execution unit capable of computing its address. That execution unit writes the address into the store buffer, so later loads (in program order) can check for store-forwarding.
When ecx is ready, the store-data uop can dispatch to the store-data port and write the data into the same store buffer entry.
This can happen before or after the address is known, because the store-buffer entry is reserved probably in program order, so the store buffer (aka memory order buffer) can keep track of load / store ordering once the address of everything is eventually known, and check for overlaps. (And for speculative loads that ended up violating x86's memory ordering rules if another core invalidated the cache line they loaded from before the earliest point they were architecturally allowed to load. This leads to a memory-order mis-speculation pipeline clear.)