lockless

Are X86 atomic RMW instructions wait free

Submitted by 社会主义新天地 on 2020-07-21 03:42:32
Question: On x86, atomic RMW instructions like lock add dword [rdi], 1 are implemented using cache locking on modern CPUs, so the cache line is locked for the duration of the instruction. This is done by taking the line into EXCLUSIVE/MODIFIED state when the value is read; the CPU will not respond to MESI requests from other CPUs until the instruction is finished. There are two flavors of concurrent progress conditions, blocking and non-blocking. Atomic RMW instructions are non-blocking. CPU hardware will never
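For reference, a minimal C++ sketch of the kind of atomic RMW the question discusses (the counter and function names are illustrative, not taken from the question):

#include <atomic>
#include <cstdint>

std::atomic<uint64_t> hit_count{0};   // illustrative shared counter

// On x86-64, compilers typically lower this fetch_add to a single lock-prefixed
// read-modify-write (`lock xadd`, or `lock add` when the old value is unused),
// i.e. exactly the kind of cache-locked instruction the question is about.
void record_hit() {
    hit_count.fetch_add(1, std::memory_order_relaxed);
}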

A readers/writer lock… without having a lock for the readers?

Submitted by 我怕爱的太早我们不能终老 on 2020-05-15 04:56:26
Question: I get the feeling this may be a very general and common situation for which a well-known no-lock solution exists. In a nutshell, I'm hoping there's an approach like a readers/writer lock, but one that doesn't require the readers to acquire a lock and thus can have better average performance. Instead there'd be some atomic operations (a 128-bit CAS) for a reader, and a mutex for a writer. I'd have two copies of the data structure, a read-only one for the normally-successful queries, and an identical copy
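One well-known way to get readers that never take a lock, related to but not identical to the two-copy/128-bit-CAS scheme sketched in the question, is copy-on-write publication of an immutable snapshot. A minimal C++ sketch with illustrative names (safe reclamation of old snapshots is deliberately left out):

#include <atomic>
#include <map>
#include <mutex>
#include <string>

using Table = std::map<std::string, int>;        // stand-in for the real structure

std::atomic<const Table*> current{new Table{}};  // readers only ever see complete snapshots
std::mutex write_mutex;                          // serializes writers; readers never touch it

// Reader: one acquire load, no lock, no CAS.
int lookup(const std::string& key) {
    const Table* t = current.load(std::memory_order_acquire);
    auto it = t->find(key);
    return it == t->end() ? -1 : it->second;
}

// Writer: copy, modify, publish.
void update(const std::string& key, int value) {
    std::lock_guard<std::mutex> guard(write_mutex);
    const Table* old = current.load(std::memory_order_relaxed);
    Table* next = new Table(*old);
    (*next)[key] = value;
    current.store(next, std::memory_order_release);
    // `old` is intentionally leaked here: deciding when it is safe to delete it
    // (hazard pointers, RCU grace periods, shared_ptr, ...) is the hard part.
}

The design choice is that the writer pays for the copy and the reclamation problem so that readers stay as cheap as a single atomic load.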

Implementing 64 bit atomic counter with 32 bit atomics

Submitted by 天涯浪子 on 2019-12-30 13:10:48
Question: I would like to cobble together a uint64 atomic counter from atomic uint32s. The counter has a single writer and multiple readers. The writer is a signal handler, so it must not block. My idea is to use a generation count with the low bit as a read lock. The reader retries until the generation count is stable across the read and the low bit is unset. Is the following code correct in design and use of memory ordering? Is there a better way? using namespace std; class counter { atomic<uint32_t>
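A hedged sketch of the kind of scheme the question describes, written as free functions rather than the asker's class (names and structure are illustrative): the single writer bumps a generation counter to an odd value, stores both halves, then makes it even again; readers retry until they see the same even generation before and after their two loads.

#include <atomic>
#include <cstdint>

std::atomic<uint32_t> gen{0};            // odd => writer mid-update
std::atomic<uint32_t> lo{0}, hi{0};      // the 64-bit value, split into halves
uint64_t writer_value = 0;               // owned by the single writer only

// Single writer (e.g. a signal handler): no loops, no blocking.
void add(uint64_t delta) {
    writer_value += delta;
    gen.fetch_add(1, std::memory_order_relaxed);         // now odd
    std::atomic_thread_fence(std::memory_order_release);
    lo.store(uint32_t(writer_value), std::memory_order_relaxed);
    hi.store(uint32_t(writer_value >> 32), std::memory_order_relaxed);
    gen.fetch_add(1, std::memory_order_release);         // even again
}

// Any number of readers: retry until a consistent snapshot is seen.
uint64_t read() {
    for (;;) {
        uint32_t g1 = gen.load(std::memory_order_acquire);
        if (g1 & 1) continue;                             // update in progress
        uint64_t v = (uint64_t(hi.load(std::memory_order_relaxed)) << 32)
                   | lo.load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire);
        if (gen.load(std::memory_order_relaxed) == g1) return v;
    }
}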

Is lockless hashing without std::atomics guaranteed to be thread-safe in C++11?

Submitted by 隐身守侯 on 2019-12-24 03:14:05
Question: Consider the following attempt at a lockless hashtable for multithreaded search algorithms (inspired by this paper): struct Data { uint64_t key; uint64_t value; }; struct HashEntry { uint64_t key_xor_value; uint64_t value; }; void insert_data(Data const& e, HashEntry* h, std::size_t tableOffset) { h[tableOffset].key_xor_value = e.key ^ e.value; h[tableOffset].value = e.value; } bool data_is_present(Data const& e, HashEntry const* h, std::size_t tableOffset) { auto const tmp_key_xor_value = h
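As a hedged illustration of how the same XOR trick can be written so that concurrent access is defined behaviour in C++11, here is a sketch with the entry fields as std::atomic<uint64_t> and relaxed ordering (an assumption about intent, not the accepted answer's exact code; plain uint64_t fields written and read from multiple threads would be a data race):

#include <atomic>
#include <cstddef>
#include <cstdint>

struct Data { uint64_t key; uint64_t value; };

struct AtomicHashEntry {
    std::atomic<uint64_t> key_xor_value{0};
    std::atomic<uint64_t> value{0};
};

void insert_data(Data const& e, AtomicHashEntry* h, std::size_t tableOffset) {
    h[tableOffset].key_xor_value.store(e.key ^ e.value, std::memory_order_relaxed);
    h[tableOffset].value.store(e.value, std::memory_order_relaxed);
}

bool data_is_present(Data const& e, AtomicHashEntry const* h, std::size_t tableOffset) {
    auto const tmp_key_xor_value = h[tableOffset].key_xor_value.load(std::memory_order_relaxed);
    auto const tmp_value         = h[tableOffset].value.load(std::memory_order_relaxed);
    // A torn entry (key half from one write, value half from another) fails this
    // consistency check with overwhelming probability; that is the point of the trick.
    return (tmp_key_xor_value ^ tmp_value) == e.key && tmp_value == e.value;
}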

Implementing a lock-free queue (for a Logger component)

Submitted by 筅森魡賤 on 2019-12-21 21:58:16
Question: I am designing a new, improved Logger component (.NET 3.5, C#). I would like to use a lock-free implementation. Logging events will be sent from (potentially) multiple threads, although only a single thread will do the actual output to a file or other storage medium. In essence, all the writers are enqueuing their data into some queue, to be retrieved by some other process (LogFileWriter). Can this be achieved in a lock-less manner? I could not find a direct reference to this particular problem
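The question is about .NET, but a language-neutral way to see that this can be done without locks is the C++ sketch below (names are illustrative): producers push nodes onto an atomic singly linked list with a CAS, and the single consumer grabs the whole batch with one exchange and reverses it to restore FIFO order.

#include <atomic>
#include <string>

struct LogNode {
    std::string text;
    LogNode*    next;
};

std::atomic<LogNode*> pending{nullptr};   // LIFO list of not-yet-written entries

// Any producer thread: push with a CAS loop, never blocks on a mutex.
void enqueue(std::string msg) {
    LogNode* n = new LogNode{std::move(msg), nullptr};
    n->next = pending.load(std::memory_order_relaxed);
    while (!pending.compare_exchange_weak(n->next, n,
                                          std::memory_order_release,
                                          std::memory_order_relaxed)) {
        // on failure, n->next has been refreshed with the current head; retry
    }
}

// The single consumer (LogFileWriter) thread: take everything at once,
// reverse the batch so it comes out in FIFO order, then write and delete.
LogNode* drain_fifo() {
    LogNode* lifo = pending.exchange(nullptr, std::memory_order_acquire);
    LogNode* fifo = nullptr;
    while (lifo) {
        LogNode* next = lifo->next;
        lifo->next = fifo;
        fifo = lifo;
        lifo = next;
    }
    return fifo;
}

Exchanging the whole list at once keeps the consumer simple and means memory reclamation is trivial: once drained, the nodes belong to the consumer alone.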

Is there such a thing as a lockless queue for multiple read or write threads?

Submitted by 坚强是说给别人听的谎言 on 2019-12-19 17:18:21
Question: I was thinking, is it possible to have a lockless queue when more than one thread is reading or writing? I've seen an implementation of a lockless queue that worked with one read thread and one write thread, but never more than one of either. Is it possible? I don't think it is. Can/does anyone want to prove it? Answer 1: There are multiple algorithms available; I ended up implementing the one from An Optimistic Approach to Lock-Free FIFO Queues, which avoids the ABA problem via pointer tagging (needs the
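A hedged sketch of the pointer-tagging idea the answer refers to (types and the helper function are made up for illustration): a counter is stored next to each pointer and bumped on every update, so an A -> B -> A change still fails the CAS because the counter differs. It needs a double-width CAS (cmpxchg16b on x86-64); whether std::atomic is lock-free for this type is platform- and flag-dependent.

#include <atomic>
#include <cstdint>

struct Node;                       // illustrative queue node type

struct TaggedPtr {
    Node*    ptr;                  // the actual link
    uint64_t tag;                  // bumped on every successful change
};

// Swing `link` from `expected_node` to `desired_node`, failing if either the
// pointer or the tag changed in the meantime. Even if a node is freed and
// reallocated at the same address (the ABA case), the tag differs and the CAS fails.
bool swing(std::atomic<TaggedPtr>& link, Node* expected_node, Node* desired_node) {
    TaggedPtr expected = link.load(std::memory_order_acquire);
    if (expected.ptr != expected_node) return false;
    TaggedPtr desired{desired_node, expected.tag + 1};
    return link.compare_exchange_strong(expected, desired,
                                        std::memory_order_acq_rel,
                                        std::memory_order_acquire);
}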

Lock Free stack implementation idea - currently broken

Submitted by 微笑、不失礼 on 2019-12-11 06:19:46
Question: I came up with an idea I am trying to implement for a lock-free stack that does not rely on reference counting to resolve the ABA problem, and also handles memory reclamation properly. It is similar in concept to RCU and relies on two features: marking a list entry as removed, and tracking readers traversing the list. The former is simple: it just uses the LSB of the pointer. The latter is my "clever" attempt at an approach to implementing an unbounded lock-free stack. Basically, when any
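A small sketch of the "LSB as a removed flag" trick the question mentions, assuming nodes are at least 2-byte aligned so bit 0 of a node pointer is always free (the helper names are made up):

#include <cstdint>

struct Node;   // illustrative list node type

// Mark, unmark, and test the "logically removed" bit carried in bit 0 of a pointer.
inline Node* with_mark(Node* p) {
    return reinterpret_cast<Node*>(reinterpret_cast<std::uintptr_t>(p) | 1u);
}
inline Node* without_mark(Node* p) {
    return reinterpret_cast<Node*>(reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t(1));
}
inline bool is_marked(Node* p) {
    return (reinterpret_cast<std::uintptr_t>(p) & 1u) != 0;
}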

SpinWait in lockless update

Submitted by  ̄綄美尐妖づ on 2019-12-07 15:53:28
Question: While reading Albahari's Threading in C#, I noticed that the "lock-free update" pattern uses a SpinWait at the end of the cycle: static void LockFreeUpdate<T> (ref T field, Func<T, T> updateFunction) where T : class { var spinWait = new SpinWait(); while (true) { // read T snapshot1 = field; // apply transformation T calc = updateFunction (snapshot1); // compare if not preempted T snapshot2 = Interlocked.CompareExchange (ref field, calc, snapshot1); // if succeeded, we're done if
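For comparison, a C++ analogue of the same retry loop (this is not Albahari's C# code): compare-and-swap until no other thread preempted the update, with a pause hint between attempts so contending cores stop hammering the cache line, which is roughly the role SpinWait plays in the C# version.

#include <atomic>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>   // _mm_pause
#endif

template <typename T, typename F>
void lock_free_update(std::atomic<T>& field, F updateFunction) {
    T snapshot = field.load(std::memory_order_acquire);
    for (;;) {
        T desired = updateFunction(snapshot);
        // On failure, compare_exchange_weak reloads `snapshot` with the current value.
        if (field.compare_exchange_weak(snapshot, desired,
                                        std::memory_order_acq_rel,
                                        std::memory_order_acquire))
            return;
#if defined(__x86_64__) || defined(_M_X64)
        _mm_pause();     // brief backoff before retrying under contention
#endif
    }
}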