lock-free

Ensuring usage of double-compare-and-swap instruction, for lock-free stack?

半世苍凉 submitted on 2019-12-11 11:47:21
Question: (Assume a 64-bit x86-64 architecture and an Intel 3rd/4th generation CPU.) Here is a lock-free implementation of a stack from the Concurrency in Action book, page 202:

```cpp
template<typename T>
class lock_free_stack {
private:
    struct node;
    struct counted_node_ptr {
        int external_count;
        node* ptr;
    };
    struct node {
        std::shared_ptr<T> data;
        std::atomic<int> internal_count;
        counted_node_ptr next;
        node(T const& data_):
            data(std::make_shared<T>(data_)), internal_count(0)
        {}
    };
    std::atomic<counted_node_ptr> head;
```

Linking pthread disables lock-free shared_ptr implementation

我的未来我决定 submitted on 2019-12-11 09:53:55
Question: The title pretty much conveys all the relevant information, but here's a minimal repro:

```cpp
#include <atomic>
#include <cstdio>
#include <memory>

int main() {
    auto ptr = std::make_shared<int>(0);
    bool is_lockless = std::atomic_is_lock_free(&ptr);
    printf("shared_ptr is lockless: %d\n", is_lockless);
}
```

Compiling this with the following compiler options produces a lock-free shared_ptr implementation:

```
g++ -std=c++11 -march=native main.cpp
```

While this doesn't:

```
g++ -std=c++11 -march=native -pthread main.cpp
```

Can we do something atomically with 2 or more lock-free containers without locking both?

自作多情 submitted on 2019-12-11 05:35:10
Question: I'm looking for composable operations - they are fairly easy to implement using transactional memory (thanks to Ami Tavory), and easy to implement using locks (mutex/spinlock) - but locks can lead to deadlocks, so lock-based algorithms are composable only with manual tuning. Lock-free algorithms do not have the deadlock problem, but they are not composable. What is required is to design 2 or more containers as a single composed lock-free data structure. Is there any approach, helper implementation or some lock-free
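For contrast with the lock-free goal, the lock-based composition that the question calls "composable only with manual tuning" can be sketched safely in standard C++17: `std::scoped_lock` acquires both mutexes with a deadlock-avoidance algorithm, so no global lock ordering is needed. The queues and the `move_front` operation here are hypothetical, purely for illustration:

```cpp
#include <mutex>
#include <queue>

// Two lock-based queues; moving an element from one to the other must
// appear atomic to other threads.
std::mutex m1, m2;
std::queue<int> q1, q2;

void move_front() {
    // std::scoped_lock locks both mutexes via std::lock's
    // deadlock-avoidance algorithm, regardless of argument order.
    std::scoped_lock lk(m1, m2);
    if (!q1.empty()) {
        q2.push(q1.front());
        q1.pop();
    }
}
```

This composes, but it is blocking: any thread that crashes or stalls while holding the locks stops everyone else, which is exactly the property lock-free designs try to avoid.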

What are the correct memory orders to use when inserting a node at the beginning of a lock free singly linked list?

混江龙づ霸主 submitted on 2019-12-11 01:40:23
Question: I have a simple linked list. There is no danger of the ABA problem, I'm happy with the Blocking category, and I don't care whether my list is FIFO, LIFO or randomized, as long as inserting succeeds without making other inserts fail. The code for that looks something like this:

```cpp
class Class {
    std::atomic<Node*> m_list;
    ...
};

void Class::add(Node* node) {
    node->next = m_list.load(std::memory_order_acquire);
    while (!m_list.compare_exchange_weak(node->next, node,
                                         std::memory_order_acq_rel,
                                         std::memory_order
```
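One commonly recommended choice of orders for this push is weaker than the excerpt's `acquire`/`acq_rel`: the initial load and the failure order can be `relaxed` (the CAS itself revalidates `node->next`), and `release` on success is what publishes the node's contents to any thread that later acquires the head. A self-contained sketch, with a simplified `Node` standing in for the question's type:

```cpp
#include <atomic>

struct Node { Node* next; int value; };

std::atomic<Node*> m_list{nullptr};

// Lock-free LIFO push. relaxed suffices for the first load and the
// failure order because compare_exchange_weak reloads the head into
// node->next on failure; release on success makes the fully initialized
// node visible to threads that acquire-load m_list.
void add(Node* node) {
    node->next = m_list.load(std::memory_order_relaxed);
    while (!m_list.compare_exchange_weak(node->next, node,
                                         std::memory_order_release,
                                         std::memory_order_relaxed)) {
        // node->next was refreshed with the current head; just retry.
    }
}
```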

Fast and Lock Free Single Writer, Multiple Reader

杀马特。学长 韩版系。学妹 submitted on 2019-12-10 23:38:56
Question: I've got a single writer which has to increment a variable at a fairly high frequency, and also one or more readers who access this variable at a lower frequency. The write is triggered by an external interrupt. Since I need to write at high speed, I don't want to use mutexes or other expensive locking mechanisms. The approach I came up with was copying the value after writing to it. The reader can then compare the original with the copy: if they are equal, the variable's content is valid.
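The write-then-copy scheme described above is essentially a hand-rolled seqlock, and the standard seqlock formulation is easier to get right: the writer bumps a sequence counter to an odd value before writing and back to even after, and a reader retries if it observed an odd or changed sequence. A minimal single-writer sketch using C++11 atomics and fences (the fence placement follows the common published pattern; names are illustrative):

```cpp
#include <atomic>
#include <cstdint>

std::atomic<uint32_t> seq{0};            // even = stable, odd = write in progress
std::atomic<uint64_t> value{0};          // relaxed atomic to avoid a data race

// Single writer (e.g. called from the interrupt handler).
void write(uint64_t v) {
    uint32_t s = seq.load(std::memory_order_relaxed);
    seq.store(s + 1, std::memory_order_relaxed);          // mark write in progress
    std::atomic_thread_fence(std::memory_order_release);  // seq++ before the data write
    value.store(v, std::memory_order_relaxed);
    seq.store(s + 2, std::memory_order_release);          // publish: even again
}

// Any number of readers; lock-free for readers, wait-free for the writer.
uint64_t read() {
    for (;;) {
        uint32_t s1 = seq.load(std::memory_order_acquire);
        uint64_t v  = value.load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire);
        uint32_t s2 = seq.load(std::memory_order_relaxed);
        if (s1 == s2 && (s1 & 1u) == 0)   // even and unchanged: consistent snapshot
            return v;
    }
}
```

Unlike the original-plus-copy idea, the odd/even counter also catches a reader that runs in the middle of a write, not just a torn pair.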

Do I need to use memory barriers to protect a shared resource?

不羁岁月 submitted on 2019-12-10 22:06:39
Question: In a multi-producer, multi-consumer situation, if producers are writing into int a and consumers are reading from int a, do I need memory barriers around int a? We all learned that shared resources should always be protected, and that the standard does not guarantee proper behavior otherwise. However, on cache-coherent architectures visibility is ensured automatically, and atomicity of 8-, 16-, 32- and 64-bit MOV operations is guaranteed. Therefore, why protect int a at all? Answer 1: At
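The portable resolution to the question above is that a plain `int` written and read concurrently is a data race, which is undefined behavior in C++ regardless of what x86 MOV guarantees at the hardware level; making the variable `std::atomic` removes the race and lets you choose how much ordering you pay for. A minimal sketch:

```cpp
#include <atomic>

// std::atomic<int> makes concurrent access well-defined. release/acquire
// additionally orders other memory operations around the handoff; if only
// the value of `a` itself matters, memory_order_relaxed would also be
// race-free and is typically just a plain MOV on x86.
std::atomic<int> a{0};

void producer(int v) { a.store(v, std::memory_order_release); }
int  consumer()      { return a.load(std::memory_order_acquire); }
```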

Read-write thread-safe smart pointer in C++, x86-64

余生长醉 submitted on 2019-12-10 14:44:41
Question: I am developing some lock-free data structures and the following problem arises. I have a writer thread that creates objects on the heap and wraps them in a smart pointer with a reference counter. I also have a lot of reader threads that work with these objects. The code can look like this:

```cpp
SmartPtr ptr;

class Reader : public Thread {
    virtual void Run() {
        for (;;) {
            SmartPtr local(ptr);
            // do smth
        }
    }
};

class Writer : public Thread {
    virtual void Run() {
        for (;;) {
            SmartPtr newPtr(new Object);
            ptr = newPtr;
        }
    }
};

int
```
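As written, copying from and assigning to the same `ptr` object concurrently is a data race: `shared_ptr`'s control block is thread-safe, but the `shared_ptr` object itself is not. Pre-C++20, the standard-library way to make this pattern safe is the `std::atomic_load`/`std::atomic_store` free-function overloads for `shared_ptr` (thread-safe, though typically implemented with an internal spinlock rather than lock-free). A sketch with a stand-in `Object` type:

```cpp
#include <memory>

struct Object { int x = 0; };

std::shared_ptr<Object> ptr;   // shared between the writer and the readers

// Writer: publish a new object atomically.
void writer_step() {
    std::atomic_store(&ptr, std::make_shared<Object>());
}

// Reader: take a local strong reference atomically; the object stays
// alive for as long as `local` does, even if the writer replaces ptr.
std::shared_ptr<Object> reader_step() {
    return std::atomic_load(&ptr);
}
```

In C++20 these overloads are deprecated in favor of `std::atomic<std::shared_ptr<T>>`, which expresses the same idea directly.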

Is this hazard pointer example flawed because of ABA issue?

帅比萌擦擦* submitted on 2019-12-10 13:18:10
Question: In the book C++ Concurrency in Action, the author gives an example of using hazard pointers to implement a lock-free stack data structure. Part of the code is as follows:

```cpp
std::shared_ptr<T> pop() {
    std::atomic<void*>& hp = get_hazard_pointer_for_current_thread();
    node* old_head = head.load();
    node* temp;
    do {
        temp = old_head;
        hp.store(old_head);
        old_head = head.load();
    } while (old_head != temp);
    // ...
}
```

The description says that you have to do this in a while loop to ensure that the node hasn't been

Lock-free programming: reordering and memory order semantics

拥有回忆 submitted on 2019-12-10 11:45:52
Question: I am trying to find my feet in lock-free programming. Having read different explanations of memory ordering semantics, I would like to clear up what possible reordering may happen. As far as I understand, instructions may be reordered by the compiler (due to optimization when the program is compiled) and by the CPU (at runtime). For the relaxed semantics the cpp reference provides the following example:

```cpp
// Thread 1:
r1 = y.load(memory_order_relaxed); // A
x.store(r1, memory_order_relaxed); // B
//
```
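The excerpt cuts off before thread 2, so here is the cppreference relaxed-ordering example reconstructed in full. Its point: with `memory_order_relaxed` there is no ordering between A and B or between C and D beyond each thread's own data dependencies, so the surprising outcome r1 == r2 == 42 is permitted (D may become visible to thread 1 before C executes):

```cpp
#include <atomic>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread1() {
    r1 = y.load(std::memory_order_relaxed);  // A
    x.store(r1, std::memory_order_relaxed);  // B
}

void thread2() {
    r2 = x.load(std::memory_order_relaxed);  // C
    y.store(42, std::memory_order_relaxed);  // D
}
```

Relaxed operations guarantee atomicity and a per-variable modification order, but no cross-variable ordering; to forbid the 42/42 outcome the loads and stores would need acquire/release (or stronger) semantics.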

ARM LL/SC exclusive access by register width or cache line width?

白昼怎懂夜的黑 submitted on 2019-12-10 10:24:26
Question: I'm working on the next release of my lock-free data structure library. I'm using LL/SC on ARM. To use LL/SC as LL/SC (rather than emulating CAS) there has to be a single STR between the LDREX and the STREX. Now, I've written the code and it works. What concerns me, however, is the possibility that it may not work. I've read that on PowerPC, if you access the same cache line as the LL/SC target, you break the LL/SC. So I'm thinking that if my STR target is on the same cache line as my LL/SC target, then pow, I'm
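One defensive layout for this concern is to give the exclusive-access word its own cache line with `alignas`, so the intervening STR can never land in the same line as the LDREX/STREX target. Note the 64-byte figure is an assumption: on ARM the exclusives reservation granule is implementation-defined (readable from the CTR register's ERG field, and architecturally it can be larger than a cache line), so a conservative library might make the padding configurable:

```cpp
#include <atomic>
#include <cstdint>

// ASSUMED_GRANULE is a guess at the exclusives reservation granule /
// cache line size; real code should derive it per target.
constexpr std::size_t ASSUMED_GRANULE = 64;

// The LDREX/STREX target lives alone in its (assumed) granule, so any
// STR to data declared outside this struct cannot touch the same line.
struct alignas(ASSUMED_GRANULE) ExclusiveSlot {
    std::atomic<std::uintptr_t> target{0};
    char pad[ASSUMED_GRANULE - sizeof(std::atomic<std::uintptr_t>)];
};

ExclusiveSlot slot;
std::uintptr_t other_data = 0;   // the STR target, on a different line
```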