lock-free

avoiding collisions when collapsing an infinite lock-free buffer to a circular buffer

Submitted by 早过忘川 on 2020-01-05 13:45:27
Question: I'm solving the two-feed arbitration problem of the FAST protocol. Please don't worry if you're not familiar with it; my question is actually pretty general, but I'm adding the problem description for those who are interested (you can skip it). Data in all UDP feeds is disseminated in two identical feeds (A and B) on two different multicast IPs. It is strongly recommended that clients receive and process both feeds because of possible UDP packet loss. Processing two identical feeds allows one to statistically…

C++: How can lock-free data structures be implemented in C++ if std::atomic_flag is the only lock-free atomic type?

Submitted by 丶灬走出姿态 on 2020-01-04 04:23:05
Question: The typical way of implementing lock-free data structures is to use atomic CAS operations, such as std::compare_exchange_strong or std::compare_exchange_weak. An example of this technique can be seen in Anthony Williams' "C++ Concurrency in Action", where a lock-free stack is implemented. The stack is implemented as a linked list with a std::atomic<node*> head pointer. CAS operations are performed on this pointer during pushes and pops. But the C++ standard guarantees that only std::atomic_flag…

question about lock free queues

Submitted by 折月煮酒 on 2020-01-03 04:24:09
Question: I have a question about the use of lock-free queues. Suppose I have a single-producer single-consumer queue, where the producer and consumer are bound to separate cores. The queue elements are buffers of shared memory, which are mmapped by both producer and consumer at the start. The producer gets a queue element, populates the buffer with data, and enqueues it; the consumer dequeues the element, reads it, and processes it in some fashion. Do I, as a user of the lock-free queue, have to…

fastest possible way to pass data from one thread to another

Submitted by 旧街凉风 on 2020-01-02 02:47:15
Question: I'm using boost spsc_queue to move my stuff from one thread to another. It's one of the critical places in my software, so I want to do it as fast as possible. I wrote this test program:

#include <boost/lockfree/spsc_queue.hpp>
#include <stdint.h>
#include <condition_variable>
#include <thread>

const int N_TESTS = 1000;
int results[N_TESTS];
boost::lockfree::spsc_queue<int64_t, boost::lockfree::capacity<1024>> testQueue;

using std::chrono::nanoseconds;
using std::chrono::duration_cast;

int…

Lock-free memory reclamation with hazard pointers

Submitted by 走远了吗. on 2020-01-01 02:30:29
Question: Hazard pointers are a technique for safely reclaiming memory in lock-free code without garbage collection. The idea is that before accessing an object that can be deleted concurrently, a thread sets its hazard pointer to point to that object. A thread that wants to delete an object will first check whether any hazard pointers are set to point to that object. If so, deletion will be postponed, so that the accessing thread does not end up reading deleted data. Now, imagine our deleting thread…

(preferably boost) lock-free array/vector/map/etc?

Submitted by  ̄綄美尐妖づ on 2019-12-24 14:23:25
Question: Considering my lack of C++ knowledge, please try to read my intent and not my poor technical question. This is the backbone of my program: https://github.com/zaphoyd/websocketpp/blob/experimental/examples/broadcast_server/broadcast_server.cpp I'm building a websocket server with websocket++ (and oh, is websocket++ sweet; I highly recommend it), and I can easily manipulate per-user data thread-safely because it really doesn't need to be manipulated by different threads; however, I do want to be…

Lock-free “decrement if not zero”

Submitted by 做~自己de王妃 on 2019-12-24 07:47:46
Question: I'm currently reinventing the wheel of a thread pool in C++. I've eliminated almost all locks from the code, except for multiple instances of the following construct:

std::atomic_size_t counter;

void produce() {
    ++counter;
}

void try_consume() {
    if (counter != 0) {
        --counter;
        // ...
    } else {
        // ...
    }
}

So, I need a thread-safe, lock-free version of this function:

bool take(std::atomic_size_t& value) {
    if (value != 0) {
        --value;
        return true;
    }
    return false;
}

There is one solution that I know…

Lock-free cache implementation in C++11

Submitted by 你说的曾经没有我的故事 on 2019-12-22 12:02:04
Question: Is there any way in C++11 to implement a lock-free cache for an object which would be safe to access from multiple threads? The calculation I'm looking to cache isn't super cheap, but it also isn't super expensive, so requiring a lock would defeat the purpose of caching in my case. IIUC, std::atomic isn't guaranteed to be lock-free. Edit: Since calculate isn't -too- expensive, I actually don't mind if it runs once or twice too many. But I -do- need to make sure all consumers get the correct…
