lock-free

How to guarantee 64-bit writes are atomic?

痴心易碎 submitted on 2019-11-29 21:49:28
When can 64-bit writes be guaranteed to be atomic, when programming in C on an Intel x86-based platform (in particular, an Intel-based Mac running Mac OS X 10.4 using the Intel compiler)? For example:

    unsigned long long int y;
    y = 0xfedcba87654321ULL;
    /* ... a bunch of other time-consuming stuff happens... */
    y = 0x12345678abcdefULL;

If another thread examines the value of y after the first assignment to y has finished executing, I would like to ensure that it sees either the value 0xfedcba87654321 or the value 0x12345678abcdef, and not some blend of them. I would like to do this without any …

ForkJoinPool stalls during invokeAll/join

对着背影说爱祢 submitted on 2019-11-29 14:41:10
Question: I am trying to use a ForkJoinPool to parallelize my CPU-intensive calculations. My understanding of a ForkJoinPool is that it continues to work as long as any task is available to be executed. Unfortunately I frequently observed worker threads idling/waiting, so not all CPUs were kept busy. Sometimes I even observed additional worker threads. I did not expect this, as I strictly tried to use non-blocking tasks. My observations are very similar to those of "ForkJoinPool seems to waste a thread". After …

is_lock_free() returned false after upgrading to MacPorts gcc 7.3

我与影子孤独终老i submitted on 2019-11-29 14:10:58
Previously, with Apple LLVM 9.1.0, is_lock_free() on 128-bit structures returned true. To get complete std::optional support, I then upgraded to MacPorts gcc 7.3. On my first attempt to compile, I encountered this notorious showstopper linker error: Undefined symbols for architecture x86_64: "___atomic_compare_exchange_16", referenced from: … I know that I may need to add -latomic. With Apple LLVM 9.1.0 I don't need it, and I have a very bad feeling about this. If it's lock-free, you usually should not need to link any additional library; the compiler alone is able to handle it.

how to put std::string into boost::lockfree::queue (or alternative)?

坚强是说给别人听的谎言 submitted on 2019-11-29 13:38:26
I'm trying to put std::string objects into boost::lockfree::queue instances so that my threads can update each other with new data. When I try to use boost::lockfree::queue<std::string> updated_data;, g++ says: In instantiation of 'class boost::lockfree::queue<std::basic_string<char> >': error: static assertion failed: (boost::has_trivial_destructor<T>::value) error: static assertion failed: (boost::has_trivial_assign<T>::value) I've been shown generally what these errors mean, but I have no hope of ever fixing this myself, as I'm almost brand new to C++. Is there an alternative way to pass text data between threads with lockfree?

Trouble with boost::lockfree::queue in shared memory (boost 1.53, gcc 4.7.2 / clang 3.0-6ubuntu3)

橙三吉。 submitted on 2019-11-29 10:00:46
I have a problem placing a boost::lockfree::queue<T, fixed_sized<false>, ...> in shared memory. I need this because I have to be able to insert more than 65535 messages into the queue, and a fixed_sized queue is limited to 65535. The following code works properly (but the capacity<...> option implies fixed_sized<true>):

    typedef boost::interprocess::allocator<
        MessageT,
        boost::interprocess::managed_shared_memory::segment_manager> ShmemAllocator;
    typedef boost::lockfree::queue<
        MessageT,
        boost::lockfree::capacity<65535>,
        boost::lockfree::allocator<ShmemAllocator> > Queue;
    m_segment = new boost: …

Why does a std::atomic store with sequential consistency use XCHG?

烈酒焚心 submitted on 2019-11-29 08:26:34
Why is std::atomic's store:

    std::atomic<int> my_atomic;
    my_atomic.store(1, std::memory_order_seq_cst);

doing an xchg when a store with sequential consistency is requested? Shouldn't, technically, a normal store with a read/write memory barrier be enough? Equivalent to:

    _ReadWriteBarrier();  // Or `asm volatile("" ::: "memory");` for gcc/clang
    my_atomic.store(1, std::memory_order_release);

I'm explicitly talking about x86 and x86_64, where a store already has an implicit release fence. mov-store + mfence and xchg are both valid ways to implement a sequential-consistency store on x86. The implicit lock …

Is std::ifstream thread-safe & lock-free?

孤人 submitted on 2019-11-29 08:25:40
I intend to open a single file for reading from many threads using std::ifstream. My concern: is std::ifstream thread-safe and lock-free? More details: I use g++ 4.4 on Ubuntu and Windows XP, and 4.0 on Leopard. Each thread creates its own instance of std::ifstream. Thanks in advance! That is implementation-defined. Standard C++ of that era said absolutely nothing about threading, and therefore any assumptions about threads inherently invoke unspecified or implementation-defined behavior. We need the platform you are using to be more specific, but it's probably unreasonable to assume ifstream is …

Lock free multiple readers single writer

一曲冷凌霜 submitted on 2019-11-29 04:52:45
I have an in-memory data structure that is read by multiple threads and written by only one thread. Currently I am using a critical section to make this access thread-safe. Unfortunately this has the effect of blocking readers even though only another reader is accessing it. There are two options to remedy this: (1) use TMultiReadExclusiveWriteSynchronizer; (2) do away with any blocking by using a lock-free approach. For (2) I have the following so far (any code that doesn't matter has been left out):

    type
      TDataManager = class
      private
        FAccessCount: integer;
        FData: TDataClass;
      public
        procedure Read …

boost::lockfree::spsc_queue busy wait strategy. Is there a blocking pop?

醉酒当歌 submitted on 2019-11-28 23:51:49
So I'm using a boost::lockfree::spsc_queue to communicate between two boost threads running functors of two objects in my application. All is fine except for the fact that the spsc_queue::pop() method is non-blocking: it returns true or false immediately, even when there is nothing in the queue. However my queue always seems to return true (problem #1). I think this is because I preallocate the queue:

    typedef boost::lockfree::spsc_queue<q_pl, boost::lockfree::capacity<100000> > spsc_queue;

This means that to use the queue efficiently I have to busy-wait, constantly popping the queue and using 100% CPU. I'd rather not …

Do spin locks always require a memory barrier? Is spinning on a memory barrier expensive?

帅比萌擦擦* submitted on 2019-11-28 23:46:31
I wrote some lock-free code that works fine with local reads, under most conditions. Does local spinning on a memory read necessarily imply I have to ALWAYS insert a memory barrier before the spinning read? (To validate this, I managed to produce a reader/writer combination in which the reader never sees the written value under certain very specific conditions: dedicated CPU, process pinned to that CPU, optimizer turned all the way up, no other work done in the loop. So the arrows do point in that direction, but I'm not entirely sure about the cost of spinning through a memory barrier.)