atomicity

Is there any justification not to ALWAYS use AtomicInteger as data members?

早过忘川 submitted on 2019-12-22 10:13:05
Question: In a multi-threaded environment like Android, where a simple int variable may be manipulated by multiple threads, are there circumstances in which it is still justified to use an int as a data member? An int as a local variable, limited to the scope of the method that has exclusive access to it (so that a modification always starts and finishes on the same thread), makes perfect sense performance-wise. But as a data member, even if wrapped by an accessor, it can run into the well known
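
To illustrate the distinction the question draws, here is a minimal C++ sketch used as an analogue of the Java case (std::atomic<int> playing roughly the role of AtomicInteger); the class and method names are invented for the example and are not from the original question:

    #include <atomic>

    class Counter {
        std::atomic<int> shared{0};      // shared data member: make it atomic (or lock it)
    public:
        void recordEvent() {
            shared.fetch_add(1);         // safe to call from any thread
        }
        int sumLocally(int n) {
            int local = 0;               // confined to this call: a plain int is fine
            for (int i = 0; i < n; ++i)
                local += i;
            return local;
        }
    };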

Do atomic variables guarantee memory visibility?

可紊 submitted on 2019-12-22 04:34:23
Question: A small question about memory visibility. CodeSample1:

    class CustomLock {
        private boolean locked = false;

        public boolean lock() {
            if (!locked) {
                locked = true;
                return true;
            }
            return false;
        }
    }

This code is prone to bugs in a multi-threaded environment, first because of the "if-then-act", which is not atomic, and second because of potential memory visibility issues where, for example, thread A sets the field to true, but thread B, which later wishes to read the field's value, might not see that, and
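
A common fix for both problems in the excerpt is to replace the check-then-set with a single atomic read-modify-write. The sketch below shows the idea in C++ with std::atomic<bool> (roughly what AtomicBoolean.compareAndSet does in Java); it is an illustration only, not the accepted answer:

    #include <atomic>

    class CustomLock {
        std::atomic<bool> locked{false};
    public:
        bool lock() {
            bool expected = false;
            // One atomic compare-and-swap replaces the racy "if-then-act",
            // and the atomic also provides the visibility guarantees asked about.
            return locked.compare_exchange_strong(expected, true);
        }
        void unlock() {
            locked.store(false);
        }
    };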

When is AtomicInteger preferable over synchronized?

北城以北 submitted on 2019-12-22 02:48:11
Question: Since AtomicInteger can be at least an order of magnitude slower than an int protected by synchronized, why would I ever want to use AtomicInteger? For example, if all I want is to increment an int value in a thread-safe manner, why not always use synchronized(threadsafeint) { threadsafeint++; } instead of the much slower AtomicInteger.incrementAndGet()? Answer 1: Since AtomicInteger can be at least an order of magnitude slower than an int protected by synchronized, why would I ever
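
The same tradeoff can be written down in C++ terms, which is how it is sketched here (std::atomic and std::mutex standing in for AtomicInteger and synchronized): the atomic increment is a single lock-free read-modify-write, while the locked version can block and deschedule threads under contention.

    #include <atomic>
    #include <mutex>

    std::atomic<int> atomicCounter{0};

    int plainCounter = 0;
    std::mutex counterMutex;

    void incrementAtomic() {
        atomicCounter.fetch_add(1);                          // one lock-free hardware RMW
    }

    void incrementLocked() {
        std::lock_guard<std::mutex> lock(counterMutex);      // may block other threads
        ++plainCounter;
    }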

Should load-acquire see store-release immediately?

拥有回忆 submitted on 2019-12-21 04:40:31
Question: Suppose we have one simple variable (std::atomic<int> var) and two threads, T1 and T2, with the following code for T1: ... var.store(2, mem_order); ... and for T2: ... var.load(mem_order) ... Also, let's assume that T2's load executes 123 ns later in time (later in the modification order, in terms of the C++ standard) than T1's store. My understanding of this situation is as follows (for different memory orders): memory_order_seq_cst - the T2 load is obliged to load 2. So effectively it has to
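
A minimal sketch of the release/acquire pairing the question asks about (the payload variable is added for illustration): the standard does not require the load to observe the store instantly (it only recommends visibility within a reasonable amount of time), but if the acquire load does observe the released value, everything written before the release-store is guaranteed to be visible as well.

    #include <atomic>

    std::atomic<int> var{0};
    int payload = 0;

    void t1() {                                    // writer thread
        payload = 42;                              // plain write
        var.store(2, std::memory_order_release);   // publish
    }

    void t2() {                                    // reader thread
        if (var.load(std::memory_order_acquire) == 2) {
            // If the load observes 2, acquire/release guarantees
            // that payload == 42 is visible here too.
        }
    }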

Is there a way to ensure atomicity while having a multithreaded program with signal handlers?

拥有回忆 submitted on 2019-12-20 04:29:04
Question: If I have a program like this (in pseudocode):

    mutex_lock;

    func() {
        lock(mutex_lock);
        // Some code (long enough to create a race condition if no proper
        // synchronisation is available). We are also going to raise a signal,
        // say SIGINT (via ctrl-c), while we are between locking and
        // unlocking the lock.
        unlock(mutex_lock);
    }

    sig_handler_func(sig) {
        // Say we are handling the SIGINT (ctrl-c) signal
        // and we need to call func from here too.
        if (sig == SIGINT) {
            func();
        }
    }

    main() { //
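
Because mutex lock/unlock functions are not async-signal-safe, the usual pattern is to do almost nothing in the handler: set a lock-free atomic flag (or a volatile sig_atomic_t) and perform the real, mutex-protected work in normal thread context. A C++ sketch of that pattern, with invented names and the loop body left empty:

    #include <atomic>
    #include <csignal>
    #include <mutex>

    std::atomic<bool> got_sigint{false};   // lock-free atomics are async-signal-safe
    std::mutex m;

    void func() {                          // the question's mutex-protected function
        std::lock_guard<std::mutex> lock(m);
        // ... work that must not run concurrently ...
    }

    void on_sigint(int) {
        // Only async-signal-safe work in the handler: record the signal and return.
        got_sigint.store(true, std::memory_order_relaxed);
    }

    int main() {
        std::signal(SIGINT, on_sigint);
        for (;;) {
            if (got_sigint.exchange(false))
                func();                    // called outside the handler, so locking is safe
            // ... normal work ...
        }
    }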

Is a successful send() “atomic”?

孤街醉人 submitted on 2019-12-19 09:20:07
Question: Does a successful call to send() with the number returned equal to the amount specified in the size parameter guarantee that no "partial sends" will occur? Or is there some way that the OS might be interrupted while servicing the system call, send part of the data, wait for a possibly long time, then send the rest and return without notifying me with a smaller return value? I'm not talking about the case where there is not enough room in the kernel buffer; I realize that I would then get a
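
Whatever guarantee a particular OS gives, portable code normally treats a short return as possible and loops. A minimal sketch for a blocking POSIX socket (the function name is made up):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cerrno>

    // Keep calling send() until the whole buffer has gone out or a real error occurs.
    ssize_t send_all(int fd, const char *buf, size_t len) {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, buf + sent, len - sent, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;              // interrupted before sending anything: retry
                return -1;                 // genuine error
            }
            sent += static_cast<size_t>(n);
        }
        return static_cast<ssize_t>(sent);
    }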

ReplaceFile alternative when application keeps file locked

安稳与你 submitted on 2019-12-19 07:54:40
Question: Editor FooEdit (let's call it) uses ReplaceFile() when saving to ensure that the save operation is effectively atomic, and that if anything goes wrong the original file on disc is preserved. (The other important benefit of ReplaceFile() is continuity of file identity: creation date and other metadata.) FooEdit also keeps open a handle to the file with a sharing mode of just FILE_SHARE_READ, so that other processes can open the file but can't write to it while FooEdit has it
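
For context, this is the save pattern the question refers to: write the new contents to a temporary file, then swap it into place with ReplaceFileW so the target is replaced atomically and keeps its identity. The sketch below shows only that pattern (paths and the function name are placeholders); it does not resolve the sharing-mode conflict the question is actually about.

    #include <windows.h>

    // Atomic-save sketch: 'temp' already contains the newly written document.
    bool SaveAtomically(const wchar_t *target, const wchar_t *temp)
    {
        return ReplaceFileW(target,     // file being replaced
                            temp,       // replacement just written
                            nullptr,    // no backup file
                            0,          // no flags
                            nullptr, nullptr) != FALSE;
    }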

Functions for performing atomic operations

大城市里の小女人 submitted on 2019-12-18 15:52:42
Question: Are there functions for performing atomic operations (like increment/decrement of an integer) supported by the C runtime library or any other utility libraries? If so, which operations can be made atomic using such functions? Is it more beneficial to use such functions than normal synchronization primitives like mutexes? OS: Windows, Linux, Solaris & VxWorks. Answer 1: Prior to C11, the C library doesn't have any. On Linux, gcc provides some -- look for __sync_fetch_and_add, _
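
A tiny sketch of the builtins the answer points to. They are GCC/Clang extensions, usable from both C and C++; portable code today would reach for C11 <stdatomic.h> or C++ <atomic> instead.

    // The __sync_* family is the legacy one, __atomic_* the newer one.
    int counter = 0;

    void increment(void)
    {
        __sync_fetch_and_add(&counter, 1);                    // legacy builtin
        // __atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST); // newer equivalent
    }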

Transactions in Redis with read operations

醉酒当歌 submitted on 2019-12-18 12:35:28
Question: Using Redis, I want to perform an atomic sequence of commands, i.e. I need to guarantee that no other client will make changes in the database while the sequence is being executed. If I used write commands only, I could use MULTI and EXEC to ensure atomicity using transactions. However, I would also like to use read commands in my transactions, and I cannot use MULTI because read commands are also queued! Basically, in an atomic manner, I need to do the following: Read x from
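
One common approach is optimistic locking with WATCH: do the read outside MULTI, queue the dependent writes, and let EXEC abort if the watched key changed. The sketch below shows that shape through the hiredis C client (usable from C++); key names are placeholders, error handling is mostly omitted, and a server-side Lua script (EVAL) is the other usual way to make a read+write sequence atomic in Redis.

    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (!c || c->err) return 1;

        // Abort the EXEC below if any other client modifies x in the meantime.
        freeReplyObject(redisCommand(c, "WATCH x"));

        // The read happens outside MULTI, so the value is available immediately.
        redisReply *x = (redisReply *)redisCommand(c, "GET x");
        const char *xval = (x && x->type == REDIS_REPLY_STRING) ? x->str : "0";

        freeReplyObject(redisCommand(c, "MULTI"));
        freeReplyObject(redisCommand(c, "SET y %s", xval));        // queued
        redisReply *exec = (redisReply *)redisCommand(c, "EXEC");  // commits or aborts

        bool committed = exec && exec->type != REDIS_REPLY_NIL;    // NIL: x changed, retry
        std::printf("committed: %s\n", committed ? "yes" : "no");

        if (x) freeReplyObject(x);
        if (exec) freeReplyObject(exec);
        redisFree(c);
        return committed ? 0 : 2;
    }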