memory-model

Why are Python integers implemented as objects?

若如初见. Submitted on 2020-06-17 15:50:08
Question: Why are Python integers implemented as objects? The article Why Python is Slow: Looking Under the Hood, as well as its comments, contains useful information about the Python memory model and its ramifications, in particular with regard to performance. But the article does not ask or answer the question of why the decision to implement integers as objects was made in the first place. In particular, referring to Python as dynamically typed is not an answer. It is possible to implement integers as integers

C11 Standalone memory barriers LoadLoad StoreStore LoadStore StoreLoad

左心房为你撑大大i Submitted on 2020-05-15 08:23:24
Question: I want to use standalone memory barriers between atomic and non-atomic operations (I think it shouldn't matter at all anyway). I think I understand what a store barrier and a load barrier mean, as well as the four possible types of memory reordering: LoadLoad, StoreStore, LoadStore, StoreLoad. However, I always find the acquire/release concepts confusing, because when reading the documentation, acquire doesn't only speak about loads but also stores, and release doesn't only speak about stores

Events and multithreading once again

岁酱吖の Submitted on 2020-04-07 16:11:10
Question: I'm worried about the correctness of the seemingly standard pre-C#6 pattern for firing an event: EventHandler localCopy = SomeEvent; if (localCopy != null) localCopy(this, args); I've read Eric Lippert's Events and races and know that there is a remaining issue of calling a stale event handler, but my worry is whether the compiler/JITter is allowed to optimize away the local copy, effectively rewriting the code as if (SomeEvent != null) SomeEvent(this, args); with possible

C11 Atomic Acquire/Release and x86_64 lack of load/store coherence?

时光怂恿深爱的人放手 Submitted on 2020-03-17 10:58:59
Question: I am struggling with Section 5.1.2.4 of the C11 Standard, in particular the semantics of Release/Acquire. I note that https://preshing.com/20120913/acquire-and-release-semantics/ (amongst others) states that: ... Release semantics prevent memory reordering of the write-release with any read or write operation that precedes it in program order. So, for the following: typedef struct test_struct { _Atomic(bool) ready; int v1; int v2; } test_struct_t; extern void test_init(test_struct_t* ts,

Concurrent writes to different locations in the same cache line

烈酒焚心 Submitted on 2020-02-03 05:14:26
Question: Suppose I have a C++11 application where two threads write to different but nearby memory locations, using simple pointers to primitive types. Can I be sure that both these writes will end up in memory eventually (probably after both have reached a boost::barrier), or is there a risk that both CPU cores hold their own cache line containing that data, and the second core flushing its modification to RAM will overwrite and undo the modification done by the first write? I hope that cache

std::memory_order and instruction order, clarification

做~自己de王妃 Submitted on 2020-02-02 12:34:07
Question: This is a follow-up question to this one. I want to figure out exactly what instruction ordering means, and how it is affected by std::memory_order_acquire, std::memory_order_release, etc. The question I linked already provides some detail, but I felt the provided answer isn't really about the order (which was more what I was looking for) but rather motivates why this is necessary. I'll quote the same example, which I'll use as a reference: #include <thread>

When should you not use [[carries_dependency]]?

好久不见. Submitted on 2020-01-13 05:34:07
Question: I've found questions (like this one) asking what [[carries_dependency]] does, and that's not what I'm asking here. I want to know when you shouldn't use it, because the answers I've read all make it sound like you can plaster this code everywhere and magically get equal or faster code. One comment said the code can be equal or slower, but the poster didn't elaborate. I imagine appropriate places to use it are on any function return or parameter that is a pointer or reference and that

Loads and stores reordering on ARM

試著忘記壹切 Submitted on 2020-01-13 04:23:06
Question: I'm not an ARM expert, but won't those stores and loads be subject to reordering at least on some ARM architectures? atomic<int> atomic_var; int nonAtomic_var; int nonAtomic_var2; void foo() { atomic_var.store(111, memory_order_relaxed); atomic_var.store(222, memory_order_relaxed); } void bar() { nonAtomic_var = atomic_var.load(memory_order_relaxed); nonAtomic_var2 = atomic_var.load(memory_order_relaxed); } I've had no success in making the compiler put memory barriers between them. I've

reordering atomic operations in C++

醉酒当歌 Submitted on 2020-01-10 02:53:27
Question: Suppose I have 2 threads: int value = 0; std::atomic<bool> ready = false; thread 1: value = 1; ready = true; thread 2: while (!ready); std::cout << value; Is this program able to output 0? I read about the C++ memory model, specifically sequential consistency, which I believe is the default, and it wasn't particularly clear. Is the compiler only required to put atomic operations in the correct order relative to each other, or is it required to put atomic operations in the right order