Is there a difference in Go between a counter using atomic operations and one using a mutex?

南方客 2021-02-07 11:26

I have seen some discussion lately about whether there is a difference between a counter implemented using atomic increment/load, and one using a mutex to synchronise increment/load.

3 Answers
  •  醉酒成梦
    2021-02-07 11:57

    Atomics are faster in the common case: the compiler translates each call to a function from the sync/atomic package into a special machine instruction sequence that operates at the CPU level. For instance, on x86 architectures, atomic.AddInt64 is translated into a plain ADD-class instruction carrying the LOCK prefix (see this for an example), which ensures a coherent view of the updated memory location across all the CPUs in the system.
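
    For concreteness, here is a minimal sketch of such a counter (the file name and the goroutine/iteration counts are arbitrary, not taken from the question):

        // counter_atomic.go: a minimal atomic counter sketch.
        package main

        import (
            "fmt"
            "sync"
            "sync/atomic"
        )

        func main() {
            var n int64 // the shared counter
            var wg sync.WaitGroup

            for i := 0; i < 8; i++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    for j := 0; j < 1000; j++ {
                        atomic.AddInt64(&n, 1) // a LOCK-prefixed ADD on x86
                    }
                }()
            }
            wg.Wait()
            fmt.Println(atomic.LoadInt64(&n)) // prints 8000
        }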

    A mutex is a much more complicated thing: in the end, it wraps some piece of the native, OS-specific thread synchronization API (on Linux, for instance, that's futex).

    On the other hand, the Go runtime is heavily optimized when it comes to synchronization (which is expected, given that concurrency is one of Go's main selling points): the mutex implementation tries to avoid hitting the kernel when synchronizing goroutines and, where possible, carries the synchronization out entirely within the Go runtime itself.
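
    For comparison, a mutex-guarded version of the same counter could look like the sketch below (the counter type and its method names are mine). Under low contention, sync.Mutex.Lock takes a fast path, a compare-and-swap in user space, and never enters the kernel:

        // counter_mutex.go: the same counter guarded by a sync.Mutex.
        package main

        import (
            "fmt"
            "sync"
        )

        type counter struct {
            mu sync.Mutex
            n  int64
        }

        func (c *counter) Inc() {
            c.mu.Lock()
            c.n++
            c.mu.Unlock()
        }

        func (c *counter) Load() int64 {
            c.mu.Lock()
            defer c.mu.Unlock()
            return c.n
        }

        func main() {
            var c counter
            var wg sync.WaitGroup
            for i := 0; i < 8; i++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    for j := 0; j < 1000; j++ {
                        c.Inc()
                    }
                }()
            }
            wg.Wait()
            fmt.Println(c.Load()) // prints 8000
        }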

    This might explain why there is no noticeable difference in the timings of your benchmarks, provided contention over the mutex was reasonably low.
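
    If you want to measure this yourself, a rough benchmark sketch along these lines (file and package names are placeholders) can be run with go test -bench=. -cpu=1,4,8:

        // counter_bench_test.go: compares the two approaches under contention.
        // Results vary with the machine, the Go version and GOMAXPROCS.
        package counter

        import (
            "sync"
            "sync/atomic"
            "testing"
        )

        func BenchmarkAtomicAdd(b *testing.B) {
            var n int64
            b.RunParallel(func(pb *testing.PB) {
                for pb.Next() {
                    atomic.AddInt64(&n, 1)
                }
            })
        }

        func BenchmarkMutexAdd(b *testing.B) {
            var (
                mu sync.Mutex
                n  int64
            )
            b.RunParallel(func(pb *testing.PB) {
                for pb.Next() {
                    mu.Lock()
                    n++
                    mu.Unlock()
                }
            })
        }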


    Still, I feel obliged to note, just in case, that atomics and higher-level synchronization facilities are designed to solve different tasks. For example, you cannot use atomics to protect some piece of memory state for the duration of a whole function, or even, in the general case, of a single statement.
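
    To illustrate that last point with a sketch (the stats type and its methods are hypothetical, not something from the question): an invariant spanning two fields has to be protected with a mutex, because no single atomic operation covers both of them.

        package main

        import (
            "fmt"
            "sync"
        )

        // stats holds two fields that must stay consistent with each other;
        // atomics alone cannot guarantee that.
        type stats struct {
            mu    sync.Mutex
            sum   int64
            count int64
        }

        // Add updates both fields as one unit. Two separate atomic adds would
        // let a concurrent reader observe sum already updated but count not yet.
        func (s *stats) Add(v int64) {
            s.mu.Lock()
            s.sum += v
            s.count++
            s.mu.Unlock()
        }

        // Mean reads both fields under the same lock, so it always sees a
        // consistent (sum, count) pair.
        func (s *stats) Mean() float64 {
            s.mu.Lock()
            defer s.mu.Unlock()
            if s.count == 0 {
                return 0
            }
            return float64(s.sum) / float64(s.count)
        }

        func main() {
            var s stats
            var wg sync.WaitGroup
            for i := 1; i <= 4; i++ {
                wg.Add(1)
                go func(v int64) {
                    defer wg.Done()
                    s.Add(v)
                }(int64(i))
            }
            wg.Wait()
            fmt.Println(s.Mean()) // prints 2.5
        }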
