Overhead and implementation of using shared_ptr

Asked by 不知归路 on 2021-02-19 07:12

Short introduction: I am working on multithreaded code and I have to share dynamically allocated objects between two threads. To make my code cleaner (and less error-prone) I want to use shared_ptr.

2 Answers
  • 2021-02-19 07:28

    GCC's shared_ptr uses no locking or atomics in single-threaded code. In multithreaded code it uses atomic operations if the CPU supports an atomic compare-and-swap instruction; otherwise the reference counts are protected by a mutex. On i486 and later it uses atomics; the i386 doesn't support cmpxchg, so a mutex-based implementation is used there. I believe ARM uses atomics for the ARMv7 architecture and later.
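
    To illustrate what the atomic path looks like, here is a minimal sketch (not the actual libstdc++ code) of the reference-count updates a shared_ptr performs when atomics are available: plain fetch-and-add/fetch-and-sub operations, no mutex involved.

    ```cpp
    #include <atomic>
    #include <cassert>

    // Hypothetical, simplified control-block counter mimicking the
    // atomic lock policy: copies and destructions of shared_ptr become
    // single atomic read-modify-write operations.
    struct RefCount {
        std::atomic<long> uses{1};

        void add_ref() {
            // Relaxed suffices for the increment: the caller already
            // owns a reference, so no extra ordering is required.
            uses.fetch_add(1, std::memory_order_relaxed);
        }

        // Returns true when the last reference is dropped and the
        // managed object should be destroyed.
        bool release() {
            // acq_rel orders prior writes before the destructor runs.
            return uses.fetch_sub(1, std::memory_order_acq_rel) == 1;
        }
    };

    int main() {
        RefCount rc;
        rc.add_ref();            // simulate copying a shared_ptr: 1 -> 2
        assert(!rc.release());   // one owner drops: 2 -> 1, no destroy
        assert(rc.release());    // last owner drops: 1 -> 0, destroy
        return 0;
    }
    ```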

    (The same applies to both std::shared_ptr and std::tr1::shared_ptr.)

  • 2021-02-19 07:34

    First question: using operator->

    All the implementations I have seen keep a local cache of the raw T* right in the shared_ptr<T> object itself, so the pointer lives on the stack; operator-> therefore has a cost comparable to using a stack-local T*: no overhead at all.
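
    A hypothetical layout sketch (illustrative, not any particular library's code) shows why this is free: the raw pointer sits directly in the shared_ptr object, so operator-> never touches the control block.

    ```cpp
    #include <cassert>

    // Simplified two-pointer layout typical of shared_ptr
    // implementations: the cached T* serves operator-> directly.
    template <typename T>
    struct SharedPtrSketch {
        T* ptr;          // cached raw pointer, usually on the stack
        void* control;   // control block (ref counts, deleter); unused here

        // One load, the same cost as dereferencing a local T*.
        T* operator->() const { return ptr; }
    };

    struct Widget { int id = 42; };

    int main() {
        Widget w;
        SharedPtrSketch<Widget> sp{&w, nullptr};
        assert(sp->id == 42);   // identical cost to (&w)->id
        return 0;
    }
    ```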

    Second question: mutex/atomics

    I expect libstdc++ to use atomics on the x86 platform, whether through standard facilities or g++-specific intrinsics (in older versions). I believe the Boost implementation already did so.

    I cannot, however, comment on ARM.

    Note: since C++11 introduces move semantics, many copies are naturally avoided in the usage of shared_ptr.
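
    The difference is observable through use_count: a copy performs an atomic increment, while a move just steals the pointer and leaves the count untouched.

    ```cpp
    #include <cassert>
    #include <memory>
    #include <utility>

    int main() {
        auto a = std::make_shared<int>(7);
        assert(a.use_count() == 1);

        auto b = a;              // copy: atomic increment, count becomes 2
        assert(a.use_count() == 2);

        auto c = std::move(b);   // move: no reference-count traffic
        assert(c.use_count() == 2);
        assert(b == nullptr);    // moved-from shared_ptr is empty
        return 0;
    }
    ```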

    Note: read about the correct usage of shared_ptr here; you can use references to shared_ptr (const or not) to avoid most of the copies and destructions in general, so their performance is not too important.
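
    A short sketch of that guideline (function names are illustrative): take a const reference when you only need access, and take the shared_ptr by value only when the callee actually shares ownership.

    ```cpp
    #include <cassert>
    #include <memory>

    // Read-only access: const& causes no reference-count traffic.
    static int read_value(const std::shared_ptr<int>& p) {
        return *p;
    }

    // Ownership transfer: pass by value, so the copy (or move) happens
    // exactly once, at the call site.
    static std::shared_ptr<int> keep_alive(std::shared_ptr<int> p) {
        return p;
    }

    int main() {
        auto sp = std::make_shared<int>(5);
        assert(read_value(sp) == 5);
        assert(sp.use_count() == 1);   // const& left the count untouched

        auto held = keep_alive(sp);    // deliberate copy: count becomes 2
        assert(sp.use_count() == 2);
        return 0;
    }
    ```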
