Overhead and implementation of using shared_ptr

不知归路 2021-02-19 07:12

Short introduction: I am working on multithreaded code and I have to share dynamically allocated objects between two threads. To make my code cleaner (and less error-prone) I want …

2 Answers
  •  佛祖请我去吃肉
    2021-02-19 07:34

    First question: using operator->

    All the implementations I have seen store a copy of the T* directly in the shared_ptr object, so when the shared_ptr lives on the stack the raw pointer does too. operator-> therefore costs about the same as dereferencing a stack-local T*: no overhead at all.
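
    As a rough illustration, here is a minimal sketch of that layout; the names (control_block, my_shared_ptr) are invented for the example and do not come from any particular standard library:

        struct control_block {
            long use_count;
            // deleter, weak count, allocator, ... also live here in real implementations
        };

        template <typename T>
        class my_shared_ptr {
            T*             ptr_;    // cached raw pointer, stored directly in the object
            control_block* ctrl_;   // shared bookkeeping, only touched on copy/destroy

        public:
            // operator-> just returns the cached pointer; there is no indirection
            // through the control block, so the cost matches a plain stack-local T*.
            T* operator->() const { return ptr_; }
            T& operator*()  const { return *ptr_; }
        };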

    Second question: mutex/atomics

    I expect libstdc++ to use atomic operations for the reference count on x86 platforms, either through the standard atomic facilities or through g++-specific intrinsics (in older versions). I believe the Boost implementation already did so.

    I cannot, however, comment on ARM.
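
    To show the idea, here is a hedged sketch of a lock-free reference count maintained with std::atomic instead of a mutex; real libstdc++/Boost control blocks are more involved (weak counts, memory-order tuning), and the type name is made up:

        #include <atomic>

        struct ref_counted_block {
            std::atomic<long> use_count{1};

            void add_ref() {
                // An increment only needs to be atomic; relaxed ordering is enough.
                use_count.fetch_add(1, std::memory_order_relaxed);
            }

            bool release() {
                // The last decrement must synchronize with earlier ones so the final
                // owner sees every write made before the object is destroyed.
                // Returns true when the caller must delete the managed object.
                return use_count.fetch_sub(1, std::memory_order_acq_rel) == 1;
            }
        };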

    Note: since C++11 introduced move semantics, many copies are naturally avoided when using shared_ptr; an example follows.
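
    For instance (widget and make_widget are illustrative names), moving a shared_ptr transfers ownership without touching the reference count, so the atomic increment/decrement that a copy would incur is avoided:

        #include <memory>
        #include <utility>
        #include <vector>

        struct widget { int value = 0; };

        std::shared_ptr<widget> make_widget() {
            return std::make_shared<widget>();   // the return value is moved out
        }

        int main() {
            std::vector<std::shared_ptr<widget>> widgets;
            auto w = make_widget();              // move construction, no count change
            widgets.push_back(std::move(w));     // move into the vector, no count change
            // w is now empty; the vector holds the only owning reference
        }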

    Note: read about the correct usage of shared_ptr; in general you can pass references to shared_ptr (const or not) to avoid most of the copies and destructions, so the performance of those operations is not too important.
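
    A small sketch of that point (the function and type names are illustrative): taking the shared_ptr by const reference avoids an atomic increment and decrement per call, while the caller keeps the object alive:

        #include <memory>

        struct widget { int value = 0; };

        // Taking the shared_ptr by value copies it: one atomic increment on entry
        // and one atomic decrement on exit.
        void use_by_value(std::shared_ptr<widget> w) { (void)w->value; }

        // Taking it by const reference performs no reference-count operations at all;
        // the caller is responsible for keeping the object alive during the call.
        void use_by_reference(const std::shared_ptr<widget>& w) { (void)w->value; }

        int main() {
            auto w = std::make_shared<widget>();
            use_by_value(w);
            use_by_reference(w);
        }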
