How can garbage collectors be faster than explicit memory deallocation?

Asked by 遥遥无期 on 2021-02-03 10:41

I was reading this generated HTML (which may expire; here is the original PS file):

GC Myth 3: Garbage collectors are always slower than explicit memory deallocation.

4 answers
  • 2021-02-03 11:21

    How could GC be faster than explicit memory deallocation?

    1. GCs can pointer-bump allocate into a thread-local generation and then rely upon copying collection to handle the (relatively) uncommon case of evacuating the survivors. Traditional allocators like malloc often compete for global locks and search trees.

    2. GCs can deallocate many dead blocks simultaneously by resetting the thread-local allocation buffer instead of calling free on each block in turn, i.e. O(1) instead of O(n).

    3. By compacting old blocks so more of them fit into each cache line. The improved locality increases cache efficiency.

    4. By taking advantage of extra static information such as immutable types.

    5. By taking advantage of extra dynamic information such as the changing topology of the heap via the data recorded by the write barrier.

    6. By making more efficient techniques tractable, e.g. by removing the headache of manual memory management from wait-free algorithms.

    7. By deferring deallocation to a more appropriate time or off-loading it to another core. (thanks to Andrew Hill for this idea!)
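    Points 1 and 2 above can be sketched in C. This is a toy model, not any real runtime's code: `Tlab`, `tlab_alloc`, and `tlab_reset` are illustrative names, and a real collector would evacuate survivors before resetting.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Toy thread-local allocation buffer (TLAB). */
    #define TLAB_SIZE (1 << 16)

    typedef struct {
        uint8_t buf[TLAB_SIZE];
        size_t  top;            /* bump pointer: next free offset */
    } Tlab;

    /* Allocation is one bounds check plus a pointer bump -
       no locks, no free-list search. */
    static void *tlab_alloc(Tlab *t, size_t n) {
        n = (n + 7) & ~(size_t)7;                 /* 8-byte alignment */
        if (t->top + n > TLAB_SIZE) return NULL;  /* would trigger a GC */
        void *p = t->buf + t->top;
        t->top += n;
        return p;
    }

    /* "Freeing" every dead object in the buffer is O(1): reset the
       pointer. (A copying collector would evacuate survivors first.) */
    static void tlab_reset(Tlab *t) { t->top = 0; }

    int main(void) {
        Tlab t = { .top = 0 };
        void *a = tlab_alloc(&t, 24);
        void *b = tlab_alloc(&t, 24);
        assert(a && b && b == (uint8_t *)a + 24); /* contiguous bumps */
        tlab_reset(&t);                           /* frees both at once */
        assert(tlab_alloc(&t, 24) == a);          /* memory is reused */
        return 0;
    }
    ```

    The contrast with `free` is the point: releasing n dead objects here costs one store, not n calls into the allocator.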

  • 2021-02-03 11:22

    A factor not yet mentioned is that when using manual memory allocation, even if object references are guaranteed not to form cycles, determining when the last entity to hold a reference has abandoned it can be expensive, typically requiring the use of reference counters, reference lists, or other means of tracking object usage. Such techniques aren't too bad on single-processor systems, where the cost of an atomic increment may be essentially the same as an ordinary one, but they scale very badly on multi-processor systems, where atomic-increment operations are comparatively expensive.
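    The per-operation cost the answer refers to can be seen in a minimal thread-safe reference count, sketched here with C11 `stdatomic`; the `Shared`/`retain`/`release` names are illustrative. Every retain and release is an atomic read-modify-write, which a tracing GC simply never pays.

    ```c
    #include <assert.h>
    #include <stdatomic.h>

    /* An object shared across threads needs an atomic counter so
       concurrent retain/release calls don't lose updates. */
    typedef struct {
        atomic_int refs;
    } Shared;

    static void retain(Shared *s) { atomic_fetch_add(&s->refs, 1); }

    /* Returns 1 when the last reference is dropped, i.e. the caller
       must now deallocate the object. */
    static int release(Shared *s) {
        return atomic_fetch_sub(&s->refs, 1) == 1;
    }

    int main(void) {
        Shared s;
        atomic_init(&s.refs, 1);     /* one initial owner */
        retain(&s);                  /* second reference taken */
        assert(release(&s) == 0);    /* one reference still alive */
        assert(release(&s) == 1);    /* last reference: free it now */
        return 0;
    }
    ```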

  • 2021-02-03 11:27

    One approach that makes GC faster than explicit deallocation is to deallocate implicitly:

    the heap is divided into partitions, and the VM switches between partitions from time to time (for example, when a partition gets too full). Live objects are copied to the new partition, and the dead objects are never deallocated - they are simply left behind. So the deallocation itself ends up costing nothing. An additional benefit of this approach is that heap defragmentation comes for free.

    Please note this is a very general description of the actual processes.
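    A minimal two-partition ("semispace") sketch of this idea in C, with illustrative names and sizes; real collectors trace the object graph rather than being handed the live object, but the cost structure is the same:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SPACE_SIZE 1024

    static uint8_t space_a[SPACE_SIZE], space_b[SPACE_SIZE];
    static uint8_t *from = space_a, *to = space_b;  /* two partitions */
    static size_t top = 0;                          /* bump pointer   */

    static void *alloc(size_t n) {
        void *p = from + top;
        top += n;
        return p;
    }

    /* "Collect": copy the live object into the other partition and swap.
       Everything not copied is abandoned - no per-object free calls. */
    static void *evacuate(void *live, size_t n) {
        memcpy(to, live, n);
        void *moved = to;
        uint8_t *tmp = from; from = to; to = tmp;  /* swap partitions */
        top = n;              /* survivors sit compacted at the front */
        return moved;
    }

    int main(void) {
        int *dead = alloc(sizeof(int));
        int *live = alloc(sizeof(int));
        *dead = 1; *live = 42;
        live = evacuate(live, sizeof *live);
        assert(*live == 42);          /* survivor kept its value      */
        assert(top == sizeof(int));   /* dead object reclaimed free   */
        return 0;
    }
    ```

    Note how the survivor also ends up at the front of the new partition, which is where the defragmentation bonus comes from.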

  • 2021-02-03 11:30

    The trick is that the underlying allocator for a garbage collector can be much simpler than an explicit one and can take shortcuts that an explicit allocator can't.

    1. If the collector is copying (the Java, .NET, OCaml, and Haskell runtimes, among many others, actually use one), freeing is done in big blocks, allocating is just a pointer increment, and the cost is paid per object that survives collection. So it's faster, especially when there are many short-lived temporary objects, which is quite common in these languages.
    2. Even for a non-copying collector (like Boehm's), the fact that objects are freed in batches saves a lot of work in combining adjacent free chunks. So if the collection does not need to run too often, it can easily be faster.
    3. And, well, many standard-library malloc/free implementations just perform poorly. That's why there are projects like umem, and why libraries like glib ship their own lightweight versions.
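    Point 2 above can be illustrated with a toy heap, sketched as a C array of chunk descriptors (all names illustrative). Because the whole batch is freed at once, a single linear sweep merges every run of adjacent free chunks, instead of searching for neighbours on each individual `free()`:

    ```c
    #include <assert.h>
    #include <stddef.h>

    #define NCHUNKS 8

    typedef struct { size_t size; int used; } Chunk;

    /* One pass merges each run of adjacent free chunks into a single
       chunk; returns the chunk count after merging. */
    static size_t coalesce(Chunk *c, size_t n) {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            if (out > 0 && !c[out - 1].used && !c[i].used)
                c[out - 1].size += c[i].size;  /* extend the free run */
            else
                c[out++] = c[i];               /* keep as a new chunk */
        }
        return out;
    }

    int main(void) {
        Chunk heap[NCHUNKS] = {
            {16, 1}, {16, 0}, {32, 0}, {8, 0},   /* one free run of 3 */
            {64, 1}, {16, 0}, {16, 0}, {8, 1},   /* one free run of 2 */
        };
        size_t n = coalesce(heap, NCHUNKS);
        assert(n == 5);              /* 8 chunks collapse to 5        */
        assert(heap[1].size == 56);  /* 16 + 32 + 8 merged            */
        assert(heap[2].used == 1 && heap[2].size == 64);
        return 0;
    }
    ```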