Efficient linked list in C++?

孤街浪徒 2021-02-02 06:36

This document says std::list is inefficient:

std::list is an extremely inefficient class that is rarely useful. It performs a heap allocation for every element inserted into it […]

11 Answers
  • 2021-02-02 07:08

    "std::list is a doubly linked list, so despite its inefficiency in element construction, it supports insert/delete in O(1) time complexity, but this feature is completely ignored in this quoted paragraph."

    It's ignored because it's a lie.

    The problem with algorithmic complexity is that it generally measures only one thing. For example, when we say that insertion into a std::map is O(log N), we mean that it performs O(log N) comparisons. The costs of iterating, of fetching cache lines from memory, etc. are not taken into account.

    This greatly simplifies analysis, of course, but unfortunately does not necessarily map cleanly to real-world implementation complexity. In particular, one egregious assumption is that memory allocation is constant-time. And that is a bald-faced lie.

    General-purpose memory allocators (malloc and co.) do not offer any guarantee on the worst-case complexity of a memory allocation. The worst case is generally OS-dependent, and on Linux it may involve the OOM killer, which sifts through the running processes and kills one to reclaim its memory.

    Special-purpose memory allocators could potentially be made constant-time... but only within a particular bound on the number of allocations (or a maximum allocation size). Since Big-O notation is about the limit at infinity, that cannot strictly be called O(1).

    And thus, where the rubber meets the road, the implementation of std::list does NOT in general feature O(1) insertion/deletion, because the implementation relies on a real memory allocator, not an ideal one.


    This is pretty depressing; however, you need not lose all hope.

    Most notably, if you can figure out an upper-bound to the number of elements and can allocate that much memory up-front, then you can craft a memory allocator which will perform constant-time memory allocation, giving you the illusion of O(1).
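
    As a purely illustrative sketch of that idea (the FixedPool name and interface are mine, not from any library): pre-allocate storage for a known maximum number of fixed-size blocks and thread them onto an intrusive free list, so that both allocation and deallocation reduce to a constant-time pointer swap.

        #include <cassert>
        #include <cstddef>
        #include <vector>

        // Sketch only: a pool of `capacity` blocks of BlockSize bytes each,
        // handed out through an intrusive free list. Alignment handling is
        // omitted for brevity.
        template <std::size_t BlockSize>
        class FixedPool {
            static_assert(BlockSize >= sizeof(void*), "block must hold a pointer");
        public:
            explicit FixedPool(std::size_t capacity)
                : storage_(capacity * BlockSize), free_head_(nullptr) {
                // Thread every block onto the free list up front.
                for (std::size_t i = 0; i < capacity; ++i)
                    push_free(storage_.data() + i * BlockSize);
            }

            void* allocate() {                 // O(1): pop the free-list head
                assert(free_head_ != nullptr && "pool exhausted");
                void* block = free_head_;
                free_head_ = *static_cast<void**>(free_head_);
                return block;
            }

            void deallocate(void* block) {     // O(1): push onto the free list
                push_free(static_cast<char*>(block));
            }

        private:
            void push_free(char* block) {
                *reinterpret_cast<void**>(block) = free_head_;
                free_head_ = block;
            }

            std::vector<char> storage_;  // one up-front heap allocation, never grown
            void* free_head_;
        };

    Wrapping something like this in a standard-conforming allocator so that std::list can use it takes a bit more boilerplate (value_type, rebinding, propagation traits), but the constant-time core is just the two pointer swaps above.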

  • 2021-02-02 07:08

    I second @Useless' answer, particularly PS item 2 about revising requirements. If you relax the iterator-invalidation constraint, then using std::vector<> is Stroustrup's standard suggestion for a container with a small number of items (for the reasons already mentioned in the comments). There are related questions on SO as well.

    Starting from C++11 there is also std::forward_list.
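
    A quick illustration of std::forward_list: being singly linked, it stores one pointer per node instead of two, and insertions are expressed relative to an existing position via insert_after().

        #include <forward_list>
        #include <iostream>

        int main() {
            std::forward_list<int> fl{1, 3, 4};

            // No size() and no push_back(); insertion is positional,
            // relative to an existing iterator.
            auto it = fl.begin();        // points at 1
            fl.insert_after(it, 2);      // list is now 1 2 3 4

            fl.push_front(0);            // O(1) insertion at the front

            for (int v : fl)
                std::cout << v << ' ';   // prints: 0 1 2 3 4
            std::cout << '\n';
        }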

    Also, if standard heap allocation for elements added to the container is not good enough, then I would say you need to look very carefully at your exact requirements and fine tune for them.

  • 2021-02-02 07:09

    As an alternative, you can use a growable array and handle the links explicitly, as indexes into the array.

    Unused array elements are chained into a free list, using one of the link fields. When an element is deleted, its slot is returned to the free list. Only when the free list is exhausted do you grow the array and use the next element. A sketch of this approach follows the list below.

    For the new free elements, you have two options:

    • append them all to the free list at once, or
    • append them on demand, based on the number of elements in the free list versus the array size.
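
    Here is a minimal sketch of the index-based approach (type and member names are mine, chosen for illustration): nodes live in a std::vector, links are indices, and freed slots are recycled through an intrusive free list before the array is grown.

        #include <cstddef>
        #include <vector>

        // Illustrative sketch: a singly linked list whose nodes are slots in a
        // growable array. Links are indices, and freed slots are recycled
        // through a free list before the array is grown.
        class IndexList {
            static constexpr std::size_t npos = static_cast<std::size_t>(-1);

            struct Node {
                int value;
                std::size_t next;
            };

        public:
            // Insert after position `pos` (or at the front if pos == npos).
            // Returns the index of the new node.
            std::size_t insert_after(std::size_t pos, int value) {
                std::size_t idx = acquire_slot();
                nodes_[idx].value = value;
                if (pos == npos) {
                    nodes_[idx].next = head_;
                    head_ = idx;
                } else {
                    nodes_[idx].next = nodes_[pos].next;
                    nodes_[pos].next = idx;
                }
                return idx;
            }

            // Unlink the node after `pos` (or the head if pos == npos)
            // and push its slot onto the free list.
            void erase_after(std::size_t pos) {
                std::size_t victim = (pos == npos) ? head_ : nodes_[pos].next;
                if (victim == npos) return;
                if (pos == npos) head_ = nodes_[victim].next;
                else             nodes_[pos].next = nodes_[victim].next;
                nodes_[victim].next = free_head_;   // recycle the slot
                free_head_ = victim;
            }

            std::size_t head() const { return head_; }
            int value(std::size_t idx) const { return nodes_[idx].value; }
            std::size_t next(std::size_t idx) const { return nodes_[idx].next; }

        private:
            std::size_t acquire_slot() {
                if (free_head_ != npos) {           // reuse a freed slot first
                    std::size_t idx = free_head_;
                    free_head_ = nodes_[idx].next;
                    return idx;
                }
                nodes_.push_back(Node{0, npos});    // otherwise grow the array
                return nodes_.size() - 1;
            }

            std::vector<Node> nodes_;
            std::size_t head_ = npos;
            std::size_t free_head_ = npos;
        };

    Because links are indices rather than pointers, growing the vector does not invalidate them; the price is one extra indirection through the array on every hop.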
  • 2021-02-02 07:18

    Your requirements are exactly those of std::list, except that you've decided you don't like the overhead of node-based allocation.

    The sane approach is to start at the top and only do as much as you really need:

    1. Just use std::list.

      Benchmark it: is the default allocator really too slow for your purposes?

      • No: you're done.

      • Yes: goto 2

    2. Use std::list with an existing custom allocator, such as the Boost pool allocator (see the sketch after this list).

      Benchmark it: is the Boost pool allocator really too slow for your purposes?

      • No: you're done.

      • Yes: goto 3

    3. Use std::list with a hand-rolled custom allocator finely tuned to your unique needs, based on all the profiling you did in steps 1 and 2.

      Benchmark as before etc. etc.

    4. Consider doing something more exotic as a last resort.

      If you get to this stage, you should have a really well-specified SO question, with lots of detail about exactly what you need (e.g. "I need to squeeze n nodes into a cache line" rather than "this doc said this thing is slow and that sounds bad").
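
    To make step 2 concrete, here is a minimal sketch of plugging Boost's pool allocator into std::list, assuming Boost is available (boost::fast_pool_allocator lives in <boost/pool/pool_alloc.hpp>); whether it actually helps is exactly what the benchmark should decide.

        #include <list>

        #include <boost/pool/pool_alloc.hpp>  // requires Boost

        int main() {
            // Same interface as std::list<int>; only the allocator changes.
            // fast_pool_allocator serves the fixed-size node allocations from
            // larger chunks instead of hitting the general-purpose heap per node.
            std::list<int, boost::fast_pool_allocator<int>> pooled;

            for (int i = 0; i < 1000; ++i)
                pooled.push_back(i);

            pooled.remove_if([](int v) { return v % 2 == 0; });  // drop even values
        }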


    PS. The above makes two assumptions, both of which are worth investigating:

    1. as Baum mit Augen points out, it's not sufficient to do simple end-to-end timing, because you need to be sure where your time is going. It could be the allocator itself, or cache misses due to the memory layout, or something else. If something's slow, you still need to be sure why before you know what ought to change.
    2. your requirements are taken as a given, but finding ways to weaken requirements is often the easiest way to make something faster.

      • do you really need constant-time insertion and deletion everywhere, or only at the front, or the back, or both but not in the middle?
      • do you really need those iterator invalidation constraints, or can they be relaxed?
      • are there access patterns you can exploit? If you're frequently removing an element from the front and then replacing it with a new one, could you just update it in-place?
  • 2021-02-02 07:18

    I would suggest doing exactly what @Yves Daoust says, except instead of using a linked list for the free list, use a vector. Push and pop the free indices on the back of the vector. This is amortized O(1) insert, lookup, and delete, and doesn't involve any pointer chasing. It also doesn't require any annoying allocator business.
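
    A short sketch of that variation (names are illustrative): slot management only, with freed indices pushed onto and popped from a plain vector rather than threaded through the nodes.

        #include <cstddef>
        #include <vector>

        // Illustrative fragment: only the slot management of the index-based
        // list. Freed indices are stored in a plain vector, so acquiring and
        // releasing a slot is amortized O(1) with no pointer chasing.
        struct Node { int value; std::size_t next; };

        struct SlotPool {
            std::vector<Node>        nodes;
            std::vector<std::size_t> free_indices;

            std::size_t acquire() {
                if (!free_indices.empty()) {      // reuse a freed slot first
                    std::size_t idx = free_indices.back();
                    free_indices.pop_back();
                    return idx;
                }
                nodes.push_back(Node{});          // otherwise grow the array
                return nodes.size() - 1;
            }

            void release(std::size_t idx) {       // just record the free index
                free_indices.push_back(idx);
            }
        };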
