Efficient data structure for fast random access, search, insertion and deletion

灰色年华 2021-01-31 11:11

I'm looking for a data structure (or structures) that would allow me to keep an ordered list of integers, no duplicates, with indexes and values in the same range.

I ne

8 Answers
  • How to achieve 2 with RB-trees? We can make them count their children on every insert/delete operation. This doesn't make those operations take significantly longer. Then walking down the tree to find the i-th element is possible in O(log n) time (see the sketch below). But I see no implementation of this method in Java or in the STL.
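    A minimal sketch of that idea in Java (class and method names here are illustrative, not from any library): each node caches the size of its subtree, which lets you walk down from the root to the i-th smallest element. A real red-black implementation would also have to keep the size field correct during rotations.

    ```java
    // Order-statistic selection: every node stores the size of its subtree,
    // so the i-th smallest element (0-based) is found in O(height) time.
    class Node {
        int value;
        int size = 1;        // nodes in this subtree, including this one
        Node left, right;

        Node(int value) { this.value = value; }
    }

    class OrderStatisticTree {
        private static int size(Node n) { return n == null ? 0 : n.size; }

        /** Returns the i-th smallest value (0-based) in the subtree rooted at n. */
        static int select(Node n, int i) {
            if (n == null || i < 0 || i >= size(n)) {
                throw new IndexOutOfBoundsException("index " + i);
            }
            int leftSize = size(n.left);
            if (i < leftSize) return select(n.left, i);    // i-th element is in the left subtree
            if (i == leftSize) return n.value;             // this node is the i-th element
            return select(n.right, i - leftSize - 1);      // skip the left subtree and this node
        }
    }
    ```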

  • 2021-01-31 11:46

    If you're working in .NET, then according to the MS docs (http://msdn.microsoft.com/en-us/library/f7fta44c.aspx):

    • SortedDictionary and SortedList both have O(log n) for retrieval
    • SortedDictionary has O(log n) for insert and delete operations, whereas SortedList has O(n).

    The two differ in memory usage and in the speed of insertion and removal: SortedList uses less memory than SortedDictionary, and if it is populated all at once from already-sorted data it is faster than SortedDictionary. So which one is best really depends on your situation.

    Also, your argument for the linked list is not really valid: the insert itself may be O(1), but you first have to traverse the list to find the insertion point, so the operation as a whole is not O(1).

  • 2021-01-31 11:47

    How about a TreeMap? It's O(log n) for the operations described (see the example below).
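    For reference, a quick look at java.util.TreeMap (the keys and values below are just sample data):

    ```java
    import java.util.TreeMap;

    public class TreeMapDemo {
        public static void main(String[] args) {
            TreeMap<Integer, Integer> map = new TreeMap<>();  // a red-black tree under the hood

            map.put(5, 50);                        // insert, O(log n)
            map.put(2, 20);
            map.put(8, 80);

            System.out.println(map.get(5));        // lookup by key, O(log n) -> 50
            System.out.println(map.firstKey());    // smallest key, O(log n) -> 2
            map.remove(2);                         // delete, O(log n)

            System.out.println(map);               // keys stay sorted: {5=50, 8=80}
        }
    }
    ```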

  • 2021-01-31 11:48

    I don't know what language you're using, but if it's Java you can leverage LinkedHashMap or a similar Collection. It's got all of the benefits of a List and a Map, provides constant time for most operations, and has the memory footprint of an elephant. :)

    If you're not using Java, the idea behind LinkedHashMap (a hash map whose entries are also linked together in insertion order) is probably still a good model for a usable data structure for your problem; a short example follows.
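    A small java.util.LinkedHashMap example. Note that it preserves insertion order rather than sorted order:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LinkedHashMapDemo {
        public static void main(String[] args) {
            // Entries iterate in insertion order while lookups stay O(1) on average.
            Map<Integer, Integer> map = new LinkedHashMap<>();
            map.put(3, 30);
            map.put(1, 10);
            map.put(2, 20);

            System.out.println(map.get(1));   // O(1) average lookup -> 10
            map.remove(3);                    // O(1) average removal

            System.out.println(map);          // insertion order, not sorted: {1=10, 2=20}
        }
    }
    ```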

  • 2021-01-31 11:50

    I would use a red-black tree to map keys to values. This gives you O(log(n)) for 1, 3, 4. It also maintains the keys in sorted order.

    For 2, I would use a hash table to map values to keys, which gives you O(1) lookups. Keeping the hash table in sync when keys are added to or deleted from the red-black tree only adds O(1) overhead per operation (a rough sketch follows).
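    A rough sketch of that pairing in Java, using TreeMap as the red-black tree and HashMap as the reverse index (the class and method names are made up for illustration; it assumes values are unique, as in the question):

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    /** Keeps key -> value in a red-black tree and value -> key in a hash map. */
    class IndexedValues {
        private final TreeMap<Integer, Integer> byKey = new TreeMap<>(); // sorted keys, O(log n) ops
        private final Map<Integer, Integer> byValue = new HashMap<>();   // reverse lookup, O(1) average

        void put(int key, int value) {
            Integer old = byKey.put(key, value);   // O(log n)
            if (old != null) byValue.remove(old);  // keep the reverse map in sync
            byValue.put(value, key);               // O(1) average
        }

        Integer valueOf(int key) { return byKey.get(key); }      // lookup by key, O(log n)
        Integer keyOf(int value) { return byValue.get(value); }  // lookup by value, O(1) average

        void remove(int key) {
            Integer old = byKey.remove(key);       // O(log n)
            if (old != null) byValue.remove(old);  // O(1) average
        }
    }
    ```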

  • 2021-01-31 11:53

    I like balanced binary trees a lot. They are sometimes slower than hash tables or other structures, but they are much more predictable; they are generally O(log n) for all operations. I would suggest using a red-black tree or an AVL tree (see the example below).
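    For what it's worth, Java's java.util.TreeSet is an off-the-shelf red-black tree (it is backed by TreeMap), and it already gives you a sorted set of integers with no duplicates:

    ```java
    import java.util.TreeSet;

    public class TreeSetDemo {
        public static void main(String[] args) {
            TreeSet<Integer> set = new TreeSet<>();  // red-black tree: O(log n) add/contains/remove

            set.add(7);
            set.add(3);
            set.add(7);                              // duplicate, silently ignored

            System.out.println(set.contains(3));     // O(log n) search -> true
            set.remove(7);                           // O(log n) delete

            System.out.println(set);                 // sorted, no duplicates: [3]
        }
    }
    ```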
