Why are entries in addition order in a .NET Dictionary?

死守一世寂寞 2020-12-04 01:26

I just saw this behaviour and I'm a bit surprised by it...

If I add 3 or 4 elements to a Dictionary and then do a "For Each" to get all the keys, they appear in the same order in which I added them. Why is that?
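
Roughly like this (the actual keys don't matter, these are just examples):

    using System;
    using System.Collections.Generic;

    var dict = new Dictionary<string, int>();
    dict.Add("one", 1);
    dict.Add("two", 2);
    dict.Add("three", 3);

    // Prints one, two, three -- the exact order I added them.
    foreach (string key in dict.Keys)
        Console.WriteLine(key);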

11 Answers
  • 2020-12-04 02:08

    A quote from MSDN:

    The order of the keys in the Dictionary<TKey, TValue>.KeyCollection is unspecified, but it is the same order as the associated values in the Dictionary<TKey, TValue>.ValueCollection returned by the Dictionary<TKey, TValue>.Values property.
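
    In other words, whatever order you happen to observe, Keys and Values line up pairwise. A small illustration of that guarantee (the entries here are made up):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        var dict = new Dictionary<string, int> { { "a", 1 }, { "b", 2 }, { "c", 3 } };

        // Keys and Values enumerate in the same (unspecified) order,
        // so pairing them by index reconstructs the original entries.
        var keys = dict.Keys.ToList();
        var values = dict.Values.ToList();
        for (int i = 0; i < keys.Count; i++)
            Console.WriteLine($"{keys[i]} = {values[i]}");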

  • 2020-12-04 02:10

    Your entries might all be landing in the same hash bucket. Each bucket is typically a list of the entries that hashed to it, appended to in arrival order, which would explain the entries coming back in the order you added them.
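
    A toy sketch of that idea, not the actual Dictionary internals: buckets as lists, with each list appended to in arrival order:

        using System;
        using System.Collections.Generic;

        // Toy chained hash table: an array of buckets, each a list of pairs.
        // A bucket's list grows by appending, so entries that share a bucket
        // come back in the order they were added.
        var buckets = new List<(string Key, int Value)>[8];

        void Add(string key, int value)
        {
            int i = (key.GetHashCode() & 0x7FFFFFFF) % buckets.Length;
            (buckets[i] ??= new List<(string Key, int Value)>()).Add((key, value));
        }

        Add("one", 1);
        Add("two", 2);
        Add("three", 3);

        foreach (var bucket in buckets)
            if (bucket != null)
                foreach (var (key, value) in bucket)
                    Console.WriteLine($"{key} = {value}");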

  • 2020-12-04 02:13

    A dictionary retrieves items in hashed order. The fact that they came out in insertion order was a total coincidence.

    The MSDN documentation says:

    The order of the keys in the KeyCollection is unspecified, but it is the same order as the associated values in the ValueCollection returned by the Values property.
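
    One way to see that it's a coincidence (this relies on an implementation detail of current runtimes, not guaranteed behavior): remove an entry and add another, and the freed internal slot may be reused, breaking insertion order:

        using System;
        using System.Collections.Generic;

        var d = new Dictionary<string, int>();
        d.Add("a", 1);
        d.Add("b", 2);
        d.Add("c", 3);
        d.Remove("a");   // frees an internal slot
        d.Add("d", 4);   // likely reuses the freed slot

        // On current runtimes this tends to print d, b, c -- not b, c, d.
        foreach (var key in d.Keys)
            Console.WriteLine(key);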

  • 2020-12-04 02:13

    I think this comes from the old .NET 1.1 days, when there were two kinds of dictionaries: ListDictionary and HybridDictionary. ListDictionary was a dictionary implemented internally as an ordered list and was recommended for small sets of entries. HybridDictionary was initially organized internally as a list, but if it grew past a certain threshold it would turn into a hash table. This was done because proper hash-based dictionaries were historically considered expensive. Nowadays that doesn't make much sense, but I suppose .NET just based its new generic Dictionary class on the old HybridDictionary.
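
    Both classes still exist in System.Collections.Specialized, and ListDictionary really does enumerate in insertion order, since it's a list inside. A quick demonstration:

        using System;
        using System.Collections;
        using System.Collections.Specialized;

        // ListDictionary is a linked list internally, so it enumerates
        // in insertion order (and is only meant for small sets).
        IDictionary small = new ListDictionary();
        small.Add("first", 1);
        small.Add("second", 2);

        foreach (DictionaryEntry entry in small)
            Console.WriteLine($"{entry.Key} = {entry.Value}");

        // HybridDictionary starts out list-based and switches to a
        // Hashtable once it grows past a small internal threshold.
        IDictionary hybrid = new HybridDictionary();
        hybrid.Add("first", 1);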

    Note: anyway, as someone else has already pointed out, you should never count on dictionary order for anything.

  • 2020-12-04 02:14

    You cannot count on this behavior, but it's not surprising.

    Consider how you would implement key iteration for a simple hash table: you would have to scan every hash bucket, whether or not it has anything in it, so pulling a small data set out of a big hash table could be inefficient.

    Therefore it might be a good optimization to keep a separate, duplicate list of the keys. With a doubly linked list you still get constant-time insert and delete (you keep a pointer from the hash-table entry back to its list node). That way, iterating through the keys depends only on the number of entries, not on the number of buckets; a sketch of this follows below.
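
    A minimal sketch of that design in C# (a hypothetical wrapper type, not how Dictionary<TKey, TValue> is actually implemented):

        using System;
        using System.Collections.Generic;

        // Hypothetical wrapper (illustrative name): a hash map from key to
        // (list node, value), plus a doubly linked list that remembers key order.
        class LinkedDictionary<TKey, TValue> where TKey : notnull
        {
            private readonly Dictionary<TKey, (LinkedListNode<TKey> Node, TValue Value)> map = new();
            private readonly LinkedList<TKey> order = new();

            public void Add(TKey key, TValue value) =>
                map.Add(key, (order.AddLast(key), value));   // O(1) insert

            public bool Remove(TKey key)
            {
                if (!map.TryGetValue(key, out var entry)) return false;
                order.Remove(entry.Node);                    // O(1) via the stored node
                return map.Remove(key);
            }

            // Iteration walks only live entries, never empty buckets.
            public IEnumerable<TKey> Keys => order;
        }

    (Java's LinkedHashMap works roughly this way.)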
