Question:
I am wondering what the time complexity of Java HashMap resizing is when the load factor exceeds the threshold. As far as I understand, the HashMap table size is always a power of 2 (an even number), so whenever we resize the table we don't necessarily need to rehash all the keys (correct me if I am wrong); all we need to do is allocate additional space and copy over all the entries from the old table (I am not quite sure how the JVM deals with that internally), correct? Whereas Hashtable uses a prime number as the table size, so we need to rehash all the entries whenever we resize. So my question is: does resizing a HashMap still take O(n) linear time?
Answer 1:
So my question is: does it still take O(n) linear time for resizing on HashMap?
Basically, yes.
... so whenever we resize the table we don't necessarily need to rehash all the keys (correct me if I am wrong).
Actually, you would need to rehash all of the keys. When you double the hash table size, each hash chain needs to be split: for every key, you have to test which of the two possible chains its hash value maps to. (Indeed, the same is true if the hash table uses open addressing.)
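To make the splitting step concrete, here is a minimal sketch (my own illustration, not the JDK source; the Entry type, splitBucket method, and table layout are hypothetical) of how one bucket's chain is divided between two buckets when a power-of-two table doubles:

```java
// A minimal sketch (not the JDK source) of how one bucket's chain is split
// when a power-of-two table doubles from oldCap to 2 * oldCap. 'Entry' is a
// hypothetical node type that caches the key's hash in a final field.
public class ResizeSketch {

    static final class Entry {
        final int hash;      // cached hash of the key, computed once at insertion
        final Object key;
        final Object value;
        Entry next;
        Entry(int hash, Object key, Object value, Entry next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    // Split the chain at oldTable[index] between newTable[index] and
    // newTable[index + oldCap]. Because the capacity is a power of two, the
    // single bit (hash & oldCap) decides which of the two buckets each entry
    // belongs to -- no modulo and no call to key.hashCode() -- but every entry
    // still has to be visited and tested, which is why resizing is O(n).
    static void splitBucket(Entry[] oldTable, Entry[] newTable, int index, int oldCap) {
        Entry lo = null, hi = null;
        for (Entry e = oldTable[index]; e != null; ) {
            Entry next = e.next;
            if ((e.hash & oldCap) == 0) { // bit is 0: entry stays at 'index'
                e.next = lo; lo = e;
            } else {                      // bit is 1: entry moves to 'index + oldCap'
                e.next = hi; hi = e;
            }
            e = next;
        }
        newTable[index] = lo;
        newTable[index + oldCap] = hi;
    }

    public static void main(String[] args) {
        int oldCap = 4;
        Entry[] oldTable = new Entry[oldCap];
        Entry[] newTable = new Entry[oldCap * 2];
        // Hashes 1 and 5 both map to bucket 1 at capacity 4 (1 & 3 == 5 & 3 == 1),
        // but split across buckets 1 and 5 at capacity 8.
        oldTable[1] = new Entry(1, "a", 1, new Entry(5, "b", 2, null));
        splitBucket(oldTable, newTable, 1, oldCap);
        System.out.println("bucket 1: " + (newTable[1] != null ? newTable[1].key : null));
        System.out.println("bucket 5: " + (newTable[5] != null ? newTable[5].key : null));
    }
}
```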
However, in the current generation of HashMap implementations (derived from the Sun/Oracle codebases), the hash code values are cached in the chained entry objects, so the hash code for a key never needs to be recomputed.
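One way to see the effect of that caching is a small self-contained demo (CountingKey is my own invention, and the exact call count is an assumption about current OpenJDK behavior, not something guaranteed by the spec): if hash codes were recomputed on every resize, the counter would exceed the number of put() calls.

```java
import java.util.HashMap;

// Hedged demo: a key that counts hashCode() invocations. Inserting many keys
// forces several resizes; because the cached hash in each entry is reused,
// the count should equal the number of put() calls, with no extra calls for
// the resizes themselves (an assumption about the OpenJDK implementation).
public class HashCodeCountDemo {
    static int hashCodeCalls = 0;

    static final class CountingKey {
        final int id;
        CountingKey(int id) { this.id = id; }
        @Override public int hashCode() { hashCodeCalls++; return id; }
        @Override public boolean equals(Object o) {
            return o instanceof CountingKey && ((CountingKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        HashMap<CountingKey, Integer> map = new HashMap<>(); // default capacity 16
        int n = 10_000; // enough inserts to trigger many doublings
        for (int i = 0; i < n; i++) {
            map.put(new CountingKey(i), i);
        }
        System.out.println("puts = " + n + ", hashCode() calls = " + hashCodeCalls);
    }
}
```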
Answer 2:
When the table is resized, the entire contents of the original table must be copied to the new table, so resizing takes O(n) time, where n is the number of elements in the original table. Under the uniform hashing assumption, the amortized cost of any operation on a HashMap is O(1), but yes, the worst-case cost of a single insertion operation is O(n).
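For completeness, the amortized O(1) bound follows from the standard doubling argument (my own sketch, assuming for simplicity that the table starts at capacity 1 and doubles each time it fills up):

```latex
% Standard doubling argument (a sketch; assumes the table starts at capacity 1
% and doubles when full). The total cost of n insertions is:
\[
\underbrace{n}_{\text{the inserts themselves}}
\;+\;
\underbrace{1 + 2 + 4 + \cdots + 2^{\lfloor \log_2 n \rfloor}}_{\text{entries copied at each resize}}
\;<\; n + 2n \;=\; 3n \;=\; O(n),
\]
% so n insertions cost O(n) in total, i.e. O(1) amortized per insertion, even
% though the single insertion that triggers a resize costs O(n).
```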
Source: https://stackoverflow.com/questions/14251292/time-complexity-for-java-hashmap-resizing