I was going through Java's HashMap source code when I saw the following:

//The default initial capacity - MUST be a power of two.
static final int DEFAULT_INITIAL_CAPACITY = 16;

Why does the capacity have to be a power of two?
The map has to work out which internal table index to use for any given key, mapping any int value (which could be negative) to a value in the range [0, table.length). When table.length is a power of two, that can be done really cheaply - and is, in indexFor:
static int indexFor(int h, int length) {
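// ANDing with (length - 1) keeps only the low-order bits of h, which is
// equivalent to h mod length (and never negative) when length is a power of two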
return h & (length-1);
}
With a different table length, you'd need to compute a remainder and make sure it's non-negative. This is definitely a micro-optimization, but probably a valid one :)
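Just to illustrate the alternative, here's a rough sketch of what the index computation could look like for an arbitrary (non-power-of-two) length - the method name indexForArbitraryLength is made up:

// Hypothetical equivalent for an arbitrary table length: Java's % can return
// a negative value for a negative hash, so the result has to be adjusted.
static int indexForArbitraryLength(int h, int length) {
    int index = h % length;   // may be negative when h is negative
    if (index < 0) {
        index += length;      // shift into the range [0, length)
    }
    return index;
}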
Also, when automatic rehashing is performed, what exactly happens? Is the hash function altered too?
It's not quite clear to me what you mean. The same hash codes are used (because they're just computed by calling hashCode on each key), but they'll be distributed differently within the table due to the table length changing. For example, when the table length is 16, hash codes of 5 and 21 both end up being stored in table entry 5. When the table length increases to 32, they will be in different entries.
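To make that concrete, here's a small self-contained sketch (the class name RehashDemo and the locally copied indexFor are just for illustration):

public class RehashDemo {
    // Same masking trick as HashMap's indexFor
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        System.out.println(indexFor(5, 16));   // 5
        System.out.println(indexFor(21, 16));  // 5  -> collides with 5 in a 16-bucket table
        System.out.println(indexFor(5, 32));   // 5
        System.out.println(indexFor(21, 32));  // 21 -> separated after resizing to 32
    }
}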
The ideal situation would actually be to use prime number sizes for the backing array of a HashMap. That way your keys would be more naturally distributed across the array. However, this relies on mod division, which is considerably more expensive than a bitwise AND. In a sense, the power-of-two approach is the worst table size you can imagine, because poor hashCode implementations are then more likely to produce key collisions in the array: only the low-order bits of the hash code decide the bucket, so hash codes that differ only in their high bits collide.
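As a made-up illustration, hash codes that differ only in their upper bits all collapse into the same bucket once they're masked against a power-of-two length:

public class LowBitsDemo {
    public static void main(String[] args) {
        int length = 16;                        // power-of-two table size
        int[] hashes = {0x10, 0x20, 0xABCDEF0}; // differ only above the low 4 bits
        for (int h : hashes) {
            // every one of these lands in bucket 0, because only the low bits count
            System.out.println(h + " -> bucket " + (h & (length - 1)));
        }
    }
}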
Therefore, you'll find another very important method in Java's HashMap implementation, hash(int), which compensates for poor hash codes.
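From memory, in the JDK 6/7 era sources that supplemental hash(int) looked roughly like this (the exact shift constants may differ between versions); it XORs higher bits of the hash code down into the low bits so they influence the bucket index:

// Supplemental hash as found in older (JDK 6/7 style) HashMap sources;
// constants reproduced from memory and may vary by version.
static int hash(int h) {
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}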