Question
From the JavaDocs of HashSet:
This class offers constant time performance for the basic operations (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets. Iterating over this set requires time proportional to the sum of the HashSet instance's size (the number of elements) plus the "capacity" of the backing HashMap instance (the number of buckets). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.
Why does iteration take time proportional to the sum (number of elements in the set + capacity of the backing map) and not only to the number of elements in the set itself?
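For context, the "initial capacity" and "load factor" the JavaDoc refers to are constructor arguments; a minimal sketch of the situation the quote warns about (class name and values are purely illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class CapacityExample {
    public static void main(String[] args) {
        // A set sized for ~1M buckets but holding only a few elements:
        // exactly the case the JavaDoc warns about for iteration speed.
        Set<String> set = new HashSet<>(1_000_000, 0.75f);
        set.add("only");
        set.add("a few");
        set.add("elements");
        System.out.println(set.size()); // 3 elements, ~1M buckets to scan
    }
}
```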
Answer 1:
HashSet is implemented using a HashMap where the elements are the map keys. Since a map has a defined number of buckets, each of which may contain zero or more elements, iteration has to visit every bucket, whether it contains elements or not.
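One way to see this in practice is to time iteration over two sets with identical contents but very different initial capacities. This is only a rough sketch, not a proper benchmark; the class name and sizes are made up for illustration:

```java
import java.util.HashSet;
import java.util.Set;

public class IterationCostDemo {
    public static void main(String[] args) {
        // Same 1,000 elements in both sets; only the initial capacity differs.
        Set<Integer> compact = new HashSet<>();            // default capacity (16)
        Set<Integer> oversized = new HashSet<>(8_000_000); // millions of mostly empty buckets
        for (int i = 0; i < 1_000; i++) {
            compact.add(i);
            oversized.add(i);
        }
        // The oversized set must still scan all of its (mostly empty) buckets.
        System.out.println("compact:   " + timeIterationNanos(compact) + " ns");
        System.out.println("oversized: " + timeIterationNanos(oversized) + " ns");
    }

    static long timeIterationNanos(Set<Integer> set) {
        long start = System.nanoTime();
        long sum = 0;
        for (int value : set) {
            sum += value; // touch every element
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("(checksum " + sum + ")"); // use the result so the loop isn't optimized away
        return elapsed;
    }
}
```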
Answer 2:
Using LinkedHashSet instead follows the "linked" list of entries, so the number of empty buckets doesn't matter. Normally you wouldn't have a HashSet whose capacity is much more than double the size actually used. Even if you do, scanning a million entries, mostly null, doesn't take much time (milliseconds).
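For illustration, a minimal sketch of swapping in LinkedHashSet (the capacity and elements here are invented):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class LinkedHashSetIteration {
    public static void main(String[] args) {
        // Large initial capacity, but only a handful of elements.
        Set<String> set = new LinkedHashSet<>(1_000_000);
        set.add("a");
        set.add("b");
        set.add("c");

        // Iteration follows the internal linked list of entries, so it
        // touches only the 3 elements rather than the ~1M buckets,
        // and preserves insertion order as a bonus.
        for (String s : set) {
            System.out.println(s);
        }
    }
}
```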
Answer 3:
Why does iteration take time proportional to the sum (number of elements in the set + capacity of the backing map) and not only to the number of elements in the set itself?

The elements are dispersed inside the underlying HashMap, which is backed by an array. It is not known which buckets are occupied (only the total number of elements is known), so to iterate over all elements, every bucket must be checked.
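Conceptually the iterator has to do something like the following simplified scan. This is only a sketch of the idea, not the actual HashMap source (which uses linked nodes and trees rather than nested arrays):

```java
public class BucketScanSketch {
    public static void main(String[] args) {
        // 16 buckets, only 3 occupied: the scan still visits all 16 slots.
        String[][] table = new String[16][];
        table[2] = new String[] {"a"};
        table[7] = new String[] {"b", "c"}; // two elements hashed to the same bucket
        table[11] = new String[] {"d"};
        iterateBuckets(table);
    }

    // Simplified model of a bucket-array iterator: it must walk the whole
    // table (capacity) and skip empty slots before it can yield elements (size).
    static void iterateBuckets(String[][] table) {
        for (String[] bucket : table) {
            if (bucket == null) {
                continue; // empty bucket still costs a check
            }
            for (String element : bucket) {
                System.out.println(element);
            }
        }
    }
}
```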
Answer 4:
If your concern is the time it takes to iterate over the set, and you are using Java 6 or greater, take a look at this beauty:
ConcurrentSkipListSet
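For reference, a minimal usage sketch (elements are made up). Note the trade-off: ConcurrentSkipListSet keeps its elements sorted and its add/contains operations are O(log n) rather than the hash set's expected O(1), so it is not a drop-in replacement:

```java
import java.util.concurrent.ConcurrentSkipListSet;

public class SkipListSetDemo {
    public static void main(String[] args) {
        // No bucket array here: iteration cost depends only on the number
        // of elements, and elements come out in sorted order.
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        set.add(42);
        set.add(7);
        set.add(19);

        for (int value : set) {
            System.out.println(value); // prints 7, 19, 42 (natural ordering)
        }
    }
}
```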
Source: https://stackoverflow.com/questions/12069877/what-the-iteration-cost-on-a-hashset-also-depend-on-the-capacity-of-backing-map