Question
I understand that the complexity of the clear() operation is linear in the size of the container, because the destructors must be called. But what about primitive types (and POD)? It seems the best thing to do would be to set the vector size to 0, so that the complexity is constant.
If this is possible, is it also possible for std::unordered_map?
Answer 1:
"It seems the best thing to do would be to set the vector size to 0, so that the complexity is constant."

In general, resizing a vector to zero is linear in the number of elements it currently stores. Setting the vector's size to zero therefore offers no advantage over calling clear(); the two are essentially equivalent.
However, at least one implementation (libstdc++, source in bits/stl_vector.h) gives you O(1) complexity for primitive types by employing template specialization.

The implementation of clear() navigates its way to the std::_Destroy(from, to) function in bits/stl_construct.h, which performs a non-trivial compile-time optimization: it declares an auxiliary class template _Destroy_aux with a template parameter of type bool. The primary template, used when the parameter is false, defines a single static function __destroy whose body is a loop invoking each element's destructor by calling std::_Destroy(ptr); an explicit specialization for true defines __destroy with an empty body.
The trick comes on line 126:
std::_Destroy_aux<__has_trivial_destructor(_Value_type)>::
__destroy(__first, __last);
The auxiliary class is instantiated based on the result of the __has_trivial_destructor check. This compiler intrinsic returns true for built-in types and false for types with a non-trivial destructor. As a result, the call to __destroy becomes a no-op for int, double, and other POD types.
std::unordered_map is different from vector in that it may need to delete structures that represent "hash buckets" of POD objects, as opposed to deleting the objects themselves*. Optimizing clear to O(1) is possible, but it is heavily dependent on the implementation, so I would not count on it.

* The exact answer depends on the implementation: hash tables implementing collision resolution based on open addressing (linear probing, quadratic probing, etc.) may be able to delete all buckets in O(1); implementations based on separate chaining would have to delete buckets one-by-one, though.
Answer 2:
gcc's version of std::_Destroy, which is what clear() eventually uses, dispatches on whether the type has a trivial destructor, so in that case the complexity is constant even without an optimisation pass. However, I don't know how robust that template machinery is.
Source: https://stackoverflow.com/questions/11235975/what-is-the-complexity-of-stdvectortclear-when-t-is-a-primitive-type