I have some code where I routinely fill a vector with between 0 and 5000 elements. I know the maximum never exceeds 5000. Instead of initializing the vector multiple times, I would like to reuse the same vector and just empty it on each iteration. Is emptying the vector (for example with clear()) a constant-time operation, or does it depend on the elements?
Anything you do to remove the existing items from the vector needs to (potentially) invoke the destructor of each item being destroyed. Therefore, from the container's viewpoint, the best you can hope for is linear complexity.
That leaves only the question of what sort of items you store in the vector. If you store something like int, which the compiler knows ahead of time has no destructor to invoke, chances are good that removal will end up with constant complexity.
I doubt, however, that changing the syntax (e.g., clear() vs. resize() vs. erase(begin(), end())) will make any significant difference at all. The syntax doesn't change the fact that (in the absence of threading) invoking N destructors is an O(N) operation.
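As a quick illustration (a minimal sketch, not taken from the question's code; the Tracked type is made up), all three spellings end up running the same per-element destructors, which a counting destructor makes visible:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Hypothetical element type whose destructor just counts how often it runs.
    struct Tracked {
        static std::size_t destroyed;
        ~Tracked() { ++destroyed; }
    };
    std::size_t Tracked::destroyed = 0;

    int main() {
        auto run = [](auto emptier, const char* name) {
            Tracked::destroyed = 0;
            std::vector<Tracked> v(1000);          // 1000 elements to throw away
            emptier(v);
            std::cout << name << ": " << Tracked::destroyed << " destructors ran\n";
        };

        run([](auto& v) { v.clear(); },                   "clear()");
        run([](auto& v) { v.resize(0); },                 "resize(0)");
        run([](auto& v) { v.erase(v.begin(), v.end()); }, "erase(begin, end)");
    }

Each line of output reports 1000 destructor calls; the choice of member function does not change the amount of per-element work.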
The cost of clear() depends greatly on what the stored objects are, and in particular on whether they have a trivial destructor. If the type does not have a trivial destructor, then the call must destroy all stored objects; it is in fact an O(n) operation, and you cannot really do anything better.
Now, if the stored elements have trivial destructors, the implementation can optimize that cost away and clear() becomes a cheap O(1) operation (it just resets the size, i.e. the internal end pointer).
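If you want to check whether your element type qualifies for that fast path, the std::is_trivially_destructible trait reports it at compile time (a minimal sketch; Pod and Heavy are made-up example types):

    #include <string>
    #include <type_traits>

    struct Pod   { int x; double y; };     // trivial destructor: nothing to run per element
    struct Heavy { std::string name; };    // non-trivial: each std::string must be destroyed

    // clear() can only be optimized to O(1) for element types like Pod.
    static_assert(std::is_trivially_destructible<Pod>::value,
                  "Pod elements can be dropped without running destructors");
    static_assert(!std::is_trivially_destructible<Heavy>::value,
                  "Heavy elements force clear() to destroy each one");

    int main() {}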
Remember that to understand an asymptotic complexity you need to know what quantity it measures. In the case of clear() it counts the destructor calls, but if each of those (hidden) calls costs nothing, the operation is effectively a no-op.
If your struct has a non-trivial destructor, then it needs to be called for every element of the vector regardless of how you empty it. If your struct only has a trivial destructor, the compiler or the standard library implementation is allowed to optimize away the destruction entirely and give you an O(1) operation.
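For the use case in the question (repeatedly refilling a vector that never exceeds 5000 elements), a common pattern is to reserve once and clear() between iterations, so the allocation is reused and only the (possibly trivial) per-element destructions are paid each round. A minimal sketch, with the element type and fill logic invented for illustration:

    #include <vector>

    struct Sample { double value; int id; };   // hypothetical, trivially destructible element

    void process(const std::vector<Sample>& batch) { (void)batch; /* ... consume batch ... */ }

    void run_batches(int iterations) {
        std::vector<Sample> buf;
        buf.reserve(5000);                     // allocate for the known maximum once

        for (int i = 0; i < iterations; ++i) {
            buf.clear();                       // size -> 0, capacity stays at 5000
            for (int j = 0; j < 5000; ++j) {
                buf.push_back({j * 0.5, j});   // no reallocation: capacity already reserved
            }
            process(buf);
        }
    }

    int main() { run_batches(3); }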