Question
Simple question: would it be good for me to force myself to start using size_t (or unsigned long?) in places where I would normally use int when dealing with arrays or other large data structures?
Say you have a vector pointer:
auto myVectorPtr = &myVector;
Unknown to you, the size of this vector is larger than:
std::numeric_limits<int>::max();
and you have a loop:
for(int i = 0; i < myVectorPtr->size(); ++i)
wouldn't it be preferable to use
for(size_t i = 0; i < myVectorPtr->size(); ++i)
to avoid running into overflows?
I guess my question really is: are there any side effects of using size_t (or unsigned long) in arithmetic and other common operations? Is there anything I need to watch out for if I start using size_t (or unsigned long) instead of the classic int?
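For instance, I know unsigned arithmetic wraps around instead of going negative, which is the kind of side effect I mean. A minimal sketch (the values and names are made up for illustration):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // Unsigned subtraction wraps around: v.size() - 5 is a huge positive
    // value, not -2, so this condition is true even though it looks false.
    if (v.size() - 5 > 0)
        std::cout << "wrapped: " << v.size() - 5 << '\n';

    // A reverse loop with an unsigned index never terminates, because
    // i >= 0 is always true for an unsigned type:
    // for (std::size_t i = v.size() - 1; i >= 0; --i) { ... }
}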
Answer 1:
size_t is certainly better than int. The safest thing to do would be to use the actual size_type of the container, e.g. (note that decltype(*myVectorPtr) is a reference type, so std::remove_reference_t from <type_traits> is needed before ::size_type can be accessed):
for (std::remove_reference_t<decltype(*myVectorPtr)>::size_type i = 0; i < myVectorPtr->size(); ++i)
Unfortunately, auto cannot be used here because it would deduce its type from 0, not from the size() call.
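To see the deduction problem concretely: auto i = 0; makes i an int no matter what it will index. One workaround (a sketch, not part of the original answer) is to deduce the index type from the size() call itself, since decltype of that expression is exactly the container's size_type with no reference to strip:

for (decltype(myVectorPtr->size()) i = 0; i < myVectorPtr->size(); ++i)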
It reads a bit nicer to use iterator or range-based interfaces:
for (auto iter = begin(*myVectorPtr); iter != end(*myVectorPtr); ++iter)
or
for (auto &&item : *myVectorPtr)
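Putting it together, a self-contained sketch (the vector's contents are illustrative):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> myVector{10, 20, 30};
    auto myVectorPtr = &myVector;

    // Range-based for: no index variable, so no index type to get wrong.
    for (auto &&item : *myVectorPtr)
        std::cout << item << '\n';
}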
Source: https://stackoverflow.com/questions/28755721/c-self-enforcing-a-standard-size-t