When I'm reading code, I see a bunch of different integer types used, such as size_t, uint32_t, uint64_t, etc.
What is the motivation or purpose of doing this?
Why not just use int?
These are for platform independence.

size_t is, by definition, the type returned by sizeof. It is an unsigned type large enough to represent the size of the largest object on the target system.
Not so many years ago, 32 bits would have been enough for any platform. 64 bits is enough today. But who knows how many bits will be needed 5, 10, or 50 years from now?
By writing your code not to care -- i.e., always use size_t when you mean "size of an object" -- you can write code that will actually compile and run 5, 10, or 50 years from now. Or at least have a fighting chance.
Use the types to say what you mean. If for some reason you require a specific number of bits (probably only when dealing with an externally-defined format), use a size-specific type. If you want something that is "the natural word size of the machine" -- i.e., fast -- use int.
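Here is a sketch of the externally-defined-format case. The record layout (a 32-bit length followed by a 64-bit offset, little-endian) is invented for illustration; the point is that the format, not the platform, dictates the widths, so the fixed-width types document and enforce that requirement:

```c
#include <stdint.h>

/* Hypothetical on-disk record header: the format specifies exactly
   32 bits for the length and 64 bits for the offset. */
struct record_header {
    uint32_t payload_len;
    uint64_t file_offset;
};

/* Parse the header from a little-endian byte buffer, without
   relying on the host's struct layout or endianness. */
struct record_header parse_header(const unsigned char *buf) {
    struct record_header h;
    h.payload_len = (uint32_t)buf[0]
                  | (uint32_t)buf[1] << 8
                  | (uint32_t)buf[2] << 16
                  | (uint32_t)buf[3] << 24;
    h.file_offset = 0;
    for (int i = 0; i < 8; i++)
        h.file_offset |= (uint64_t)buf[4 + i] << (8 * i);
    return h;
}
```

If the format instead said "a length field", with no width given, int or size_t would be the wrong-headed question; the real answer would be to pin the format down first.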
If you are dealing with a programmatic interface like sizeof or strlen, use the data type appropriate for that interface, like size_t.
And never assign a value of one type to a variable of another unless the destination type is, by definition, large enough to hold every possible value of the source type.