Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic: `signed` and `unsigned` types.
Taking your questions one by one:
`signed` vs `unsigned`: depends on what you need. If you're sure the number will never be negative, use `unsigned`. This gives you the opportunity to use bigger numbers. For example, a `signed char` (1 byte) has the range [-128, 127], but if it's `unsigned`, the maximum value is roughly doubled (you have one more bit to use, the former sign bit), so an `unsigned char` can go up to 255 (all bits set to 1).
`short`, `int`, `long`, `long long` - these are pretty clear, aren't they? The smallest integer type (except `char`) is `short`, the next one is `int`, etc. But their sizes are platform dependent: `int` could be 2 bytes (long, long ago :D ) or 4 bytes (the usual case today). `long` could be 4 bytes (on a 32-bit platform) or 8 bytes (on a 64-bit platform), etc. `long long` is not a standard type in C++ yet (it will be in C++0x), but compilers usually support it, and `int64_t` is typically a typedef for it.
`int32_t` vs `int` - `int32_t` and the other fixed-width types guarantee their size. For example, `int32_t` is guaranteed to be exactly 32 bits, while, as I already said, the size of `int` is platform dependent.