Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic: signed and unsigned types. The default integral type (int) gets a "first among equals" preferential treatment in pretty much all languages, so we can use it as a default when there is no reason to prefer another type.
Such reasons might be:
- You need the semantics of unsigned types, for example for bit manipulation with the shift operators (<< and >>).
- You need a type of guaranteed size (e.g. int32_t) -- if your program is meant to be portable and expected to be compiled with different compilers, this becomes more important.
Update (expanding on guaranteed size types)
My personal opinion is that types with no guaranteed fixed size are more trouble than they're worth today. I won't go into the historical reasons that gave birth to them (briefly: source code portability), but the reality is that in 2011 very few people, if any, stand to benefit from them.
On the other hand, there are lots of things that can go wrong when using such types.
For these reasons (and there are probably others too), using such types is in theory a major pain. Additionally, unless extreme portability is a requirement, you don't stand to gain anything to compensate for it. And indeed, the whole purpose of typedefs like int32_t is to eliminate the use of loosely sized types entirely.
As a practical matter, if you know that your program is not going to be ported to another compiler or architecture, you can ignore the fact that the types have no fixed size and treat them as if they are the known size your compiler uses for them.
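To make that concrete, here is a minimal sketch (assuming your compiler ships <cstdint>; on older compilers <stdint.h> provides the same typedefs) that prints the size your compiler actually uses for int and long next to the guaranteed-width typedefs:
#include <cstdint>
#include <iostream>
int main()
{
    // Sizes of the loosely specified types depend on the compiler and platform...
    std::cout << "int:     " << sizeof(int)  << " bytes\n";
    std::cout << "long:    " << sizeof(long) << " bytes\n";
    // ...while the fixed-width typedefs are guaranteed wherever they exist.
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes\n";  // always 4
    std::cout << "int64_t: " << sizeof(std::int64_t) << " bytes\n";  // always 8
    return 0;
}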
In general you should use the type that suits the requirements of your program and promotes readability and future maintainability as much as possible.
Having said that, as Chris points out, people do use short vs int to save memory. Think about the following scenario: you have 1,000,000 (a fairly small number) values to store. An int is typically 4 bytes (32 bits) and a short is typically 2 bytes (16 bits). If you know you'll never need to represent a number larger than 32,767, you could use a short. Or you could use an unsigned short if you know you'll never need to represent a number larger than 65,535. This would save (4 - 2) bytes x 1,000,000 = about 2 MB of memory.
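If you want to verify that arithmetic on your own machine, a small sketch like this one (using sizeof instead of assumed sizes) prints the footprint of each choice:
#include <cstddef>
#include <iostream>
int main()
{
    const std::size_t count = 1000000;  // one million elements
    std::cout << "int array:   " << count * sizeof(int)   << " bytes\n";
    std::cout << "short array: " << count * sizeof(short) << " bytes\n";
    std::cout << "saved:       " << count * (sizeof(int) - sizeof(short)) << " bytes\n";
    return 0;
}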
Use shorter types to save memory and longer ones to be able to represent larger numbers. If you don't have such requirements, consider what APIs you'll be sharing data with and set yourself up so you don't have to cast or convert too much.
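To illustrate the "match your APIs" point, here is a small sketch; count_widgets is a made-up stand-in for a library call that reports a count as std::size_t, and storing its results in the same type means no casts or signed/unsigned conversion warnings later:
#include <cstddef>
#include <vector>

// Hypothetical library call that reports a count as std::size_t.
std::size_t count_widgets() { return 42; }

int main()
{
    // Using the API's own type avoids casting on every call.
    std::vector<std::size_t> counts;
    for (int i = 0; i < 10; ++i)
        counts.push_back(count_widgets());
    return 0;
}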
Maybe just for fun, here is a simple example showing how the result changes depending on the type you choose. Naturally, the real reason to choose one type over another is, in my opinion, related to other factors; for instance, the shift operators.
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int i;
    //unsigned long long x;   // swap in one of these alternatives to compare results
    //int x;
    short x;

    x = 2;
    for (i = 2; i < 15; ++i)
    {
        x = pow(x, 2);        // repeated squaring quickly exceeds the range of a short
        cout << x << endl;
    }
    return 0;
}
One by one to your questions:
signed and unsigned: it depends on what you need. If you're sure the number will never be negative, use unsigned. This gives you the opportunity to represent bigger numbers: for example, a signed char (1 byte) has the range [-128, 127], but an unsigned char can go up to 255 (all bits set), because the bit that would otherwise be the sign bit is available for the value.
short, int, long, long long: these are pretty clear, aren't they? The smallest integer type (apart from char) is short, the next one is int, and so on. But their sizes are platform dependent: int could be 2 bytes (long, long ago :D) or, usually, 4 bytes; long could be 4 bytes (on a 32-bit platform) or 8 bytes (on a 64-bit platform); and so on. long long is not a standard type in C++ yet (it will be in C++0x), but it's usually a typedef for int64_t.
int32_t vs int: int32_t and the other fixed-width types guarantee their size. For example, int32_t is guaranteed to be 32 bits, while, as I already said, the size of int is platform dependent. The snippet below illustrates all three points.
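Here is a small sketch tying the three points together (assuming <cstdint> is available and that your compiler already accepts long long, as most do even before C++0x); it prints the ranges and sizes discussed above:
#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    // signed vs unsigned: same size, different range.
    std::cout << "signed char:   " << (int)std::numeric_limits<signed char>::min()
              << " to " << (int)std::numeric_limits<signed char>::max() << "\n";
    std::cout << "unsigned char: 0 to "
              << (int)std::numeric_limits<unsigned char>::max() << "\n";

    // short, int, long, long long: sizes depend on the platform.
    std::cout << "short: "     << sizeof(short)     << " bytes, "
              << "int: "       << sizeof(int)       << " bytes, "
              << "long: "      << sizeof(long)      << " bytes, "
              << "long long: " << sizeof(long long) << " bytes\n";

    // int32_t: guaranteed to be exactly 32 bits wherever it exists.
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes\n";
    return 0;
}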
Typically, you use int, unless you need to expand it because you need a larger range, or you want to shrink it because you know the value only makes sense in a smaller range. It's incredibly rare that you would need to change type due to memory considerations: the difference between them is minuscule.