In 10, or even 5, years there will be no [Edit2: server or desktop] 32-bit CPUs.

So, are there any advantages in using int (32-bit) over long (64-bit)?
32-bit is still a completely valid data type, just as 16-bit and 8-bit types are still around. We didn't throw out 16-bit or 8-bit numbers when we moved to 32-bit processors. A 32-bit number is half the size of a 64-bit integer in terms of storage. If I were modeling a database and I knew the value couldn't go higher than what a 32-bit integer can store, I would use a 32-bit integer for storage purposes. I'd do the same thing with a 16-bit number where it fits. A 64-bit number takes more space in memory as well, albeit nothing significant given that today's personal laptops can ship with 8 GB of memory.
There is no disadvantage to int other than that it's a smaller data type. It's like asking, "Where should I store my sugar? In a sugar bowl, or a silo?" Well, that depends entirely on how much sugar you have.
Processor architecture shouldn't have much to do with what size data type you use. Use what fits. When we have 512-bit processors, we'll still have bytes.
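To make the size point concrete, here's a minimal C++ sketch (the same idea carries over to int versus long in C# or Java); the RowNarrow/RowWide structs are made up purely for illustration:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical row for a table whose values comfortably fit in 32 bits.
struct RowNarrow {
    std::int32_t id;
    std::int32_t quantity;
};

// The same row modeled with 64-bit integers "just in case".
struct RowWide {
    std::int64_t id;
    std::int64_t quantity;
};

int main() {
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes, "
              << "int64_t: " << sizeof(std::int64_t) << " bytes\n";
    std::cout << "RowNarrow: " << sizeof(RowNarrow) << " bytes, "
              << "RowWide: " << sizeof(RowWide) << " bytes\n";
    // Typically prints 4/8 and 8/16: the wider model doubles per-row
    // storage without buying any range the data actually needs.
}
```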
EDIT:
To address some of the comments and edits:
I'm not sure about "there will be no 32-bit desktop CPUs". ARM is currently 32-bit and has declared little interest in 64-bit, for now. That doesn't fit too well with "desktop" in your description, but I also think that in 5-10 years the landscape of devices we write software for will change drastically as well. Tablets can't be ignored; people will want C# and Java apps to run on them, considering Microsoft officially ported Windows 8 to ARM.
If you want to start using long, go ahead; there is no reason not to. If we are only looking at the CPU (ignoring storage size) and assuming we are on an x86-64 architecture, then it doesn't make much difference.
Assuming that we are sticking with the x86 architecture, that's true as well. You may end up with a slightly larger stack, depending on whatever framework you are using.
Sorry for the C++ answer.
If the size of the type matters, use a sized type: uint8_t, int32_t, int64_t.
If the size doesn't matter, use an expressive type: size_t, ptrdiff_t, ssize_t.
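A minimal sketch of that split, assuming C++; the PacketHeader layout is invented just to show a case where the width is dictated by something external:

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// The width matters: this layout mirrors an external format, so the
// field sizes are spelled out explicitly. (The format is made up here.)
struct PacketHeader {
    std::uint8_t version;
    std::int32_t payload_length;
    std::int64_t timestamp_us;
};

// The width doesn't matter: we only need "big enough to index the
// container", which is exactly what std::size_t expresses.
std::int64_t sum(const std::vector<std::int64_t>& values) {
    std::int64_t total = 0;
    for (std::size_t i = 0; i < values.size(); ++i) {
        total += values[i];
    }
    return total;
}

int main() {
    std::vector<std::int64_t> values{1, 2, 3};
    std::cout << "sum: " << sum(values) << "\n";
    std::cout << "header size (with padding): " << sizeof(PacketHeader) << "\n";
}
```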
I know that D has sized types and size_t; I'm not sure about Java or C#.
If you're on a 64-bit processor, and you've compiled your code for 64-bit, then at least some of the time, long is likely to be more efficient because it matches the register size. But whether that will really impact your program much is debatable. Also, if you're using long all over the place, you're generally going to use more memory - both on the stack and on the heap - which could negatively impact performance. There are too many variables to know for sure how well your program will perform using long by default instead of int. There are reasons why it could be faster and reasons why it could be slower. It could be a total wash.
The typical thing to do is to just use int if you don't care about the size of the integer. If you need a 64-bit integer, then you use long. If you're trying to use less memory and int is far more than you need, then you use byte or short.
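To put rough numbers on the memory side of that choice, here is a small C++ sketch using the fixed-width equivalents (int32_t for int, int64_t for long); the element count is arbitrary:

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const std::size_t count = 1000000;  // one million values, chosen arbitrarily

    std::vector<std::int32_t> narrow(count);  // "int"-sized elements
    std::vector<std::int64_t> wide(count);    // "long"-sized elements

    std::cout << "32-bit elements: " << narrow.size() * sizeof(std::int32_t) << " bytes\n";
    std::cout << "64-bit elements: " << wide.size() * sizeof(std::int64_t) << " bytes\n";
    // Roughly 4 MB versus 8 MB of element storage: defaulting to the wider
    // type doubles the footprint (and halves how many values fit per cache
    // line) even when the extra range goes unused.
}
```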
x86-64 CPUs are going to be designed to be efficient at processing 32-bit programs, so it's not like using int is going to seriously degrade performance. Some things will be faster due to better alignment when you use 64-bit integers on a 64-bit CPU, but other things will be slower due to the increased memory requirements. And there are probably a variety of other factors involved that could affect performance in either direction.
If you really want to know which is going to do better for your particular application in your particular environment, you're going to need to profile it. This is not a case where there is a clear advantage of one over the other.
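If you do profile, even a throwaway micro-benchmark along these lines (a C++ sketch; the container types and counts are arbitrary) can hint at whether integer width matters for your workload, though profiling the real application is far more trustworthy:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

// Time a single pass that sums the values; returns elapsed microseconds.
template <typename T>
long long time_sum(const std::vector<T>& values, T& sink) {
    const auto start = std::chrono::steady_clock::now();
    sink = std::accumulate(values.begin(), values.end(), T{0});
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main() {
    const std::size_t count = 10000000;  // arbitrary workload size
    std::vector<std::int32_t> narrow(count, 1);
    std::vector<std::int64_t> wide(count, 1);

    std::int32_t sum32 = 0;
    std::int64_t sum64 = 0;
    std::cout << "int32_t pass: " << time_sum(narrow, sum32) << " us\n";
    std::cout << "int64_t pass: " << time_sum(wide, sum64) << " us\n";
    std::cout << sum32 << " " << sum64 << "\n";  // keep the results observable

    // The numbers depend heavily on compiler, flags, and hardware; the point
    // is to measure your own workload rather than guess from first principles.
}
```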
Personally, I would advise that you follow the typical route of using int when you don't care about the size of the integer and using the other types when you do.