Does an Integer variable in C occupy 2 bytes or 4 bytes? What are the factors that it depends on?
Most of the textbooks say integer variables occupy 2 bytes. But when I print sizeof(int) on my machine, the output is 4 bytes. What explains the difference?
The only guarantees are that char must be at least 8 bits wide, short and int must be at least 16 bits wide, long must be at least 32 bits wide, and that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) (the same is true for the unsigned versions of those types).
int may be anywhere from 16 to 64 bits wide depending on the platform.
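A minimal sketch (assuming a C11 compiler, for _Static_assert) that documents these ordering guarantees at compile time and prints the actual sizes on the current platform:

#include <stdio.h>

/* These ordering guarantees hold on every conforming implementation,
   so the asserts below can never fire; they only document the rules. */
_Static_assert(sizeof(char) <= sizeof(short), "char <= short");
_Static_assert(sizeof(short) <= sizeof(int), "short <= int");
_Static_assert(sizeof(int) <= sizeof(long), "int <= long");

int main(void) {
    printf("char : %zu byte(s)\n", sizeof(char));   /* always 1 by definition */
    printf("short: %zu byte(s)\n", sizeof(short));
    printf("int  : %zu byte(s)\n", sizeof(int));
    printf("long : %zu byte(s)\n", sizeof(long));
    return 0;
}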
Is the size of C “int” 2 bytes or 4 bytes?
The answer is "yes" / "no" / "maybe" / "maybe not".
The C programming language specifies the following: the smallest addressable unit, known as char and also called a "byte", is exactly CHAR_BIT bits wide, where CHAR_BIT is at least 8.
So one byte in C is not necessarily an octet, i.e. 8 bits. In the past, one of the first platforms to run C code (and Unix) had a 4-byte int - but that int had 36 bits in total, because CHAR_BIT was 9!
int is supposed to be the natural integer size for the platform, with a range of at least -32767 ... 32767. You can get the size of int in platform bytes with sizeof(int); when you multiply that value by CHAR_BIT, you know how wide it is in bits.
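For example, a quick sketch of that calculation:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* width of int in bits = size in C bytes times bits per byte */
    printf("int is %zu byte(s) of %d bits each, i.e. %zu bits wide\n",
           sizeof(int), CHAR_BIT, sizeof(int) * CHAR_BIT);
    return 0;
}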
While 36-bit machines are mostly dead, there are still platforms with non-8-bit bytes. Just yesterday there was a question about a Texas Instruments MCU with 16-bit bytes, which has a C99- and C11-compliant compiler.
On the TMS320C28x it seems that char, short and int are all 16 bits wide, and hence one byte. long int is 2 bytes and long long int is 4 bytes. The beauty of C is that one can still write an efficient program for a platform like this, and even do it in a portable manner!
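For instance, one portable way to ask for "at least N bits" instead of a particular byte count is the least-width types from <stdint.h>; a small sketch (on a platform like the one above, int_least8_t would simply come out 16 bits wide, since there is no 8-bit addressable unit):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* least-width types are guaranteed to exist everywhere;
       exact-width types like int8_t need not exist on machines
       whose smallest addressable unit is wider than 8 bits */
    int_least8_t  small  = 100;
    int_least16_t medium = 30000;
    int_least32_t large  = 2000000000;

    printf("int_least8_t : %zu byte(s)\n", sizeof small);
    printf("int_least16_t: %zu byte(s)\n", sizeof medium);
    printf("int_least32_t: %zu byte(s)\n", sizeof large);
    return 0;
}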
The answer to this question depends on which platform you are using.
But irrespective of platform, you can reliably assume at least the following ranges:
[8-bit] signed char: -127 to 127
[8-bit] unsigned char: 0 to 255
[16-bit] signed short: -32767 to 32767
[16-bit] unsigned short: 0 to 65535
[32-bit] signed long: -2147483647 to 2147483647
[32-bit] unsigned long: 0 to 4294967295
[64-bit] signed long long: -9223372036854775807 to 9223372036854775807
[64-bit] unsigned long long: 0 to 18446744073709551615
#include <stdio.h>

int main(void) {
    /* sizeof yields a size_t, so cast to int to match the %d specifier */
    printf("size of int: %d\n", (int)sizeof(int));
    return 0;
}
This prints 4 on my machine, but it is machine dependent.
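And if you want the actual ranges of your implementation rather than the guaranteed minimums listed above, <limits.h> has them; a small sketch:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* the macros expand to the real limits of this implementation,
       which may exceed the guaranteed minimums */
    printf("int  : %d to %d\n", INT_MIN, INT_MAX);
    printf("long : %ld to %ld\n", LONG_MIN, LONG_MAX);
    printf("llong: %lld to %lld\n", LLONG_MIN, LLONG_MAX);
    return 0;
}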
This is a good source for answering this question.
But the truthful answer to this question is always "yes, both": it depends.
It depends on your architecture. If you're working on a 16-bit machine or smaller, int will typically be 2 bytes (16 bits); on a 32-bit or wider machine it is typically 4 bytes (32 bits).
To find out, get your program to output something readable and use the "sizeof" operator. It returns the size in bytes of the declared data type. But be careful using this with arrays.
If you declare int t[12]; then sizeof(t) will return 12*4 bytes (with a 4-byte int). To get the number of elements in the array, just use sizeof(t)/sizeof(t[0]).
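For example, in the scope where the array is defined:

#include <stdio.h>

int main(void) {
    int t[12];
    /* both operands are known at compile time here,
       because t is a real array in this scope, not a pointer */
    size_t len = sizeof(t) / sizeof(t[0]);
    printf("sizeof(t) = %zu, length = %zu\n", sizeof(t), len);
    return 0;
}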
If you are going to write a function that should calculate the size of an array passed to it, remember that the following does not work:

#include <stdio.h>

typedef int array[12];

int function(array t){
    /* t has decayed to int*, so this is sizeof(int*)/sizeof(int), not 12 */
    int size_of_t = sizeof(t)/sizeof(t[0]);
    return size_of_t;
}

int main(void){
    array t = {1,1,1};      /* remember: t = [1,1,1,0,...,0] */
    int a = function(t);    /* passing t passes only a pointer, equivalent to int* t */
    printf("%d\n", a);      /* prints sizeof(int*)/sizeof(int), e.g. 2 on a 64-bit platform, not 12 */
    return 0;
}
So this won't return the number of elements. If you define an array and want its length afterwards in the same scope, use sizeof. If you pass an array to a function, remember that what gets passed is just a pointer to the first element, so sizeof can no longer see the array. In the first case you always know what size your array has. For the second case, pass the size along explicitly: define function2(array t, int size_of_t), compute the length with sizeof at the call site (where t is still a real array), and hand both to function2, which can then work with variable array sizes.
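A sketch of that approach (the helper name sum is just illustrative; size_of_t follows the naming above):

#include <stdio.h>

/* the array parameter decays to int*, so the caller must supply the length */
int sum(const int *t, int size_of_t) {
    int total = 0;
    for (int i = 0; i < size_of_t; i++)
        total += t[i];
    return total;
}

int main(void) {
    int t[12] = {1, 1, 1};                      /* remaining elements are zero-initialized */
    int len = (int)(sizeof(t) / sizeof(t[0]));  /* 12, computed where t is still an array */
    printf("sum = %d\n", sum(t, len));          /* prints 3 */
    return 0;
}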
There's no specific answer. It depends on the platform. It is implementation-defined. It can be 2, 4 or something else.
The idea behind int was that it was supposed to match the natural "word" size on the given platform: 16 bits on 16-bit platforms, 32 bits on 32-bit platforms, 64 bits on 64-bit platforms, you get the idea. However, for backward compatibility purposes some compilers prefer to stick to a 32-bit int even on 64-bit platforms.
The time of the 2-byte int is long gone though (16-bit platforms?), unless you are using some embedded platform with a 16-bit word size. Your textbooks are probably very old.
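For example, on a typical 64-bit desktop platform a quick check like the sketch below usually shows a 32-bit int next to a 64-bit pointer (and a long whose size differs between Linux and Windows):

#include <stdio.h>

int main(void) {
    /* compare int with the pointer size to see whether your
       64-bit platform kept int at 32 bits for compatibility */
    printf("int    : %zu byte(s)\n", sizeof(int));
    printf("long   : %zu byte(s)\n", sizeof(long));
    printf("void * : %zu byte(s)\n", sizeof(void *));
    return 0;
}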