Does an Integer variable in C occupy 2 bytes or 4 bytes? What are the factors that it depends on?
Most of the textbooks say integer variables occupy 2 bytes. But when I run a program printing the successive addresses of an array of integers it shows the difference of 4.
C99 N1256 standard draft
http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf
The size of int and all other integer types is implementation-defined; C99 only specifies:
5.2.4.2.1 "Sizes of integer types <limits.h>
" gives the minimum sizes:
1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown [...]
- UCHAR_MAX 255 // 2⁸ − 1
- USHRT_MAX 65535 // 2¹⁶ − 1
- UINT_MAX 65535 // 2¹⁶ − 1
- ULONG_MAX 4294967295 // 2³² − 1
- ULLONG_MAX 18446744073709551615 // 2⁶⁴ − 1
6.2.5 "Types" then says:
8 For any two integer types with the same signedness and different integer conversion rank (see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a subrange of the values of the other type.
and 6.3.1.1 "Boolean, characters, and integers" determines the relative conversion ranks:
1 Every integer type has an integer conversion rank defined as follows:
- The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
- The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.
- For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3.
This is one of the points in C that can be confusing at first, but the C standard only specifies a minimum range for integer types that is guaranteed to be supported. int is guaranteed to be able to hold -32767 to 32767, which requires 16 bits. In that case, int is 2 bytes. However, implementations are free to go beyond that minimum, and many modern compilers make int 32-bit (which also means 4 bytes pretty ubiquitously).
The reason your book says 2 bytes is most probably because it's old. At one time, this was the norm. In general, you should always use the sizeof operator if you need to find out how many bytes a type occupies on the platform you're using.
To address this, C99 added new types where you can explicitly ask for an integer of a certain size, for example int16_t or int32_t. Prior to that, there was no universal way to get an integer of a specific width (although most platforms provided similar types on a per-platform basis).
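As a minimal sketch of both points (assuming a hosted C99 implementation that provides the exact-width types in <stdint.h>):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sizeof reports the size in bytes on this particular platform */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));

    /* int32_t is exactly 32 bits wherever it is provided,
       regardless of what plain int happens to be */
    int32_t fixed = 123456;
    printf("sizeof(int32_t) = %zu bytes, value = %d\n", sizeof fixed, (int)fixed);
    return 0;
}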
Does an Integer variable in C occupy 2 bytes or 4 bytes?
That depends on the platform you're using, as well as how your compiler is configured. The only authoritative answer is to use the sizeof operator to see how big an integer is in your specific situation.
What are the factors that it depends on?
Range might be best considered, rather than size. Both will vary in practice, though it's much more fool-proof to choose variable types by range than by size, as we shall see. It's also important to note that the standard encourages us to choose our integer types based on range rather than size, but for now let's ignore standard practice and let our curiosity explore sizeof, bytes and CHAR_BIT, and integer representation... let's burrow down the rabbit hole and see it for ourselves...
sizeof, bytes and CHAR_BIT
The following statement, taken from the C standard (linked to above), describes this in words that I don't think can be improved upon.
The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type. The size is determined from the type of the operand.
Assuming a clear understanding of sizeof, this leads us to a discussion about bytes. It's commonly assumed that a byte is eight bits, when in fact CHAR_BIT tells you how many bits are in a byte. That's just another one of those nuances which isn't considered when talking about the common two (or four) byte integers.
Let's wrap things up so far:
- sizeof => size in bytes, and
- CHAR_BIT => number of bits in byte
Thus, depending on your system, sizeof (unsigned int) could be any value greater than zero (not just 2 or 4), since if CHAR_BIT is 16, then a single (sixteen-bit) byte has enough bits in it to represent the sixteen-bit integer described by the standard (quoted below). That's not necessarily useful information, is it? Let's delve deeper...
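If you want to see those raw numbers on your own machine, here is a quick sketch (assuming a hosted implementation; the values printed are whatever your platform provides):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("bytes per unsigned int  : %zu\n", sizeof (unsigned int));
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);
    printf("bits per unsigned int   : %zu\n", sizeof (unsigned int) * CHAR_BIT);
    return 0;
}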
Integer representation
The C standard specifies the minimum precision/range for all standard integer types (and CHAR_BIT, too, fwiw) here. From this, we can derive a minimum for how many bits are required to store the value, but we may as well just choose our variables based on ranges. Nonetheless, a huge part of the detail required for this answer resides here. For example, the following shows that the standard unsigned int requires (at least) sixteen bits of storage:
UINT_MAX 65535 // 2¹⁶ - 1
Thus we can see that unsigned int requires (at least) 16 bits, which is where you get the two bytes (assuming CHAR_BIT is 8)... and later when that limit increased to 2³² - 1, people were stating 4 bytes instead. This explains the phenomenon you've observed:
Most of the textbooks say integer variables occupy 2 bytes. But when I run a program printing the successive addresses of an array of integers it shows the difference of 4.
You're using an ancient textbook and compiler which is teaching you non-portable C; the author who wrote your textbook might not even be aware of CHAR_BIT. You should upgrade your textbook (and compiler), and strive to remember that I.T. is an ever-evolving field that you need to stay ahead of to compete... Enough about that, though; let's see what other non-portable secrets those underlying integer bytes store...
Value bits are what the common misconceptions appear to be counting. The above example uses an unsigned integer type, which typically contains only value bits, so it's easy to miss the devil in the detail.
Sign bits... In the above example I quoted UINT_MAX as being the upper limit for unsigned int because it's a trivial example to extract the value 16 from the comment. For signed types, in order to distinguish between positive and negative values (that's the sign), we need to also include the sign bit.
INT_MIN -32768 // -(2¹⁵)
INT_MAX +32767 // 2¹⁵ - 1
Padding bits... While it's not common to encounter computers that have padding bits in integers, the C standard allows that to happen; some machines (i.e. this one) implement larger integer types by combining two smaller (signed) integer values together... and when you combine signed integers, you get a wasted sign bit. That wasted bit is considered padding in C. Other examples of padding bits might include parity bits and trap bits.
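As a rough illustration of the distinction (a hypothetical sketch; most desktop platforms will report zero padding bits), you can compare the storage bits reported by sizeof with the value bits implied by UINT_MAX:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int max = UINT_MAX;
    size_t value_bits = 0;

    while (max != 0) {   /* count the bits needed to represent UINT_MAX */
        value_bits++;
        max >>= 1;
    }

    printf("storage bits: %zu\n", sizeof (unsigned int) * CHAR_BIT);
    printf("value bits  : %zu\n", value_bits);
    printf("padding bits: %zu\n", sizeof (unsigned int) * CHAR_BIT - value_bits);
    return 0;
}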
As you can see, the standard seems to encourage considering ranges like INT_MIN..INT_MAX and other minimum/maximum values from the standard when choosing integer types, and discourages relying upon sizes, as there are other subtle factors likely to be forgotten such as CHAR_BIT and padding bits which might affect the value of sizeof (int) (i.e. the common misconceptions of two-byte and four-byte integers neglect these details).
Mostly it depends on the platform you are using. It varies from compiler to compiler. Nowadays, in most compilers, an int is 4 bytes.
If you want to check what your compiler is using, you can use sizeof(int).
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(int));
    printf("%zu\n", sizeof(short));
    printf("%zu\n", sizeof(long));
    return 0;
}
The only thing the C compiler promises is that the size of short must be equal to or less than the size of int, and the size of long must be equal to or greater than the size of int. So if the size of int is 4, then the size of short may be 2 or 4, but not larger than that. The same is true for long and int.
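A minimal sketch of checking those ordering guarantees at compile time (assuming a C11 compiler, which provides static_assert through <assert.h>):

#include <assert.h>

/* These hold on every conforming implementation; the build fails otherwise. */
static_assert(sizeof(short) <= sizeof(int),  "short must not be larger than int");
static_assert(sizeof(int)   <= sizeof(long), "int must not be larger than long");

int main(void) { return 0; }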
This depends on the implementation, but usually on x86 and other popular architectures like ARM, int takes 4 bytes. You can always check at compile time using sizeof(int) or whatever other type you want to check.
If you want to make sure you use a type of a specific size, use the types in <stdint.h>.
Is the size of C “int” 2 bytes or 4 bytes?
Does an Integer variable in C occupy 2 bytes or 4 bytes?
C allows "bytes" to be something other than 8 bits per "byte".
CHAR_BIT: number of bits for smallest object that is not a bit-field (byte). C11dr §5.2.4.2.1 1
A value of something other than 8 is increasingly uncommon. For maximum portability, use CHAR_BIT rather than 8. The size of an int in bits in C is sizeof(int) * CHAR_BIT.
#include <limits.h>
#include <stdio.h>

printf("(int) Bit size %zu\n", sizeof(int) * CHAR_BIT);
What are the factors that it depends on?
The int bit size is commonly 32 or 16 bits. C specified minimum ranges:
minimum value for an object of type int: INT_MIN -32767
maximum value for an object of type int: INT_MAX +32767
C11dr §5.2.4.2.1 1
The minimum range for int forces the bit size to be at least 16, even if the processor was "8-bit". A size like 64 bits is seen in specialized processors. Other values like 18, 24, 36, etc. have occurred on historic platforms or are at least theoretically possible. Modern coding rarely worries about non-power-of-2 int bit sizes.
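A small sketch of what that guarantee means in code (assuming a C11 compiler for static_assert): whatever the actual bit size turns out to be, these checks pass on every conforming implementation.

#include <assert.h>
#include <limits.h>

static_assert(INT_MAX >= 32767,  "int must be able to hold at least +32767");
static_assert(INT_MIN <= -32767, "int must be able to hold at least -32767");

int main(void) { return 0; }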
The computer's processor and architecture drive the int bit size selection. Yet even with 64-bit processors, the compiler's int size may be 32-bit for compatibility reasons, as large code bases depend on int being 32-bit (or 32/16).
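For instance, a minimal sketch (the exact output depends on your ABI; on a typical LP64 Linux or macOS build it prints 4, 8 and 8, while 64-bit Windows keeps long at 4 bytes):

#include <stdio.h>

int main(void)
{
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(void*) = %zu\n", sizeof(void *));
    return 0;
}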