How is the size of int decided?
Is it true that the size of int depends on the processor? For a 32-bit machine it will be 32 bits, and for a 16-bit machine it's 16.
It depends on the implementation. The only things the C standard guarantees are that

    sizeof(char) == 1

and

    sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

and also some minimum representable values for the types, which imply that char is at least 8 bits wide, int is at least 16 bits, etc.
So it must be decided by the implementation (compiler, OS, ...) and be documented.
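As a quick check, here is a minimal sketch that prints what a given implementation actually decided. The numbers it prints are implementation-defined; only the ordering above and the minimum ranges are guaranteed.

    /* Print the sizes the current implementation chose.
       Output varies between compilers/platforms. */
    #include <stdio.h>

    int main(void)
    {
        printf("char:      %zu\n", sizeof(char));       /* always 1 */
        printf("short:     %zu\n", sizeof(short));
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        return 0;
    }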
It depends on the compiler. If you are using Turbo C, the integer size is 2 bytes; if you are using the GNU GCC compiler, the integer size is typically 4 bytes. It depends only on the implementation of the C compiler.
The size of an integer basically depends on the architecture of your system. Generally, on a 16-bit machine your compiler will support a 2-byte int, and on a 32-bit system the compiler will typically use 4 bytes for an int.
In more detail, the data bus comes into the picture: "16-bit" and "32-bit" refer to the width of the data bus in your system.

    x86 (16-bit) -> DOS -> Turbo C -> size of int: 2 bytes
    x386 (32-bit) -> Windows/Linux -> GCC -> size of int: 4 bytes
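To see the width your own compiler chose, a small sketch: sizeof reports bytes, and CHAR_BIT from <limits.h> gives the bits per byte, so multiplying the two gives the bit width of int on that system.

    /* Bit width of int on the current implementation:
       sizeof(int) gives bytes, CHAR_BIT gives bits per byte (at least 8). */
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("int is %zu bytes = %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }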
Making int as wide as possible is not the best choice. (The choice is made by the ABI designers.)
A 64-bit architecture like x86-64 can efficiently operate on int64_t, so it's natural for long to be 64 bits. (Microsoft kept long as 32-bit in their x86-64 ABI, for various portability reasons that make sense given the existing codebases and APIs. This is basically irrelevant because portable code that actually cares about type sizes should be using int32_t and int64_t instead of making assumptions about int and long.)
Having int be int32_t actually makes for better, more efficient code in many cases. An array of int uses only 4 bytes per element, so it has only half the cache footprint of an array of int64_t. Also, specific to x86-64, 32-bit operand-size is the default, so 64-bit instructions need an extra code byte for a REX prefix. So code density is better with 32-bit (or 8-bit) integers than with 16- or 64-bit. (See the x86 wiki for links to docs / guides / learning resources.)
If a program requires 64-bit integer types for correct operation, it won't use int. (Storing a pointer in an int instead of an intptr_t is a bug, and we shouldn't make the ABI worse to accommodate broken code like that.) A programmer writing int probably expected a 32-bit type, since most platforms work that way. (The standard of course only guarantees 16 bits.)
Since there's no expectation that int will be 64-bit in general (e.g. on 32-bit platforms), and making it 64-bit would make some programs slower (and almost no programs faster), int is 32-bit in most 64-bit ABIs.
Also, there needs to be a name for a 32-bit integer type, for int32_t to be a typedef for.
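As a sketch of the point about fixed-width types (assuming a C99 environment with <stdint.h> and <inttypes.h>; note intptr_t is optional in the standard, though available on common platforms):

    /* Portable code that needs exact sizes uses the <stdint.h> types
       rather than assuming anything about int or long. */
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a = 100000;          /* exactly 32 bits on every platform */
        int64_t b = 10000000000LL;   /* exactly 64 bits */
        int x = 42;
        intptr_t p = (intptr_t)&x;   /* an integer wide enough to hold a pointer */

        printf("a = %" PRId32 ", b = %" PRId64 "\n", a, b);
        printf("pointer as integer: %" PRIdPTR "\n", p);
        return 0;
    }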
It depends on the compiler.
For example, try an old Turbo C compiler: it would give a size of 16 bits for an int, because the word size (the size the processor could address with the least effort) at the time the compiler was written was 16 bits.
Yes, the size of int depends on the compiler.
For a 16-bit int, the range is -32768 to 32767; for 32-bit and 64-bit compilers it will be larger.
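To see the actual range on a given implementation, you can print the limits from <limits.h>. A minimal example; the standard only requires INT_MIN <= -32767 and INT_MAX >= 32767, and most compilers exceed that.

    /* The actual range of int on the current compiler comes from <limits.h>. */
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("INT_MIN = %d\n", INT_MIN);   /* at most -32767 per the standard */
        printf("INT_MAX = %d\n", INT_MAX);   /* at least +32767 per the standard */
        return 0;
    }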