Please help me understand the output of the following C program:
#include <stdio.h>

int main() {
    float x = 4.0;
    printf("%f\n", x);
    printf("%d\n", x);
    return 0;
}
floats are automatically promoted to doubles when passed as ... parameters (similarly to how chars and short ints are promoted to ints). When printf() looks at a format specifier (%d or %f or whatever), it grabs the raw data it has received, interprets it according to that specifier (as int or double or whatever), and then prints it.
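If the intent is to print the value as an integer, the fix is to hand printf() exactly what the format specifier promises instead of lying about the type. A minimal sketch, reusing the x from the question:

#include <stdio.h>

int main(void) {
    float x = 4.0;
    printf("%f\n", x);        /* x is promoted to double; %f expects a double: OK */
    printf("%d\n", (int)x);   /* explicit conversion to int; %d expects an int: OK */
    printf("%.0f\n", x);      /* or keep it a double and simply drop the decimals */
    return 0;
}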
printf("%d\n",x)
is wrong because you are passing a double
to printf()
but lie to it that it's going to be an int
. printf()
makes you pay for the lie. It takes 4 bytes (most likely, but not necessarily 4 bytes) from its parameters, which is the size of an int
, it grabs those bytes from where you have previously put 8 bytes (again, most likely, but not necessarily 8 bytes) of the double
4.0, of which those 4 bytes happen to be all zeroes and then it interprets them as an integer and prints it. Indeed, powers of 2 in the IEEE-754 double precision format normally have 52 zero bits, or more than 6 8-bit bytes that are zero.
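You can see those zero bytes for yourself by copying the object representation of the double into a byte array. A minimal sketch (the order in which the bytes are printed depends on the platform's endianness; on a little-endian x86 machine the all-zero low bytes come first):

#include <stdio.h>
#include <string.h>

int main(void) {
    double d = 4.0;
    unsigned char bytes[sizeof d];

    /* Copy the raw object representation of the double into a byte array. */
    memcpy(bytes, &d, sizeof d);

    /* Print each byte in hex; for 4.0 only the exponent bits are non-zero. */
    for (size_t i = 0; i < sizeof d; ++i)
        printf("%02x ", bytes[i]);
    printf("\n");   /* typical little-endian x86 output: 00 00 00 00 00 00 10 40 */
    return 0;
}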
Those "most likely, but not necessarily" words mean that the C standard does not mandate a fixed size and range for types and they may vary from compiler to compiler, from OS to OS. These days 4-byte ints
and 8-byte doubles
are the most common (if we consider, e.g. the x86 platform).
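If you want to see what your own compiler and platform use, sizeof reports it directly; a small sketch:

#include <stdio.h>

int main(void) {
    /* These sizes are implementation-defined; the output shows what your build uses. */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}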