#include <stdio.h>
int main() {
    float a = 1234.5f;
    printf("%d\n", a);
    return 0;
}
It displays a 0! How is that possible?
The reason is that printf() is a pretty dumb function. It does not check types at all. If you say the first argument is an int (and that is what you are saying with %d), it believes you and takes just the bytes needed for an int. In this case, assuming your machine uses four-byte int and eight-byte double (the float is converted to a double when passed to printf()), the first four bytes of a will be just zeroes, and this is what gets printed.
The %d specifier tells printf to expect an integer. So the first four (or two, depending on the platform) bytes of the floating-point value are interpreted as an integer. If they happen to be zero, a zero is printed.
The binary representation of 1234.5 is something like
1.00110100101 * 2^10 (the exponent is decimal)
Since a float passed to printf() is promoted to an IEEE 754 double, the bytes would be (if I made no mistake)
01000000 10010011 01001010 00000000 00000000 00000000 00000000 00000000
On an Intel (x86) system with little endianness (i.e. the least significant byte coming first), this byte sequence is reversed in memory, so the first four bytes are zero. That is what printf prints out.
See the Wikipedia article on IEEE 754 for the details of this floating-point representation.
printf won't automatically convert the float to an integer, because the two have different storage formats. If you want the integer value, use an (int) cast:
#include <stdio.h>
int main() {
    float a = 1234.5f;
    printf("%d\n", (int)a);
    return 0;
}
Because you invoked undefined behaviour: you violated the contract of the printf() function by lying to it about its parameter types, so the compiler is free to do whatever it pleases. It could make the program output "dksjalk is a ninnyhead!!!" and technically it would still be right.
%d is for a decimal int; %f is for a double (and thus for a float, which is promoted to double). See the printf() documentation for the full list of conversion specifiers.
You are getting 0 because floats and integers are represented differently.
Technically speaking, there is no single printf: each library implements its own. Therefore, trying to study printf's behavior the way you are doing is not going to be of much use. If you want to study the behavior of printf on your system, you should read the documentation, and look at the source code for printf if it is available for your library. For example, on my Macbook, I get the output 1606416304 with your program.
Having said that, when you pass a float to a variadic function, the float is passed as a double. So, your program is equivalent to having declared a as a double.
To examine the bytes of a double, you can see this answer to a recent question here on SO.
Let's do that:
#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    unsigned char *p = (unsigned char *)&a;
    size_t i;

    printf("size of double: %zu, int: %zu\n", sizeof(double), sizeof(int));
    for (i = 0; i < sizeof a; ++i)
        printf("%02x ", p[i]);
    putchar('\n');
    return 0;
}
When I run the above program, I get:
size of double: 8, int: 4
00 00 00 00 00 4a 93 40
So, the first four bytes of the double turned out to be 0, which may be why you got 0 as the output of your printf call.
For more interesting results, we can change the program a bit:
#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    int b = 42;

    printf("%d %d\n", a, b);
    return 0;
}
When I run the above program on my Macbook, I get:
42 1606416384
With the same program on a Linux machine, I get:
0 1083394560