Can you please explain the output behavior of this program?
#include <stdio.h>

int main()
{
    float a = 12.5;
    printf("%d\n", a);
    printf("%d\n", *(int *)&a);
    return 0;
}
Floating-point numbers are stored in IEEE 754 format. Passing a float to printf() with the %d specifier is undefined behavior: in a variadic call the float is promoted to double, and %d then tells printf() to interpret the first sizeof(int) bytes it picks up as an integer. For 12.5, the low-order 32 bits of the IEEE 754 double representation are all zero, which is why on a typical 32-bit little-endian machine you will likely see 0.
I highly recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Let's see.
float a = 12.5f;
Casting a pointer and casting a value are very different operations. When you cast a float to an int, you ask for the float value to be converted to an int value, which involves an actual transformation of the data; when you cast a float pointer to an int pointer, you merely override type checking. A pointer is just a memory address: casting it to another pointer type doesn't transform anything.
So, what you see is what the bit pattern of your float looks like when treated as an integer.
The bit patterns most computers use to represent floats are described by the IEEE-754 standard. Integers, on the other hand, are just, well, integers represented in binary. Therefore, taking the bits of a floating-point number and interpreting them as an integer yields a very different value.
I think it prints 12 12: with %d there is no difference, and *(int *)&a is just a.
You don't want to cast the float pointer to an integer pointer. Floats and integers are not stored the same way, and if you do that no conversion takes place, so you will get garbage printed to the screen. If, however, you cast the float value to an int value, the compiler will convert it from its internal floating-point representation to an integer for you. So you should replace the (int *) with (int).
Also, %d is for decimal (integer) values. What you want for the first printf is %f, which is for float values.
What you want is this:
#include <stdio.h>
int main()
{
float a = 12.5;
printf("%f\n", a); //changed %d -> %f
printf("%d\n", (int)a); //changed *(int *)& -> (int) for proper conversion
return 0;
}
verified here: http://codepad.org/QD4kzAC9
The difference is that in one case the compiler performs an implicit conversion (I assume you get 12 from the first one), while in the second case it takes the memory containing the floating-point value and reinterprets it as an int. It would help if you included the actual output.