I tried to print a character as a float in printf and got the output 0. What is the reason for this?
Also:
char c = 'z';
printf("%f %X", c, c);
The question is, what value did you expect to see? Why would you expect something other than 0?
The short answer to your question is that the behavior of printf is undefined if the type of an argument doesn't match its conversion specifier. The %f conversion specifier expects its corresponding argument to have type double; if it doesn't, all bets are off, and the exact output will vary.
The printf() function is a variadic function, which means that you can pass a variable number of arguments of unspecified types to it. This also means that the compiler doesn't know what types the function expects, so it cannot convert the arguments to the correct types. (Modern compilers can warn you if you get the arguments to printf wrong, if you invoke them with enough warning flags.)
For historical reasons, you cannot pass an integer argument of smaller rank than int, or a floating type of smaller rank than double, to a variadic function. A float will be converted to double, and a char will be converted to int (or unsigned int on bizarre implementations) through a process called the default argument promotions.
When printf parses its parameters (arguments are passed to a function; parameters are what the function receives), it retrieves them using whatever method is appropriate for the type specified by the format string. The "%f" specifier expects a double. The "%X" specifier expects an unsigned int.
If you pass an int and printf tries to retrieve a double, you invoke undefined behaviour.
If you pass an int and printf tries to retrieve an unsigned int, you invoke undefined behaviour.
Undefined behaviour may include (but is not limited to) printing strange values, crashing your program or (the most insidious of them all) doing exactly what you expect.
Source: n1570 (The final public draft of the current C standard)
Because printf, like any function that works with varargs, e.g. int foobar(const char *fmt, ...) {}, tries to interpret its parameters as certain types.
If you say "%f", then pass c (as a char), printf will try to read a double (not a float: a float argument would have been promoted to double anyway).
You can read more here: var_arg (even if this is C++, it still applies).
You need to use a cast operator like this:
char c = 'z';
printf("%f %X", (float)c, c);
or
printf("%f %X", (double)c, c);
In Xcode, if I do not do this, I get the warning "Format specifies 'double' but the argument has type 'char'", and the output is 0.000000.
To understand the floating point issue, consider reading: http://en.wikipedia.org/wiki/IEEE_floating_point
As for the hexadecimal, let me guess... the output was something like... 7A?
This is because of encodings. The machine has to represent information in some format, and usually that format entails either giving meanings to certain bits in a number, or having a table mapping symbols to numbers, or both.
Floating-point numbers are sometimes represented as a (sign, mantissa, exponent) triplet packed into a 32- or 64-bit value; characters are often represented in a format named ASCII, which establishes which number corresponds to each character you type.