I was going through some questions on scope rules and came across a code snippet along the lines of the one below:
#include <stdio.h>
int main()
{
    int x = 1, y = 2, z = 3;
    {
        float y = 20.0f;                 /* shadows the outer y */
        int z = 100;                     /* shadows the outer z */
        printf("%d %d %d\n", x, y, z);   /* %d used even for the float y */
    }
}
x, y, and z are resolved to their most local definitions.
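As a quick illustration of that rule (separate from the question's code, with made-up values), here is a minimal sketch of how name lookup always picks the innermost declaration:

#include <stdio.h>

int x = 0;                   /* file scope */

int main(void)
{
    int x = 1;               /* shadows the file-scope x */
    {
        int x = 2;           /* shadows the function-scope x */
        printf("%d\n", x);   /* prints 2: the most local x wins */
    }
    printf("%d\n", x);       /* prints 1: back to the outer x */
    return 0;
}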
When you use an incorrect printf conversion specifier, the behaviour is undefined. Here y is a float, but you are using %d to print it (on a later line).
printf uses varargs, and once you use an incorrect specifier (%d instead of %f in this case), the remaining arguments are read from the stack at the wrong offsets; this misinterpretation of the stack data causes many painful surprises.
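To make the varargs point concrete, here is a minimal sketch of how a printf-like function walks its arguments with va_arg (the function name show is made up for the example). Each va_arg must name the promoted type that was actually passed; asking for int where a promoted double sits is exactly the kind of mismatch that throws the walk off:

#include <stdarg.h>
#include <stdio.h>

/* A tiny printf-like function: prints `count` floating-point arguments. */
static void show(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        /* float arguments arrive as double because of default argument
           promotion, so double is the type va_arg has to ask for */
        double d = va_arg(ap, double);
        printf("arg %d = %f\n", i, d);
    }
    va_end(ap);
}

int main(void)
{
    show(2, 20.0f, 3.5);   /* both values are passed as double */
    return 0;
}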
Decoding This UB
This is what might be happening on your machine (one possible explanation). Because of default argument promotion, 20.0f is passed as a double, so the bit pattern 0x4034000000000000 is pushed onto the stack. sizeof(int) on your little-endian machine is 4 bytes, so when the float is printed with %d, the first four bytes, 0x00000000, are consumed and interpreted as an int, which prints 0. The next %d consumes 0x40340000, interprets it as an int, and prints 1077149696. The final argument, 100 (0x00000064), is left on the stack unconsumed, and printf returns.
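You can verify that bit pattern without invoking any undefined behaviour. A minimal sketch, assuming IEEE 754 doubles and a little-endian machine as above:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 20.0f;   /* what 20.0f becomes after default argument promotion */
    uint64_t bits;
    uint32_t lo, hi;

    /* memcpy is a well-defined way to inspect an object's representation */
    memcpy(&bits, &d, sizeof bits);
    memcpy(&lo, (const char *)&d, sizeof lo);       /* first four bytes */
    memcpy(&hi, (const char *)&d + 4, sizeof hi);   /* next four bytes */

    printf("bit pattern of 20.0: 0x%016" PRIX64 "\n", bits);    /* 0x4034000000000000 */
    printf("first half as a 32-bit int:  %" PRIu32 "\n", lo);   /* 0 on little-endian */
    printf("second half as a 32-bit int: %" PRIu32 "\n", hi);   /* 1077149696 */
    return 0;
}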
But never rely on this; always write code whose behaviour is well defined.
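For this snippet, that means matching every conversion specifier to the (promoted) type of its argument. A minimal sketch of the well-defined version, assuming the layout shown at the top:

#include <stdio.h>

int main(void)
{
    int x = 1, y = 2, z = 3;
    {
        float y = 20.0f;
        int z = 100;
        /* %f matches the float (promoted to double), %d matches the ints */
        printf("%d %f %d\n", x, y, z);   /* prints: 1 20.000000 100 */
    }
    printf("%d %d %d\n", x, y, z);       /* outer scope again: 1 2 3 */
    return 0;
}

Modern compilers will also catch the original mismatch at compile time if format warnings are enabled (for example -Wall or -Wformat in GCC and Clang).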