Test the following code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *yytext = "0";
    const float f = (float)atof(yytext);
    /* reads a size_t through a pointer that really points at a float */
    const size_t t = *(const size_t *)&f;
    printf("%zu\n", t);
    return 0;
}
Why would you think that t should be 0?
Or, more accurately phrased, "Why would you think that the binary representation of a floating point zero would be the same as the binary representation of an integer zero?"
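To illustrate the general point about representations, here is a minimal sketch of mine (not part of the question; it assumes 32-bit IEEE 754 floats and uses memcpy so the inspection itself is well defined):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    const float samples[] = { 0.5f, 1.0f, 2.0f };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        uint32_t bits;
        /* copy the object representation instead of aliasing through a cast */
        memcpy(&bits, &samples[i], sizeof bits);
        printf("%3.1f -> 0x%08x\n", samples[i], (unsigned)bits);
    }
    return 0;
}

This prints 0x3f000000, 0x3f800000 and 0x40000000: the bit patterns bear no resemblance to integer representations of those values.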
-O3 is not deemed "sane"; -O2 is generally the upper threshold, except maybe for some multimedia apps.
Some apps can't even go that far, and die if you go beyond -O1.
If you have a new enough GCC (I'm on 4.3 here), it may support this command:
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
If you're careful, you may be able to go through that list and find the specific optimization enabled by -O3 that triggers this bug.
From man gcc:
The output is sensitive to the effects of previous command line options, so for example it is possible to find out which optimizations are enabled at -O2 by using:
-O2 --help=optimizers
Alternatively you can discover which binary optimizations are enabled by -O3 by using:
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts
diff /tmp/O2-opts /tmp/O3-opts | grep enabled
This is bad C code. Your cast breaks the C aliasing rules, and the optimiser is free to do things that break this code. You will probably find that GCC has scheduled the size_t read before the floating-point write (to hide FP pipeline latency).
You can add the -fno-strict-aliasing switch, or use a union or memcpy to reinterpret the value in a standards-compliant way.
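As an illustration only (a sketch, not the poster's original code; it assumes the float is 32 bits so uint32_t matches its size), both well-defined approaches look like this:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    const char *yytext = "0";
    const float f = (float)atof(yytext);

    /* union punning: GCC explicitly documents this as supported */
    union { float f; uint32_t u; } pun;
    pun.f = f;
    printf("union:  0x%08x\n", (unsigned)pun.u);

    /* memcpy: valid under any conforming C compiler; the compiler now
       sees the dependency and cannot move the read before the write */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("memcpy: 0x%08x\n", (unsigned)bits);

    return 0;
}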