Test the following code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *yytext = "0";
    const float f = (float)atof(yytext);
    /* reconstructed from the truncated listing: read the float's storage
       through a size_t pointer, the aliasing violation discussed below */
    const size_t bits = *(const size_t *)&f;
    printf("%zu\n", bits);
    return 0;
}
This is bad C code. The cast breaks C's strict aliasing rules, and the optimiser is free to do things that break this code. You will probably find that GCC has scheduled the size_t read before the floating-point write (to hide floating-point pipeline latency).
You can compile with the -fno-strict-aliasing switch, or use a union (or, in C++, a reinterpret_cast) to reinterpret the value in a standards-compliant way.
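For example, a union-based version of the same read might look like the sketch below. This is a minimal illustration, not your exact code: it assumes float is 32 bits on your platform, and the union member names and the uint32_t width are my own choices.

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main(void)
{
    const char *yytext = "0";
    const float f = (float)atof(yytext);

    /* Reinterpret the float's bytes through a union instead of a cast
       pointer. This is well-defined type punning under GCC, and it keeps
       the read the same width as the float rather than a size_t. */
    union { float f; uint32_t u; } pun;
    pun.f = f;

    printf("bits of %f: 0x%08" PRIx32 "\n", f, pun.u);
    return 0;
}

With -fno-strict-aliasing the original cast will also behave as you expect, but the union keeps the code valid at the default optimisation settings.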