Today I had a discussion with a friend of mine, and we debated for a couple of hours about "compiler optimization". I defended the point that sometimes a compiler optimization can introduce bugs, or at least unexpected behaviour.
As I recall, early Delphi 1 had a bug where the results of Min and Max were reversed. There was also an obscure bug with some floating-point values, but only when the floating-point value was used within a DLL. Admittedly, it has been more than a decade, so my memory may be a bit fuzzy.
I had a problem with .NET 4 yesterday, with something that looked like this:
double x = 0.4;
if (x < 0.5) { below5(); } else { above5(); }
And it would call above5(). But if I actually used x somewhere, it would call below5():
double x = 0.4;
if (x < 0.5) { below5(); } else { System.Console.Write(x); above5(); }
Not the exact same code but similar.
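I can't say what caused this particular .NET behaviour, but one well-known way a floating-point comparison can flip between optimized and unoptimized builds is extended precision: on the x87 FPU, intermediate results may be held in 80-bit registers while values spilled to memory are rounded to 64 bits. A minimal C++ sketch of that general effect (not necessarily what happened here; the outcome depends on the target and flags, e.g. g++ -m32 -mfpmath=387 -O2):

// Sketch only: the result depends on target and optimization flags.
#include <cstdio>

volatile double tiny = 1e-17;   // volatile prevents constant folding

int main() {
    double stored = 1.0 + tiny;     // may be rounded to 64 bits if spilled to memory
    if (stored == 1.0 + tiny)       // may be evaluated in an 80-bit x87 register
        std::printf("equal\n");
    else
        std::printf("not equal\n"); // possible with x87 extended precision + optimization
    return 0;
}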
When a bug goes away by disabling optimizations, most of the time it's still your fault
I am responsible for a commercial app, written mostly in C++ - started with VC5, ported to VC6 early on, and now successfully ported to VC2008. It has grown to over a million lines in the last 10 years.
In that time I could confirm a single code-generation bug that occurred when aggressive optimizations were enabled.
So why am I complaining? Because in the same time there were dozens of bugs that made me doubt the compiler - but each time it turned out to be my insufficient understanding of the C++ standard. The standard makes room for optimizations the compiler may or may not make use of.
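A typical example of such a source bug (a minimal, made-up sketch, not code from the app): a signed-overflow check that relies on undefined behaviour, which the optimizer is allowed to fold away, so the problem only shows up in optimized builds.

// Made-up example: the overflow check itself relies on undefined behaviour.
#include <climits>
#include <cstdio>

bool will_overflow(int value) {
    // Signed integer overflow is undefined, so an optimizing compiler may
    // treat this expression as always false and delete the check.
    return value + 1 < value;
}

int main() {
    // Often prints 1 without optimizations and 0 with them - which looks like
    // a code-generation bug, but is permitted by the standard.
    std::printf("%d\n", will_overflow(INT_MAX));
    return 0;
}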
Over the years, on different forums, I've seen many posts blaming the compiler that ultimately turned out to be bugs in the original code. No doubt many of them were obscure bugs that needed a detailed understanding of concepts used in the standard, but they were source-code bugs nonetheless.
Why am I replying so late? To say: stop blaming the compiler before you have confirmed it's actually the compiler's fault.
It can happen. It has even affected Linux.
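One widely cited example is a Linux kernel security bug where GCC removed a null-pointer check because the pointer had already been dereferenced a few lines earlier - in C and C++ the dereference lets the optimizer assume the pointer is non-null. A sketch of that pattern (the type and function names here are made up):

// Made-up names; only the pattern matters.
#include <cstdio>

struct device {
    int flags;
};

int get_flags(device *dev) {
    int flags = dev->flags;   // dereference first: the compiler may now assume dev != nullptr
    if (dev == nullptr)       // ...so an optimizer is allowed to remove this check entirely
        return -1;
    return flags;
}

int main() {
    device d{ 42 };
    std::printf("flags = %d\n", get_flags(&d));
    return 0;
}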