Today I had a discussion with a friend of mine and we debated for a couple of hours about "compiler optimization".
I defended the point that sometimes compiler optimization might introduce bugs, or at least undesired behaviour.
Compiler (and runtime) optimization can certainly introduce undesired behaviour - but at least it should only happen if you're relying on unspecified behaviour (or indeed making incorrect assumptions about well-specified behaviour).
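To make that concrete, here is a small sketch of my own (not from the original discussion) of code that relies on undefined behaviour and therefore changes its observable behaviour under optimization:

```cpp
// Signed integer overflow is undefined behaviour in C++. An optimizing
// compiler is allowed to assume it never happens, so this overflow check
// can be folded away entirely.
#include <iostream>
#include <limits>

bool will_overflow(int x) {
    return x + 1 < x;   // undefined behaviour when x == INT_MAX
}

int main() {
    int x = std::numeric_limits<int>::max();
    // At -O0 this often prints "true" (the addition wraps on typical hardware);
    // at -O2, GCC and Clang typically fold the comparison to "false".
    std::cout << std::boolalpha << will_overflow(x) << '\n';
}
```

The program was never correct; the optimizer merely changed which wrong behaviour you observe.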
Now beyond that, of course compilers can have bugs in them. Some of those may be around optimisations, and the implications could be very subtle - indeed they're likely to be, as obvious bugs are more likely to be fixed.
Assuming you include JITs as compilers, I've seen bugs in released versions of both the .NET JIT and the HotSpot JVM (I don't have details at the moment, unfortunately) which were reproducible in particularly odd situations. Whether they were due to particular optimisations or not, I don't know.
I've never heard of or used a compiler whose directives could not alter the behaviour of a program. Generally this is a good thing, but it does require you to read the manual.
And I had a recent situation where a compiler directive 'removed' a bug. Of course, the bug is really still there, but I have a temporary workaround until I fix the program properly.
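As an illustration of how a directive can paper over a bug, here is a hypothetical sketch (not the actual code from that situation) using GCC's optimize pragma:

```cpp
// The real bug is the missing 'volatile' on 'flag': an optimizing build may
// hoist the load out of the loop and spin forever. Forcing -O0 for just this
// function hides the symptom until the code is fixed properly.
#pragma GCC push_options
#pragma GCC optimize ("O0")          // temporary workaround, not a fix
void wait_for_flag(int* flag) {      // should really take 'volatile int*'
    while (*flag == 0) {
        // spin until another thread or a device sets the flag
    }
}
#pragma GCC pop_options
```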
It's theoretically possible, sure. But if you don't trust the tools to do what they are supposed to do, why use them? That said, anyone arguing from the position of
"compilers are built by smart people and do smart things, and thus can never go wrong"
is making a foolish argument.
So, until you have reason to believe that a compiler is doing so, why posture about it?
Because of exhaustive testing and the relative simplicity of actual C++ code (C++ has under 100 keywords/operators), compiler bugs are relatively rare. Often, bad programming style is the only thing that encounters them, and usually the compiler will crash or produce an internal compiler error instead of silently miscompiling. The only exception to this rule is GCC. GCC, especially older versions, had a lot of experimental optimizations enabled at -O3, and sometimes even at the other -O levels. GCC also targets so many backends that this leaves more room for bugs in its intermediate representation.
Aliasing can cause problems with certain optimizations, which is why compilers have an option to disable those optimizations. From Wikipedia:
To enable such optimizations in a predictable manner, the ISO standard for the C programming language (including its newer C99 edition) specifies that it is illegal (with some exceptions) for pointers of different types to reference the same memory location. This rule, known as "strict aliasing", allows impressive increases in performance[citation needed], but has been known to break some otherwise valid code. Several software projects intentionally violate this portion of the C99 standard. For example, Python 2.x did so to implement reference counting,[1] and required changes to the basic object structs in Python 3 to enable this optimisation. The Linux kernel does this because strict aliasing causes problems with optimization of inlined code.[2] In such cases, when compiled with gcc, the option -fno-strict-aliasing is invoked to prevent unwanted or invalid optimizations that could produce incorrect code.
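To illustrate the kind of code the quoted passage is talking about, here is a minimal sketch of my own (not taken from Python or the Linux kernel): type-punning a float through an unsigned int pointer violates the strict aliasing rule, so an optimizing compiler may cache or reorder the accesses; compiling with -fno-strict-aliasing, or copying the bytes with memcpy, avoids the problem.

```cpp
#include <cstdio>
#include <cstring>

unsigned int bits_of(float f) {
    // Violates strict aliasing: an unsigned int lvalue aliasing a float object.
    // At -O2 and above the compiler may assume the two never overlap.
    return *reinterpret_cast<unsigned int*>(&f);
}

unsigned int bits_of_safe(float f) {
    // Well defined: copy the bytes instead of punning the pointer.
    unsigned int u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main() {
    std::printf("%08x\n", bits_of(1.0f));      // may misbehave in optimized builds
    std::printf("%08x\n", bits_of_safe(1.0f)); // prints 3f800000 on IEEE-754 systems
}
```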
Everything that you can possibly imagine doing with or to a program will introduce bugs.