Most of the conversations around undefined behavior (UB) talk about how there are some platforms that can do this, or some compilers do that.
What if neither the platform nor the compiler changes? Nothing is changing but the code, and the UB is not implementation-defined.
Changing the code is itself sufficient to trigger different behavior from the optimizer with respect to undefined behavior, so code that happened to work can easily break after seemingly minor changes that expose more optimization opportunities. One example is a change that allows a function to be inlined. This is covered well in What Every C Programmer Should Know About Undefined Behavior #2/3, which says:
While this is intentionally a simple and contrived example, this sort of thing happens all the time with inlining: inlining a function often exposes a number of secondary optimization opportunities. This means that if the optimizer decides to inline a function, a variety of local optimizations can kick in, which change the behavior of the code. This is both perfectly valid according to the standard, and important for performance in practice.
Compiler vendors have become very aggressive with optimizations around undefined behavior, and compiler upgrades can start exploiting previously tolerated buggy code:
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.
"Software that doesn't change, isn't being used."
If you are doing something unusual with pointers, there's probably a way to use casts to define what you want. By their nature, the results of UB will not simply be "whatever the compiler did with the UB the first time". For example, when you read memory through an uninitialized pointer, you get a random address that can differ every time you run the program.
Undefined behavior generally means you are doing something tricky, and you would be better off doing the task another way. For instance, this is undefined:
printf("%d %d", ++i, ++i);
It's hard to know what the intent would even be here; the code should be rethought.
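If the intent was simply two successive increments printed in order (only a guess at the intent), a well-defined rewrite sequences them explicitly:

#include <cstdio>

int main() {
    int i = 0;

    // Sequence the increments explicitly so each read of i happens at a
    // well-defined point, instead of leaving the order to the compiler.
    int first = ++i;    // i is now 1
    int second = ++i;   // i is now 2
    std::printf("%d %d\n", first, second);   // always prints "1 2"
}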
OS changes, innocuous system changes (different hardware version!), or compiler changes can all cause previously "working" UB to not work.
But it is worse than that.
Sometimes a change to an unrelated compilation unit, or to far-away code in the same compilation unit, can cause previously "working" UB to stop working. One example is two inline functions or methods with different definitions but the same signature: one is silently discarded during linking, and completely innocuous code changes can change which one is discarded.
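A minimal sketch of that situation (file and function names are made up): two translation units that both define an inline function with the same signature but different bodies.

// a.cpp
inline int answer() { return 1; }
int from_a() { return answer(); }

// b.cpp
inline int answer() { return 2; }   // same signature, different body: an ODR violation
int from_b() { return answer(); }

// main.cpp
#include <cstdio>
int from_a();
int from_b();

int main() {
    // The program is ill-formed, no diagnostic required. In practice the linker
    // silently keeps one definition of answer(), so this may print "1 1" or "2 2",
    // and an unrelated change to link order or inlining can flip the result.
    std::printf("%d %d\n", from_a(), from_b());
}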
Code that works in one context can suddenly stop working with the same compiler, OS, and hardware when you use it in a different context. An example of this is violating strict aliasing: the compiled code might work when called at spot A, but when inlined (possibly at link time!) the code can change meaning.
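A small sketch of the kind of strict-aliasing violation meant here (function names are illustrative), alongside a memcpy-based version that stays defined (std::bit_cast in C++20 is another option):

#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <cstring>

// UB: reads a float object through an unrelated pointer type. The optimizer is
// allowed to assume the two types never alias, especially once this is inlined.
std::uint32_t bits_ub(float value) {
    return *reinterpret_cast<std::uint32_t*>(&value);
}

// Defined: memcpy is the sanctioned way to reinterpret an object's bytes.
std::uint32_t bits_ok(float value) {
    std::uint32_t out;
    std::memcpy(&out, &value, sizeof out);
    return out;
}

int main() {
    std::printf("%08" PRIx32 " %08" PRIx32 "\n", bits_ub(1.0f), bits_ok(1.0f));
}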
Your code, if part of a larger project, could conditionally call some 3rd party code (say, a shell extension that previews an image type in a file open dialog) that changes the state of some flags (floating point precision, locale, integer overflow flags, division by zero behavior, etc). Your code, which worked fine before, now exhibits completely different behavior.
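One defensive pattern (a sketch; third_party_hook is a made-up stand-in for such an extension) is to snapshot and restore the floating-point environment around calls into code you don't control:

#include <cfenv>
#include <cstdio>

// Stand-in for a third-party shell extension or plugin that quietly
// changes process-wide state.
void third_party_hook() {
    std::fesetround(FE_DOWNWARD);   // e.g. flips the rounding mode
}

int main() {
    std::fenv_t saved;
    std::fegetenv(&saved);          // snapshot the FP environment

    third_party_hook();             // untrusted call that may change flags

    std::fesetenv(&saved);          // restore it before our own math runs
    std::printf("back to round-to-nearest: %d\n", std::fegetround() == FE_TONEAREST);
}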
Next, many kinds of undefined behavior are inherently non-deterministic. Accessing memory through a pointer after it has been freed (even writing to it) might appear to work 99 times out of 100, but on the 100th the page has been unmapped, or something else has been written there before you got to it. Now you have memory corruption. It passes all your tests, but you lacked complete knowledge of what can go wrong.
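For example (a contrived sketch), a use-after-free like this may print the old contents on most runs and garbage or a crash on others, depending on what the allocator and the rest of the program did in between; tools like AddressSanitizer flag it immediately:

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    char* p = static_cast<char*>(std::malloc(16));
    if (!p) return 1;
    std::strcpy(p, "hello");
    std::free(p);

    // UB: reading freed memory. Often the old bytes are still there, so this
    // "works"; sometimes the block has been reused or unmapped, and it doesn't.
    std::printf("%s\n", p);
}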
By using undefined behavior, you commit yourself to a complete understanding of the C++ standard, everything your compiler can do in that situation, and every way the runtime environment can react. You have to audit the produced assembly, not the C++ source, possibly for the entire program, every time you build it! You also commit everyone who reads that code, or who modifies that code, to that level of knowledge.
It is sometimes still worth it.
Fastest Possible Delegates uses UB and knowledge about calling conventions to be a really fast non-owning std::function-like type.
Impossibly Fast Delegates competes. It is faster in some situations, slower in others, and is compliant with the C++ standard.
Using the UB might be worth it, for the performance boost. It is rare that you gain something other than performance (speed or memory usage) from such UB hackery.
Another example I've seen is when we had to register a callback with a poor C API that just took a function pointer. We'd create a function (compiled without optimization), copy it to another page, modify a pointer constant within that function, then mark that page as executable, allowing us to secretly pass a pointer along with the function pointer to the callback.
An alternative implementation would be to have some fixed-size set of functions (10? 100? 1000? 1 million?), all of which look up a std::function in a global array and invoke it. This would put a limit on how many such callbacks we could install at any one time, but in practice it was sufficient.
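A minimal sketch of that alternative, with made-up names; a real version would also hand out and recycle slot indices:

#include <array>
#include <cstddef>
#include <cstdio>
#include <functional>

// Global slot table that the trampoline functions read from.
constexpr std::size_t kMaxCallbacks = 4;   // pick 10, 100, 1000, ... as needed
std::array<std::function<void()>, kMaxCallbacks> g_slots;

// One plain function per slot, each with the signature the C API expects.
using c_callback = void (*)();

template <std::size_t N>
void trampoline() { g_slots[N](); }

constexpr std::array<c_callback, kMaxCallbacks> g_trampolines = {
    &trampoline<0>, &trampoline<1>, &trampoline<2>, &trampoline<3>,
};

// Stand-in for the C API that only accepts a bare function pointer.
void register_callback(c_callback cb) { cb(); }

int main() {
    int captured = 42;
    g_slots[0] = [captured] { std::printf("callback saw %d\n", captured); };
    register_callback(g_trampolines[0]);   // prints "callback saw 42"
}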
Think about it a different way.
Undefined behavior is ALWAYS bad, and should never be used, because you never know what you will get.
However, you can temper that with the following observation:
Behavior can be defined by parties other than just the language specification
Thus you should never rely on UB, ever, but you can find alternate sources which state that a certain behavior is DEFINED behavior for your compiler in your circumstances.
Yakk gave great examples regarding the fast delegate classes. In those cases, the author explicitly claims that they are engaging in undefined behavior according to the spec. However, they then go on to explain a business reason why the behavior is better defined than that. For example, they declare that the memory layout of a member function pointer is unlikely to change in Visual Studio, because changing it would impose rampant compatibility costs that Microsoft finds distasteful. Thus they declare that the behavior is "de facto defined behavior."
Similar behavior can be seen in the typical Linux implementation of pthreads (to be compiled by gcc). There are cases where they make assumptions about what optimizations a compiler is allowed to invoke in multithreaded scenarios. Those assumptions are stated plainly in comments in the source code. How is this "de facto defined behavior?" Well, pthreads and gcc go kind of hand in hand. It would be considered unacceptable to add an optimization to gcc which broke pthreads, so nobody will ever do it.
However, you cannot make the same assumption. You may say "pthreads does it, so I should be able to as well." Then someone makes an optimization and updates gcc to work with it (perhaps using __sync calls instead of relying on volatile). Now pthreads keeps functioning... but your code doesn't anymore.
Also consider the case of MySQL (or was it PostgreSQL?), where a buffer overflow was found. The overflow had actually been checked for in the code, but the check relied on undefined behavior, so a newer gcc started optimizing the entire check away.
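The bug was reportedly of this general shape; the following is a simplified sketch, not the actual database code: an overflow check that itself relies on signed overflow, which the compiler is entitled to delete.

#include <climits>
#include <cstdio>

// Intended as an overflow check, but signed overflow is UB, so the compiler
// may assume value + delta never wraps and remove this branch entirely.
bool will_overflow_ub(int value, int delta) {
    return value + delta < value;
}

// A defined check (for positive delta): compare against the limit before adding.
bool will_overflow_ok(int value, int delta) {
    return delta > 0 && value > INT_MAX - delta;
}

int main() {
    // With optimization, the first result may well be 0: the check has "vanished".
    std::printf("%d %d\n", will_overflow_ub(INT_MAX, 1), will_overflow_ok(INT_MAX, 1));
}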
So, in all, look for an alternate source that defines the behavior, rather than using it while it is undefined. It is totally legit to find a reason why you know 1.0/0.0 yields infinity (and 0.0/0.0 yields NaN) rather than causing a floating point trap to occur. But never use that assumption without first proving that it is a valid definition of behavior for you and your compiler.
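For the floating-point example, one such alternate source is the implementation's own promise of IEEE 754 arithmetic, which can be checked before relying on it (a sketch):

#include <cstdio>
#include <limits>

int main() {
    // Only lean on 1.0/0.0 producing infinity (and 0.0/0.0 producing NaN)
    // if the implementation actually promises IEC 559 / IEEE 754 arithmetic.
    static_assert(std::numeric_limits<double>::is_iec559,
                  "this code assumes IEEE 754 doubles");

    volatile double zero = 0.0;   // volatile keeps the divisions at run time
    std::printf("%f %f\n", 1.0 / zero, 0.0 / zero);   // inf nan on IEEE 754 systems
}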
And please oh please oh please remember that we upgrade compilers every now and then.
Undefined behavior can be altered by things such as the ambient temperature, which causes rotating hard disk latencies to change, which causes thread scheduling to change, which in turn changes the contents of the random garbage that's getting evaluated.
In short, not safe unless the compiler or the OS specifies the behavior (since the language standard didn't).
What you are referring to is more likely implementation-defined behavior, not undefined behavior. The former is when the standard doesn't tell you what will happen, but each implementation documents its choice, so it will work the same as long as you use the same compiler and the same platform. An example is assuming that an int is 4 bytes long (a compile-time check for this is sketched below). UB is something more serious: there the standard says nothing at all. It may work for a given compiler and platform, but it may also work only in some cases.
An example is using uninitialized values. If you use an uninitialized bool in an if, you may get true or false, and it may happen to always be what you want, but the code can break in several surprising ways.
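A classic illustration of those surprising ways (behavior that has been observed with some optimizing compilers, not anything guaranteed):

#include <cstdio>

int main() {
    bool flag;   // never initialized: reading it below is UB

    // Generated code may assume flag holds exactly 0 or 1. With an indeterminate
    // byte value, compilers have been observed to print both lines, or neither.
    if (flag)  std::printf("flag is true\n");
    if (!flag) std::printf("flag is false\n");
}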
Another example is dereferencing a null pointer. While it will probably result in a segfault, the standard doesn't even require the program to produce the same result every time it is run.
In summary: if you are doing something that is implementation-defined, you are reasonably safe as long as you are developing for one platform and you have tested that it works. If you are doing something that is undefined behavior, you are probably not safe in any case. It may happen to work, but nothing guarantees it.
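If you do depend on an implementation-defined choice, such as the int size mentioned above, one way to stay honest about it is to turn the assumption into a compile-time check, or to use a fixed-width type instead (a sketch):

#include <climits>
#include <cstdint>

// sizeof(int) is implementation-defined: the implementation must document it,
// and it stays stable for that compiler and platform. Make the dependency explicit.
static_assert(sizeof(int) == 4, "this code assumes a 4-byte int");
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

using counter_t = std::int32_t;   // exactly 32 bits wherever it is available

int main() {}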