Have a look at this simple example:
```cpp
struct Base { /* some virtual functions here */ };
struct A : Base { /* members, overridden virtual functions */ };
struct B : Base { /* a verbatim copy of A's body */ };
```
If `A` and `B` are verbatim copies of each other (except for their names) and are declared in the same context (same namespace, same `#define`s, no `__LINE__` usage), then common C++ compilers (gcc, clang) will produce two binary representations that are fully interchangeable.
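A minimal sketch of that claim (the member `x` and method `get` are my own, not from the question): the cast below is still formally undefined behavior as far as the standard is concerned; the point is only that gcc and clang happen to emit identical layouts and identical code for the two classes.

```cpp
#include <cstdio>

struct Base {
    virtual int get() const { return 0; }
    virtual ~Base() = default;
};

struct A : Base {
    int x = 1;
    int get() const override { return x + 1; }
};

// Token-for-token copy of A; only the name differs.
struct B : Base {
    int x = 1;
    int get() const override { return x + 1; }
};

int main() {
    A a;
    // Formally undefined behavior, but gcc and clang lay out both classes
    // identically (vtable pointer plus one int) and compile identical
    // method bodies, so the access behaves the same through either type.
    B* b = reinterpret_cast<B*>(&a);
    std::printf("%d\n", b->get());  // prints 2, dispatched via A's vtable
    return 0;
}
```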
If `A` and `B` use the same method signatures but the bodies of corresponding methods differ, it is unsafe to cast `A*` to `B*`, because an optimization pass in the compiler could, for example, partially inline the body of `void B::method()` at the call site `b->method()`, while the programmer's assumption could be that `b->method()` will call `A::method()`. Therefore, as soon as the programmer uses an optimizing compiler, the behavior of accessing an `A` through type `B*` becomes undefined.
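A hedged sketch of that failure mode (the names `f` and `call_through_b` are mine; whether a given compiler actually devirtualizes here depends on its heuristics — the point is only that it is allowed to):

```cpp
struct Base {
    virtual int f() const { return 0; }
    virtual ~Base() = default;
};

struct A : Base { int f() const override { return 1; } };
struct B : Base { int f() const override { return 2; } };  // body differs from A::f

int call_through_b(B* b) {
    // An optimizer that can prove or speculate that *b really is a B may
    // devirtualize and (partially) inline B::f here, turning this into
    // "return 2;" even when b actually points at an A.
    return b->f();
}

int main() {
    A a;
    B* b = reinterpret_cast<B*>(&a);  // undefined behavior
    // May return 1 (virtual dispatch through A's vtable) or 2 (inlined
    // B::f), depending on what the optimizer did at the call site.
    return call_through_b(b);
}
```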
Problem: all compilers optimize the source code passed to them at least to some extent, even at `-O0`. In cases of behavior not mandated by the C++ standard (that is, undefined behavior), the compiler's implicit assumptions, even with all optimizations turned off, might differ from the programmer's assumptions. Those implicit assumptions were made by the developers of the compiler.
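Two small illustrations of that point, under function names of my own choosing: even `-O0` folds constants while generating code, and for undefined behavior the compiler still has to pick some concrete outcome.

```cpp
#include <cstdio>

// gcc and clang fold 3 * 4 to 12 during code generation even at -O0;
// "no optimization" has never meant "no transformation at all".
int twelve() { return 3 * 4; }

int main() {
    int i = 0;
    // Undefined behavior before C++17 (unsequenced side effects on i).
    // Even at -O0 the compiler must pick *some* evaluation order; that
    // choice is an implicit assumption made by the compiler's developers.
    i = i++ + 1;
    std::printf("%d %d\n", twelve(), i);
    return 0;
}
```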
Conclusion: if the programmer is able to avoid using an optimizing compiler, then it is safe to access an `A` via a `B*`. The only issue such a programmer needs to tackle is that non-optimizing compilers do not exist.
A managed C++ implementation might abort the program when an `A*` is cast to a `B*` via `reinterpret_cast`, when `b->field` is accessed, or when `b->method()` is called. Some other managed C++ implementation might try harder to avoid a program crash, resorting to temporary duck typing when it sees the program accessing an `A` via a `B*`.
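No mainstream C++ implementation behaves this way, but the abort-on-bad-cast variant can be approximated today with RTTI; the `checked_cast` helper below is a hypothetical sketch, not an existing API:

```cpp
#include <cstdio>
#include <cstdlib>
#include <typeinfo>

struct Base { virtual ~Base() = default; };
struct A : Base { int field = 42; };
struct B : Base { int field = 0; };

// What a checking ("managed") implementation could do at the cast site:
// consult the object's actual dynamic type and abort instead of handing
// out a B* that does not point at a B.
template <typename To, typename From>
To* checked_cast(From* p) {
    if (To* q = dynamic_cast<To*>(p)) return q;
    std::fprintf(stderr, "checked_cast: object is not a %s\n", typeid(To).name());
    std::abort();
}

int main() {
    A a;
    B* b = checked_cast<B>(&a);  // aborts: the object's dynamic type is A
    return b->field;
}
```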
Some questions are: