Let's say I define the following C++ object:

class AClass
{
public:
    AClass() : foo(0) {}
    uint32_t getFoo() { return foo; }
    void changeFoo() { foo = 5; }
private:
    uint32_t foo;
};

One thread (T1) repeatedly reads foo via getFoo() while another thread calls changeFoo(), with no synchronization.
Is there any practical chance that the values ever obtained by T1 will be different from 0 or 5 on modern computer architectures and compilers? What about other primitive types?
Sure - there is no guarantee that the entire data will be written and read in an atomic manner. In practice, you may end up with a read which occurred during a partial write. What may be interrupted, and when that happens, depends on several variables. So in practice, the results could easily vary as the size and alignment of types vary. Naturally, that variance may also be introduced as your program moves from platform to platform and as ABIs change. Furthermore, observable results may vary as optimizations are added and other types/abstractions are introduced. A compiler is free to optimize away much of your program; perhaps completely, depending on the scope of the instance (yet another variable which is not considered in the OP).
Beyond optimizers, compilers, and hardware-specific pipelines: the kernel can even affect the manner in which this memory region is handled. Does your program guarantee where the memory of each object resides? Probably not. Your object's memory may span separate virtual memory pages; what steps does your program take to ensure the memory is read and written consistently on all platforms/kernels? (None, apparently.)
In short: If you cannot play by the rules defined by the abstract machine, you should not use the interface of said abstract machine (e.g. you should just understand and use assembly if the specification of C++'s abstract machine is truly inadequate for your needs -- highly improbable).
All the assembler code I investigated so far was using 32-bit memory reads and writes, which seems to preserve the integrity of the operation.
That's a very shallow definition of "integrity". All you have is (pseudo-)sequential consistency. Moreover, the compiler need only behave as if that were the case, which is far from strict consistency. This shallow expectation means that even if the compiler actually made no breaking optimization and performed reads and writes in accordance with some ideal or intention, the result would be practically useless: your program could observe a change long after it occurred.
The subject remains irrelevant, given what you can actually guarantee.
In practice, all mainstream 32-bit architectures perform aligned 32-bit reads and writes atomically. You'll never see anything other than 0 or 5.
In practice (for those who did not read the question), any potential problem boils down to whether or not a store operation for an unsigned int is an atomic operation, which, on most (if not all) machines you will likely write code for, it will be. Note that this is not stated by the standard; it is specific to the architecture you are targeting. I cannot envision a scenario in which a calling thread will read anything other than 0 or 5.
As to the title... I am unaware of varying degrees of "undefined behavior". UB is UB, it is a binary state.
Undefined behavior means that the compiler can do whatever it wants. It could basically change your program to do whatever it likes, e.g. order a pizza.
See @Matthieu M.'s answer for a less sarcastic version than this one. I won't delete this, as I think the comments are important for the discussion.
Undefined behavior is guaranteed to be as undefined as the word undefined.
Technically, the observable behavior is pointless because it is simply undefined behavior; the compiler is not required to exhibit any particular behavior. It may work as you think it should, or it may not, or it may burn your computer; anything and everything is possible.
In practice, you will not see anything other than 0 or 5 as far as I know (maybe some weird 16-bit architecture with a 32-bit int where this is not the case).
However, whether you actually see 5 at all is not guaranteed.
Suppose I am the compiler.
I see:
while (aObject.getFoo() == 0) {
    printf("Sleeping");
    sleep(1);
}
I know that:

- printf cannot change aObject
- sleep cannot change aObject
- getFoo does not change aObject (thanks to the inline definition)

And therefore I can safely transform the code:
while (true) {
    printf("Sleeping");
    sleep(1);
}
Because there is no one else accessing aObject during this loop, according to the C++ Standard.
That is what undefined behavior means: blown up expectations.