Your...
int n = 1;
...ensures n exists in read/write memory; it's a non-const variable, so a later attempt to modify it will have defined behaviour. Given such a variable, you can have a mix of const and/or non-const pointers and references to it - the constness of each is simply a way for the programmer to guard against accidental change in that "branch" of code. I say "branch" because you can visualise the access given to n as a tree in which, once a branch is marked const, all the sub-branches (further pointers/references to n, whether additional local variables, function parameters etc. initialised therefrom) will need to remain const, unless of course you explicitly cast that notion of constness away.

Casting away const is safe (if potentially confusing) for variables that are mutable like your n, because writes through such pointers/references still land in a memory address that is modifiable/mutable/non-const. All the bizarre optimisations and caching you could imagine causing trouble in these scenarios aren't allowed, as the Standard requires and guarantees sane behaviour in the case I've just described.
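To make that concrete, here's a minimal sketch (the variable names besides n are mine) of a const "branch" being created and then explicitly cast away - defined behaviour, because n itself is non-const:

#include <cassert>

int main()
{
    int n = 1;                        // non-const object in writable memory
    const int& cr = n;                // a const "branch" of access to n
    const int* cp = &cr;              // sub-branches initialised therefrom stay const
    int* p = const_cast<int*>(cp);    // explicitly cast the constness away
    *p = 2;                           // defined behaviour: n itself is modifiable
    assert(n == 2);
}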
Sadly, it's also possible to cast away the constness of genuinely, inherently const variables like, say, const int o = 1;, and any attempt to modify them will have undefined behaviour. There are many practical reasons for this, including the compiler's right to place them in memory it then marks read-only (e.g. see UNIX mprotect(2)), such that an attempted write will cause a CPU trap/interrupt; to read from the variable whenever the originally-set value is needed (even if the variable's identifier was never mentioned in the code using the value); or to use an inlined-at-compile-time copy of the original value, ignoring any runtime change to the variable itself. So, the Standard leaves the behaviour undefined. Even if such a variable happens to be modified as you intended, the rest of the program will have undefined behaviour thereafter.
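For contrast, a sketch of that undefined case - the output is deliberately unpredictable, and might be 1, 2, or a crash, depending on which of the strategies above your compiler happened to use:

#include <iostream>

int main()
{
    const int o = 1;
    int* p = const_cast<int*>(&o);   // compiles, but...
    *p = 2;                          // ...undefined behaviour: o is genuinely const
    std::cout << o << '\n';          // may print 1 (inlined/read-only copy), 2, or trap
}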
But, that shouldn't be surprising. It's the same situation with types - if you have...
double d = 1;
*(int*)&d = my_int;  // overwrite part of d's storage with my_int's bit pattern
d += 1;              // then do floating-point arithmetic on the result
...have you lied to the compiler about the type of d? Ultimately, d occupies memory that's probably untyped at a hardware level, so all the compiler ever has is a perspective on it, shuffling bit patterns in and out. But, depending on the value of my_int and the double representation on your hardware, you may have created a combination of bits in d that doesn't represent any valid double value, such that subsequent attempts to read the memory back into a CPU register and/or do something with d, such as += 1, have undefined behaviour and might, for example, generate a CPU trap/interrupt.
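If you genuinely need to move bits between types, copying the bytes with memcpy (or C++20's std::bit_cast) at least avoids lying to the compiler about what's stored where, though the invalid-bit-pattern caveat above still applies. A sketch, with my_int given an arbitrary stand-in value since the text leaves it open:

#include <cstring>
#include <cstdio>

int main()
{
    int my_int = 42;                          // stand-in value, not from the original
    double d = 1;
    std::memcpy(&d, &my_int, sizeof my_int);  // defined: copies bytes, no aliasing lie
    // Arithmetic on d is still only safe if those bytes form a valid double;
    // inspecting the raw representation via unsigned char is always OK:
    unsigned char bytes[sizeof d];
    std::memcpy(bytes, &d, sizeof d);
    for (unsigned char b : bytes)
        std::printf("%02x ", b);
    std::printf("\n");
}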
This is not a bug in C or C++... they're designed to let you make dubious requests of your hardware so that, if you know what you're doing, you can do some weird but useful things, and rarely need to fall back on assembly language to write low-level code, even for device drivers and Operating Systems.

Still, it's precisely because casts can be unsafe that a more explicit and targeted casting notation (const_cast, static_cast, reinterpret_cast, dynamic_cast) was introduced in C++. There's no denying the risk - you just need to understand what you're asking for, why it's OK sometimes and not others, and live with it.
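By way of illustration, the named casts make the intent visible and greppable, and the compiler rejects the ones that can't do what you ask (a small sketch, not from the original question):

int main()
{
    int n = 1;
    const int* cp = &n;

    int* a = (int*)cp;                  // C-style cast: drops const silently
    int* b = const_cast<int*>(cp);      // named cast: says exactly what's being removed
    // int* c = static_cast<int*>(cp);  // won't compile: static_cast can't drop const
    *b = 2;                             // defined here, because n is non-const
    (void)a;
}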