About integer downcasts in C: for example, an int value 0x000F'E000 downcast to short or unsigned short will become 0xE000.
A cast to a smaller integer type discards the most significant (left-most, as you'd write the full binary integer on paper) bits that are not present in the destination type. Strictly speaking, this truncation is guaranteed only for unsigned destination types; for a signed destination type an out-of-range result is implementation-defined (see the signed-conversion rule below), though truncation is what implementations typically do.
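As a minimal sketch of that truncation (assuming a 32-bit int and a 16-bit unsigned short; the variable names are illustrative):

```c
#include <stdio.h>

int main(void) {
    int value = 0x000FE000;
    /* The cast keeps only the low 16 bits of the value. */
    unsigned short us = (unsigned short)value;
    printf("0x%08X -> 0x%04X\n", (unsigned)value, (unsigned)us); /* 0x000FE000 -> 0xE000 */
    return 0;
}
```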
An upcast to a larger integer type is more complex:

For converting anything to an unsigned type, the value is adjusted modulo TYPE_MAX + 1 until it is in the range of the unsigned type. Example: -10 converted to uint16_t, which has a range of 0 to 65535, results in 65536 - 10, or 65526.
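A minimal sketch of the modulo rule (uint16_t is exactly 16 bits, so the 0-65535 range holds by definition):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int negative = -10;
    /* Adjusted modulo UINT16_MAX + 1 = 65536: the result is 65536 - 10 = 65526. */
    uint16_t u = (uint16_t)negative;
    printf("%" PRIu16 "\n", u); /* prints 65526 */
    return 0;
}
```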
For converting anything to a signed type: If the original value is in the range of the signed type, that is the result. Otherwise the behaviour is implementation-defined, which includes the possibility of raising a signal. The compiler must document its behaviour for this case.
Example: -10 converted to long long results in a long long of value -10. The bit representation doesn't matter; the rules are based on values.
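A short sketch of the value-preserving case, with the out-of-range case noted in a comment:

```c
#include <stdio.h>

int main(void) {
    int i = -10;
    /* -10 is within the range of long long, so the value carries over unchanged. */
    long long ll = i;
    printf("%lld\n", ll); /* prints -10 */

    /* By contrast, something like (short)100000 exceeds the range of a 16-bit
       short, so its result is implementation-defined, as described above. */
    return 0;
}
```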
Your test program contains a lot of undefined behaviour due to using the wrong format specifiers in printf. (In fact, every single format specifier is wrong for the argument given.) Being undefined behaviour, the output is meaningless, so do not try to "learn" from this program. Instead, study the rules in the Standard or other answers on this site, and read your compiler's documentation.
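Since the original test program isn't shown here, the following is only an illustrative sketch of matching specifier/argument pairs for the types discussed in this answer:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    short s = -8192;
    unsigned short us = 0xE000;
    long long ll = -10;
    uint16_t u16 = 65526;

    printf("%hd\n", s);           /* short          -> %hd  */
    printf("%hu\n", us);          /* unsigned short -> %hu  */
    printf("%lld\n", ll);         /* long long      -> %lld */
    printf("%" PRIu16 "\n", u16); /* uint16_t       -> PRIu16 from <inttypes.h> */
    return 0;
}
```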
Downcast cuts the bits; upcast depends on signedness. An upcast from an unsigned type zero-extends (fills the new high bits with zeros), while an upcast from a signed type sign-extends (replicates the sign bit). In this way, the expression has the same value before and after an upcast.
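A sketch of the difference, assuming a 16-bit short and a 32-bit int on a two's-complement machine:

```c
#include <stdio.h>

int main(void) {
    short s = -1;               /* bit pattern 0xFFFF */
    unsigned short us = 0xFFFF; /* same bit pattern, value 65535 */

    int from_signed = s;        /* sign bit replicated: 0xFFFFFFFF, value -1    */
    int from_unsigned = us;     /* zero-extended:       0x0000FFFF, value 65535 */

    printf("%d %d\n", from_signed, from_unsigned); /* prints -1 65535 */
    return 0;
}
```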