I know that the C and C++ standards leave many aspects of the language implementation-defined precisely because, if an architecture with other characteristics exists, it would be very difficult or impossible to write a standard-conforming compiler for it.
According to the gcc source code:

- CHAR_BIT is 16 bits for the 1750a and dsp16xx architectures.
- CHAR_BIT is 24 bits for the dsp56k architecture.
- CHAR_BIT is 32 bits for the c4x architecture.
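As a quick sanity check on whatever target you compile for, a minimal C sketch (nothing gcc-specific assumed; CHAR_BIT comes from <limits.h>) prints these values directly:

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a char; the standard only
           guarantees it is at least 8, as the targets above show. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        /* sizeof counts in chars, so multiplying by CHAR_BIT gives the
           width of a type in bits on this implementation. */
        printf("int bits = %zu\n", sizeof(int) * (size_t)CHAR_BIT);
        return 0;
    }

On a typical 8-bit-char platform this prints CHAR_BIT = 8; a dsp56k cross-compiler would report 24.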
You can easily find more by running:

    find $GCC_SOURCE_TREE -type f | xargs grep "#define CHAR_TYPE_SIZE"

or, since CHAR_TYPE_SIZE defaults to BITS_PER_UNIT when a target does not define it explicitly:

    find $GCC_SOURCE_TREE -type f | xargs grep "#define BITS_PER_UNIT"
If the target architecture doesn't support floating-point instructions, gcc may generate a software fallback that is not standards-compliant by default. Moreover, special options (like -funsafe-math-optimizations, which also disables sign preservation for zeros) can be used.
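To illustrate what losing sign preservation for zeros means, here is a small sketch: under default IEEE 754 semantics, -0.0 + 0.0 must yield +0.0, but with -funsafe-math-optimizations (which implies -fno-signed-zeros) gcc is allowed to fold x + 0.0 to x, so the sign of zero can come out differently.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps gcc from constant-folding the initializer away. */
        volatile double minus_zero = -0.0;
        /* IEEE 754: -0.0 + 0.0 == +0.0, so signbit(sum) should be 0.
           With -funsafe-math-optimizations gcc may simplify x + 0.0 to x,
           in which case the negative sign of the zero survives. */
        double sum = minus_zero + 0.0;
        printf("sum = %g, signbit = %d\n", sum, signbit(sum) ? 1 : 0);
        return 0;
    }

Compiled with plain gcc this should print signbit = 0; with -funsafe-math-optimizations the fold is permitted (though not guaranteed), so you may see signbit = 1 depending on the gcc version and target.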