So we had a field issue, and after days of debugging, narrowed down the problem to this particular bit of code, where the processing in a while loop wasn't happening:
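A minimal sketch of the pattern (reconstructed here; the `numberA`/`numberB` names come from the answers below, and the loop condition is an assumption):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t numberA = 65535;        /* UINT16_MAX */
    uint16_t numberB = numberA + 1;  /* truncated back to uint16_t: 0 */

    printf("numberA + 1 = %d\n", numberA + 1);  /* prints 65536, not 0 */
    printf("numberB     = %d\n", numberB);      /* prints 0 */

    while (numberB > numberA) {
        /* 0 > 65535 is false, so the processing here never runs */
    }
    return 0;
}
```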
So it appears that the result of `numberA + 1` was promoted to `uint32_t`.

The operands of the addition were promoted to `int` before the addition took place, and the result of the addition is of the same type as the effective operands (`int`).
Indeed, if `int` is 32 bits wide on your compilation platform (meaning that the type that represents `uint16_t` has lower "conversion rank" than `int`), then `numberA + 1` is computed as an `int` addition between `1` and a promoted `numberA`, as part of the integer promotion rules, 6.3.1.1:2 in the C11 standard:
> The following may be used in an expression wherever an int or unsigned int may be used: […] An object or expression with an integer type (other than int or unsigned int) whose integer conversion rank is less than or equal to the rank of int and unsigned int.
>
> […]
>
> If an int can represent all values of the original type […], the value is converted to an int
In your case, `unsigned short`, which is in all likelihood what `uint16_t` is defined as on your platform, has all its values representable as elements of `int`, so the `unsigned short` value `numberA` gets promoted to `int` when it occurs in an arithmetic operation.
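You can ask the compiler directly; here is a small C11 sketch (assuming a typical platform where `int` is 32 bits and `uint16_t` is `unsigned short`) that prints the type of each expression via `_Generic`:

```c
#include <stdio.h>
#include <stdint.h>

/* Maps an expression's type to a printable name via C11 _Generic. */
#define TYPE_NAME(expr) _Generic((expr), \
    int:          "int",                 \
    unsigned int: "unsigned int",        \
    uint16_t:     "uint16_t",            \
    default:      "something else")

int main(void) {
    uint16_t numberA = 65535;
    printf("numberA     has type %s\n", TYPE_NAME(numberA));     /* uint16_t */
    printf("numberA + 1 has type %s\n", TYPE_NAME(numberA + 1)); /* int */
    return 0;
}
```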
For arithmetic operators such as `+`, the usual arithmetic conversions are applied. For integers, the first step of those conversions is called the integer promotions, and this promotes any value of a type narrower than `int` to an `int`. The other steps don't apply to your example, so I shall omit them for conciseness.
In the expression `numberA + 1`, the integer promotions are applied. `1` is already an `int`, so it remains unchanged. `numberA` has type `uint16_t`, which is narrower than `int` on your system, so `numberA` gets promoted to `int`.
The result of adding two `int`s is another `int`, and `65535 + 1` gives `65536` since you have 32-bit `int`s. So your first `printf` outputs this result.
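Spelled out with the promotion made explicit (a sketch assuming a 32-bit `int`):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t numberA = 65535;
    int promoted = numberA;      /* integer promotion: 65535 fits in a 32-bit int */
    int sum      = promoted + 1; /* plain int addition: 65536, no wraparound */
    printf("%d\n", sum);         /* prints 65536, same as printf("%d", numberA + 1) */
    return 0;
}
```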
In the line:

```c
numberB = numberA + 1;
```

the above logic still applies to the `+` operator, so this is equivalent to:

```c
numberB = 65536;
```
Since `numberB` has an unsigned type, `uint16_t` specifically, `65536` is reduced (mod 65536), which gives `0`.
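A short sketch of that reduction, reusing the same setup:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t numberA = 65535;
    uint16_t numberB = numberA + 1;     /* int 65536 converted to uint16_t:
                                           65536 % 65536 == 0                */
    printf("%u\n", (unsigned)numberB);  /* prints 0 */
    return 0;
}
```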
Note that your last two `printf` statements cause undefined behaviour; you must use `%u` for printing `unsigned int`. To cope with different sizes of `int`, you can use `"%" PRIu32` to get the format specifier for `uint32_t`.
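For example, with the standard macros from `<inttypes.h>`:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint32_t value = 65536;
    printf("value = %" PRIu32 "\n", value);  /* correct specifier for uint32_t */
    return 0;
}
```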
When the C language was being developed, it was desirable to minimize the number of kinds of arithmetic compilers had to deal with. Thus, most math operators (e.g. addition) supported only int+int, long+long, and double+double. While the language could have been simplified by omitting int+int (promoting everything to `long` instead), arithmetic on `long` values generally takes 2-4 times as much code as arithmetic on `int` values; since most programs are dominated by arithmetic on `int` types, that would have been very costly. Promoting `float` to `double`, by contrast, will in many cases save code, because it means that only two functions are needed to support `float`: convert to `double`, and convert from `double`. All other floating-point arithmetic operations need only support one floating-point type, and since floating-point math is often done by calling library routines, the cost of calling a routine to add two `double` values is often the same as the cost to call a routine to add two `float` values.
Unfortunately, the C language became widespread on a variety of platforms before anyone really figured out what `0xFFFF + 1` should mean, and by that time there were already some compilers where the expression yielded 65536 and some where it yielded zero. Consequently, writers of standards have endeavored to write them in a fashion that would allow compilers to keep on doing whatever they were doing, but which was rather unhelpful from the standpoint of anyone hoping to write portable code. Thus, on platforms where `int` is 32 bits, `0xFFFF + 1` will yield 65536, and on platforms where `int` is 16 bits, it will yield zero. If on some platform `int` happened to be 17 bits, `0xFFFF + 1` would authorize the compiler to negate the laws of time and causality [btw, I don't know of any 17-bit platforms, but there are some 32-bit platforms where `uint16_t x=0xFFFF; uint16_t y=x*x;` will cause the compiler to garble the behavior of code which precedes it].
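If you actually need that multiplication, the usual defensive idiom is to force the arithmetic into an unsigned type yourself before multiplying; a sketch (the function name is illustrative):

```c
#include <stdint.h>

uint16_t square_low_bits(uint16_t x) {
    /* x * x would be computed in signed int on platforms with 32-bit int,
       and overflows (undefined behaviour) for x > 46340. Converting one
       operand to uint32_t first keeps the whole multiplication unsigned
       and well defined; the final cast takes the low 16 bits. */
    return (uint16_t)((uint32_t)x * x);
}
```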
Literal `1` is of type `int`, i.e. a 32-bit type in your case, so operations between int32 and int16 values give int32 results. To have the result of the `numberA + 1` expression as `uint16_t`, cast the whole expression, e.g. `(uint16_t)(numberA + 1)`. Note that casting only the literal, as in `numberA + (uint16_t)1`, would not help: both operands would still be promoted to `int` before the addition.
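A sketch of that fix in context (hypothetical, reusing the question's variable name):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t numberA = 65535;

    /* Cast the whole sum, not just one operand: both operands are
       promoted to int regardless, so the cast must apply to the result. */
    uint16_t wrapped = (uint16_t)(numberA + 1);
    printf("%u\n", (unsigned)wrapped);  /* prints 0 */
    return 0;
}
```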