I am quite confused by the following code:
```c
#include <stdint.h>
#include <stdio.h>

int main(int argc, char ** argv)
{
    uint16_t a = 413;
    u
```
(The snippet got cut off here when I pasted it, and the header names were eaten by the formatting, but `uint16_t` comes from `<stdint.h>` and the output uses `<stdio.h>`.)
My understanding is this: if you throw away the high-order bits of a number (via the explicit cast to a 16-bit unsigned integer), the result should be no larger than the original value, and in any case it must lie within the range 0 to 2^16 - 1.
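For what it's worth, here is a minimal, self-contained sketch of the behaviour I expect from such a cast (the values here are made up for illustration, not the ones from my actual program):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t big   = 0x12345678u;      /* a value with bits set above bit 15 */
    uint16_t small = (uint16_t) big;   /* explicit cast: keeps only the low 16 bits */

    /* I expect small == 0x5678 (22136), which lies within 0 .. 65535. */
    printf("big   = 0x%08" PRIX32 " (%" PRIu32 ")\n", big, big);
    printf("small = 0x%04" PRIX16 " (%" PRIu16 ")\n", small, small);

    return 0;
}
```

This prints 0x5678 (22136) for `small`, which matches my mental model of the cast, so I don't understand why the code above doesn't behave the way I expect.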