Can unsigned integer incrementation lead to undefined behavior?


Question


After reading the question "32 bit unsigned multiply on 64 bit causing undefined behavior?" here on StackOverflow, I began to wonder whether typical arithmetic operations on small unsigned types could lead to undefined behavior according to the C99 standard.

For example, take the following code:

#include <limits.h>

...

unsigned char x = UCHAR_MAX;
unsigned char y = x + 1;

The x variable is initialized to the maximum value of the unsigned char data type. The next line is the issue: the mathematical value x + 1 is greater than UCHAR_MAX and cannot be represented in the unsigned char variable y.
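
On a typical implementation (8-bit char, 32-bit int) there is no undefined behavior here, only a well-defined conversion back to unsigned char. A minimal sketch of what happens, assuming such an implementation:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned char x = UCHAR_MAX;      /* 255 on an 8-bit-char implementation */
    unsigned char y = x + 1;          /* x promotes to int; 255 + 1 = 256 is
                                         computed as an int, then converted back
                                         to unsigned char modulo UCHAR_MAX + 1 */
    printf("y = %u\n", (unsigned)y);  /* prints "y = 0" on such an implementation */
    return 0;
}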

I believe the following is what actually occurs.

  • The variable x is first promoted to data type int by the integer promotions (section 6.3.1.1/2), provided int can represent every value of unsigned char (otherwise it is promoted to unsigned int); x + 1 is then evaluated as an int addition.

Now suppose there is an implementation where INT_MAX and UCHAR_MAX are equal. Every unsigned char value still fits in an int, so x is promoted to int, and x + 1 then produces a signed integer overflow. Does this mean that incrementing the variable x, despite its unsigned integer type, can lead to undefined behavior via signed integer overflow?
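
If that possibility is a concern, the addition can be forced into unsigned arithmetic, where wraparound is well-defined. A hedged sketch (the 1u operand and the cast back to unsigned char are mine, not from the original question):

#include <limits.h>

unsigned char x = UCHAR_MAX;
/* Writing 1u makes the usual arithmetic conversions perform the addition
   in unsigned int, which wraps modulo UINT_MAX + 1 instead of overflowing,
   so the result is well defined even if UCHAR_MAX == INT_MAX. */
unsigned char y = (unsigned char)(x + 1u);

This relies only on the usual arithmetic conversions and on the fact that unsigned arithmetic is defined to be modular in C99 (section 6.2.5).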


Answer 1:


By my reading of the standard, an implementation that used a 15-bit char could legally store int as a 15-bit magnitude plus a second 15-bit word holding the sign bit along with 14 bits of padding; in that case, an unsigned char would hold values 0 to 32,767 and an int would hold values from -32,767 to +32,767. Adding 1 to (unsigned char)32767 would indeed be undefined behavior, since the operand promotes to int and 32,767 + 1 exceeds INT_MAX. A similar situation could arise with any larger char size if 32,767 were replaced with UCHAR_MAX.
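
An implementation like that can be rejected at translation time, since the only dangerous case is UCHAR_MAX == INT_MAX. A possible compile-time guard (my own sketch, not from the original answer):

#include <limits.h>

/* If UCHAR_MAX == INT_MAX, unsigned char promotes to int (every value
   fits) and UCHAR_MAX + 1 overflows int. If UCHAR_MAX > INT_MAX, the
   promotion is to unsigned int instead and the addition simply wraps. */
#if UCHAR_MAX == INT_MAX
#error "x + 1 on an unsigned char holding UCHAR_MAX would overflow int"
#endif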

Such a situation is unlikely, however, compared with the real-world problems associated with unsigned integer multiplication alluded to in the other post.



Source: https://stackoverflow.com/questions/27004694/can-unsigned-integer-incrementation-lead-to-undefined-defined-behavior
