Why are C character literals ints instead of chars?

悲哀的现实 2020-11-22 02:24

In C++, sizeof('a') == sizeof(char) == 1. This makes intuitive sense, since 'a' is a character literal, and sizeof(char) == 1 as defined by the standard. In C, however, sizeof('a') == sizeof(int): character literals are apparently of type int. Does anyone know why?
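A quick way to see the difference is to compile the same translation unit once as C and once as C++ and print the sizes (a minimal sketch; the exact value of sizeof(int) is platform-dependent, 4 is just a common case):

    #include <stdio.h>

    int main(void) {
        /* Compiled as C, 'a' has type int, so the first line prints
           sizeof(int) (often 4). Compiled as C++, 'a' has type char
           and the first line prints 1. */
        printf("sizeof('a')  = %zu\n", sizeof('a'));
        printf("sizeof(char) = %zu\n", sizeof(char));
        printf("sizeof(int)  = %zu\n", sizeof(int));
        return 0;
    }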

12 Answers
  •  抹茶落季
    2020-11-22 02:37

    The historical reason for this is that C, and its predecessor B, were originally developed on various models of DEC PDP minicomputers with various word sizes, which could store ASCII characters but could only perform arithmetic on full-width registers, not on individual bytes. (Not the PDP-11, however; that came later.) Early versions of C defined int to be the native word size of the machine, and any value smaller than an int had to be widened to int in order to be passed to or from a function, or used in a bitwise, logical or arithmetic expression, because that was how the underlying hardware worked.

    That is also why the integer promotion rules still say that any data type smaller than an int is promoted to int. C implementations are also allowed to use one's-complement math instead of two's-complement for similar historical reasons. The reason that octal character escapes and octal constants are first-class citizens compared to hex is likewise that those early DEC minicomputers had word sizes divisible into three-bit chunks (one octal digit each) but not four-bit nibbles.
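    The promotion rules are easy to observe directly: arithmetic on types narrower than int is carried out in int, as in this small sketch:

        #include <stdio.h>

        int main(void) {
            unsigned char a = 200, b = 100;
            /* Both operands are promoted to int, so the sum 300 is
               computed without wrapping; it only wraps when converted
               back to unsigned char. */
            int wide = a + b;              /* 300 */
            unsigned char narrow = a + b;  /* 300 % 256 == 44 */
            printf("%d %d %zu\n", wide, narrow, sizeof(a + b));
            /* prints: 300 44 <sizeof(int)> */
            return 0;
        }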
