Why are C character literals ints instead of chars?

悲哀的现实 2020-11-22 02:24

In C++, sizeof('a') == sizeof(char) == 1. This makes intuitive sense, since 'a' is a character literal and sizeof(char) == 1 as defined by the standard. In C, however, sizeof('a') == sizeof(int). Why do character literals have type int in C rather than char?

12 answers
  • 2020-11-22 02:57

    I don't know, but I'm going to guess it was easier to implement it that way and it didn't really matter. It wasn't until C++, where the type of the literal can determine which overloaded function gets called, that it needed to be fixed.

  • 2020-11-22 02:57

    Indeed, I didn't know this. Before prototypes existed, anything narrower than an int was converted to an int when passed as a function argument. That may be part of the explanation; a minimal sketch of that promotion follows.
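
    That old calling convention survives in today's C as the "default argument promotions". A minimal sketch, assuming a compiler that still accepts old-style (pre-C23) K&R definitions; the function name code_of is just for illustration:

    #include <stdio.h>

    /* Old-style (K&R) definition: it introduces no prototype, so every
       caller applies the default argument promotions and passes the
       char as an int; the compiler narrows it back inside the function. */
    int code_of(c)
        char c;
    {
      return c;
    }

    int main(void)
    {
      char ch = 'a';
      printf("%d\n", code_of(ch));   /* ch is widened to int at the call site */
      printf("%d\n", code_of('a'));  /* 'a' already has type int in C */
      return 0;
    }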

  • 2020-11-22 02:58

    The original question is "why?"

    The reason is that the definition of a literal character has evolved and changed, while trying to remain backwards compatible with existing code.

    In the dark days of early C there were no types at all. By the time I first learnt to program in C, types had been introduced, but functions didn't have prototypes to tell the caller what the argument types were. Instead it was standardised that everything passed as a parameter would either be the size of an int (this included all pointers) or it would be a double.

    This meant that when you were writing the function, all the parameters that weren't double were stored on the stack as ints, no matter how you declared them, and the compiler put code in the function to handle this for you.

    This made things somewhat inconsistent, so when K&R wrote their famous book, they put in the rule that a character literal would always be promoted to an int in any expression, not just a function parameter.

    When the ANSI committee first standardised C, they changed this rule so that a character literal would simply be an int, since this seemed a simpler way of achieving the same thing.

    When C++ was being designed, all functions were required to have full prototypes (this is still not required in C, although it is universally accepted as good practice). Because of this, it was decided that a character literal could have type char. The advantage of this in C++ is that a function with a char parameter and a function with an int parameter have different signatures, so the right overload can be chosen. There is no such advantage in C.

    This is why they are different. Evolution...
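
    As an aside, the ANSI rule is easy to observe directly with C11's _Generic, which selects a branch based on the type of its controlling expression; a minimal sketch, assuming a C11-or-later compiler:

    #include <stdio.h>

    int main(void)
    {
      /* 'a' has type int in C, so the int branch is selected. */
      puts(_Generic('a', char: "type char", int: "type int", default: "other"));
      return 0;
    }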

  • 2020-11-22 03:01

    I remember reading K&R and seeing a code snippet that would read a character at a time until it hit EOF. Since every character value can legitimately appear in a file or input stream, EOF cannot be any char value. What the code did was put the read character into an int, test it against EOF, and convert it to a char only if it wasn't EOF.

    I realize this doesn't exactly answer your question, but it would make some sense for the rest of the character literals to be sizeof(int) if the EOF literal was.

    int r;
    char buffer[1024], *p;
    p = buffer;

    /* getc() returns an int so that EOF (a negative value) can be
       distinguished from every valid character; also stop before
       overflowing the buffer. */
    while ((r = getc(file)) != EOF && p < buffer + sizeof buffer)
    {
      *(p++) = (char) r;
    }
    
  • 2020-11-22 03:02

    Using gcc on my MacBook, I tried:

    #include <stdio.h>
    /* sizeof yields a size_t, so print it with %zu */
    #define test(A) do { printf(#A ":\t%zu\n", sizeof(A)); } while (0)
    int main(void){
      test('a');
      test("a");
      test("");
      test(char);
      test(short);
      test(int);
      test(long);
      test((char)0x0);
      test((short)0x0);
      test((int)0x0);
      test((long)0x0);
      return 0;
    }
    

    which when run gives:

    'a':    4
    "a":    2
    "":     1
    char:   1
    short:  2
    int:    4
    long:   4
    (char)0x0:      1
    (short)0x0:     2
    (int)0x0:       4
    (long)0x0:      4
    

    which suggests that a char is one byte, as you suspect, but a character literal has the size of an int. (The string literals "a" and "" are char arrays, so their sizeof includes the terminating '\0', giving 2 and 1.)

  • 2020-11-22 03:02

    Back when C was being written, the PDP-11's MACRO-11 assembly language had:

    MOV #'A, R0      // 8-bit character encoding for 'A' into 16 bit register
    

    This kind of thing's quite common in assembly language - the low 8 bits will hold the character code, other bits cleared to 0. PDP-11 even had:

    MOV #"AB, R0     // 16-bit character encoding for 'A' (low byte) and 'B'
    

    This provided a convenient way to load two characters into the low and high bytes of the 16 bit register. You might then write those elsewhere, updating some textual data or screen memory.
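
    C still carries a trace of this: a multi-character constant such as 'AB' is legal, has type int, and packs the character codes into one word in an implementation-defined way. A minimal sketch (the exact value printed depends on the compiler, and gcc warns about it with -Wmultichar):

    #include <stdio.h>

    int main(void)
    {
      /* 'AB' is a multi-character constant: type int, value
         implementation-defined (0x4142 on typical gcc targets). */
      printf("sizeof 'AB' = %zu, value = 0x%X\n", sizeof 'AB', (unsigned)'AB');
      return 0;
    }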

    So, the idea of characters being promoted to register size is quite normal and desirable. But, let's say you need to get 'A' into a register not as part of the hard-coded opcode, but from somewhere in main memory containing:

    address: value
    20: 'X'
    21: 'A'
    22: 'A'
    23: 'X'
    24: 0
    25: 'A'
    26: 'A'
    27: 0
    28: 'A'
    

    If you want to read just an 'A' from this main memory into a register, which one would you read?

    • Some CPUs may only directly support reading a 16 bit value into a 16 bit register, which would mean a read at 20 or 22 would then require the bits from 'X' be cleared out, and depending on the endianness of the CPU one or other would need shifting into the low order byte.

    • Some CPUs may require a memory-aligned read, which means that the lowest address involved must be a multiple of the data size: you might be able to read from addresses 24 and 25, but not 27 and 28.

    So, a compiler generating code to get an 'A' into the register may prefer to waste a little extra memory and encode the value as 0 'A' or 'A' 0 - depending on endianness, and also ensuring it is aligned properly (i.e. not at an odd memory address).

    My guess is that C simply carried this level of CPU-centric behaviour over, thinking of character constants as occupying register-sized chunks of memory, which bears out the common assessment of C as a "high-level assembler".

    (See 6.3.3 on page 6-25 of http://www.dmv.net/dec/pdf/macro.pdf)
