Difference between char and int when declaring character

梦谈多话 2020-12-07 18:49

I just started learning C and am rather confused about declaring characters using int versus char.

I am well aware that characters are made up of integers in the sense that each character is stored as an integer code. So what is the difference between declaring a character variable as int and as char?

5 answers
  • 2020-12-07 19:23

    The char type has multiple roles.

    The first is that it is simply part of the chain of integer types (char, short, int, long, etc.), so it's just another container for numbers.

    The second is that it is the smallest unit of storage: every other object has a size that is a multiple of the size of char (sizeof reports sizes in units of char, so sizeof(char) == 1).

    The third is its historical role as a character in a string. Seen this way, the value of a char maps to a specified character, for instance via the ASCII encoding, but it can also be used with multi-byte encodings (one or more chars together map to one character).
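
    A minimal sketch illustrating all three roles (the 'B' output assumes an ASCII-compatible encoding):

    #include <stdio.h>

    int main(void) {
        /* Role 1: just another integer type -- arithmetic works directly on it */
        char c = 'A';
        printf("'%c' + 1 = '%c'\n", c, c + 1);   /* 'A' + 1 = 'B' under ASCII */

        /* Role 2: the unit of measure for sizeof -- sizeof(char) is 1 by definition */
        printf("sizeof(char) = %zu, sizeof(int) = %zu\n", sizeof(char), sizeof(int));

        /* Role 3: the element of a string, terminated by '\0' */
        char word[] = "hi";
        printf("\"%s\" occupies %zu chars (including the terminator)\n", word, sizeof word);
        return 0;
    }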

  • 2020-12-07 19:28

    An int is 4 bytes on most architectures, while a char is exactly 1 byte by definition.

  • 2020-12-07 19:40

    Usually you should declare characters as char and use int for integers that need to hold larger values. On most systems a char occupies one byte, which is 8 bits. Depending on your system, plain char may be signed or unsigned by default, so it will be able to hold values either in the range 0 to 255 or -128 to 127.

    An int might be 32 bits long, but if you really want an integer of exactly 32 bits you should declare it as int32_t or uint32_t (from <stdint.h>) instead.
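
    A small sketch to check what your own platform does, using the standard <limits.h> and <stdint.h> headers:

    #include <stdio.h>
    #include <limits.h>   /* CHAR_BIT, CHAR_MIN, CHAR_MAX */
    #include <stdint.h>   /* int32_t: exact-width integer type */

    int main(void) {
        /* CHAR_MIN is 0 where plain char is unsigned, negative where it is signed */
        printf("char: %d bits, range %d to %d\n", CHAR_BIT, CHAR_MIN, CHAR_MAX);
        printf("int:     %zu bytes\n", sizeof(int));
        printf("int32_t: %zu bytes (exactly 32 bits)\n", sizeof(int32_t));
        return 0;
    }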

  • 2020-12-07 19:41

    I think there's no difference in behavior for small values, but you're allocating extra memory you're not going to use. You could also write const long a = 1;, but const char a = 1; is the better fit.

  • 2020-12-07 19:44

    The difference is the size in bytes of the variable, and consequently the range of values the variable can hold.

    A char is required to accept all values between 0 and 127 (inclusive). In common environments it occupies exactly one byte (8 bits). Whether it is signed (-128 to 127) or unsigned (0 to 255) is implementation-defined, not fixed by the standard.

    An int is required to be at least 16 bits wide and to accept all values between -32767 and 32767. That means an int can hold every value a char can, whether the char is signed or unsigned.

    If you want to store only characters in a variable, you should declare it as char. Using an int would just waste memory and could mislead a future reader. One common exception to that rule is when you need to represent a value wider than any character for special conditions. For example, the function fgetc from the standard library is declared as returning int:

    int fgetc(FILE *stream);
    

    because the special value EOF (End Of File) is defined as a negative int, commonly -1 (all bits set on a two's-complement system), which lies outside the range of character values fgetc returns: each character comes back as an unsigned char converted to int, i.e. 0 to 255 on a common system. That way no character value can collide with the EOF constant. If the function were declared to return a plain char, nothing could distinguish the EOF value from the (valid) character 0xFF.

    That's the reason why the following code is bad and should never be used:

    char c;    // a terrible memory saving...
    ...
    while ((c = fgetc(stdin)) != EOF) {   // NEVER WRITE THAT!!!
        ...
    }
    

    Inside the loop a char would be enough, but for the comparison against EOF to behave correctly (a signed char that has just read 0xFF would compare equal to EOF and end the loop early), the variable must be an int.
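
    For contrast, a minimal sketch of the correct pattern: keep the return value in an int, test against EOF, and narrow to char only afterwards:

    #include <stdio.h>

    int main(void) {
        int c;   /* int, so the negative EOF is distinguishable from any character */
        while ((c = fgetc(stdin)) != EOF) {
            char ch = (char)c;   /* safe to narrow now: c holds a real character */
            putchar(ch);
        }
        return 0;
    }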
