int8_t vs char: Which is the best one?

说谎 2020-12-13 21:34

My question may confuse you; I know the two are different types (effectively signed char versus char). However, my company's coding guidelines specify using int8_t instead of char.

4 Answers
  • 2020-12-13 21:49

    They simply make different guarantees:

    char is guaranteed to exist and to be at least 8 bits wide, and it must be able to represent at least all integers between -127 and 127 (if signed) or between 0 and 255 (if unsigned).

    int8_t is not guaranteed to exist (and yes, there are platforms on which it doesn't), but if it exists it is guaranteed to be an 8-bit two's-complement signed integer type with no padding bits; thus it is capable of representing all integers between -128 and 127, and nothing else.

    When should you use which? When the guarantees made by the type line up with your requirements. It is worth noting, however, that large portions of the standard library require char * arguments, so avoiding char entirely seems short-sighted unless there’s a deliberate decision being made to avoid usage of those library functions.
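
    A minimal sketch to probe these guarantees on a given platform (the #ifdef guard covers the case where int8_t is absent):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    
    int main(void)
    {
        /* char always exists; these macros describe its actual range */
        printf("bits per char: %d\n", CHAR_BIT);
        printf("char range:    %d..%d\n", CHAR_MIN, CHAR_MAX);
    
    #ifdef INT8_MAX
        /* int8_t exists: exactly 8 bits, two's complement, -128..127 */
        printf("int8_t range:  %d..%d\n", INT8_MIN, INT8_MAX);
    #else
        puts("int8_t is not provided on this platform");
    #endif
        return 0;
    }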

  • 2020-12-13 22:05

    int8_t is specified by the C99 standard to be exactly eight bits wide, and fits in with the other C99 guaranteed-width types. You should use it in new code where you want an exactly 8-bit signed integer. (Take a look at int_least8_t and int_fast8_t too, though.)

    char is still preferred as the element type for single-byte character strings, just as wchar_t should be preferred as the element type for wide character strings.
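
    For instance, an illustrative sketch (not from the original answer) contrasting the three width-qualified types with an ordinary char string:

    #include <stdint.h>
    #include <stdio.h>
    
    int main(void)
    {
        int8_t       exact = -100;  /* exactly 8 bits, or it won't compile */
        int_least8_t least =  100;  /* narrowest type with at least 8 bits */
        int_fast8_t  fast  =  100;  /* "fastest" type with at least 8 bits */
    
        char text[] = "hello";      /* char stays the string element type */
    
        printf("%s: %d %d %d\n", text, exact, (int)least, (int)fast);
        return 0;
    }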

  • 2020-12-13 22:11

    int8_t is only appropriate for code that requires a signed integer type that is exactly 8 bits wide and that should not compile if there is no such type. Such requirements are far rarer than the number of questions about int8_t and its brethren suggests. Most size requirements are that the type have at least a particular number of bits: signed char works just fine if you need at least 8 bits, and so does int_least8_t.
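
    A hedged sketch of that distinction: the function below needs only "at least 8 bits" and so compiles everywhere, while the int8_t declaration is rejected, by design, on a platform lacking an exact 8-bit type:

    #include <stdint.h>
    
    /* Needs at least 8 bits: signed char is guaranteed to hold -127..127,
       so this is portable to every conforming implementation. */
    signed char clamp(int v)
    {
        if (v >  127) return  127;
        if (v < -127) return -127;
        return (signed char)v;
    }
    
    /* Needs exactly 8 bits: this line fails to compile on a platform with
       no 8-bit two's-complement type, which is the intended behaviour. */
    int8_t exactly_eight = 0;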

  • 2020-12-13 22:15

    The use of int8_t is perfectly good in some circumstances, specifically when the type is used for calculations where a signed 8-bit value is required: calculations involving strictly sized data, e.g. values defined by external requirements to be exactly 8 bits in the result. (I used pixel colour levels in a comment above, but that would really be uint8_t, as negative pixel colours usually don't exist, except perhaps in YUV-type colour spaces.)
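
    As an aside, a sketch of that kind of strictly sized calculation (the blend function and its names are made up for illustration):

    #include <stdint.h>
    
    /* Average two 8-bit pixel channel values; widening to unsigned avoids
       overflow, and the result is squeezed back into exactly 8 bits. */
    uint8_t blend(uint8_t a, uint8_t b)
    {
        return (uint8_t)(((unsigned)a + (unsigned)b) / 2u);
    }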

    The type int8_t should NOT be used as a replacement for char in strings. This can lead to compiler errors (or warnings, but we don't really want to have to deal with compiler warnings either). For example:

    int8_t *x = "Hello, World!\n";
    
    printf(x);
    

    may well compile fine on compiler A, but give errors or warnings on compiler B for mixing signed and unsigned char values, or because int8_t isn't even defined in terms of a char type. That's just like expecting

    int *ptr = "Foo";
    

    to compile in a modern compiler...

    In other words, int8_t SHOULD be used instead of char if you are using 8-bit data for calculation. It is incorrect to wholesale-replace every char with int8_t, as the two are far from guaranteed to be the same type.

    If there is a need to use char for string/text/etc., and for some reason plain char is too vague (it can be signed or unsigned, etc.), then using typedef char mychar; or something like that is the way to go, as in the sketch below. (It's probably possible to find a better name than mychar!)
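
    Something along these lines (the alias name is made up, as the answer itself concedes):

    /* A hypothetical project-wide alias for "character data", so the
       intent is explicit even though the underlying type is plain char. */
    typedef char mychar;
    
    mychar *greeting(void)
    {
        return "Hello, World!\n";   /* fine: a string literal is a char array */
    }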

    Edit: I should point out that whether you agree with this or not, I think it would be rather foolish to simply walk up to whoever is in charge of this "principle" at the company, point at a post on SO and say "I think you're wrong". Try to understand what the motivation is. There may be more to it than meets the eye.
