int8_t is a typedef for an integer type with the required characteristics: pure 2's-complement representation, no padding bits, and a size of exactly 8 bits. For most (perhaps all) compilers, that means it's going to be a typedef for signed char. (Because of a quirk in the definition of the term signed integer type, it cannot be a typedef for plain char, even if char happens to be signed.)
The >>
operator treats character types specially. Reading a character reads a single input character, not sequence of characters representing some integer value in decimal. So if the next input character is '0'
, the value read will be the character value '0'
, which is probably 48.
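For illustration, here's a minimal sketch of that behavior (assuming a typical implementation where int8_t is signed char and an ASCII-based character set):

    #include <cstdint>
    #include <iostream>
    #include <sstream>

    int main() {
        std::istringstream in("42");   // the two characters '4' and '2'

        std::int8_t n = 0;
        in >> n;                       // extracts a single character, not the number 42

        // Prints 52 (the code for '4') on an ASCII system, not 42.
        std::cout << static_cast<int>(n) << '\n';
    }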
Since a typedef creates an alias for an existing type, not a new distinct type, there's no way for the >> operator to know that you want to treat int8_t as an integer type rather than as a character type.
The problem is that in most implementations there is no 8-bit integer type that's not a character type.
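You can confirm that the two names denote the same type with a quick check; this sketch assumes an implementation where int8_t is signed char (it will fail to compile where that isn't the case):

    #include <cstdint>
    #include <type_traits>

    // If this assertion holds, operator>> has no way to tell an int8_t
    // argument apart from a signed char argument.
    static_assert(std::is_same<std::int8_t, signed char>::value,
                  "int8_t is signed char on this implementation");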
The only workaround is to read into an int variable and then convert to int8_t (with range checks if you need them).
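For example, a minimal sketch of that workaround, reading from std::cin with a range check:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        int temp;
        if (std::cin >> temp
            && temp >= std::numeric_limits<std::int8_t>::min()
            && temp <= std::numeric_limits<std::int8_t>::max()) {
            std::int8_t value = static_cast<std::int8_t>(temp);
            std::cout << "read " << static_cast<int>(value) << '\n';
        } else {
            std::cerr << "bad input or out of range for int8_t\n";
        }
    }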
Incidentally, int8_t is a signed type; the corresponding unsigned type is uint8_t, which has a range of 0..255.
(One more consideration: if CHAR_BIT > 8, which is permitted by the standard, then neither int8_t nor uint8_t will be defined at all.)