Question
A tiny piece of code drives me crazy but hopefully you can prevent me from jumping out of the window. Look here:
#include <iostream>
#include <cstdint>

int main()
{
    int8_t i = 65;
    int8_t j;
    std::cout << "i = " << i << std::endl; // the 'A' is ok, same as uchar
    std::cout << "Now type in a value for j (use 65 again): " << std::endl;
    std::cin >> j;
    std::cout << "j = " << j << std::endl;
    if (i != j)
        std::cout << "What is going on here?????" << std::endl;
    else
        std::cout << "Everything ok." << std::endl;
    return 0;
}
If I use int instead of int8_t, everything is ok. I need these as 8-bit unsigned integers, not bigger. And by the way, with unsigned char it's the same behaviour - of course - as with int8_t.
Anyone with a hint?
Answer 1:
int8_t is a typedef for an integer type with the required characteristics: pure 2's-complement representation, no padding bits, a size of exactly 8 bits. For most (perhaps all) compilers, that means it's going to be a typedef for signed char. (Because of a quirk in the definition of the term signed integer type, it cannot be a typedef for plain char, even if char happens to be signed.)
The >> operator treats character types specially. Reading a character reads a single input character, not a sequence of characters representing some integer value in decimal. So if the next input character is '0', the value read will be the character value '0', which is probably 48.
Since a typedef creates an alias for an existing type, not a new distinct type, there's no way for the >> operator to know that you want to treat int8_t as an integer type rather than as a character type.
The problem is that in most implementations there is no 8-bit integer type that's not a character type.
The only workaround is to read into an int variable and then convert to int8_t (with range checks if you need them).
Incidentally, int8_t is a signed type; the corresponding unsigned type is uint8_t, which has a range of 0..255.
(One more consideration: if CHAR_BIT > 8, which is permitted by the standard, then neither int8_t nor uint8_t will be defined at all.)
Answer 2:
int8_t and uint8_t are almost certainly character types (see: Are int8_t and uint8_t intended to behave like a character?), so std::cin >> j will read a single character from stdin and interpret it as a character, not as a number.
Answer 3:
int8_t is likely the same as char, which means cin >> j will simply read a single character ('6') from the input and store it in j.
Answer 4:
int8_t is defined as a typedef name for signed char. So operator >> used with an object of type int8_t behaves the same way as it would when used with an object of type signed char.
Answer 5:
The _t types aren't first-class types; they are typedef aliases that observe certain constraints, e.g. int8_t is a type that can store a signed, 8-bit value.
On most systems, this means they are typedef'd to signed char. And because they are a typedef and not a first-class type, you are invoking the character overloads of operator<< and operator>>.
When you input "65", the character overload of operator>> consumes the '6' and places its ASCII value, 54, into the variable j.
To work around this you'll need to use a different type, possibly the easiest method being just to use a larger integer type and apply constraints and then cast down:
#include <cstdint>
#include <iostream>
#include <limits>

int8_t fetchInt8(const char* prompt) {
    int in = 0;
    for (;;) { // loop until a valid value is entered
        std::cout << prompt << ": ";
        std::cin >> in;
        if (in >= std::numeric_limits<int8_t>::min()
            && in <= std::numeric_limits<int8_t>::max()) {
            std::cout << "You entered: " << in << '\n';
            break; // exit the loop
        }
        std::cerr << "Error: Invalid number for an int8\n";
    }
    return static_cast<int8_t>(in);
}
Note that int8_t is signed, which means it stores -128 through +127. If you want only positive values, use the uint8_t type.
Source: https://stackoverflow.com/questions/24617889/why-does-int8-t-and-user-input-via-cin-shows-strange-result