I'm learning about function overloading in C++ and came across this:
#include <iostream>
using namespace std;

void display(int a)
{
    cout << "int" << endl;
}
void display(unsigned a)
{
    cout << "unsigned" << endl;
}
int main()
{
    int i = -2147483648;
    cout << i << endl; // will display -2147483648
    display(-2147483648);
}
From what I understood, any value given in the int range (in my case int is 4 bytes) will call display(int), and any value outside this range will be ambiguous (since the compiler cannot decide which function to call). This holds for the complete range of int values except its minimum value, -2147483648, where compilation fails with the error:

call of overloaded 'display(long int)' is ambiguous
But assigning the same value to an int and printing it gives -2147483648. I'm quite confused by this behavior.
Why is this behavior observed only when the most negative number is passed? (The behavior is the same if a short is used with -32768; in fact, it occurs in any case where the negative number and the positive number have the same binary representation.)
Compiler used: g++ (GCC) 4.8.5
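For concreteness, here is a small sketch of which calls compile under this setup (my own illustration, assuming 32-bit int and 64-bit long; not part of the original question):

#include <iostream>
using namespace std;

void display(int a)      { cout << "int" << endl; }
void display(unsigned a) { cout << "unsigned" << endl; }

int main()
{
    display(2147483647);      // int literal: calls display(int)
    display(-2147483647);     // unary minus on an int literal: display(int)
    // display(2147483648);   // long literal: ambiguous, does not compile
    // display(-2147483648);  // also a long expression: ambiguous

    int i = -2147483648;      // the long value is converted to int here
    display(i);               // i has type int: calls display(int)
}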
This is a very subtle error. What you are seeing is a consequence of there being no negative integer literals in C++. If we look at [lex.icon], we get that an integer-literal

integer-literal:
    decimal-literal integer-suffix_opt
    [...]
can be a decimal-literal:

decimal-literal:
    nonzero-digit
    decimal-literal '_opt digit

where digit is [0-9] and nonzero-digit is [1-9],
and the suffix part can be one of u, U, l, L, ll, or LL. Nowhere in here does it include - as part of the decimal literal.
In §2.13.2, we also have:
An integer literal is a sequence of digits that has no period or exponent part, with optional separating single quotes that are ignored when determining its value. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits.
Which means the - in -2147483648 is the unary operator -. That means -2147483648 is actually treated as -1 * (2147483648). Since 2147483648 is one too many for your int, the literal is given the type long int, and the ambiguity comes from that type not matching either overload.
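You can verify the literal's type directly (a minimal sketch, assuming a platform where long is 64 bits, as with the OP's g++ on Linux; on Windows the literal would be long long instead):

#include <type_traits>

// 2147483648 does not fit in a 32-bit int, so the literal gets type
// long int, and unary minus does not change that type.
static_assert(std::is_same<decltype(2147483648), long int>::value,
              "the literal itself is long int");
static_assert(std::is_same<decltype(-2147483648), long int>::value,
              "negating it still yields long int");

int main() {}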
If you want to get the minimum or maximum value for a type in a portable manner, you can use:

std::numeric_limits<type>::min(); // or max()
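For example (a small self-contained sketch, not from the original answer):

#include <iostream>
#include <limits>

void display(int a)      { std::cout << "int\n"; }
void display(unsigned a) { std::cout << "unsigned\n"; }

int main()
{
    // Unlike the literal expression, this value has type int,
    // so overload resolution is unambiguous.
    display(std::numeric_limits<int>::min());             // prints "int"
    std::cout << std::numeric_limits<int>::min() << '\n'; // -2147483648
}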
The expression -2147483648 is actually applying the - operator to the constant 2147483648. On your platform, int can't store 2147483648, so it must be represented by a larger type. Therefore, the expression -2147483648 is not deduced to be signed int but a larger signed type, signed long int.
Since you do not provide an overload for long int, the compiler is forced to choose between two overloads that are both equally valid conversions, and so it issues a compiler error about an ambiguous overload.
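One way to make the call well-formed (a sketch under the assumption that you control the overload set; not from the original answers) is to add a long overload, or to cast the argument explicitly:

#include <iostream>

void display(int a)      { std::cout << "int\n"; }
void display(unsigned a) { std::cout << "unsigned\n"; }
void display(long a)     { std::cout << "long\n"; }

int main()
{
    display(-2147483648);                     // exact match now: prints "long"
    display(static_cast<int>(-2147483648L));  // force int explicitly: prints "int"
}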
Expanding on others' answers

To clarify why the OP is confused, first consider the signed int binary representation of 2147483647:

0111 1111 1111 1111 1111 1111 1111 1111   (0x7FFFFFFF = 2147483647)

Next, add one to this number, giving another signed int, -2147483648 (which the OP wishes to use):

1000 0000 0000 0000 0000 0000 0000 0000   (0x80000000 = -2147483648 in two's complement)

Finally, we can see why the OP is confused when -2147483648 compiles to a long int instead of a signed int, since it clearly fits in 32 bits.

But, as the other answers mention, the unary operator - is applied after resolving 2147483648, which is a long int and does NOT fit in 32 bits.
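A short demonstration of those bit patterns (my own sketch, assuming 32-bit int and two's complement):

#include <bitset>
#include <iostream>
#include <limits>

int main()
{
    int max = std::numeric_limits<int>::max(); //  2147483647
    int min = std::numeric_limits<int>::min(); // -2147483648

    // The same 32 bits, differing only in the sign bit:
    std::cout << std::bitset<32>(max) << '\n'; // 0111...1
    std::cout << std::bitset<32>(min) << '\n'; // 1000...0
}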
Source: https://stackoverflow.com/questions/45469214/why-does-the-most-negative-int-value-cause-an-error-about-ambiguous-function-ove