Consider the following:
struct A {
    A(float) { }
    A(int) { }
};

int main() {
    A{1.1}; // error: ambiguous
}
This fails to compile.
A problem lies in the fact that narrowing conversions are detected based on values, not just types, and there are very complex ways to generate values at compile time in C++.
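As a quick illustration (a minimal sketch; the names v, w, a, and b are placeholders, and the usual 8-bit unsigned char is assumed), the very same int-to-unsigned char conversion is accepted or rejected depending on the value involved:

constexpr int v = 255;
constexpr int w = 256;

unsigned char a{v}; // OK: v is a constant expression and 255 fits in unsigned char
unsigned char b{w}; // error: narrowing, 256 does not fit in unsigned char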
Blocking narrowing conversions is a good thing. Making the overload resolution of C++ even more complex than it already is would be a bad thing.
Ignoring narrowing-conversion rules when determining overload resolution (which keeps overload resolution purely about types), and then erroring out when the selected overload results in a narrowing conversion, keeps overload resolution from becoming even more complex, while still providing a way to detect and prevent narrowing conversions.
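For instance (a minimal sketch; the type B is hypothetical), when only one constructor is viable by type, overload resolution selects it and the narrowing check only fires afterwards:

struct B {
    B(int) { }
};

int main() {
    B{1.1}; // B(int) is selected on types alone; the program is then
            // ill-formed because double -> int is a narrowing conversion
}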
Two examples where an error only appears after a candidate has already been chosen are template functions that fail "late", during instantiation, and copy-list-initialization (where explicit constructors are considered, but if one is chosen, you get an error). Similarly, having those failures impact overload resolution would make overload resolution even more complex than it already is.
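Both precedents fit in a short sketch (the names f and C are hypothetical; each marked line is rejected only after overload resolution has already picked its candidate):

// A template that fails "late": f<int> wins overload resolution,
// and only then does instantiating its body fail.
template <class T>
void f(T t) { t.foo(); }
void f(long) { }

// Copy-list-initialization: the explicit constructor is a candidate,
// but choosing it makes the program ill-formed.
struct C {
    explicit C(int) { }
    C(long) { }
};

int main() {
    f(1);      // f<int>(int) beats f(long); instantiation then errors
    C c = {1}; // C(int) beats C(long); being explicit, it is rejected
}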
Now, one might ask: why not fold narrowing conversions purely into the type system?
Making narrowing conversions purely type-based would not be viable. Such a change could break huge amounts of "legacy" code that the compiler could prove to be valid. The effort required to sweep a code base is far more worthwhile when most of the errors are actual errors, and not the new compiler version being a jerk.
unsigned char buff[] = {0xff, 0x00, 0x1f};
This would fail under a type-based narrowing rule, as 0xff is of type int, and such code is very common. Had such code required pointless modification of the int literals into unsigned char values, odds are the sweep would have ended with us setting a flag to tell the compiler to shut up about the stupid error.
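Under the actual value-based rule, the same pattern compiles untouched; only an initializer whose value genuinely does not fit is rejected (a small sketch, with the out-of-range literal added purely for illustration):

unsigned char ok[]  = {0xff, 0x00, 0x1f}; // fine: every value fits in unsigned char
unsigned char bad[] = {0x1ff};            // error: 0x1ff (511) does not fit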