The C++ Core Guidelines have a narrow cast that throws if the cast changes the value. Looking at the Microsoft implementation of the library (the GSL):
    // narrow() : checked way to cast to a narrower type
    template <class T, class U>
    T narrow(U u)
    {
        T t = narrow_cast<T>(u);
        if (static_cast<U>(t) != u)
            gsl::details::throw_exception(narrowing_error());
        if (!details::is_same_signedness<T, U>::value && ((t < T{}) != (u < U{})))
            gsl::details::throw_exception(narrowing_error());
        return t;
    }
This is checking for overflow. Let's look at

    auto foo = narrow<int>(std::numeric_limits<unsigned int>::max())

Here T will be int and U will be unsigned int. So T t = narrow_cast<T>(u); will store -1 in t. When you cast that back in if (static_cast<U>(t) != u), the -1 converts back to std::numeric_limits<unsigned int>::max(), so the check passes. This isn't a value-preserving cast, though: std::numeric_limits<unsigned int>::max() doesn't fit in an int, so the result of the conversion is implementation-defined (only since C++20 is it guaranteed to wrap modulo 2^N). So then we move on to

    if (!details::is_same_signedness<T, U>::value && ((t < T{}) != (u < U{})))

and since the signednesses aren't the same we evaluate

    (t < T{}) != (u < U{})

which is

    (-1 < 0) != (really_big_number < 0)
    == true != false
    == true

So we throw an exception.
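A minimal sketch of that walkthrough using the GSL; it assumes a two's-complement platform where the conversion yields -1:

    #include <gsl/gsl> // Microsoft GSL: https://github.com/microsoft/GSL
    #include <iostream>
    #include <limits>

    int main()
    {
        const unsigned int u = std::numeric_limits<unsigned int>::max();

        // narrow_cast is just an unchecked static_cast; on the usual
        // two's-complement platforms u comes out as -1.
        const int t = gsl::narrow_cast<int>(u);
        std::cout << t << '\n'; // -1

        // First check: the round trip restores u, so it passes.
        std::cout << std::boolalpha << (static_cast<unsigned int>(t) == u) << '\n'; // true

        // Second check: t is negative and u is not, so narrow throws.
        try {
            auto foo = gsl::narrow<int>(u);
            (void)foo;
        } catch (const gsl::narrowing_error&) {
            std::cout << "narrowing_error\n";
        }
    }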
If we go even farther and wrap all the way back around so that t becomes a positive number, then the second check would pass, but the first one will fail: t would be positive, and casting it back to the source type gives that same positive value, which isn't equal to the original.
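The text doesn't pin down a concrete type for this wrap-around case; here is a sketch with a narrower target (std::int8_t and the value 300 are my choices) where the first check is the one that fires:

    #include <gsl/gsl>
    #include <cstdint>
    #include <iostream>

    int main()
    {
        const unsigned int u = 300; // does not fit into int8_t [-128, 127]

        // 300 wraps past the negative range back to positive: 300 - 256 == 44
        // (implementation-defined before C++20, but this is the usual result).
        const std::int8_t t = gsl::narrow_cast<std::int8_t>(u);
        std::cout << static_cast<int>(t) << '\n'; // 44: positive, same sign as u

        // The signedness check passes (both values are non-negative), but the
        // round-trip check fails: casting back gives 44, not 300, so narrow throws.
        try {
            gsl::narrow<std::int8_t>(u);
        } catch (const gsl::narrowing_error&) {
            std::cout << "narrowing_error\n";
        }
    }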
    if (!details::is_same_signedness<T, U>::value && ((t < T{}) != (u < U{}))) // <-- ???
The above check is for making sure that differing signedness doesn't lead us astray.
The first part checks whether it might be an issue at all and is included as an optimization (a sketch of it follows), so let's get to the point.
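That first part boils down to comparing std::is_signed for the two types; roughly, reconstructed from the name rather than taken from the GSL source verbatim:

    #include <type_traits>

    // Roughly what details::is_same_signedness checks: do T and U agree on signedness?
    template <class T, class U>
    struct is_same_signedness
        : std::integral_constant<bool, std::is_signed<T>::value == std::is_signed<U>::value>
    {
    };

    static_assert(!is_same_signedness<int, unsigned int>::value,
                  "int and unsigned int differ in signedness");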
As an example, take UINT_MAX (the biggest unsigned int there is) and cast it to signed. Assuming INT_MAX == UINT_MAX / 2 (which is very likely, though not quite guaranteed by the standard), the result will be (signed)-1, or just -1, a negative number.
Casting it back results in the original value, so it passes the first check; but -1 is not itself the same value as UINT_MAX, and this check is what catches the error.
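A few lines to make that concrete; the static_assert encodes the "very likely" assumption, and the conversion itself is implementation-defined before C++20, though -1 is what common platforms produce:

    #include <climits>
    #include <iostream>

    int main()
    {
        static_assert(INT_MAX == UINT_MAX / 2, "unusual integer layout");

        const int t = static_cast<int>(UINT_MAX); // (signed)-1 on common platforms
        std::cout << t << '\n';                   // -1: the value has changed...

        // ...yet the round trip back to unsigned restores UINT_MAX,
        // which is exactly why the first check alone is not enough.
        std::cout << (static_cast<unsigned>(t) == UINT_MAX) << '\n'; // 1
    }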