Let's say I've got a function that accepts a 64-bit integer, and I want to call it with a double with arbitrary numeric value (i.e. it may be very large in magnitude).
It turns out this is simpler to do than I thought. Thanks to Michael O'Reilly for the basic idea of this solution.
The heart of the matter is figuring out whether the truncated double will be representable as an int64_t. You can do this easily using std::frexp:
#include <cmath>
#include <cstdint>
#include <limits>

static constexpr int64_t kint64min = std::numeric_limits<int64_t>::min();
static constexpr int64_t kint64max = std::numeric_limits<int64_t>::max();

int64_t SafeCast(double d) {
  // We must special-case NaN, for which the logic below doesn't work.
  if (std::isnan(d)) {
    return 0;
  }
  // Find the exponent exp such that
  //   d == x * 2^exp
  // for some x with abs(x) in [0.5, 1.0). Note that this implies that the
  // magnitude of d is strictly less than 2^exp.
  //
  // If d is infinite, the call to std::frexp is legal but the contents of exp
  // are unspecified.
  int exp;
  std::frexp(d, &exp);
  // If the magnitude of d is strictly less than 2^63, the truncated version
  // of d is guaranteed to be representable. The only representable integer
  // for which this is not the case is kint64min, but it is covered by the
  // logic below.
  if (std::isfinite(d) && exp <= 63) {
    return static_cast<int64_t>(d);
  }
  // Handle infinities and finite numbers with magnitude >= 2^63.
  return std::signbit(d) ? kint64min : kint64max;
}
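For illustration, here's how SafeCast behaves on a few inputs. This is a quick sketch of mine (not from the original answer), assuming IEEE-754 doubles and the SafeCast definition above:

#include <cstdio>
#include <limits>

int main() {
  const double inf = std::numeric_limits<double>::infinity();
  const double nan = std::numeric_limits<double>::quiet_NaN();
  std::printf("%lld\n", static_cast<long long>(SafeCast(3.9)));     // 3: truncates toward zero
  std::printf("%lld\n", static_cast<long long>(SafeCast(-1e300)));  // kint64min: clamped below
  std::printf("%lld\n", static_cast<long long>(SafeCast(inf)));     // kint64max: clamped above
  std::printf("%lld\n", static_cast<long long>(SafeCast(nan)));     // 0: the NaN special case
}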
boost::numeric_cast, that's how.
http://www.boost.org/doc/libs/1_56_0/libs/numeric/conversion/doc/html/boost_numericconversion/improved_numeric_cast__.html
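Note that by default boost::numeric_cast throws on out-of-range values rather than clamping, so to get the clamp-to-limit behavior discussed here you'd catch the overflow exceptions yourself. A rough sketch (BoostSafeCast is my own wrapper name, not part of Boost; I also check NaN up front, since I wouldn't rely on the library's default range check to catch it):

#include <cmath>
#include <cstdint>
#include <limits>
#include <boost/numeric/conversion/cast.hpp>

int64_t BoostSafeCast(double d) {
  // NaN handled separately; comparisons against NaN are all false, so a
  // range check alone may not reject it.
  if (std::isnan(d)) return 0;
  try {
    return boost::numeric_cast<int64_t>(d);
  } catch (const boost::numeric::positive_overflow&) {
    return std::numeric_limits<int64_t>::max();
  } catch (const boost::numeric::negative_overflow&) {
    return std::numeric_limits<int64_t>::min();
  }
}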
How about:
constexpr uint64_t weird_high_limit = (double)kint64max == (double)(kint64max-1);
int64_t clamped = (d >= weird_high_limit + kint64max)? kint64max: (d <= kint64min)? kint64min: int64_t(d);
I think this takes care of all the edge cases. If d < (double)kint64max, then (exact)d <= (exact)kint64max. The proof proceeds by contradiction, using the fact that (double)kint64max is either the next higher or the next lower representable value.
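Folded into a self-contained function, the idea looks roughly like this (a sketch under my own assumptions; ClampCast is my name for it, and I've added a NaN check, which the two-line version above omits, so that the final conversion stays defined):

#include <cmath>
#include <cstdint>
#include <limits>

int64_t ClampCast(double d) {
  static constexpr int64_t kint64min = std::numeric_limits<int64_t>::min();
  static constexpr int64_t kint64max = std::numeric_limits<int64_t>::max();
  // 1 if kint64max and kint64max - 1 convert to the same double, else 0.
  constexpr uint64_t weird_high_limit =
      (double)kint64max == (double)(kint64max - 1);
  // Added: comparisons against NaN are false, so without this check NaN
  // would fall through to int64_t(d), which is undefined.
  if (std::isnan(d)) return 0;
  return (d >= weird_high_limit + kint64max) ? kint64max
         : (d <= kint64min)                  ? kint64min
                                             : int64_t(d);
}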
Here's a solution that doesn't fit all the criteria, along with analysis of why not. See the accepted answer for a better approach.
#include <cmath>
#include <cstdint>
#include <limits>

// Define constants from the question.
static constexpr int64_t kint64min = std::numeric_limits<int64_t>::min();
static constexpr int64_t kint64max = std::numeric_limits<int64_t>::max();

int64_t SafeCast(double d) {
  // Handle NaN specially.
  if (std::isnan(d)) return 0;
  // Handle out of range below.
  if (d <= kint64min) return kint64min;
  // Handle out of range above.
  if (d >= kint64max) return kint64max;
  // At this point we know that d is in range.
  return d;
}
I believe this avoids undefined behavior. There is nothing to be wary of with casting integers to doubles in the range checks. Assuming sanity in the way that non-representable integers are converted (in particular that the mapping is monotonic), by the time the range checks are past, we can be sure that d is in [-2^63, 2^63), as required for the implicit cast at the end of the function.
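To make the integer-to-double conversions in the range checks concrete: on a typical IEEE-754 implementation (an assumption; the standard doesn't guarantee it), kint64min converts exactly, while kint64max rounds up to 2^63. A quick sketch of mine:

#include <cstdint>
#include <cstdio>
#include <limits>

int main() {
  // -2^63 is a power of two, so it is exactly representable as a double.
  double lo = static_cast<double>(std::numeric_limits<int64_t>::min());
  // 2^63 - 1 is not representable; under round-to-nearest it becomes 2^63.
  double hi = static_cast<double>(std::numeric_limits<int64_t>::max());
  std::printf("%.1f\n%.1f\n", lo, hi);
  // Typically prints -9223372036854775808.0 and 9223372036854775808.0.
}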
I'm also confident that this clamps out of range values correctly.
The issue is criterion #2 from the update to my question. Consider an implementation where kint64max is not representable as a double, but kint64max - 1 is. Further, assume that this is an implementation where casting kint64max to a double yields the next lower representable value, i.e. kint64max - 1. Let d be 2^63 - 2 (i.e. kint64max - 1). Then SafeCast(d) is kint64max, because the range check converts kint64max to a double, yielding a value equal to d. But static_cast<int64_t>(d) is kint64max - 1.
Try as I might, I can't find a way to resolve this. Nor can I even write a unit test that checks my criteria without the unit test executing undefined behavior. I feel like there is a deeper lesson to be learned here: something about the impossibility of detecting, from inside the system itself, whether an action in that system will cause undefined behavior, without causing undefined behavior.