Can somebody please explain the following behavior:
static void Main(string[] args)
{
    checked
    {
        double d = -1d + long.MinValue;
        long l = (long)d; // why is there no OverflowException here?
    }
}
You are calculating with double values (-1d). Floating point numbers do not throw on .NET; checked does not influence them in any way.
But the conversion back to long is influenced by checked. 1d + long.MaxValue rounds to 2^63, which does not fit into the range of long. -1d + long.MinValue rounds to -2^63, which does fit into that range. The reason for that is that signed integers have more negative numbers than positive numbers: long.MinValue has no positive equivalent. That's why the negative version of your code happens to fit and the positive version does not.
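A minimal sketch of the difference (the variable names here are mine, just for illustration):

checked
{
    double positive = 1d + long.MaxValue;   // as a double this rounds to 2^63
    double negative = -1d + long.MinValue;  // as a double this rounds to -2^63

    long fits = (long)negative;      // -2^63 == long.MinValue, so the checked cast succeeds
    long overflows = (long)positive; // 2^63 > long.MaxValue, so the checked cast throws OverflowException
}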
The addition operation does not change anything:
Debug.Assert((double)(1d + long.MaxValue) == (double)(0d + long.MaxValue));
Debug.Assert((double)(-1d + long.MinValue) == (double)(-0d + long.MinValue));
The numbers we are calculating with are outside of the range where double is precise. double can represent integers up to 2^53 exactly; beyond that we get rounding errors, and adding one is the same as adding zero. Essentially, you are computing:
var min = (long)(double)(long.MinValue); //does not overflow
var max = (long)(double)(long.MaxValue); //overflows (compiler error)
The add operation is a red herring. It does not change anything.
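The 2^53 limit is easy to verify directly; a small sketch (the literal is 2^53 written out):

double limit = 9007199254740992d;        // 2^53: above this, not every integer has its own double
Console.WriteLine(limit + 1 == limit);   // True: the +1 is lost to rounding
Console.WriteLine(limit - 1 == limit);   // False: below 2^53, integers are still exact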
I assume this is the question:

I don't understand why I'm not getting an OverflowException in the last line of code.

Have a look at this line (the 1d you are using is insignificant and can be removed; the only thing it provided was the conversion to double):
var max = (long)(double)long.MaxValue;
It throws because the closest double representation of int64.MaxValue (I do not know the spec, so I won't go into what "closest" means here) is larger than the largest int64, so it cannot be converted back.
var min = (long)(double)long.MinValue;
For this line, on the other hand, the closest double representation lies between int64.MinValue and 0, so it can be converted back to int64.
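A quick way to see those closest representations (the literals below are 2^63 and -2^63 written out):

Console.WriteLine((double)long.MaxValue == 9223372036854775808d);   // True: rounds up to 2^63, one past long.MaxValue
Console.WriteLine((double)long.MinValue == -9223372036854775808d);  // True: -2^63 is exactly long.MinValue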
What I just said does not hold for all combinations of jitter, hardware and so on, but I'm trying to explain what happens. Remember that in your case the exception is thrown because of the checked keyword; without it, the jitter would just swallow the overflow.
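For example, a sketch of the unchecked variant (the value you get back is unspecified, so treat whatever it prints as illustrative only):

double tooBig = (double)long.MaxValue;   // 2^63, outside the range of long
long silent = unchecked((long)tooBig);   // no exception; the result is an unspecified long
Console.WriteLine(silent);               // what gets printed depends on runtime and hardware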
I would also recommend having a look at BitConverter.GetBytes() to experiment with what happens when you go from double to long and back with large numbers; comparing decimal and double is interesting too :) (The byte representation is the only representation you can trust, by the way; don't rely on the debugger for precision when it comes to double.)
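For example, a sketch of that kind of experiment (the hex in the comments assumes a little-endian machine, which BitConverter.IsLittleEndian can confirm):

double d = long.MaxValue;   // the stored double is really 2^63
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(long.MaxValue)));   // FF-FF-FF-FF-FF-FF-FF-7F
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(d)));               // 00-00-00-00-00-00-E0-43, the IEEE 754 bits of 2^63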
Apparently there is some leeway in the conversion from double to long. If we run the following code:
checked
{
    double longMinValue = long.MinValue;
    var i = 0;
    while (true)
    {
        long test = (long)(longMinValue - i);
        Console.WriteLine("Works for " + i++.ToString() + " => " + test.ToString());
    }
}
It goes up to "Works for 1024 => -9223372036854775808" before failing with an OverflowException, and the printed value -9223372036854775808 never changes as i increases.
If we run the code unchecked, no exception is thrown.
At first glance this behavior does not seem coherent with the documentation on explicit numeric conversions, which says:
When you convert from a double or float value to an integral type, the value is truncated. If the resulting integral value is outside the range of the destination value, the result depends on the overflow checking context. In a checked context, an OverflowException is thrown, while in an unchecked context, the result is an unspecified value of the destination type.
But as the example shows, the exception doesn't occur until i goes past 1024.
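A plausible explanation (assuming standard IEEE 754 doubles) is that the leeway is not in the conversion at all but in the subtraction: around 2^63 adjacent doubles are 2048 apart, so longMinValue - i rounds back to exactly -2^63 (which is long.MinValue and therefore in range) for every i up to 1024, and only from i = 1025 does it round to -2^63 - 2048, which really is out of range:

double longMinValue = long.MinValue;                     // exactly -2^63
Console.WriteLine(longMinValue - 1024 == longMinValue);  // True: the halfway case rounds back to -2^63 (ties to even)
Console.WriteLine(longMinValue - 1025 == longMinValue);  // False: rounds to -2^63 - 2048, below the range of long

Read this way, the conversion itself only ever sees values that are either exactly long.MinValue or 2048 below it, which is consistent with the documentation quoted above.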