While \"we all know\" that x == y
can be problematic, where x
and y
are floating point values, this question is a bit more specific:
When comparing an int and a float, the int is implicitly cast to a float. This ensures the same loss of precision happens on both sides, so the comparison will always be true. As long as you don't disturb the implicit cast or do arithmetic, the equality should hold. For example, if you write this:
bool AlwaysTrue(int i) {
    return i == (float)i;
}
there is an implicit cast, so it's equivalent to this function that should always return true:
bool AlwaysTrue(int i) {
    return (float)i == (float)i;
}
but if you write this:
bool SometimesTrue(int i) {
    return i == (int)(float)i;
}
then there is no longer an implicit cast, and the loss of precision happens only on the right side. The result may be false. Similarly, if you write this:
bool SometimesTrue(int i) {
    return 1 + i == 1 + (float)i;
}
then the loss of precision might not be equivalent on both sides. The result may be false.
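A quick check of all three cases (a minimal sketch; 16777217 is 2^24 + 1, the smallest positive int with no exact float representation):
int i = 16777217; // 2^24 + 1, rounds to 16777216f as a float
Console.WriteLine(i == (float)i);         // True: both sides become 16777216f
Console.WriteLine(i == (int)(float)i);    // False: 16777217 != 16777216
Console.WriteLine(1 + i == 1 + (float)i); // False: 16777218f != 16777216f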
Yes, the comparison will always be true, whatever value the int is. The int will be converted to a float to do the comparison, and the first conversion to float will always give the same result as the second conversion.
Consider:
int x = [any integer value];
float y = x;
float z = x;
The values of y and z will always be the same. If the conversion loses precision, both conversions will lose the precision in exactly the same way.
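For example, even at a value that loses precision (a small sketch):
int x = 16777217;          // 2^24 + 1, not exactly representable as a float
float y = x;               // rounds to 16777216f
float z = x;               // rounds the same way, also 16777216f
Console.WriteLine(y == z); // True, even though precision was lost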
If you convert the float back to an int to do the comparison, that's another matter.
Also, note that even if a specific int value converted to float always results in the same float value, that doesn't mean that the float value has to be unique for that int value. There are int values where (float)x == (float)(x+1) would be true.
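For instance, right at the 2^24 boundary (a minimal sketch):
int x = 16777216; // 2^24: the last int before floats become too coarse
Console.WriteLine((float)x == (float)(x + 1)); // True: both are 16777216f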
The following experiment shows that the answer is that there is no edge case where the equality fails:
static void Main(string[] args)
{
    Parallel.For(int.MinValue, int.MaxValue, (x) =>
    {
        float r = x;
        // Is the following ALWAYS true?
        bool equal = r == x;
        if (!equal) Console.WriteLine("Unequal: " + x);
    });
    Console.WriteLine("Done");
    Console.ReadKey();
    return;
}
Note that in r == x the int is implicitly converted to float, so the experiment compares two identical float values. It does not prove that int -> float and float -> int conversions form a bijection; if you instead convert the float back with if ((int)f != i), the round trip can fail, as the 16777217 example at the bottom shows.
NOTE: the experiment code doesn't actually test the edge case int.MaxValue, because Parallel.For's upper bound (toExclusive) is exclusive, but I tested that value separately and it also passes the test.
My understanding was that floating point calculations are handled by the CPU, which solely determines your precision, and that therefore there is no definite value above which floats lose precision. I had thought that the x86 architecture, for instance, guaranteed a minimum, but I've been proven wrong: a .NET float is IEEE 754 single precision regardless of the CPU, and its 24-bit significand represents every integer exactly up to 2^24 = 16777216, and no further.
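A quick way to locate that boundary (a small sketch, assuming IEEE 754 single precision, which .NET guarantees for float):
// Find the first nonnegative int whose round trip through float changes it.
// This prints 16777217, i.e. 2^24 + 1.
for (int x = 0; ; x++)
{
    if ((int)(float)x != x)
    {
        Console.WriteLine("First loss of precision at: " + x);
        break;
    }
}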
I ran this code without an exception being thrown (unsurprising in hindsight: every Int16 value is far below 2^24, so it is exactly representable as a float):
for (int x = Int16.MinValue; x < Int16.MaxValue; x++)
{
    float r = x;
    if (r != x)
    {
        throw new Exception("Failed at: " + x);
    }
}
I also started the following test but didn't complete it because it took too long; it never threw an exception while it ran. (In fact it never can: in r != x the long is implicitly converted to float, so both sides lose precision in exactly the same way, just as in the int case.)
for (long x = Int64.MinValue; x < Int64.MaxValue; x++)
{
    float r = x;
    if (r != x)
    {
        throw new Exception("Failed at: " + x);
    }
}
I went back and ran your example, with the float cast back to int for the comparison; this was the output:
[Exception: not equal 16777217 ?= 1.677722E+07 ?= 16777216]
for (int i = 0; i < int.MaxValue; i++)
{
    float f = i;
    if ((int)f != i) throw new Exception("not equal " + i + " ?= " + f + " ?= " + (int)f);
}
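That failure point is no accident: 16777217 is 2^24 + 1. A float's 24-bit significand can represent every integer exactly only up to 2^24 = 16777216; beyond that, only every second integer is representable (then every fourth, and so on), so the round trip through float rounds 16777217 to 16777216.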