While \"we all know\" that x == y
can be problematic, where x
and y
are floating point values, this question is a bit more specific:>
When comparing an int and a float, the int is implicitly cast to a float. This ensures the same loss of precision happens on both sides, so the comparison will always be true. As long as you don't disturb the implicit cast or do any arithmetic, the equality should hold. For example, if you write this:
bool AlwaysTrue(int i) {
return i == (float)i;
}
there is an implicit cast on the left-hand side, so it's equivalent to this function, which should always return true:
bool AlwaysTrue(int i) {
return (float)i == (float)i;
}
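
To make this concrete, here is a minimal standalone C++ sketch (the choice of C++ and the specific value 16777217 = 2^24 + 1 are illustrative assumptions; the sketch also assumes a 32-bit int, IEEE-754 single-precision float, and no extended-precision intermediates):

#include <cassert>

bool AlwaysTrue(int i) {
    return i == (float)i; // both sides are compared as float
}

int main() {
    int i = 16777217;                  // 2^24 + 1 is not exactly representable as a float
    assert((float)i == 16777216.0f);   // the cast really does lose precision
    assert(AlwaysTrue(i));             // but both sides lose it the same way, so still true
}

Both assertions should pass: the conversion rounds 16777217 down to 16777216.0f, but the implicit cast on the left rounds to exactly the same value.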
But if you write this:
bool SometimesTrue(int i) {
return i == (int)(float)i;
}
then there is no longer an implicit cast: both sides are ints, and the loss of precision only happens on the right side, where the value makes a round trip through float. The result may be false.
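Under the same illustrative assumptions as the sketch above, 16777217 is a value for which this version returns false:

#include <cassert>

bool SometimesTrue(int i) {
    return i == (int)(float)i; // only the right side goes through float
}

int main() {
    assert(SometimesTrue(1));          // small values survive the round trip
    assert(!SometimesTrue(16777217));  // (int)(float)16777217 is 16777216
}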
Similarly, if you write this:
bool SometimesTrue(int i) {
return 1 + i == 1 + (float)i;
}
then the addition is done in integer arithmetic on the left side but in floating-point arithmetic on the right side, so the loss of precision might not be equivalent on both sides. The result may be false.
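
Again under the same illustrative assumptions, 16777217 shows the difference: on the left, 1 + 16777217 is computed exactly as the int 16777218 and then converts to 16777218.0f, while on the right, (float)16777217 rounds to 16777216.0f before the addition, and the sum 1.0f + 16777216.0f rounds back down to 16777216.0f:

#include <cassert>

bool SometimesTrue(int i) {
    return 1 + i == 1 + (float)i; // int addition on the left, float addition on the right
}

int main() {
    assert(SometimesTrue(1));          // 2 == 2.0f
    assert(!SometimesTrue(16777217));  // 16777218.0f on the left, 16777216.0f on the right
}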