Properly testing two floating-point numbers for equality is something that a lot of people, including me, don't fully understand.
Float and double both store numbers in the binary equivalent of scientific notation, with a fixed number of significant bits. If the infinitely precise result of a calculation is not exactly representable, the actual result is the closest value that is exactly representable.
There are two big pitfalls with this. First, almost every calculation picks up a small rounding error, so a computed value rarely matches the exact mathematical result. Second, because each intermediate result is rounded, the order of operations matters:

(a + b) + c

is not necessarily the same as

a + (b + c)
You need to pick a tolerance for comparisons that is larger than the expected rounding error, but small enough that it is acceptable in your program to treat numbers that are within the tolerance as being equal.
If no such tolerance exists, it means you are using the wrong floating-point type, or should not be using floating point at all. 32-bit IEEE 754 has such limited precision that it can be really challenging to find an appropriate tolerance; usually, 64-bit is a much better choice.