From The Floating-Point-Guide:
Why don’t my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all. When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
In your case, the rounding errors happen when the values you entered are converted by parseFloat().
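
For example, in a JavaScript console (a minimal sketch; the printed digits assume IEEE 754 double precision, which JavaScript numbers use):

```js
// 0.1 and 0.2 are already rounded to the nearest binary double before any math happens.
console.log((0.1).toPrecision(20)); // "0.10000000000000000555"
console.log((0.2).toPrecision(20)); // "0.20000000000000001110"

// The two small rounding errors add up and push the sum just past 0.3.
console.log(0.1 + 0.2);             // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);     // false

// parseFloat() performs the same decimal-to-binary conversion,
// so parsed user input carries the same rounding error.
console.log(parseFloat("0.1") + parseFloat("0.2")); // 0.30000000000000004
```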
Why do other calculations like 0.1 + 0.4 work correctly?
In that case, the result (0.5) can be represented exactly as a floating-point number, and it’s possible for rounding errors in the input numbers to cancel each other out. But that can’t necessarily be relied upon (e.g. when those two numbers were first stored in differently sized floating-point representations, the rounding errors might not offset each other).
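
A small sketch of that point (nearlyEqual and its eps tolerance are illustrative choices, not a built-in API):

```js
// The rounding errors in 0.1 and 0.4 happen to cancel:
// the sum of their binary representations rounds to exactly 0.5,
// which is representable exactly in binary.
console.log(0.1 + 0.4);          // 0.5
console.log(0.1 + 0.4 === 0.5);  // true

// Since that cancellation can't be relied upon, comparisons are usually
// done with a tolerance rather than strict equality.
const nearlyEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps; // eps chosen for illustration
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(0.1 + 0.2 === 0.3);           // false
```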
In other cases like 0.1 + 0.3, the result actually isn’t really 0.4, but close enough that 0.4 is the shortest number that is closer to the result than to any other floating-point number. Many languages then display that number instead of converting the actual result back to the closest decimal fraction.
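
Sketching the 0.1 + 0.3 case in a JavaScript console (again assuming IEEE 754 doubles):

```js
const sum = 0.1 + 0.3;

// The default conversion prints the shortest decimal that maps back
// to the same binary value, which here is simply "0.4".
console.log(sum);                   // 0.4
console.log(sum === 0.4);           // true: the rounded sum lands on the same double as 0.4

// Asking for more digits shows the stored value isn't exactly the decimal 0.4.
console.log(sum.toPrecision(20));   // "0.40000000000000002220"
console.log((0.4).toPrecision(20)); // "0.40000000000000002220"
```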