Question
0.1 + 0.2
// => 0.30000000000000004
0.2 + 0.2
// => 0.4
0.3 + 0.2
// => 0.5
I understand it has to do with floating points but what exactly is happening here?
As per @Eric Postpischil's comment, this isn't a duplicate:
That one only involves why “noise” appears in one addition. This one asks why “noise” appears in one addition and does not appear in another. That is not answered in the other question. Therefore, this is not a duplicate. In fact, the reason for the difference is not due to floating-point arithmetic per se but is due to ECMAScript 2017 7.1.12.1 step 5.
Answer 1:
When converting Number values to strings in JavaScript, the default is to use just enough digits to uniquely distinguish the Number value.¹ This means that when a number is displayed as “0.1”, that does not mean it is exactly 0.1, just that it is closer to 0.1 than any other Number value is, so displaying just “0.1” tells you it is this unique Number value, which is 0.1000000000000000055511151231257827021181583404541015625. We can write this in hexadecimal floating-point notation as 0x1.999999999999ap-4. (The p-4 means to multiply the preceding hexadecimal numeral by two to the power of −4, so mathematicians would write it as 1.999999999999A₁₆ • 2⁻⁴.)
Here are the values that result when you write 0.1, 0.2, and 0.3 in source code, and they are converted to JavaScript’s Number format (a short sketch after the list shows one way to inspect these values):
- 0.1 → 0x1.999999999999ap-4 = 0.1000000000000000055511151231257827021181583404541015625.
- 0.2 → 0x1.999999999999ap-3 = 0.200000000000000011102230246251565404236316680908203125.
- 0.3 → 0x1.3333333333333p-2 = 0.299999999999999988897769753748434595763683319091796875.
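These stored values can be inspected from JavaScript itself. The following is a small illustrative sketch (not part of the original answer); it assumes a standard IEEE-754 double engine, and dump is just a made-up helper name:
const view = new DataView(new ArrayBuffer(8));
function dump(x) {
  view.setFloat64(0, x); // store x as a 64-bit IEEE-754 double (big-endian by default)
  const bits = view.getBigUint64(0).toString(16).padStart(16, "0");
  console.log(bits, x.toFixed(55)); // raw bit pattern in hex, then a long decimal expansion
}
dump(0.1); // => 3fb999999999999a and the 0.1000000000000000055511… value from the list above
dump(0.2); // => 3fc999999999999a and the 0.2000000000000000111022… value from the list above
dump(0.3); // => 3fd3333333333333 and the 0.2999999999999999888977… value from the list above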
When we evaluate 0.1 + 0.2, we are adding 0x1.999999999999ap-4 and 0x1.999999999999ap-3. To do that manually, we can first adjust the latter by multiplying its significand (fraction part) by 2 and subtracting one from its exponent, producing 0x3.3333333333334p-4. (You have to do this arithmetic in hexadecimal. A₁₆ • 2 = 14₁₆, so the last digit is 4, and the 1 is carried. Then 9₁₆ • 2 = 12₁₆, and the carried 1 makes it 13₁₆. That produces a 3 digit and a 1 carry.) Now we have 0x1.999999999999ap-4 and 0x3.3333333333334p-4, and we can add them. This produces 0x4.ccccccccccccep-4. That is the exact mathematical result, but it has too many bits for the Number format. We can only have 53 bits in the significand. There are 3 bits in the 4 (100₂) and 4 bits in each of the 13 trailing digits, so that is 55 bits total. The computer has to remove 2 bits and round the result. The last digit, E₁₆, is 1110₂, so the trailing 10₂ bits have to go. These bits are exactly ½ of the previous bit, so it is a tie between rounding up and rounding down. The rule for breaking ties says to round so that the last bit is even, so we round up, making the 11₂ bits become 100₂. The E₁₆ becomes 10₁₆, causing a carry to the next digit. The result is 0x4.cccccccccccd0p-4, which equals 0.3000000000000000444089209850062616169452667236328125.
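As a quick check of that hand calculation, the rounded sum’s full decimal expansion can be printed directly (an illustrative sketch, assuming a spec-conformant toFixed):
console.log((0.1 + 0.2).toFixed(52));
// => "0.3000000000000000444089209850062616169452667236328125", the value derived above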
Now we can see why printing 0.1 + 0.2 shows “0.30000000000000004” instead of “0.3”. For the Number value 0.299999999999999988897769753748434595763683319091796875, JavaScript shows “0.3”, because that Number is closer to 0.3 than any other Number is. It differs from 0.3 by about 1.1 at the 17th digit after the decimal point, whereas the result of the addition we have differs from 0.3 by about 4.4 at the 17th digit. So:
- The source code 0.3 produces 0.299999999999999988897769753748434595763683319091796875 and is printed as “0.3”.
- The source code 0.1 + 0.2 produces 0.3000000000000000444089209850062616169452667236328125 and is printed as “0.30000000000000004” (see the sketch after this list).
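Both facts are easy to observe; a brief illustrative sketch (not from the original answer):
console.log(String(0.3));        // => "0.3"
console.log(String(0.1 + 0.2));  // => "0.30000000000000004"
console.log(0.1 + 0.2 === 0.3);  // => false: they are two different Number values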
Now consider 0.2 + 0.2. The result of this is 0.40000000000000002220446049250313080847263336181640625. That is the Number closest to 0.4, so JavaScript prints it as “0.4”.
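Because the sum is the same Number that the literal 0.4 produces, the two compare equal (illustrative sketch):
console.log(0.2 + 0.2 === 0.4);  // => true: both are the Number closest to 0.4
console.log(String(0.2 + 0.2));  // => "0.4"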
Finally, consider 0.3 + 0.2. We are adding 0x1.999999999999ap-3 and 0x1.3333333333333p-2. Again we adjust the second operand, producing 0x2.6666666666666p-3. Then adding produces 0x4.0000000000000p-3, which is 0x1p-1, which is ½ or 0.5. So it is printed as “0.5”.
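Here the sum is exactly representable, so comparing with the literal 0.5 also succeeds (illustrative sketch):
console.log(0.3 + 0.2 === 0.5);  // => true: the rounding errors cancel and the sum is exactly 0.5
console.log(String(0.3 + 0.2));  // => "0.5"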
Another way of looking at it:
- The values for source code 0.1 and 0.2 are both a little above 0.1 and 0.2, respectively, and adding them produced a number above 0.3, with errors that reinforced, so the total error was big enough to push the result far enough away from 0.3 that JavaScript showed the error (the sketch after this list makes these small deviations visible).
- When adding 0.2 + 0.2, the errors again reinforce. However, the total error in this case is not enough to push the result so far away from 0.4 that JavaScript displays it differently.
- The value for source code 0.3 is a little under 0.3. When added to 0.2, which is a little over 0.2, the errors canceled, yielding exactly 0.5.
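Those error directions can be made visible by asking for a few extra decimal digits (illustrative sketch; the outputs shown assume a spec-conformant toFixed):
console.log((0.1).toFixed(20));  // => "0.10000000000000000555", a little above 0.1
console.log((0.2).toFixed(20));  // => "0.20000000000000001110", a little above 0.2
console.log((0.3).toFixed(20));  // => "0.29999999999999998890", a little below 0.3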
Footnote
¹ This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification.
Answer 2:
It's not a problem of JavaScript itself or of any other language, but rather a problem of communication between two very different parties, humans and machines, each with its own way of thinking. Something that seems perfectly natural to us (like the word "tree": when we say it, we build some abstract representation of a tree in our heads) is completely unnatural to a computer, and the only thing a machine can do to refer to the word "tree" is to store it in some representation it can easily work with (any representation really; many years ago someone picked binary codes via the ASCII table, and it still seems to be holding up). So from then on the machine has a representation of the word "tree" stored somewhere, let's say it's 00000001, but it doesn't know anything beyond that: for you it has some meaning, for the machine it's just a bunch of zeros and a one. If we then say that every word may use at most 7 bits, because otherwise the computer works too slowly, the machine would save 0000000, cutting off the last bit, and it would still understand the word "tree" in some way. The same goes with numbers: 0.3 is natural for you, but when you see 10101010001010101010101010111 you immediately want to convert it to the decimal system to understand what it stands for, because it's not natural for you to read numbers in binary. And here comes the main point: conversion.
And thus, for you the math looks like this:
.1 + .2 => .3
For a machine that uses the binary system, it looks like this:
The number 1/10 can be expressed as 0.1 in decimal, but it is 0.0001100110011001100110011001100110011001100110011001… in binary. Because the standard allows only 53 bits of significand per number, everything from bit 54 onward gets rounded away.
x = 0.1 converted to binary 0.000110011001100110011… and trimmed to 53 significant bits
y = 0.2 converted to binary 0.001100110011001100110… and trimmed to 53 significant bits
z = x + y => 0.010011001100110011001… rounded to 53 significant bits
result = 0.3000000000000000444089209850062616169452667236328125, converted back to decimal from z
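Here is a small sketch of how those binary digits could be printed from JavaScript (binaryFraction is a made-up helper for this illustration; doubling and subtracting 1 are exact operations here, so the digits shown are the true bits of the stored value):
// Print the first `bits` binary digits of the fractional part of x (0 <= x < 1).
function binaryFraction(x, bits) {
  let out = "0.";
  for (let i = 0; i < bits; i++) {
    x *= 2;                                     // doubling a double is exact
    if (x >= 1) { out += "1"; x -= 1; } else { out += "0"; }
  }
  return out;
}
console.log(binaryFraction(0.1, 24));        // => "0.000110011001100110011001"
console.log(binaryFraction(0.2, 24));        // => "0.001100110011001100110011"
console.log(binaryFraction(0.1 + 0.2, 24));  // => "0.010011001100110011001100"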
It's like converting euros to dollars: sometimes you will gain half a cent and sometimes you will pay half a cent more for the dollar representation, because there is no coin smaller than a cent. There could be, but people would go insane with their pockets.
So the real question is: why does 0.1 (converted to binary and trimmed) + 0.2 (converted to binary and trimmed) return unpredictable float results in JavaScript, while 0.2 (converted to binary and trimmed) + 0.3 (converted to binary and trimmed) does not?
and the answer is: because of the math and the amount of computational power given to the calculation. It is analogous to why pi + 1 gives a strange result while 2 + 1 does not: you probably plugged in some representation of pi such as 3.1415, because you don't have enough mathematical power (or it isn't worth the effort) to produce an exact result.
To read more, a great piece of the math is worked through here: https://medium.com/dailyjs/javascripts-number-type-8d59199db1b6
Source: https://stackoverflow.com/questions/50778431/why-does-0-1-0-2-return-unpredictable-float-results-in-javascript-while-0-2