Question
Can someone please explain what's happening below? (I'm using Python 3.3.)
1. >>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
Decimal('0.0')
2. >>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
Decimal('2.775557561565156540423631668E-17')
3. >>> Decimal(0.1 + 0.1 + 0.1 - 0.3)
Decimal('5.5511151231257827021181583404541015625E-17')
I know it has to do with floating-point limitations, and I'd be glad if someone could explain why:
- What do the quotation marks (" ") have to do with the difference between examples 1 and 2 above?
- Why does 2 produce a different answer from 3, given that neither uses quotation marks?
Answer 1:
In a nutshell, neither 0.1 nor 0.3 can be represented exactly as a float:
In [3]: '%.20f' % 0.1
Out[3]: '0.10000000000000000555'
In [4]: '%.20f' % 0.3
Out[4]: '0.29999999999999998890'
Consequently, when you use 0.1 or 0.3 to initialize Decimal(), the resulting value is only approximately 0.1 or 0.3.
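For example, Decimal(0.1) captures the float's exact binary value, which is merely close to one tenth:

In [7]: from decimal import Decimal

In [8]: Decimal(0.1)
Out[8]: Decimal('0.1000000000000000055511151231257827021181583404541015625')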
Using strings ("0.1"
or "0.3"
) does not have this problem.
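A Decimal built from a string stores exactly the digits you wrote:

In [9]: Decimal("0.1")
Out[9]: Decimal('0.1')

In [10]: Decimal("0.3")
Out[10]: Decimal('0.3')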
Finally, your second example produces a different result from your third example because, even though both involve implicit rounding, the rounding happens at different precisions. In example 2, each float is converted to Decimal exactly, and the subsequent additions and subtraction are rounded to the decimal context's default 28 significant digits. In example 3, all of the arithmetic is done in binary floating point (rounded to 53 significant bits), and only the final float is converted, exactly, to Decimal.
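A quick way to see where each rounding step happens, assuming the default decimal context:

In [11]: from decimal import getcontext

In [12]: getcontext().prec   # Decimal arithmetic rounds results to 28 significant digits
Out[12]: 28

In [13]: 0.1 + 0.1 + 0.1 - 0.3   # float arithmetic rounds each step to 53 significant bits
Out[13]: 5.551115123125783e-17

In [14]: Decimal(0.1 + 0.1 + 0.1 - 0.3)   # example 3: the exact Decimal value of that float
Out[14]: Decimal('5.5511151231257827021181583404541015625E-17')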
Source: https://stackoverflow.com/questions/14572101/using-decimal-in-python