Using “Decimal” in Python

梦谈多话 2021-01-23 05:24

Can someone please explain what's happening below? (I'm using Python 3.3.)

1. >>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
   Decimal('0.0')

2. >>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
   Decimal('2.775557561565156540423631668E-17')

3. >>> 0.1 + 0.1 + 0.1 - 0.3
   5.551115123125783e-17
1 Answer
  • 2021-01-23 06:08

    In a nutshell, neither 0.1 nor 0.3 can be represented exactly as a binary float:

    In [3]: '%.20f' % 0.1
    Out[3]: '0.10000000000000000555'
    
    In [4]: '%.20f' % 0.3
    Out[4]: '0.29999999999999998890'
    

    Consequently, when you use the float 0.1 or 0.3 to initialize Decimal(), the resulting Decimal stores the float's exact binary value, which is only approximately 0.1 or 0.3.
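
    You can see this by passing the float straight to Decimal() (a quick interactive check; output is from CPython, where a float is an IEEE 754 double):

    >>> from decimal import Decimal
    >>> Decimal(0.1)  # holds the float's exact binary value, not one tenth
    Decimal('0.1000000000000000055511151231257827021181583404541015625')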

    Using strings ("0.1" or "0.3") does not have this problem.
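
    For contrast, a minimal check of the string constructor (same interpreter):

    >>> Decimal("0.1")  # parsed from text: exactly one tenth
    Decimal('0.1')
    >>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")
    True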

    Finally, your second example produces a different result from your third example because, even though both involve implicit rounding, they round to a different number of digits: Decimal arithmetic rounds each result to the context precision (28 significant decimal digits by default), while float arithmetic rounds to 53 significant binary digits.
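
    A short sketch of where each rounding happens, assuming the default decimal context (outputs from CPython 3):

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec  # Decimal results are rounded to 28 significant decimal digits
    28
    >>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
    Decimal('2.775557561565156540423631668E-17')
    >>> 0.1 + 0.1 + 0.1 - 0.3  # float results are rounded to 53 significant bits
    5.551115123125783e-17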
