Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?

名媛妹妹 2020-11-21 23:41
double r = 11.631;
double theta = 21.4;

In the debugger, these are shown as 11.631000000000000 and 21.399999618530273 respectively.

14 Answers
  • 2020-11-21 23:55

    Refer to General Decimal Arithmetic.

    Also take care when comparing floats; see this answer for more information.

  • 2020-11-21 23:56

    Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
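
    That hypothesis is easy to check: narrowing 21.4 to a float and widening it back to double (which is exact) reproduces the digits in the title. A minimal C# sketch, assuming a .NET console app:

    using System;

    class FloatNarrowingDemo
    {
        static void Main()
        {
            double theta = 21.4;        // nearest double to 21.4
            float f = (float)theta;     // narrowed to single precision

            // Widening the float back to double is exact, so G17 shows the
            // precise value the float actually holds.
            Console.WriteLine(((double)f).ToString("G17")); // 21.399999618530273
            Console.WriteLine(theta.ToString("G17"));       // 21.399999999999999
        }
    }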

  • 2020-11-21 23:57

    Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
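
    A minimal sketch of the trade-off, assuming C# (the question's syntax suggests C# or a close relative):

    using System;

    class DecimalVsDouble
    {
        static void Main()
        {
            decimal d = 21.4m;   // decimal stores base-10 digits exactly
            double x = 21.4;     // double stores the nearest base-2 fraction

            Console.WriteLine(d);                  // 21.4
            Console.WriteLine(x.ToString("G17"));  // 21.399999999999999

            // Casting back to floating point reintroduces the error:
            Console.WriteLine(((double)d).ToString("G17"));  // 21.399999999999999
        }
    }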

    Alternatively, you can get over it and learn to work with the limited precision of floating point arithmetic. For example, you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you choose that defines your tolerance. For example, you might regard two values as equal if they are within 0.0001 of each other.

    It occurs to me that you could use operator overloading to make the epsilon comparison transparent, which would be very cool; a sketch follows below.


    For mantissa-exponent representations, epsilon must be scaled to the magnitude of the operands to remain within the representable precision: for a number N, epsilon = N / 10E+14.

    System.Double.Epsilon is the smallest representable positive value for the Double type; it is far too small for this purpose. Read Microsoft's advice on equality testing.
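
    Here is a sketch of what that transparent epsilon comparison could look like; the Approx type and the exact scaling policy are illustrative, not a standard API:

    using System;

    // Illustrative wrapper whose equality is a scaled-epsilon comparison.
    readonly struct Approx
    {
        private readonly double _value;
        public Approx(double value) { _value = value; }

        public static bool operator ==(Approx a, Approx b)
        {
            // Scale the tolerance to the operands' magnitude (N / 10E+14).
            double epsilon = Math.Max(Math.Abs(a._value), Math.Abs(b._value)) / 10e14;
            return Math.Abs(a._value - b._value) <= epsilon;
        }
        public static bool operator !=(Approx a, Approx b) { return !(a == b); }

        public override bool Equals(object obj) { return obj is Approx other && this == other; }
        public override int GetHashCode() { return 0; } // tolerant equality cannot hash by value
    }

    class ApproxDemo
    {
        static void Main()
        {
            Console.WriteLine(0.1 + 0.2 == 0.3);                         // False
            Console.WriteLine(new Approx(0.1 + 0.2) == new Approx(0.3)); // True
        }
    }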

  • 2020-11-21 23:58

    Dangers of computer arithmetic

  • 2020-11-21 23:59

    I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:

    See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no exact representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.

    So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
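
    You can watch the same repeating-fraction effect in C# by asking for enough digits; the G17 format forces a round-trip-precise rendering:

    using System;

    class RepeatingBinaryDemo
    {
        static void Main()
        {
            // 0.1 is a repeating fraction in binary, so the stored double
            // is only the nearest 53-bit approximation.
            Console.WriteLine(0.1.ToString("G17"));         // 0.10000000000000001
            Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004
            Console.WriteLine(0.1 + 0.2 == 0.3);            // False
        }
    }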

  • 2020-11-22 00:05

    This is partly platform-specific - and we don't know what platform you're using.

    It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.

    Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
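
    In the same spirit, here is a sketch of the idea (not the article's actual class): decode the bits of a finite double and print the exact decimal value it stores:

    using System;
    using System.Numerics;

    class ExactDoublePrinter
    {
        // Every finite double is mantissa * 2^exp, and 1/2^k always
        // terminates in decimal, so the exact value can be printed.
        static string ExactValue(double d)
        {
            long bits = BitConverter.DoubleToInt64Bits(d);
            bool negative = bits < 0;
            int biasedExp = (int)((bits >> 52) & 0x7FF);
            long mantissa = bits & 0xFFFFFFFFFFFFF;

            if (biasedExp == 0) biasedExp = 1;   // subnormal: no implicit bit
            else mantissa |= 1L << 52;           // normal: restore implicit 1

            int exp = biasedExp - 1075;          // value = mantissa * 2^exp
            BigInteger m = mantissa;
            string sign = negative ? "-" : "";

            if (exp >= 0)
                return sign + (m << exp).ToString();

            // mantissa / 2^k == mantissa * 5^k / 10^k: k decimal digits.
            int k = -exp;
            string digits = (m * BigInteger.Pow(5, k)).ToString().PadLeft(k + 1, '0');
            return sign + digits.Insert(digits.Length - k, ".").TrimEnd('0').TrimEnd('.');
        }

        static void Main()
        {
            Console.WriteLine(ExactValue(21.4));
            // 21.39999999999999857891452847979962825775146484375
        }
    }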
