Printing double without losing precision


How do you print a double to a stream so that when it is read in you don't lose precision?

I tried:

std::stringstream ss;

double v = 0.1 * 0.1;
ss << std::setprecision(std::numeric_limits<double>::digits10) << v;
// (the rest of the original snippet was lost in formatting; presumably the
//  value is then read back from ss and no longer compares equal to v)

7 Answers
  • 2020-12-01 04:03

    Don't print floating-point values in decimal if you don't want to lose precision. Even if you print enough digits to represent the number exactly, not all implementations have correctly-rounded conversions to/from decimal strings over the entire floating-point range, so you may still lose precision.

    Use hexadecimal floating point instead. In C:

    printf("%a\n", yourNumber);
    

    C++0x provides the hexfloat manipulator for iostreams that does the same thing (on some platforms, using the std::hex modifier has the same result, but this is not a portable assumption).

    Using hex floating point is preferred for several reasons.

    First, the printed value is always exact. No rounding occurs in writing or reading a value formatted in this way. Beyond the accuracy benefits, this means that reading and writing such values can be faster with a well tuned I/O library. They also require fewer digits to represent values exactly.
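
    For illustration, here is a minimal round-trip sketch (my own, not from the original answer) using %a to write and strtod to read back; strtod has accepted the hexadecimal format since C99/C++11, and the variable names are just placeholders:

        #include <cstdio>
        #include <cstdlib>

        int main() {
            double v = 0.1 * 0.1;

            char buf[64];
            std::snprintf(buf, sizeof buf, "%a", v);   // exact hexadecimal representation of v
            double u = std::strtod(buf, nullptr);      // parses the %a format exactly

            std::printf("%s -> %s\n", buf, u == v ? "round trip exact" : "precision lost");
        }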

  • 2020-12-01 04:03

    Thanks to ThomasMcLeod for pointing out the error in my table computation

    Guaranteeing a round-trip conversion using 15, 16, or 17 digits is only possible in comparatively few cases. The number 15.95 comes from taking 2^53 (1 implicit bit + 52 bits in the significand/"mantissa"), which comes out to an integer in the range 10^15 to 10^16 (closer to 10^16).

    Consider a double-precision value x with an exponent of 0, i.e. it falls into the floating point range 1.0 <= x < 2.0. The implicit bit will mark the 2^0 component (part) of x. The highest explicit bit of the significand denotes the next lower exponent, -1, i.e. the 2^-1 or 0.5 component.

    The next bit represents 0.25, the ones after that 0.125, 0.0625, 0.03125, 0.015625 and so on (see the table below). The value 1.5 is thus represented by two components added together: the implicit bit denoting 1.0 and the highest explicit significand bit denoting 0.5.

    This illustrates that from the implicit bit downward you have 52 additional, explicit bits to represent possible components, the smallest of which is 2^(0 - 52) = 2^-52. According to the table below, that value has quite a bit more than 15.95 significant digits (37, to be exact). To put it another way, the smallest number in the 2^0 range that is != 1.0 itself is 2^0 + 2^-52, i.e. 1.0 plus the 2^-52 entry below, which is (exactly) 1.0000000000000002220446049250313080847263336181640625, a value I count as being 53 significant digits long. With 17-digit formatting "precision" the number will display as 1.0000000000000002, and even this depends on the library converting correctly.

    So maybe "round-trip conversion in 17 digits" is not really a concept that is valid (enough).

    2^ -1 = 0.5000000000000000000000000000000000000000000000000000
    2^ -2 = 0.2500000000000000000000000000000000000000000000000000
    2^ -3 = 0.1250000000000000000000000000000000000000000000000000
    2^ -4 = 0.0625000000000000000000000000000000000000000000000000
    2^ -5 = 0.0312500000000000000000000000000000000000000000000000
    2^ -6 = 0.0156250000000000000000000000000000000000000000000000
    2^ -7 = 0.0078125000000000000000000000000000000000000000000000
    2^ -8 = 0.0039062500000000000000000000000000000000000000000000
    2^ -9 = 0.0019531250000000000000000000000000000000000000000000
    2^-10 = 0.0009765625000000000000000000000000000000000000000000
    2^-11 = 0.0004882812500000000000000000000000000000000000000000
    2^-12 = 0.0002441406250000000000000000000000000000000000000000
    2^-13 = 0.0001220703125000000000000000000000000000000000000000
    2^-14 = 0.0000610351562500000000000000000000000000000000000000
    2^-15 = 0.0000305175781250000000000000000000000000000000000000
    2^-16 = 0.0000152587890625000000000000000000000000000000000000
    2^-17 = 0.0000076293945312500000000000000000000000000000000000
    2^-18 = 0.0000038146972656250000000000000000000000000000000000
    2^-19 = 0.0000019073486328125000000000000000000000000000000000
    2^-20 = 0.0000009536743164062500000000000000000000000000000000
    2^-21 = 0.0000004768371582031250000000000000000000000000000000
    2^-22 = 0.0000002384185791015625000000000000000000000000000000
    2^-23 = 0.0000001192092895507812500000000000000000000000000000
    2^-24 = 0.0000000596046447753906250000000000000000000000000000
    2^-25 = 0.0000000298023223876953125000000000000000000000000000
    2^-26 = 0.0000000149011611938476562500000000000000000000000000
    2^-27 = 0.0000000074505805969238281250000000000000000000000000
    2^-28 = 0.0000000037252902984619140625000000000000000000000000
    2^-29 = 0.0000000018626451492309570312500000000000000000000000
    2^-30 = 0.0000000009313225746154785156250000000000000000000000
    2^-31 = 0.0000000004656612873077392578125000000000000000000000
    2^-32 = 0.0000000002328306436538696289062500000000000000000000
    2^-33 = 0.0000000001164153218269348144531250000000000000000000
    2^-34 = 0.0000000000582076609134674072265625000000000000000000
    2^-35 = 0.0000000000291038304567337036132812500000000000000000
    2^-36 = 0.0000000000145519152283668518066406250000000000000000
    2^-37 = 0.0000000000072759576141834259033203125000000000000000
    2^-38 = 0.0000000000036379788070917129516601562500000000000000
    2^-39 = 0.0000000000018189894035458564758300781250000000000000
    2^-40 = 0.0000000000009094947017729282379150390625000000000000
    2^-41 = 0.0000000000004547473508864641189575195312500000000000
    2^-42 = 0.0000000000002273736754432320594787597656250000000000
    2^-43 = 0.0000000000001136868377216160297393798828125000000000
    2^-44 = 0.0000000000000568434188608080148696899414062500000000
    2^-45 = 0.0000000000000284217094304040074348449707031250000000
    2^-46 = 0.0000000000000142108547152020037174224853515625000000
    2^-47 = 0.0000000000000071054273576010018587112426757812500000
    2^-48 = 0.0000000000000035527136788005009293556213378906250000
    2^-49 = 0.0000000000000017763568394002504646778106689453125000
    2^-50 = 0.0000000000000008881784197001252323389053344726562500
    2^-51 = 0.0000000000000004440892098500626161694526672363281250
    2^-52 = 0.0000000000000002220446049250313080847263336181640625
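
    For anyone who wants to reproduce the table: each power of two is exactly representable in a double, so printing it with enough fractional digits yields its exact decimal expansion. A quick sketch (it assumes the C library prints the requested number of digits correctly, which modern implementations do):

        #include <cmath>
        #include <cstdio>

        int main() {
            // ldexp(1.0, -n) computes 2^-n exactly; 52 fractional digits are enough
            // to show the full expansion of every entry down to 2^-52.
            for (int n = 1; n <= 52; ++n)
                std::printf("2^%3d = %.52f\n", -n, std::ldexp(1.0, -n));
        }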
    
  • 2020-12-01 04:04

    It's not correct to say "floating point is inaccurate", although I admit that's a useful simplification. If we used base 8 or 16 in real life then people around here would be saying "base 10 decimal fraction packages are inaccurate, why did anyone ever cook those up?".

    The problem is that integral values translate exactly from one base into another, but fractional values do not, because they represent fractions of the integral step and only a few of them are used.

    Floating point arithmetic is technically perfectly accurate. Every calculation has one and only one possible result. There is a problem, though: most decimal fractions have base-2 representations that repeat. In fact, in the sequence 0.01, 0.02, ... 0.99, only 3 values have exact binary representations (0.25, 0.50, and 0.75). The other 96 values repeat and therefore are obviously not represented exactly.

    Now, there are a number of ways to write and read back floating point numbers without losing a single bit. The idea is to avoid trying to express the binary number with a base 10 fraction.

    • Write them as binary. These days everyone implements the IEEE-754 format, so as long as you choose a byte order and both write and read in that byte order, the numbers will be portable.
    • Write them as 64-bit integer values. Here you can use the usual base 10, because you are representing the 64-bit aliased integer, not the 52-bit fraction. (A sketch of this appears below.)
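
    A sketch of the second option (my own illustration, not part of the original answer), aliasing the bit pattern through memcpy:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        int main() {
            double v = 0.1 * 0.1;

            // Serialize: copy the 64-bit pattern into an integer and print it in base 10.
            std::uint64_t bits;
            std::memcpy(&bits, &v, sizeof bits);
            std::printf("%llu\n", (unsigned long long)bits);

            // Deserialize: copy the integer's bits back into a double.
            double u;
            std::memcpy(&u, &bits, sizeof u);
            std::printf("%s\n", u == v ? "bit-for-bit identical" : "mismatch");
        }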

    You can also just write more decimal fraction digits. Whether this is bit-for-bit accurate will depend on the quality of the conversion libraries and I'm not sure I would count on perfect accuracy (from the software) here. But any errors will be exceedingly small and your original data certainly has no information in the low bits. (None of the constants of physics and chemistry are known to 52 bits, nor has any distance on earth ever been measured to 52 bits of precision.) But for a backup or restore where bit-for-bit accuracy might be compared automatically, this obviously isn't ideal.

  • 2020-12-01 04:06

    The easiest way (for IEEE 754 double) to guarantee a round-trip conversion is to always use 17 significant digits. But that has the disadvantage of sometimes including unnecessary noise digits (0.1 → "0.10000000000000001").
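
    In C++11 terms that digit count is available as std::numeric_limits<double>::max_digits10; a minimal sketch of using it with the question's stringstream (my addition, not from the original answer):

        #include <iostream>
        #include <limits>
        #include <sstream>

        int main() {
            std::stringstream ss;
            double v = 0.1 * 0.1;
            ss.precision(std::numeric_limits<double>::max_digits10);   // 17 for IEEE 754 double
            ss << v;

            double u;
            ss >> u;
            std::cout << ss.str() << (u == v ? " (round trip exact)" : " (precision lost)") << '\n';
        }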

    An approach that's worked for me is to sprintf the number with 15 digits of precision, then check if atof gives you back the original value. If it doesn't, try 16 digits. If that doesn't work, use 17.
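
    Something along these lines (a sketch only; shortest_repr is a made-up name, and it assumes the usual C-library round-trip behaviour):

        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>

        // Format d with the fewest digits (15, 16, or 17) that survive a round trip.
        static void shortest_repr(double d, char *buf, std::size_t len) {
            for (int digits = 15; digits <= 17; ++digits) {
                std::snprintf(buf, len, "%.*g", digits, d);
                if (std::strtod(buf, nullptr) == d)
                    return;                       // this many digits reproduce d exactly
            }
        }

        int main() {
            char buf[32];
            shortest_repr(0.1, buf, sizeof buf);
            std::printf("%s\n", buf);             // typically prints 0.1
        }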

    You might want to try David Gay's algorithm (used in Python 3.1 to implement float.__repr__).

  • 2020-12-01 04:10

    I got interested in this question because I'm trying to (de)serialize my data to & from JSON.

    I think I have a clearer explanation (with less hand waving) for why 17 decimal digits are sufficient to reconstruct the original number losslessly:

    [figure: three number lines showing the original base-2 ticks, the rounded base-10 ticks, and the reconstructed base-2 ticks, with a bad reconstruction highlighted in red]

    Imagine 3 number lines:
    1. for the original base 2 number
    2. for the rounded base 10 representation
    3. for the reconstructed number (same as #1 because both in base 2)

    When you convert to base 10, graphically, you choose the tick on the 2nd number line closest to the tick on the 1st. Likewise when you reconstruct the original from the rounded base 10 value.

    The critical observation I had was that in order to allow exact reconstruction, the base 10 step size (quantum) has to be < the base 2 quantum. Otherwise, you inevitably get the bad reconstruction shown in red.

    Take the specific case of when the exponent is 0 for the base2 representation. Then the base2 quantum will be 2^-52 ~= 2.22 * 10^-16. The closest base 10 quantum that's less than this is 10^-16. Now that we know the required base 10 quantum, how many digits will be needed to encode all possible values? Given that we're only considering the case of exponent = 0, the dynamic range of values we need to represent is [1.0, 2.0). Therefore, 17 digits would be required (16 digits for fraction and 1 digit for integer part).

    For exponents other than 0, we can use the same logic:

        exponent   base-2 quantum   base-10 quantum   dynamic range            digits needed
        -------------------------------------------------------------------------------------
            1          2^-51            10^-16        [2, 4)                        17
            2          2^-50            10^-16        [4, 8)                        17
            3          2^-49            10^-15        [8, 16)                       17
          ...
           32          2^-20            10^-7         [2^32, 2^33)                  17
         1022         9.98e291          1.0e291       [4.49e307, 8.99e307)          17
    

    While not exhaustive, the table shows the trend that 17 digits are sufficient.
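
    A small sanity check of those rows (my own sketch; it just recomputes the two quanta for a few of the exponents):

        #include <cmath>
        #include <cstdio>

        int main() {
            // For a binary exponent e, the base-2 quantum (ULP) is 2^(e-52);
            // the base-10 quantum is the largest power of ten strictly below it.
            const int exponents[] = {1, 2, 3, 32, 1022};
            for (int e : exponents) {
                double quantum2 = std::ldexp(1.0, e - 52);
                int p = (int)std::floor(std::log10(quantum2));
                if (std::pow(10.0, p) >= quantum2) --p;   // want strictly smaller
                std::printf("exponent %4d: base-2 quantum %.3g, base-10 quantum 1e%d\n",
                            e, quantum2, p);
            }
        }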

    Hope you like my explanation.

  • 2020-12-01 04:12

    A double has a precision of 52 binary digits, or about 15.95 decimal digits. See http://en.wikipedia.org/wiki/IEEE_754-2008. You need at least 16 decimal digits to record the full precision of a double in all cases. [But see the fourth edit, below.]

    By the way, this means significant digits.

    Answer to OP edits:

    Your floating-point-to-decimal-string runtime is outputting way more digits than are significant. A double can only hold 52 bits of significand (actually 53, if you count the "hidden" 1 that is not stored). That means the resolution is not more than 2^-53 = 1.11e-16.

    For example: 1 + 2 ^ -52 = 1.0000000000000002220446049250313 . . . .

    Those decimal digits, .0000000000000002220446049250313 . . . . are the smallest binary "step" in a double when converted to decimal.

    The "step" inside the double is:

    .0000000000000000000000000000000000000000000000000001 in binary.

    Note that the binary step is exact, while the decimal step is inexact.

    Hence the decimal representation above,

    1.0000000000000002220446049250313 . . .

    is an inexact representation of the exact binary number:

    1.0000000000000000000000000000000000000000000000000001.

    Third Edit:

    The next possible value for a double, which in exact binary is:

    1.0000000000000000000000000000000000000000000000000010

    converts inexactly in decimal to

    1.0000000000000004440892098500626 . . . .

    So all of those extra digits in the decimal are not really significant, they are just base conversion artifacts.
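
    (A quick way to see these values for yourself; my sketch, not part of the original answer:)

        #include <cmath>
        #include <cstdio>

        int main() {
            double x = 1.0 + std::ldexp(1.0, -52);   // 1 + 2^-52, the double just above 1.0
            double y = std::nextafter(x, 2.0);       // the next representable double after x

            std::printf("%.31f\n", x);   // 1.0000000000000002220446049250313...
            std::printf("%.31f\n", y);   // 1.0000000000000004440892098500626...
            std::printf("%.17g\n", x);   // 1.0000000000000002; 17 digits identify it unambiguously
        }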

    Fourth Edit:

    Though a double stores at most 16 significant decimal digits, sometimes 17 decimal digits are necessary to represent the number. The reason has to do with digit slicing.

    As I mentioned above, there are 52 + 1 binary digits in the double. The "+ 1" is an implied leading 1 that is not stored. Taken as an integer, those 53 binary digits form a number between 0 and 2^53 - 1. How many decimal digits are necessary to store such a number? Well, log_10 (2^53 - 1) is about 15.95, so at most 16 decimal digits are necessary. Let's label these d_0 to d_15.

    Now consider that IEEE floating point numbers also have a binary exponent. What happens when we increment the exponent by, say, 2? We have multiplied our 52-bit number, whatever it was, by 4. Now, instead of our 52 binary digits aligning perfectly with our decimal digits d_0 to d_15, we have some significant binary digits represented in d_16. However, since we multiplied by something less than 10, we still have significant binary digits represented in d_0. So our 15.95 decimal digits now occupy d_1 to d_15, plus some upper bits of d_0 and some lower bits of d_16. This is why 17 decimal digits are sometimes needed to represent an IEEE double.

    Fifth Edit

    Fixed numerical errors
