Why do I need 17 significant digits (and not 16) to represent a double?

佛祖请我去吃肉 asked 2020-12-29 08:55

Can someone give me an example of a floating point number (double precision), that needs more than 16 significant decimal digits to represent it?

I have found in this thread the claim that up to 17 significant digits are sometimes needed, but I could not find an example of such a number.

5 Answers
  • 2020-12-29 09:03

    The largest continuous range of integers that can be exactly represented by a double (8-byte IEEE 754) is -2^53 to 2^53 (-9007199254740992 to 9007199254740992). The numbers -2^53 - 1 and 2^53 + 1 cannot be exactly represented by a double.

    Therefore, no integer in that continuous range needs more than 16 significant decimal digits: 2^53 itself is a 16-digit number.

  • 2020-12-29 09:08

    The correct answer is the one by Nemo. Here is a simple Fortran program showing an example of two such numbers; it demonstrates that you need the (es23.16) format to print double-precision numbers if you don't want to lose any precision:

    program test
    implicit none
    integer, parameter :: dp = kind(0.d0)
    real(dp) :: a, b
    a = 1.8014398509481982e+16_dp
    b = 1.8014398509481980e+16_dp
    print *, "First we show, that we have two different 'a' and 'b':"
    print *, "a == b:", a == b, "a-b:", a-b
    print *, "using (es22.15)"
    print "(es22.15)", a
    print "(es22.15)", b
    print *, "using (es23.16)"
    print "(es23.16)", a
    print "(es23.16)", b
    end program
    

    it prints:

    First we show, that we have two different 'a' and 'b':
    a == b: F a-b:   2.0000000000000000     
    using (es22.15)
    1.801439850948198E+16
    1.801439850948198E+16
    using (es23.16)
    1.8014398509481982E+16
    1.8014398509481980E+16
    
  • 2020-12-29 09:09

    Dig into the single- and double-precision basics, wean yourself off the notion of this or that many (16-17) DECIMAL digits, and start thinking in (53) BINARY digits. The necessary examples can be found here on Stack Overflow if you spend some time digging.

    And I fail to see how you can award a best answer to anyone giving a DECIMAL answer without a qualified BINARY explanation. This stuff is straightforward, but it is not trivial.

  • 2020-12-29 09:11

    My other answer was dead wrong.

    #include <stdio.h>
    
    int
    main(int argc, char *argv[])
    {
        unsigned long long n = 1ULL << 53;
        unsigned long long a = 2*(n-1);
        unsigned long long b = 2*(n-2);
        printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
        return 0;
    }
    

    Compile and run to see:

    18014398509481982
    18014398509481980
    0
    

    a and b are just 2*(2^53-1) and 2*(2^53-2).

    Those are 17-digit base-10 numbers. When rounded to 16 digits, they are the same. Yet a and b clearly only need 53 bits of precision to represent in base-2. So if you take a and b and cast them to double, you get your counter-example.

  • 2020-12-29 09:19

    I think the guy on that thread is wrong, and 16 base-10 digits are always enough to represent an IEEE double.

    My attempt at a proof would go something like this:

    Suppose otherwise. Then, necessarily, two distinct double-precision numbers must be represented by the same 16-significant-digit base-10 number.

    But two distinct double-precision numbers must differ by at least one part in 2^53, which is greater than one part in 10^16. And no two numbers differing by more than one part in 10^16 could possibly round to the same 16-significant-digit base-10 number.

    This is not completely rigorous and could be wrong. :-)
