Why does double in C print fewer decimal digits than C++?

一个人的身影 2021-02-02 05:12

I have this code in C where I've declared 0.1 as a double.

#include <stdio.h>

int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}
2 Answers
  •  野的像风
    2021-02-02 06:06

    It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.

    I also see that both numbers are equal to 0.1 within 52 bits of precision, so both are correct.

    That leads to your final question, "How come its decimal interpretation stores more?" It doesn't store more decimals. A double doesn't store any decimals; it stores bits. The decimal digits are generated when the value is printed.
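
    To illustrate that last point, here is a small sketch (assuming an IEEE-754 binary64 `double`, which is what essentially every mainstream platform uses): it prints the raw 64-bit pattern actually stored for 0.1, and then the decimal digits that `printf` generates from those bits.

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double a = 0.1;

        /* Copy the 64 stored bits into an integer so we can display them.
           This is all the double actually holds - no decimal digits. */
        uint64_t bits;
        memcpy(&bits, &a, sizeof bits);
        printf("bits: 0x%016llx\n", (unsigned long long)bits);

        /* The long decimal string below is computed by printf from
           those bits at output time. */
        printf("a is %0.56f\n", a);
        return 0;
    }
    ```

    The 56 digits you see are simply the decimal expansion of the exact binary fraction nearest to 0.1, so C and C++ produce the same digits from the same stored bits.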
