I have this code in C where I've declared 0.1 as a double.
#include <stdio.h>

int main() {
    double a = 0.1;
    printf("a is %0.56f\n", a);
    return 0;
}
It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.
I also see that both numbers are equal to 0.1 to within the 52 bits of precision a double provides, so both are correct.
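To make that concrete, here is a small sketch of my own (assuming the two cases being compared are the short literal 0.1 and the long decimal expansion that printf produces for it). Both literals round to the same nearest double, so they print identical digits and compare equal:

#include <stdio.h>

int main() {
    /* Assumption: the two cases are the short literal and its full
       decimal expansion; both round to the same IEEE 754 double. */
    double short_form = 0.1;
    double long_form  = 0.1000000000000000055511151231257827021181583404541015625;

    printf("short: %0.56f\n", short_form);
    printf("long : %0.56f\n", long_form);
    printf("equal: %d\n", short_form == long_form);   /* prints 1 */
    return 0;
}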
That leads to your final question, "How come its decimal interpretation stores more?". It doesn't store more decimals. A double doesn't store any decimals; it stores bits. The decimal digits are generated when the value is printed.
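As an illustration (my own sketch, not part of the question), you can inspect the raw bits that the double actually stores and then let printf generate the decimal digits from them:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main() {
    double a = 0.1;
    uint64_t bits;

    /* What the double actually stores: 64 bits, split into
       1 sign bit, 11 exponent bits and 52 fraction bits.
       For 0.1 this is typically 0x3fb999999999999a. */
    memcpy(&bits, &a, sizeof bits);
    printf("stored bits: 0x%016llx\n", (unsigned long long)bits);

    /* The decimal digits are generated from those bits at print time. */
    printf("generated  : %0.56f\n", a);
    return 0;
}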