Double precision - decimal places
From what I have read, a value of data type double has an approximate precision of 15 decimal places. However, when I use a number whose decimal representation repeats, such as 1.0/7.0, I find (via the debugger) that the variable holds the value 0.14285714285714285, which is 17 decimal places. I would like to know why it is represented with 17 places internally, when the precision is usually quoted as only ~15 places.

An IEEE double has 53 significand bits (that's the value of DBL_MANT_DIG in <cfloat>). That's approximately 15.95 decimal digits (log10(2^53)); the implementation sets DBL_DIG to 15, not 16, because 15 is the most decimal digits that are guaranteed to survive a round trip through a double. Printing 17 significant digits, on the other hand, is enough to uniquely identify the stored binary value, which is why the debugger shows 17 places.