Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;

I've learned that %g uses the shortest representation. But using the same precision with different conversion specifiers results in different output:

printf("%.16g\n", value); //output: 3122.55
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819

despite all three having the same precision in the format specifier.

The decimal representation 3122.55 cannot be represented exactly in binary floating point. A double-precision binary floating-point value can represent approximately 15 significant figures (note: not decimal places) of a decimal value correctly; beyond that the digits may not be the same, and at the extremes they have no real meaning at all, being merely an artefact of the conversion from the floating-point representation to a string of decimal digits.

The rule for %g is: where P is the precision (or 6 if no precision is specified, or 1 if the precision is zero), and X is the decimal exponent required for E/e-style notation, then:

if P > X >= -4, the conversion is with style f and precision P - 1 - X;
otherwise, the conversion is with style e and precision P - 1.

Trailing zeros are then removed from the fractional part of the result (and the decimal point too, if nothing follows it), unless the # flag is given.
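As a quick sanity check of that rule (my own sketch, not part of the original question or answer): for value = 3122.55 the e-style exponent is X = 3, so %.16g reduces to an f-style conversion with precision 16 - 1 - 3 = 12 followed by trailing-zero removal.

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    /* %.16g: P = 16, X = 3, so P > X >= -4 holds and the conversion is
       f-style with precision P - 1 - X = 12, then trailing zeros drop. */
    printf("%.16g\n", value);   /* 3122.55                              */
    printf("%.12f\n", value);   /* 3122.550000000000 (before stripping) */

    /* %.17g: P = 17, so the f-style precision is 17 - 1 - 3 = 13, and
       the 13th decimal digit is a nonzero 2 that cannot be dropped.    */
    printf("%.17g\n", value);   /* 3122.5500000000002 */
    printf("%.13f\n", value);   /* 3122.5500000000002 */
    return 0;
}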
The decimal value 3122.55 can't be exactly represented in binary floating point. When you write

double value = 3122.55;

you end up with the closest possible value that can be exactly represented. As it happens, that value is exactly 3122.5500000000001818989403545856475830078125.

That value to 16 significant figures is 3122.550000000000. To 17 significant figures, it's 3122.5500000000002. And so those are the representations that %.16g and %.17g give you (with %g also stripping the trailing zeros, so %.16g prints just 3122.55).
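A quick way to see those two roundings directly (a small sketch of my own, not from the answer) is to ask %e for 16 and 17 significant figures, since e-style with precision p always prints p + 1 significant digits:

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    /* e-style precision p prints p + 1 significant figures. */
    printf("%.15e\n", value);   /* 3.122550000000000e+03  (16 sig. figs) */
    printf("%.16e\n", value);   /* 3.1225500000000002e+03 (17 sig. figs) */
    return 0;
}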
Note that the nearest double representation of a decimal number is guaranteed to be accurate to at least 15 significant decimal figures. That's why you need to print 16 or 17 digits before you start seeing these apparent inaccuracies in the output: to any smaller number of significant figures, the double representation is guaranteed to match the original decimal number that you typed.
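Those two thresholds are exposed by <float.h>, so you can query them rather than rely on a rule of thumb. A small sketch (DBL_DECIMAL_DIG requires C11; the %.40f output shown assumes a correctly rounding C library on an IEEE-754 double):

#include <stdio.h>
#include <float.h>

int main(void)
{
    double value = 3122.55;

    /* DBL_DIG (15): any decimal with this many significant digits
       survives a text -> double round trip unchanged.
       DBL_DECIMAL_DIG (17): digits needed so that double -> text ->
       double recovers the exact same value. */
    printf("DBL_DIG = %d, DBL_DECIMAL_DIG = %d\n", DBL_DIG, DBL_DECIMAL_DIG);

    printf("%.15g\n", value);   /* 3122.55 - matches the typed literal  */
    printf("%.17g\n", value);   /* 3122.5500000000002 - round-trippable */

    /* Enough digits to show the full stored value quoted above: */
    printf("%.40f\n", value);   /* 3122.5500000000001818989403545856475830078125 */
    return 0;
}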
One final note: you say that

"I've learned that %g uses the shortest representation."

While this is a popular summary of how %g behaves, it's also wrong. See "What precisely does the %g printf specifier mean?", where I discuss this at length and show an example of %g using scientific notation even though it's 4 characters longer than not using scientific notation would have been.
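For a concrete illustration of that kind of case (a reconstruction of my own; the exact example in the linked answer may differ): with the default precision of 6 and value = 1234567.0, the exponent X = 6 is not smaller than P = 6, so %g switches to scientific notation even though the plain decimal form is both shorter and exact.

#include <stdio.h>

int main(void)
{
    double value = 1234567.0;

    /* Default precision P = 6, exponent X = 6, so P > X fails and
       %g falls back to e-style with precision P - 1 = 5. */
    printf("%g\n", value);     /* 1.23457e+06 - 11 characters, rounded  */
    printf("%.0f\n", value);   /* 1234567     - 7 characters, and exact */
    return 0;
}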
"%g uses the shortest representation."
Floating-point numbers usually aren't stored as a number in base 10, but in base 2 (for performance, size, and practicality reasons). However, whatever the base of your representation, there will always be rational numbers that are not expressible within the arbitrary size limit of the variable that stores them.
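If you want to look at the stored base-2 value directly, the %a conversion (hexadecimal floating point, C99) is handy. A small sketch; the exact digits shown assume an IEEE-754 double and a typical implementation:

#include <stdio.h>

int main(void)
{
    /* %a prints the mantissa in hexadecimal, i.e. the bits actually
       stored, with a power-of-two exponent. */
    printf("%a\n", 3122.55);   /* 0x1.8651999999999ap+11 - 3122.55 has
                                  no finite base-2 expansion            */
    printf("%a\n", 0.25);      /* 0x1p-2 - exactly representable        */
    return 0;
}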
When you specify %.16g, you're saying that you want the shortest representation of the number, given a maximum of 16 significant digits.

If the shortest representation has more than 16 digits, printf will shorten the number string by rounding away the 2 digit at the very end, leaving you with 3122.550000000000, which is actually 3122.55 in its shortest form, explaining the result you obtained.
In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.
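For instance (a trivial sketch of my own), a value whose digit sequence ends in zeros gets trimmed by %g but not by %f:

#include <stdio.h>

int main(void)
{
    double value = 3.5;   /* exactly representable in binary */

    printf("%.10g\n", value);   /* 3.5          - trailing zeros dropped   */
    printf("%.10f\n", value);   /* 3.5000000000 - fixed ten decimal places */
    return 0;
}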
Returning to the original example: when you use %.17g, the 17th significant digit contains a value different from 0 (a 2, in particular), so you end up with the longer number 3122.5500000000002.
"My question is: why does %.16g give the exact number while %.17g can't?"
It's actually %.17g that gives you the exact result (17 significant digits are enough to uniquely identify the stored double), while %.16g gives you only a rounded approximation with an error when compared to the value in memory.

If you want a fixed number of decimal places instead, use %f or %F.
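The difference matters because %f counts digits after the decimal point while %g counts significant digits. A small sketch (values chosen arbitrarily for illustration):

#include <stdio.h>

int main(void)
{
    double small = 0.012345, large = 3122.55;

    /* %.2f: always two digits after the decimal point. */
    printf("%.2f %.2f\n", small, large);   /* 0.01 3122.55 */

    /* %.2g: always two significant digits, switching to e-style
       when the exponent gets too large for the precision. */
    printf("%.2g %.2g\n", small, large);   /* 0.012 3.1e+03 */
    return 0;
}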