Below is the solution code for an exercise from K.N. King's C Programming: A Modern Approach.
#include <stdio.h>

int main(void)
{
    int i;
    float j, x;
    /* ... */
    return 0;
}
The floating-point formats record only a mathematical value. They do not record how much precision the value had when scanned (or otherwise obtained). (For that matter, this is true of the integer formats too.)
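To see this concretely, here is a minimal sketch of my own (not part of the exercise): scanning the same value written with different numbers of digits yields identical float objects.

#include <stdio.h>

int main(void)
{
    float a, b;

    /* Same mathematical value, written with different precision. */
    sscanf("1.5", "%f", &a);
    sscanf("1.50", "%f", &b);

    /* The extra digit is not recorded anywhere; the objects compare equal. */
    printf("%d\n", a == b);   /* prints 1 */
    return 0;
}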
When you display a floating-point value, you have to decide how many digits to display. The C standard defines a default of six digits after the decimal point for the %f specifier [1]:
A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.
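So, with no precision given, six digits appear after the decimal point. A small demonstration (illustrative only, not from the original code):

#include <stdio.h>

int main(void)
{
    /* Precision missing, so it is taken as 6: prints 3.140000 */
    printf("%f\n", 3.14);
    return 0;
}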
You specify the precision by writing the number of digits after a period in the format specification, such as %.2f [2]:
The precision takes the form of a period (.) followed either by an asterisk * (described later) or by an optional decimal integer; if only the period is specified, the precision is taken as zero.
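For example (a sketch of my own; the values are arbitrary):

#include <stdio.h>

int main(void)
{
    printf("%.2f\n", 3.14159);   /* precision 2: prints 3.14 */
    printf("%.f\n", 3.14159);    /* bare period: precision 0, prints 3 */
    return 0;
}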
(The asterisk in the quoted text means the precision is supplied as an int argument passed to printf, as shown below.)
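A sketch of that asterisk form (the variable name digits is my own choice):

#include <stdio.h>

int main(void)
{
    int digits = 3;

    /* The * consumes an int argument that supplies the precision. */
    printf("%.*f\n", digits, 3.14159);   /* prints 3.142 */
    return 0;
}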
Notes
[1] C 2011 (N1570) 7.21.6.1 8.
[2] C 2011 (N1570) 7.21.6.1 4.