Here's the code:
#include <iostream>
#include <limits>

typedef std::numeric_limits<float> fl;

int main()
{
    std::cout.precision(100);
    float f1 = 9999978e3;
    std::cout << f1 << std::endl;
}
Not in the way you think it's guaranteed, no.
By way of offering a counter-example: for an IEEE 754 single-precision floating-point number, the closest value to 9999990000 is 9999989760.

What is guaranteed is that your number and the float, when both are rounded to six significant figures, will be the same. Six is the value of FLT_DIG on your platform, assuming it implements IEEE 754. E.g. the closest float to 9999979000 is 9999978496.
See http://www.exploringbinary.com/floating-point-converter/
You will never get a precise number of base-10 digits, because the number isn't stored in base 10 and most base-10 fractions have no exact representation in base 2. There will almost always be a rounding error, and repeated addition can magnify that roundoff error.
For example, 1/5 has this binary pattern:
111110010011001100110011001101
We only care about the last 23 bits (the mantissa) for what you're talking about...
10011001100110011001101
Notice the repeating pattern of 1001. To represent 0.2 exactly, that pattern would have to repeat forever; instead it ends with the final bit rounded up.
Multiply that number by enough and the roundoff error is magnified.
If you need a precise number of decimal digits, use integer math and handle the rounding yourself (in the case of division) in a manner satisfactory to you. Or use a bigint library with rational numbers: you'll end up with huge fractions that take forever to compute, but you'll have exact results. Of course, any number that can't be represented as a rational, like sqrt(6) or pi, will still have a roundoff error.