I was implementing an algorithm to calculate natural logs in C.
double taylor_ln(int z) {
    double sum = 0.0;
    double tmp = 1.0;
    int i = 1;
    while (tmp != 0.0) {
        tmp = (1.0 / i) * pow((z - 1.0) / (z + 1.0), i);
        printf("%f\n", tmp);
        sum += tmp;
        i += 2;
    }
    return sum * 2;
}
The loop is supposed to stop once tmp reaches zero, but it never terminates, even though the printed value of tmp eventually shows 0.000000.
The print statement is displaying a rounded value; it is not printing at the highest possible precision. So your loop has not really reached zero yet.
(And, as others have mentioned, due to rounding issues it might actually never reach it. Comparing the value against a small limit is therefore more robust than comparing for equality with 0.0.)
Don't use exact equality operations when dealing with floating point numbers. Although your number may look like 0, it is likely to be something like 0.00000000000000000000001.
You'll see this if you use %.50f instead of %f in your format strings. The latter uses a sensible default for decimal places (6 in your case), but the former explicitly states that you want a lot.
For safety, use a delta to check if it's close enough, such as:
if (fabs(val) < 0.0001) {  // fabs() is declared in <math.h>
    // close enough.
}
Obviously, the delta depends entirely on your needs. If you're talking money, 10^-5 may be plenty. If you're a physicist, you should probably choose a smaller value.
Of course, if you're a mathematician, no inaccuracy is small enough :-)
Just because a number displays as "0.000000" does not mean it is equal to 0.0. The decimal display of numbers has less precision than a double can store.
It's possible that your algorithm reaches a point where it is very close to 0, but each subsequent step moves so little that the result rounds to the same value as before, so it never gets any closer to 0 and just goes into an infinite loop.
In general, you should not compare floating-point numbers with == and !=. You should always check if they are within a certain small range of each other (usually called epsilon). For example:
while(fabs(tmp) >= 0.0001)
Then it will stop when it gets reasonably close to 0.
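Applied to your function, that might look like this (the 0.0001 threshold is just an illustrative choice; pick what suits your accuracy needs):

#include <math.h>

double taylor_ln(int z)
{
    double sum = 0.0;
    double tmp = 1.0;  // anything with fabs >= epsilon, so the loop starts
    int i = 1;

    while (fabs(tmp) >= 0.0001) {  // stop once the term is negligibly small
        tmp = (1.0 / i) * pow((z - 1.0) / (z + 1.0), i);
        sum += tmp;
        i += 2;
    }
    return sum * 2;
}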
The floating point comparison is exact, so 10^-10 is not the same as 0.0.
Basically, you should be comparing against some tolerable difference, say 10^-7 based on the number of decimals you're writing out. That can be accomplished as:
while (fabs(tmp) > 1e-7)
(In C, 1e-7 is the literal for 10^-7.)
Plenty of discussion of the cause, but here's an alternative solution:
double taylor_ln(int z)
{
    double sum = 0.0;
    double tmp, old_sum;
    int i = 1;

    do
    {
        old_sum = sum;
        tmp = (1.0 / i) * (pow(((z - 1.0) / (z + 1.0)), i));
        printf("(1.0 / %d) * (pow(((%d - 1.0) / (%d + 1.0)), %d)) = %f\n",
               i, z, z, i, tmp);
        sum += tmp;
        i += 2;
    } while (sum != old_sum);  // stop when adding tmp no longer changes sum

    return sum * 2;
}
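For what it's worth, a minimal test driver (the comparison with the standard library's log() is just a sanity check; the test value is arbitrary) might be:

#include <stdio.h>
#include <math.h>

// assumes taylor_ln() from above is defined in the same file
int main(void)
{
    int z = 5;
    printf("taylor_ln(%d) = %f\n", z, taylor_ln(z));
    printf("log(%d)       = %f\n", z, log((double)z));
    return 0;
}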
This approach (stopping when sum stops changing) focuses on whether each decreasing value of tmp makes a tangible difference to sum. It's easier than working out some threshold from 0 at which tmp becomes insignificant, and it probably terminates earlier without changing the result.
Note that when you sum a relatively big number with a relatively small one, the significant digits in the result limit the precision. By way of contrast, if you sum several small ones first and then add that total to the big one, the small values may accumulate enough to bump the big one up a little. In your algorithm the small tmp values weren't being summed with each other anyway, so there's no accumulation unless each term actually affects sum; hence the approach above works without further compromising precision.
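To make that concrete, here's a small illustration (the magnitudes are chosen so the rounding is visible with a typical IEEE-754 double, where 1e16 + 1.0 rounds back to 1e16):

#include <stdio.h>

int main(void)
{
    double big = 1e16;
    double small = 1.0;  // below big's precision granularity

    // adding the small values one at a time: each addition is lost
    double a = big;
    for (int i = 0; i < 10; i++)
        a += small;

    // summing the small values first, then adding the total once
    double s = 0.0;
    for (int i = 0; i < 10; i++)
        s += small;
    double b = big + s;

    printf("%f\n%f\n", a - big, b - big);  // prints 0.000000, then 10.000000
    return 0;
}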