Question
To keep the problem short, let's say I want to compute the expression a / (b - c) on floats.
To make sure the result is meaningful, I can check whether b and c are not equal:
float EPS = std::numeric_limits<float>::epsilon();
if ((b - c) > EPS || (c - b) > EPS)
{
return a / (b - c);
}
but my tests show this is not enough to guarantee meaningful results, nor to avoid rejecting inputs for which a valid result is in fact possible.
Case 1:
a = 1.0f;
b = 0.00000003f;
c = 0.00000002f;
Result: The if condition is NOT met, even though the expression would produce a correct result, 100000008 (correct to within float precision).
Case 2:
a = 1e33f;
b = 0.000003;
c = 0.000002;
Result: The if condition is met, but the expression produces a meaningless result, +1.#INF00 (infinity).
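For reference, here is a minimal self-contained program reproducing the two cases above (my own reconstruction; guarded_divide is a hypothetical helper wrapping the question's epsilon guard, and is not part of the original post):

#include <iostream>
#include <limits>

// Hypothetical helper: the epsilon guard from the question, returning NaN when the guard rejects the inputs.
float guarded_divide(float a, float b, float c)
{
    const float EPS = std::numeric_limits<float>::epsilon();
    if ((b - c) > EPS || (c - b) > EPS)
        return a / (b - c);
    return std::numeric_limits<float>::quiet_NaN(); // guard rejected the inputs
}

int main()
{
    // Case 1: the guard rejects the inputs although a / (b - c) would be finite (~1e8).
    std::cout << guarded_divide(1.0f, 0.00000003f, 0.00000002f) << '\n'; // prints nan

    // Case 2: the guard accepts the inputs but the division overflows float range.
    std::cout << guarded_divide(1e33f, 0.000003f, 0.000002f) << '\n';    // prints inf
}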
I found it much more reliable to check the result, not the arguments:
const float INF = std::numeric_limits<float>::infinity();
float x = a / (b - c);
if (-INF < x && x < INF)
{
return x;
}
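Equivalently, the same result check can be written with std::isfinite from <cmath>. This is my own variant of the check above, not part of the original post, and safe_divide is a hypothetical name (using std::optional to signal "no meaningful result"):

#include <cmath>     // std::isfinite
#include <optional>

// Hypothetical wrapper: returns no value when the division does not produce a finite result.
std::optional<float> safe_divide(float a, float b, float c)
{
    float x = a / (b - c);
    if (std::isfinite(x))   // false for +infinity, -infinity, and NaN alike
        return x;
    return std::nullopt;
}

Note that the original form, -INF < x && x < INF, also rejects NaN, because any comparison with NaN evaluates to false, so the two checks behave the same here.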
But what is the epsilon for, then, and why does everyone say that using epsilon is good?
Answer 1:
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).
This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
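As a concrete illustration of absorption (my own example, not from the answer): adding a small float to a much larger one can leave the larger one unchanged, so the information is already lost before any comparison happens.

#include <iostream>

int main()
{
    float big = 1e8f;
    float sum = big + 1.0f;            // 1.0f is absorbed: floats near 1e8 are spaced ~8 apart
    std::cout << (sum - big) << '\n';  // prints 0, not 1
}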
If you have not read What Every Computer Scientist Should Know About Floating-Point Arithmetic, it's a good starting point. Beyond that, if you are interested in the precision of the result of the division in your example, you have to estimate how imprecise b-c was made by previous rounding errors, because indeed if b-c is small, a small absolute error corresponds to a large absolute error on the result. If your concern is only that the division should not overflow, then your test (on the result) is right. There is no reason to test for a null divisor with floating-point numbers; you just test for overflow of the result, which captures both the case where the divisor is null and the case where the divisor is so small as to make the result not representable with any precision.
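A small worked example of that magnification (my own numbers, not from the answer): a difference whose true value is 1e-7, but which picks up a single rounding error near 1.0, comes out about 19% too large, and the division inherits that relative error.

#include <cstdio>

int main()
{
    // "True" values: x = 1 + 1e-7, y = 1; the true difference is 1e-7.
    float x = 1.0f + 1e-7f;      // rounds to 1 + FLT_EPSILON, about 1.00000012
    float y = 1.0f;
    float diff = x - y;          // ~1.1920929e-7 instead of 1e-7: ~19% relative error
    float result = 1.0f / diff;  // ~8.39e6 instead of 1e7: the relative error carries through
    std::printf("diff = %.9g, result = %.9g\n", diff, result);
}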
Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.
Answer 2:
Epsilon is used to determine whether two numbers subject to rounding error are close enough to be considered "equal". Note that it is better to test fabs(b/c - 1) < EPS than fabs(b-c) < EPS, and better still (thanks to the design of IEEE floats) to test abs(*(int*)&b - *(int*)&c) < EPSI, where EPSI is some small integer.
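A sketch of that integer ("ULP distance") comparison in modern C++, using memcpy instead of the pointer cast to avoid strict-aliasing problems. This is my own adaptation of the answer's idea, and it assumes both values are finite and have the same sign:

#include <cstdint>
#include <cstdlib>   // std::abs(int)
#include <cstring>   // std::memcpy

// Returns true if a and b are within max_ulps representable floats of each other.
// Simplified: assumes a and b are finite and share the same sign.
bool nearly_equal_ulps(float a, float b, std::int32_t max_ulps)
{
    std::int32_t ia, ib;
    std::memcpy(&ia, &a, sizeof ia);   // copy the bit patterns into integers
    std::memcpy(&ib, &b, sizeof ib);
    return std::abs(ia - ib) <= max_ulps;
}

Here nearly_equal_ulps(b, c, 4) plays the role of abs(*(int*)&b - *(int*)&c) < EPSI with EPSI = 4.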
Your problem is of a different nature, and probably warrants testing the result rather than the inputs.
Source: https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations