Can -ffast-math be safely used on a typical project?


Question


While answering a question where I suggested -ffast-math, a comment pointed out that it is dangerous.

My personal feeling is that outside scientific calculations, it is OK. I also assume that serious financial applications use fixed point instead of floating point.

Of course, if you want to use it in your project, the ultimate answer is to test it on your project and see how much it is affected. But I think a general answer can be given by people who have tried such optimizations and have experience with them:

Can -ffast-math be used safely on a normal project?

Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.


This answer was particularly illuminating on the fact that -ffast-math does much more than reorder operations in ways that produce a slightly different result (it does not check for NaN or zero and disables signed zero, just to name a few), but I fail to see what the effects of these would ultimately be in real code.


I tried to think of typical uses of floating points, and this is what I came up with:

  • GUI (2D, 3D, physics engine, animations)
  • automation (e.g. car electronics)
  • robotics
  • industrial measurements (e.g. voltage)

and school projects, but those don't really matter here.


Answer 1:


One of the especially dangerous things it does is imply -ffinite-math-only, which allows the compiler to assume that NaNs never exist, so explicit NaN tests can be optimized away. That's bad news for any code that explicitly handles NaNs: it tries to test for NaN, but the test will lie through its teeth and claim that nothing is ever NaN, even when it is.

This can have really obvious results, such as letting a NaN bubble up to the user where previously it would have been filtered out at some point. That's bad of course, but you'll probably notice and fix it.

A more insidious problem arises when the NaN check was there for error handling, for something that really isn't supposed to ever be NaN but which, through some bug, bad data, or another effect of -ffast-math, becomes NaN anyway. Now you're not checking for it, because by assumption nothing is ever NaN, so isnan is a synonym of false. Things will go wrong, spuriously and long after you've already shipped your software, and you will get an "impossible" error report: you did check for NaN, it's right there in the code, it cannot be failing! But it is, because someone someday added -ffast-math to the flags. Maybe you even did it yourself, not knowing fully what it would do, or having forgotten that you relied on a NaN check.
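
To make that concrete, here is a minimal sketch (mine, not the answer's) of the kind of guard that silently stops working; with GCC and -ffast-math, the isnan test below may be folded to false and the branch removed:

#include <math.h>

/* Hypothetical illustration: an error-checking guard of the kind described
   above. Under -ffast-math (which implies -ffinite-math-only) the compiler
   is allowed to assume isnan() is always false and delete the branch. */
double normalize(double part, double total)
{
    double ratio = part / total;   /* 0.0 / 0.0 produces NaN */
    if (isnan(ratio))              /* may compile to "if (0)" with -ffast-math */
        return 0.0;                /* intended fallback, silently unreachable */
    return ratio;
}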

So then we might ask, is that normal? That's getting quite subjective, but I would not say that checking for NaN is especially abnormal. Going fully circular and asserting that it isn't normal because -ffast-math breaks it is probably a bad idea.

It does a lot of other scary things as well, as detailed in other answers.




Answer 2:


I wouldn't recommend avoiding this option, but I do recall one instance where unexpected floating-point behavior struck back.

The code contained an innocent-looking construct like this:

float X, XMin, Y;
/* X and XMin are assigned elsewhere; this fragment shows the problematic test */
if (X < XMin)
{
    Y = 1 / (XMin - X);   /* occasionally raised a division-by-zero error, see below */
    XMin = X;
}

This sometimes raised a division-by-zero error, because when the comparison was carried out, the full 80-bit representation of the Intel x87 FPU was used, while later, when the subtraction was performed, the values had been truncated to their 32-bit representation, where they could be equal.
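
For what it's worth, one common workaround for this excess-precision trap (independent of -ffast-math) is to force the compared value through its 32-bit storage format, for example with a volatile temporary or GCC's -ffloat-store. A minimal sketch, assuming X and XMin are assigned elsewhere:

float X, XMin, Y;
/* Storing X in a volatile float truncates it to 32 bits, so the comparison
   and the later subtraction see the same value. */
volatile float Xs = X;
if (Xs < XMin)
{
    Y = 1 / (XMin - Xs);   /* the subtraction of two distinct floats cannot be zero */
    XMin = Xs;
}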




Answer 3:


Yes, you can use -ffast-math on normal projects, for an appropriate definition of "normal projects." That includes probably 95% of all programs written.

But then again, 95% of all programs written would not benefit much from -ffast-math either, because they don't do enough floating point math for it to be important.




Answer 4:


The short answer: No, you cannot safely use -ffast-math except on code designed to be used with it. There are all sorts of important constructs for which it generates completely wrong results. In particular, for arbitrarily large x, there are expressions with correct value x but which will evaluate to 0 with -ffast-math, or vice versa.
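
To illustrate the "vice versa" direction with a hedged example of my own: under strict IEEE evaluation the expression below is 0.0, because x + big rounds to big, but with -ffast-math the compiler may reassociate it to x + (big - big) and produce x instead:

#include <stdio.h>

int main(void)
{
    volatile double x = 1.0;      /* volatile so x itself is not folded away */
    double big = 1e308;
    double r = (x + big) - big;   /* IEEE: 0.0; may become 1.0 under -ffast-math */
    printf("%g\n", r);
    return 0;
}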

As a more relaxed rule, if you're certain the code you're compiling was written by someone who doesn't actually understand floating-point math, using -ffast-math probably won't make the results any more wrong (relative to the programmer's intent) than they already were. Such a programmer will not be performing intentional rounding or other operations that -ffast-math badly breaks, probably won't be using NaNs and infinities, etc. The most likely negative consequence is that computations which already had precision problems blow up and get worse. I would argue that this kind of code is already bad enough that you should not be using it in production to begin with, with or without -ffast-math.

From personal experience, I've had enough spurious bug reports from users trying to use -ffast-math (or who even have it buried in their default CFLAGS, ugh!) that I'm strongly leaning towards putting the following fragment in any code with floating-point math:

#ifdef __FAST_MATH__
#error "-ffast-math is broken, don't use it"
#endif

If you still want to use -ffast-math in production, you need to actually spend the effort (lots of code review hours) to determine if it's safe. Before doing that, you probably want to first measure whether there's any benefit that would be worth spending those hours, and the answer is likely no.




Answer 5:


Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.

The question you should answer is not whether the program expects inexact computations (it had better expect them, or it will break with or without -ffast-math), but whether it expects the approximations to be exactly those predicted by IEEE 754, and the special values to behave exactly as IEEE 754 predicts, or whether it is designed to work fine under the weaker hypothesis that each operation introduces a small unpredictable relative error.

Many algorithms do not make use of special values (infinities, NaN) and are designed to work well in a computation model in which each operation introduces a small nondeterministic relative error. These algorithms work well with -ffast-math, because they do not rely on the error of each operation being exactly the error predicted by IEEE 754. They also work fine when the rounding mode is something other than the default round-to-nearest: the final error may be larger (or smaller), but an FPU in round-upwards mode also implements the computation model these algorithms expect, so they work more or less equally well in those conditions.
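
As a hypothetical example of such an algorithm, plain bisection relies only on sign tests and on the midpoint lying between the endpoints, so small unpredictable rounding differences (and hence -ffast-math) do not break it:

/* Bisection root finding: tolerant of small rounding differences. Sketch only;
   f must be continuous with f(lo) and f(hi) of opposite sign. */
double bisect(double (*f)(double), double lo, double hi, int iterations)
{
    for (int i = 0; i < iterations; i++)
    {
        double mid = lo + (hi - lo) / 2.0;
        if ((f(lo) < 0.0) == (f(mid) < 0.0))
            lo = mid;            /* root is in the upper half */
        else
            hi = mid;            /* root is in the lower half */
    }
    return lo + (hi - lo) / 2.0;
}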

Other algorithms (for instance Kahan summation, “double-double” libraries in which numbers are represented as the sum of two doubles) expect the rules to be respected to the letter, because they contain smart shortcuts based on subtle behaviors of IEEE 754 arithmetic. You can recognize these algorithms by the fact that they do not work when the rounding mode is other than expected either. I once asked a question about designing double-double operations that would work in all rounding modes (for library functions that may be pre-empted without a chance to restore the rounding mode): it is extra work, and these adapted implementations still do not work with -ffast-math.
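
For reference, a textbook Kahan summation sketch (not taken from the answer); with -ffast-math the compiler is allowed to treat (t - sum) - y as algebraically zero, silently degrading the loop to naive summation:

#include <stddef.h>

/* Compensated (Kahan) summation: the correction term c depends on the exact
   IEEE rounding of each step, which is precisely what -ffast-math discards. */
double kahan_sum(const double *a, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++)
    {
        double y = a[i] - c;
        double t = sum + y;
        c = (t - sum) - y;   /* recovers the low-order bits lost in sum + y */
        sum = t;
    }
    return sum;
}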




Answer 6:


Yes, it can be used safely, provided that you know what you are doing. That implies understanding that floating-point values represent magnitudes, not exact values. This means:

  1. You always do a sanity check on any external fp input.
  2. You never divide by 0.
  3. You never check for equality, unless you know the values are integers small enough to be exactly representable in the mantissa (a tolerance-based comparison is sketched after this list).
  4. etc.
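
To illustrate point 3, here is a tolerance-based comparison of the kind usually preferred over ==; the 1e-5f relative tolerance is an arbitrary placeholder, not a recommended constant:

#include <math.h>
#include <stdbool.h>

/* Compare two floats within a relative tolerance instead of with ==.
   Choose the tolerance to suit your data; this one is only an example. */
static bool nearly_equal(float a, float b)
{
    float tol = 1e-5f * fmaxf(fabsf(a), fabsf(b));
    return fabsf(a - b) <= tol;
}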

In fact, I would argue the converse. Unless you are working in very specific applications where NaNs and denormals have meaning, or you really need that tiny incremental bit of reproducibility, -ffast-math should be on by default. That way, your unit tests have a better chance of flushing out errors. Basically, whenever you think floating-point calculations have either reproducibility or precision, even under IEEE, you are wrong.



Source: https://stackoverflow.com/questions/38978951/can-ffast-math-be-safely-used-on-a-typical-project
