Why does this same code produce two different FP results on different machines?


Question


Here's the code:

#include <iostream>
#include <math.h>

const double ln2per12 = log(2.0) / 12.0;

int main() {
    std::cout.precision(100);
    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;
    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = fabs(target - ln2per12edValue / ln2per12);
    std::cout << unnormalizatedValue << std::endl;
    std::cout << ln2per12 << std::endl;
    std::cout << errorLn2per12 << " <<<<< its different" << std::endl;
}

If I try it on my machine (MSVC), or here (GCC):

errorLn2per12 = 9.3702823278363212011754512786865234375e-12

Instead, here (GCC):

errorLn2per12 = 9.368505970996920950710773468017578125e-12

which is different. Is it due to machine epsilon? To compiler precision flags? Or to a different IEEE evaluation?

What is causing this drift? The problem seems to be in the fabs() function (since the other printed values appear to be the same).


Answer 1:


Even without -Ofast, the C++ standard does not require implementations to be exact with log (or sin, or exp, etc.), only that they be within a few ulp (i.e. there may be some inaccuracies in the last binary places). This allows faster hardware (or software) approximations, which each platform/compiler may do differently.
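To see how small the disagreement actually is, one option (a sketch, not part of the original answer) is to print the raw IEEE 754 bit pattern of each double; two machines whose log rounds differently in the last bits will print different hex values:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Print a double together with its raw IEEE 754 bit pattern, so that
// last-place (ulp) differences between platforms become visible.
void dump(const char* label, double d) {
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);   // bit-exact reinterpretation
    std::printf("%-10s % .25e  0x%016llx\n", label, d,
                static_cast<unsigned long long>(bits));
}

int main() {
    const double ln2per12 = std::log(2.0) / 12.0;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;
    dump("ln2per12", ln2per12);
    dump("product", unnormalizatedValue * ln2per12);
    // If another machine prints different hex values here, its libm (or the
    // compiler's constant folding) rounded log differently in the last bits.
}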

(The only floating point math function that you will always get perfect results from on all platforms is sqrt.)

More annoyingly, you may even get different results between compilation (the compiler may use some internal library to be as precise as float/double allows for constant expressions) and runtime (e.g. hardware-supported approximations).
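One way to probe this on your own toolchain (again only a sketch, with a hypothetical bits_of helper) is to compare a call the compiler can fold at compile time against one forced through a runtime value:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

static std::uint64_t bits_of(double d) {
    std::uint64_t b;
    std::memcpy(&b, &d, sizeof b);
    return b;
}

int main() {
    // Likely folded at compile time: the argument is a literal, so the
    // compiler may evaluate log with its own internal library.
    double folded = std::log(2.0) / 12.0;

    // Forced to run time: volatile hides the constant from the compiler,
    // so this call goes through the target's libm at execution.
    volatile double two = 2.0;
    double runtime = std::log(two) / 12.0;

    std::printf("folded : 0x%016llx\n", (unsigned long long)bits_of(folded));
    std::printf("runtime: 0x%016llx\n", (unsigned long long)bits_of(runtime));
    // On some platform/flag combinations the two hex values differ by a ulp.
}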

If you want log to give the exact same result across platforms and compilers, you will have to implement it yourself using only +, -, *, / and sqrt (or find a library with this guarantee). And avoid a whole host of pitfalls along the way.
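A minimal sketch of that idea follows; it reduces the argument with frexp (which is exact, so it adds no drift) and sums the atanh series with a fixed number of terms. It is illustrative only: it is not correctly rounded, and real cross-platform determinism also requires controlling FMA contraction and fast-math flags.

#include <cmath>   // frexp and, for comparison, std::log
#include <cstdio>

// Sketch of a platform-independent natural log built from +, -, *, /
// (plus the exact frexp for range reduction). Not production quality.
double portable_log(double x) {
    // Split x = m * 2^e with m in [0.5, 1).
    int e;
    double m = std::frexp(x, &e);
    // Shift m into roughly [0.707, 1.414) so the series argument stays small.
    if (m < 0.70710678118654752440) { m *= 2.0; e -= 1; }

    // ln(m) = 2 * atanh(t) with t = (m - 1) / (m + 1),
    // expanded as 2 * (t + t^3/3 + t^5/5 + ...).
    double t = (m - 1.0) / (m + 1.0);
    double t2 = t * t;
    double term = t;     // current odd power of t
    double sum = 0.0;
    for (int k = 1; k < 40; k += 2) {  // fixed term count keeps the evaluation order identical everywhere
        sum += term / k;
        term *= t2;
    }
    // ln(2) written out as a literal so every platform uses the same constant.
    const double LN2 = 0.69314718055994530942;
    return 2.0 * sum + e * LN2;
}

int main() {
    std::printf("%.17g\n", portable_log(9.8));
    std::printf("%.17g\n", std::log(9.8));   // compare against the platform's log
}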

If you need floating point determinism in general, I strongly recommend reading this article to understand how big of a problem you have ahead of you: https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/



Source: https://stackoverflow.com/questions/54694274/why-this-same-code-produce-two-different-fp-results-on-different-machines
