The precision of the long double output is not correct. What might be wrong?

Submitted by 爷,独闯天下 on 2019-12-24 10:29:57

Question


I have a long double constant that I am declaring either as const or non-const. It has more digits (40) than the precision of a long double on my test workstation (19 digits).

When I print it out, it is displayed with only 16 digits of precision rather than 19.

Here is the code I am testing:

#include <iostream>
#include <iomanip>
#include <limits>
#include <cstdio>

int main ()
{
    const long double constLog2 = 0.6931471805599453094172321214581765680755;
    long double log2 = 0.6931471805599453094172321214581765680755;    

    std::cout << std::numeric_limits<long double>::digits10 + 1 << std::endl;
    std::cout << "const via cout: " << std::setprecision(19) << constLog2 << std::endl;
    std::cout << "non-const via cout: " << std::setprecision(19) << log2 << std::endl;
    std::fprintf(stdout, "const via printf: %.19Lf\n", constLog2);
    std::fprintf(stdout, "non-const via printf: %.19Lf\n", log2);

    return 0;
}

Compile:

$ g++ -Wall precisionTest.cpp

Output:

$ ./a.out
19
const via cout: 0.6931471805599452862
non-const via cout: 0.6931471805599452862
const via printf: 0.6931471805599452862
non-const via printf: 0.6931471805599452862

I would expect 0.6931471805599453094 but instead get 0.6931471805599452862.

Is there a reason that the 19 digits of precision are cut to 16 digits?

Here is my environment:

$ gcc --version
i686-apple-darwin9-g++-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490)

I am seeing the same problem with other versions of gcc, e.g.:

$ gcc --version
g++ (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10)

I can look into NTL or other libraries but I'm curious what is causing this. Thanks for your insight.


Answer 1:


I get this output:

19
const via cout: 0.6931471805599453094
non-const via cout: 0.6931471805599453094
const via printf: 0.6931471805599453094
non-const via printf: 0.6931471805599453094

But I'm using long double literals instead of double literals (note the L suffix):

const long double constLog2 = 0.6931471805599453094172321214581765680755L;
long double log2 = 0.6931471805599453094172321214581765680755L;    
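
Without the L suffix, the literal is parsed as a double, so it is rounded to 53 bits of mantissa (about 16 significant decimal digits) before it is ever converted to long double; the suffix makes the compiler keep full long double precision from the start. A minimal sketch that shows the two literals side by side (the variable names are illustrative):

#include <iostream>
#include <iomanip>

int main()
{
    // Parsed as a double literal, then widened: precision is already lost.
    long double fromDouble = 0.6931471805599453094172321214581765680755;

    // Parsed directly as a long double literal: the extra digits survive.
    long double fromLongDouble = 0.6931471805599453094172321214581765680755L;

    std::cout << std::setprecision(19)
              << "double literal:      " << fromDouble << std::endl
              << "long double literal: " << fromLongDouble << std::endl;
    return 0;
}

On an x87 machine this prints 0.6931471805599452862 for the first line and 0.6931471805599453094 for the second, matching the outputs above.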



Answer 2:


There are some subtle issues relating to hardware platforms and compile options that might be of interest:

  • An Introduction to GCC (Network Theory Ltd): https://www.amazon.com/exec/obidos/ASIN/0954161793/networktheory-20
  • 3.17.3 Intel 386 and AMD x86-64 Options (gcc-4.0.1) (archived developer.apple.com documentation)
  • 3.17.12 Intel 386 and AMD x86-64 Options (gcc-4.0.1) (gcc.gnu.org documentation)

These -m options are defined for the i386 and x86-64 family of computers:

-m96bit-long-double

-m128bit-long-double

These switches control the size of the long double type. The i386 application binary interface specifies the size to be 96 bits, so -m96bit-long-double is the default in 32-bit mode. Modern architectures (Pentium and newer) would prefer long double to be aligned to an 8- or 16-byte boundary. In arrays or structures conforming to the ABI, this would not be possible. So specifying -m128bit-long-double will align long double to a 16-byte boundary by padding it with an additional 32-bit zero.

In the x86-64 compiler, -m128bit-long-double is the default choice, as its ABI specifies that long double be aligned on a 16-byte boundary.

Notice that neither of these options enables any extra precision over the x87 standard of 80 bits for a long double.

Warning: if you override the default value for your target ABI, structures and arrays containing long double variables will change their size, and the calling convention for functions taking long double will be modified. Hence they will not be binary compatible with arrays or structures in code compiled without that switch.
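
As a quick check, a sketch like the following (assuming an x87 target, where long double is the 80-bit extended format) shows that these switches change only the storage size, not the precision: compiling it with and without -m128bit-long-double alters sizeof but leaves the mantissa at 64 bits.

#include <iostream>
#include <limits>

int main()
{
    // Storage size includes any ABI padding (e.g. 12 vs. 16 bytes on i386).
    std::cout << "sizeof(long double): " << sizeof(long double) << " bytes" << std::endl;

    // Mantissa width is a property of the format itself: 64 bits for x87.
    std::cout << "mantissa bits: " << std::numeric_limits<long double>::digits << std::endl;

    // digits10 is 18 for the 80-bit format (the question's 19 is digits10 + 1).
    std::cout << "decimal digits: " << std::numeric_limits<long double>::digits10 << std::endl;
    return 0;
}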



Source: https://stackoverflow.com/questions/684112/the-precision-of-the-long-double-output-is-not-correct-what-might-be-wrong
