C++ 32bit vs 64bit floating limit

Submitted by 坚强是说给别人听的谎言 on 2019-12-14 04:17:32

Question


Given the code segment as follow, I just want to know

  • why is the maximum value of long double smaller in the 64-bit build than in the 32-bit build?
  • why can't the 64-bit version produce as many digits as the 32-bit version to fill the precision-40 output?
  • the values of LDBL_MIN and LDBL_MAX appear to be equal; is that a bug?

I have looked into the float.h files on my machine but cannot find explicit definitions of these macro constants.

Testing Code (Platform = Win7-64bit)

#include <cfloat>
#include <iomanip>
#include <iostream>
using namespace std;

int main() {
    cout << "FLT_MAX   =" << setprecision(40) << FLT_MAX  << endl;
    cout << "DBL_MAX   =" << setprecision(40) << DBL_MAX  << endl;
    cout << "LDBL_MAX  =" << setprecision(40) << LDBL_MAX << endl;
    cout << "FLT_MIN   =" << setprecision(40) << FLT_MIN  << endl;
    cout << "DBL_MIN   =" << setprecision(40) << DBL_MIN  << endl;
    cout << "LDBL_MIN  =" << setprecision(40) << LDBL_MIN << endl;
    return 0;
}

32-bit outcome (MinGW-20120426)

FLT_MAX  =340282346638528859811704183484516925440
DBL_MAX  =1.797693134862315708145274237317043567981e+308
LDBL_MAX =1.189731495357231765021263853030970205169e+4932
FLT_MIN  =1.175494350822287507968736537222245677819e-038
DBL_MIN  =2.225073858507201383090232717332404064219e-308
LDBL_MIN =3.362103143112093506262677817321752602598e-4932

64-bit outcome (MinGW64-TDM 4.6)

FLT_MAX  =340282346638528860000000000000000000000
DBL_MAX  =1.7976931348623157e+308
LDBL_MAX =1.132619801677474e-317
FLT_MIN  =1.1754943508222875e-038
DBL_MIN  =2.2250738585072014e-308
LDBL_MIN =1.132619801677474e-317

Thanks.

[Edit]: With the latest MinGW64-TDM 4.7.1, the LDBL_MAX and LDBL_MIN "bugs" seem to have been removed.


Answer 1:


LDBL_MAX =1.132619801677474e-317 sounds like a bug somewhere. The standard requires that every value representable as a double can also be represented as a long double, so LDBL_MAX < DBL_MAX is not permissible. Given that you haven't shown your real testing code, I would personally check that before blaming the compiler.

If there really is a (non-bug) difference in long double between the two, then the basis of that difference will be that your 32-bit compiler uses the older x87 floating point operations, which have 80 bit precision, and hence allow for an 80-bit long double.

Your 64-bit compiler uses the newer 64-bit floating point operations in x64. No 80-bit precision, and it doesn't bother switching to x87 instructions to implement a bigger long double.

There's probably more complication to it than that. For example, not all x86 compilers necessarily have an 80-bit long double. How they make that decision depends on various things, possibly including the fact that SSE2 has 64-bit floating-point ops. Either way, long double is either the same size as double or bigger.

why 64-bit version cannot expand as much digits as in 32-bit version to fill the "40" precision output?

A double only has about 15 decimal digits of precision. Digits beyond that are sometimes informative, but usually misleading.

I can't remember what the standard says about setprecision, but assuming the implementation is allowed to draw a line where it stops generating digits, the precision of a double is a reasonable place to draw it. As for why one implementation decided to actually do it and the other didn't -- I don't know. Since they're different distributions, they might be using completely different standard libraries.

The same "spurious precision" is why you see 340282346638528859811704183484516925440 for FLT_MAX in one case, but 340282346638528860000000000000000000000 in the other. One compiler (or rather, one library implementation) has gone to the trouble to calculate lots of digits. The other has given up early and rounded.




Answer 2:


To answer this question I make only a few assumptions: 1) that you tested this only on the 64-bit machine, and 2) that the compilers are different-bitness builds of the same sub-version (that is to say, they're practically sister compilers).

That having been said:

From "ISO/IEC 14882 INTERNATIONAL STANDARD First edition 1998-09-01"

3.9.1 Fundamental types

  1. There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double. The value representation of floating-point types is implementation-defined. Integral and floating types are collectively called arithmetic types. Specializations of the standard template numeric_limits (18.2) shall specify the maximum and minimum values of each arithmetic type for an implementation.

Additionally, different CPUs will have different effects on the end result as far as precision with higher-level numbers is concerned. The same goes for compilers: VC++'s compiler won't behave the same as Borland's, nor GCC/G++, and so on.



Source: https://stackoverflow.com/questions/12706368/c-32bit-vs-64bit-floating-limit
