What is the purpose of max_digits10 and how is it different from digits10?

南笙 · 2020-12-03 05:39

I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula given for floating-point types, ceil(digits * log10(2) + 1), yields 9 for float and 17 for double. What does this value guarantee, and how is it different from digits10?
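
That formula can be checked directly against numeric_limits (a quick sketch of mine; it assumes IEEE 754 float and double):

    #include <cmath>
    #include <iostream>
    #include <limits>
    
    int main() {
        // digits is the number of mantissa bits: 24 for an IEEE 754 float, 53 for a double
        int pf = std::numeric_limits<float>::digits;
        int pd = std::numeric_limits<double>::digits;
    
        // ceil(digits * log10(2) + 1) should match max_digits10
        std::cout << std::ceil(pf * std::log10(2.0) + 1) << ' '
                  << std::numeric_limits<float>::max_digits10 << '\n';   // 9 9
        std::cout << std::ceil(pd * std::log10(2.0) + 1) << ' '
                  << std::numeric_limits<double>::max_digits10 << '\n';  // 17 17
    }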

2 Answers
  • 2020-12-03 06:24

    In my opinion, it is explained sufficiently at the linked site (and the page for digits10):

    digits10 is the maximum number of decimal digits that a type can represent
    in every case, independently of the actual value.
    Take a usual 4-byte unsigned integer as an example: it has exactly 32 bits,
    that is, 32 binary digits. But in terms of decimal digits? Nine.
    It can store 100000000 as well as 999999999, but with 10-digit numbers only
    some values fit: 4000000000 can be stored, 5000000000 cannot.
    So if we need a guaranteed minimum decimal digit capacity, it is 9,
    and that is the result of digits10 (see the check below).
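
    A minimal check of that claim (my own sketch; it assumes a 32-bit unsigned int,
    which is typical but not guaranteed):

    #include <iostream>
    #include <limits>
    
    int main() {
        // 9 on platforms where unsigned int is 32 bits wide
        std::cout << std::numeric_limits<unsigned int>::digits10 << '\n';
    
        unsigned int ok  = 999999999u;   // every 9-digit value fits
        unsigned int big = 4000000000u;  // some 10-digit values fit...
        // ...but not all: 5000000000 exceeds UINT_MAX (4294967295),
        // so the guaranteed decimal capacity is only 9 digits.
        std::cout << ok << ' ' << big << '\n';
    }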

    max_digits10 is only interesting for float/double etc., and gives the number
    of decimal digits we need to output/save/process to capture the whole
    precision the floating-point type can offer.
    A theoretical example, a variable with content 123.112233445566:
    If you show 123.11223344 to the user, it is not as precise as it could be.
    If you show 123.1122334455660000000, it makes no sense, because
    you could omit the trailing digits (your variable can't hold that much anyway).
    Therefore, max_digits10 says how many digits of precision you have available
    in a type (see the sketch below).
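
    A minimal sketch of that idea (my own example; it assumes an IEEE 754 float):

    #include <iomanip>
    #include <iostream>
    #include <limits>
    #include <sstream>
    
    int main() {
        float f = 123.112233445566f;  // the literal has more digits than a float holds
    
        // Printing with max_digits10 (9 for an IEEE 754 float) loses nothing:
        // parsing the text back yields the identical float.
        std::ostringstream oss;
        oss << std::setprecision(std::numeric_limits<float>::max_digits10) << f;
        float back;
        std::istringstream(oss.str()) >> back;
    
        std::cout << oss.str() << '\n';                     // e.g. 123.112236
        std::cout << std::boolalpha << (back == f) << '\n'; // true
    }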

  • 2020-12-03 06:25

    To put it simply,

    • digits10 is the number of decimal digits guaranteed to survive text → float → text round-trip.
    • max_digits10 is the number of decimal digits needed to guarantee correct float → text → float round-trip.

    There will be exceptions to both, but these values give the minimum guarantee. Read the original proposal on max_digits10 for a clear example, Prof. W. Kahan's words, and further details. Most C++ implementations use IEEE 754 for their floating-point types. For an IEEE 754 float, digits10 is 6 and max_digits10 is 9; for a double they are 15 and 17. Note that neither number is the actual decimal precision of a floating-point number.
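
    Those values can be verified at compile time (a minimal sketch; the asserts
    assume an IEEE 754 implementation and would fail on an exotic one):

    #include <limits>
    
    static_assert(std::numeric_limits<float>::digits10 == 6, "float digits10");
    static_assert(std::numeric_limits<float>::max_digits10 == 9, "float max_digits10");
    static_assert(std::numeric_limits<double>::digits10 == 15, "double digits10");
    static_assert(std::numeric_limits<double>::max_digits10 == 17, "double max_digits10");
    
    int main() {}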

    Example digits10

    #include <cstdlib>   // strtof
    #include <iomanip>   // std::setprecision
    #include <iostream>
    
    char const *s1 = "8.589973e9";
    char const *s2 = "0.100000001490116119384765625";
    float const f1 = strtof(s1, nullptr);
    float const f2 = strtof(s2, nullptr);
    std::cout << "'" << s1 << "'" << '\t' << std::scientific << f1 << '\n';
    std::cout << "'" << s2 << "'" << '\t' << std::fixed << std::setprecision(27) << f2 << '\n';
    

    Prints

    '8.589973e9'      8.589974e+009
    '0.100000001490116119384765625'   0.100000001490116119384765625
    

    All digits up to the 6th significant digit were preserved, while the 7th digit of the first number didn't survive. The second number kept all 27 digits, but that is an exception: 0.100000001490116119384765625 is exactly the value of the float nearest to 0.1, so it is representable without loss. In general, numbers may start to differ somewhere past the 6th or 7th significant digit, but any two are guaranteed to agree in the first 6.

    In summary, digits10 gives the number of significant digits you can count on in a given float: those digits will match the decimal form of the original real number it was created from, i.e. they are the digits that survive the conversion into a float.

    Example max_digits10

    #include <cmath>     // std::nextafter
    #include <cstdlib>   // strtof
    #include <iomanip>
    #include <iostream>
    #include <limits>
    #include <sstream>
    
    // float -> string with p digits after the point -> float
    void f_s_f(float &f, int p) {
        std::ostringstream oss;
        oss << std::fixed << std::setprecision(p) << f;
        f = strtof(oss.str().c_str(), nullptr);
    }
    
    float f3 = 3.145900f;
    float f4 = std::nextafter(f3, 3.2f);  // the very next representable float after f3
    std::cout << std::hexfloat << std::showbase << f3 << '\t' << f4 << '\n';
    f_s_f(f3, std::numeric_limits<float>::max_digits10);
    f_s_f(f4, std::numeric_limits<float>::max_digits10);
    std::cout << f3 << '\t' << f4 << '\n';  // still distinct
    f_s_f(f3, 6);
    f_s_f(f4, 6);
    std::cout << f3 << '\t' << f4 << '\n';  // now identical: 6 digits were too few
    

    Prints

    0x1.92acdap+1   0x1.92acdcp+1
    0x1.92acdap+1   0x1.92acdcp+1
    0x1.92acdap+1   0x1.92acdap+1
    

    Here, two different floats, when printed with max_digits10 digits of precision, give different strings, and when those strings are read back they yield exactly the floats they came from. When printed with less precision, the two give the same output due to rounding, and when that output is read back both yield the same float, even though they originated from different values.

    In summary, at least max_digits10 digits are required to disambiguate two floats in their decimal form, so that when they are converted back to binary floats we get the original bits again, and not those of a neighbouring value due to rounding.
