Why does .NET decimal.ToString(string) round away from zero, apparently inconsistent with the language spec?


I see that, in C#, rounding a decimal, by default, uses MidpointRounding.ToEven. This is expected, and is what the C# spec dictates. However, decimal.ToString(string) rounds midpoints away from zero. Is there a good reason this is the case? Or is this just an inconsistency in the language?

3 Answers
  • 2021-02-18 16:15

    ToString() by default formats according to the Culture, not according to a computational aspect of the specification. Apparently the Culture for your locale (and most, from the looks of it) expects rounding away from zero.

    If you want different behavior, you can pass an IFormatProvider to ToString().

    I originally thought the above, but you are correct: it always rounds away from zero, no matter the Culture.


    As also linked by a comment on this answer, here (MS Docs) is official documentation on the behavior. Excerpting from the top of that linked page, and focusing on the last two list items (a short demonstration follows the excerpt):

    Standard numeric format strings are used to format common numeric types. A standard numeric format string takes the form Axx, where:

    • A is a single alphabetic character called the format specifier. Any numeric format string that contains more than one alphabetic character, including white space, is interpreted as a custom numeric format string. For more information, see Custom Numeric Format Strings.

    • xx is an optional integer called the precision specifier. The precision specifier ranges from 0 to 99 and affects the number of digits in the result. Note that the precision specifier controls the number of digits in the string representation of a number. It does not round the number itself. To perform a rounding operation, use the Math.Ceiling, Math.Floor, or Math.Round method.

      When the precision specifier controls the number of fractional digits in the result string, the result string reflects a number that is rounded to a representable result nearest to the infinitely precise result. If there are two equally near representable results:

      • On the .NET Framework and .NET Core up to .NET Core 2.0, the runtime selects the result with the greater least significant digit (that is, using MidpointRounding.AwayFromZero).

      • On .NET Core 2.1 and later, the runtime selects the result with an even least significant digit (that is, using MidpointRounding.ToEven).
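
    To see the difference in practice, here is a minimal sketch (which output you get from ToString depends on the runtime you target, as the excerpt describes):

    decimal d = 2.5m;

    // "F0" formats the midpoint away from zero on .NET Framework and
    // .NET Core up to 2.0, and to even on .NET Core 2.1 and later.
    Console.WriteLine(d.ToString("F0"));  // "3" on Framework, "2" on Core 2.1+

    // Math.Round, by contrast, defaults to MidpointRounding.ToEven:
    Console.WriteLine(Math.Round(d));     // 2

    // The mode can also be requested explicitly:
    Console.WriteLine(Math.Round(d, MidpointRounding.AwayFromZero));  // 3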


    As far as your question ---

    Is there a good reason this is the case? Or is this just an inconsistency in the language?

    --- the answer implied by the change in behavior from Framework to Core 2.1+ is possibly, "No, there was no good reason, so we (Microsoft) went ahead and made the runtime consistent with the language in .NET Core 2.1 and later."

  • 2021-02-18 16:19

    Most likely because this is the standard way of dealing with currency. The impetus for the creation of decimal was that floating point does a poor job of representing currency values, so you would expect its rules to be more aligned with accounting standards than with mathematical correctness.
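
    If you want the accounting-style behaviour in your own arithmetic, Math.Round accepts an explicit MidpointRounding argument; a small sketch:

    decimal price = 2.345m;

    // The language default is banker's rounding (ToEven): 2.345 -> 2.34
    Console.WriteLine(Math.Round(price, 2));

    // Away-from-zero is the convention much currency handling expects: 2.345 -> 2.35
    Console.WriteLine(Math.Round(price, 2, MidpointRounding.AwayFromZero));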

  • 2021-02-18 16:28

    If you read the spec carefully, you will see that there is no inconsistency here.

    Here's that paragraph again; note in particular that it covers the result of an operation, and that rounding happens only to fit the representation:

    The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as “banker’s rounding”). A zero result always has a sign of 0 and a scale of 0.

    This part of the spec applies to arithmetic operations on decimal; string formatting is not one of those, and even if it were, it wouldn't matter because your examples are low-precision.
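
    For instance, a low-precision computation is exact, so the spec's rounding clause never comes into play (a minimal sketch; the value 2.5m is assumed here purely for illustration):

    // Both 2.5 and the exact quotient 1.25 fit easily within decimal's
    // 28-29 significant digits, so no rounding of any kind occurs.
    Console.WriteLine(2.5m / 2);  // 1.25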

    To demonstrate the behaviour referred to in the spec, use the following code:

    // Each quotient below is exactly halfway between two representable
    // decimal values, so the result must be rounded to fit the representation.
    Decimal d1 = 0.00000000000000000000000000090m;
    Decimal d2 = 0.00000000000000000000000000110m;
    
    // Prints: 0.0000000000000000000000000004
    // (the exact quotient ends in ...45, a midpoint; banker's rounding picks the even digit 4)
    Console.WriteLine(d1 / 2);
    
    // Prints: 0.0000000000000000000000000006
    // (the exact quotient ends in ...55, a midpoint; banker's rounding picks the even digit 6)
    Console.WriteLine(d2 / 2);
    

    That's all the spec is talking about. If the result of some calculation would exceed the precision limit of the decimal type (29 digits), banker's rounding is used to determine what the result will be.
