Why can't C# decimals be initialized without the M suffix?

Backend · 6 answers · 1089 views

一生所求 2020-11-30 11:56
public class MyClass
{
    public const Decimal CONSTANT = 0.50; // ERROR CS0664   
}

produces this error:

error CS0664: Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type

6 Answers
  • 2020-11-30 12:38

    The type of a literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

    float x = 10.0; // Fail
    

    The type of the literal is determined by the literal itself, and the type of the variable it's assigned to must be implicitly convertible from that literal's type. So your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal (as it can lose information).

    Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...
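    To make the rule concrete, here's a minimal sketch (variable names are illustrative) showing which literal/type pairings compile:

    ```csharp
    // The suffix decides the literal's type:
    // no suffix = double, f/F = float, m/M = decimal.
    decimal price = 0.50m;   // OK: decimal literal
    float   ratio = 10.0f;   // OK: float literal
    double  rate  = 0.50;    // OK: a suffix-less real literal is double
    decimal whole = 50;      // OK: implicit int -> decimal conversion

    // decimal bad  = 0.50;  // CS0664: double literal, no implicit double -> decimal
    // float   bad2 = 10.0;  // CS0664: same idea for float
    ```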

  • 2020-11-30 12:44

    The first example is a double literal. The second example is an integer literal.

    Converting a double to a decimal can lose precision, while converting an int to a decimal cannot. So only the integer conversion is allowed implicitly.

  • 2020-11-30 12:50

    From http://msdn.microsoft.com/en-us/library/364x0z75.aspx : There is no implicit conversion between floating-point types and the decimal type; therefore, a cast must be used to convert between these two types.

    They do this because double has such a huge range, ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸, whereas int is only -2,147,483,648 to 2,147,483,647. A decimal's range is (-7.9 × 10²⁸ to 7.9 × 10²⁸) / (10⁰ to 10²⁸), so it can hold any int but not any double.
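    Because the conversion must be explicit, a cast works when you accept the possible rounding. A sketch with illustrative values:

    ```csharp
    double d = 1.0 / 3.0;    // 0.3333333333333333 (a binary double approximation)
    // decimal m = d;        // CS0029: no implicit double -> decimal conversion
    decimal m = (decimal)d;  // OK: explicit cast, rounded to 15 significant digits
    ```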

  • 2020-11-30 12:52

    Every literal is treated as having a type. If you do not choose the 'M' suffix, it is treated as a double. That you cannot implicitly convert a double to a decimal is quite understandable, as it can lose precision.

  • 2020-11-30 12:53

    It's a design choice that the creators of C# made.

    Likely it stems from the fact that converting a double can lose precision, and they didn't want that loss to happen silently. int doesn't have that problem.

  • 2020-11-30 12:58

    Your answer is a bit lower in the same link you provided (see also here), under Conversions:

    "The integral types are implicitly converted to decimal and the result evaluates to decimal. Therefore you can initialize a decimal variable using an integer literal, without the suffix".

    So, the reason is the implicit conversion between int and decimal. And since 0.50 is treated as a double, and there is no implicit conversion between double and decimal, you get your error.
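    So the constant from the question compiles once the literal is either an int or carries the m suffix. A sketch based on the class in the question (the FIFTY member is added for illustration):

    ```csharp
    public class MyClass
    {
        public const decimal HALF  = 0.50m; // OK: decimal literal via the M suffix
        public const decimal FIFTY = 50;    // OK: int literal, implicit int -> decimal
    }
    ```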

    For more details:

    http://msdn.microsoft.com/en-us/library/y5b434w4(v=vs.80).aspx

    http://msdn.microsoft.com/en-us/library/yht2cx7b.aspx
