When should I use double instead of decimal?

Happy的楠姐 2020-11-22 10:51

I can name three advantages to using double (or float) instead of decimal:

  1. Uses less memory.
  2. Faster because floating-point math operations are natively supported by processors.
12 Answers
  • 2020-11-22 11:44

    Choose the type as a function of your application. If you need precision, as in financial analysis, you have answered your question. But if your application can settle for an estimate, you're OK with double.

    Does your application need a fast calculation, or will it have all the time in the world to give you an answer? It really depends on the type of application.

    Graphics-hungry? float or double is enough. Financial data analysis, or meteor-striking-a-planet kind of precision? Those would need a bit of precision :)

  • 2020-11-22 11:45

    I think you've summarised the advantages quite well. You are however missing one point. The decimal type is only more accurate at representing base 10 numbers (e.g. those used in currency/financial calculations). In general, the double type is going to offer at least as great precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use double unless you need the base 10 accuracy that decimal offers.
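
    A minimal C# sketch of that base-10 point (my illustration, not from the original answer):

    using System;

    class BaseTenDemo
    {
        static void Main()
        {
            // 0.1 has no exact binary representation, so ten additions drift;
            // decimal stores base-10 digits and sums exactly.
            double d = 0.0;
            decimal m = 0.0m;
            for (int i = 0; i < 10; i++) { d += 0.1; m += 0.1m; }

            Console.WriteLine(d == 1.0);  // False (d is 0.9999999999999999)
            Console.WriteLine(m == 1.0m); // True
        }
    }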

    Edit:

    Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably with accuracy here) will steadily decrease after each operation is performed. This is due to two reasons:

    1. the fact that certain numbers (most obviously decimal fractions) can't be truly represented in binary floating-point form
    2. rounding errors occur, just as if you were doing the calculation by hand. Whether these errors are significant enough to warrant much thought, however, depends greatly on the context (how many operations you're performing).

    In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
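
    A hedged sketch of such a tolerance check (NearlyEqual and its epsilon are my own illustrative choices, not a standard API):

    using System;

    static class FloatCompare
    {
        // Tolerant equality for doubles that should agree in theory but
        // were computed along different paths. Tune epsilon per application.
        public static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
        {
            double diff = Math.Abs(a - b);
            double scale = Math.Max(Math.Abs(a), Math.Abs(b));
            return diff <= epsilon * Math.Max(scale, 1.0); // absolute test near zero
        }

        public static void Main()
        {
            double x = 0.1 + 0.2;
            Console.WriteLine(x == 0.3);            // False: representation error
            Console.WriteLine(NearlyEqual(x, 0.3)); // True: within tolerance
        }
    }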

    For a more detailed overview of the particular cases where errors in accuracies can be introduced, see the Accuracy section of the Wikipedia article. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.

  • 2020-11-22 11:45

    Note: this post is based on information about the decimal type's capabilities from http://csharpindepth.com/Articles/General/Decimal.aspx and my own interpretation of what that means. I will assume Double is normal IEEE double precision.

    Note 2: "smallest" and "largest" in this post refer to the magnitude of the number.

    Pros of "decimal".

    • "decimal" can represent exactly numbers that can be written as (sufficiently short) decimal fractions, double cannot. This is important in financial ledgers and similar where it is important that the results exactly match what a human doing the calculations would give.
    • "decimal" has a much larger mantissa than "double". That means that for values within it's normalised range "decimal" will have a much higher precision than double.

    Cons of decimal

    • It will be much slower (I don't have benchmarks, but I would guess at least an order of magnitude, maybe more): decimal does not benefit from any hardware acceleration, and arithmetic on it requires relatively expensive multiplication/division by powers of 10 (which is far more expensive than multiplication and division by powers of 2) to match the exponents before addition/subtraction and to bring the exponent back into range after multiplication/division.
    • decimal will overflow earlier than double will. decimal can only represent numbers up to ±(2^96 - 1). By comparison, double can represent numbers up to nearly ±2^1024 (see the sketch after this list).
    • decimal will underflow earlier. The smallest magnitudes representable in decimal are ±10^-28. By comparison, double can represent values down to 2^-1074 (approx 10^-324) if subnormal numbers are supported and 2^-1022 (approx 10^-308) if they are not.
    • decimal takes up twice as much memory as double.
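
    A small sketch of those range limits (my illustration; decimal.MaxValue and double.MaxValue are the real .NET constants):

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 = 2^96 - 1
            Console.WriteLine(double.MaxValue);  // ~1.7976931348623157E+308

            double big = 1e40;                   // comfortably within double's range
            Console.WriteLine(big);

            try
            {
                decimal d = decimal.MaxValue;
                d *= 2m;                         // decimal arithmetic overflow always throws
            }
            catch (OverflowException)
            {
                Console.WriteLine("decimal overflowed");
            }
        }
    }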

    My opinion is that you should default to using "decimal" for money work and other cases where matching human calculation exactly is important, and that you should use double as your default choice the rest of the time.

  • 2020-11-22 11:54

    You seem spot on with the benefits of using a floating point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.

    In general, if your value is transient (not reused), you're safe to use a floating point type. The real problem with floating point types arises in the following three scenarios:

    1. You are aggregating floating-point values (in which case the precision errors compound)
    2. You build values based on the floating-point value (for example in a recursive algorithm)
    3. You are doing math with a very wide span of significant digits (for example, 123456789.1 * .000000000000000987654321; see the sketch after this list)
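
    A minimal sketch of scenario 3 (my illustration; the exact product needs about 19 significant digits, which fits within decimal's 28-29 but not double's 15-16):

    using System;

    class WideDigitsDemo
    {
        static void Main()
        {
            // double rounds both inputs and the product to ~15-17 significant digits.
            double dp = 123456789.1 * 0.000000000000000987654321;

            // decimal represents the inputs and the full product exactly.
            decimal mp = 123456789.1m * 0.000000000000000987654321m;

            Console.WriteLine(dp.ToString("G17"));
            Console.WriteLine(mp);
        }
    }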

    EDIT

    According to the reference documentation on C# decimals:

    The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.

    So to clarify my above statement:

    I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs.

    I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating point type (float or double).

    Decimal is not infinitely precise (it is impossible to represent infinite precision for a non-integral value in a primitive data type), but it is far more precise than double (see the sketch after this list):

    • decimal = 28-29 significant digits
    • double = 15-16 significant digits
    • float = 7 significant digits
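
    A quick sketch of those digit counts, printing 1/3 in each type (my illustration; exact output formatting varies slightly by runtime):

    using System;

    class DigitsDemo
    {
        static void Main()
        {
            Console.WriteLine(1f / 3f); // 0.33333334                     (~7 digits)
            Console.WriteLine(1d / 3d); // 0.3333333333333333             (~16 digits)
            Console.WriteLine(1m / 3m); // 0.3333333333333333333333333333 (28 digits)
        }
    }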

    EDIT 2

    In response to Konrad Rudolph's comment, item #1 (above) is definitely correct. Aggregation of imprecision does indeed compound. See the code below for an example:

    using System;

    public static class AggregationDemo
    {
        private const float THREE_FIFTHS = 3f / 5f;
        private const int ONE_MILLION = 1000000;

        public static void Main(string[] args)
        {
            Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));

            float asSingle = 0f;
            double asDouble = 0d;
            decimal asDecimal = 0M;

            // Add the same constant a million times in each type; the
            // representation error of 3/5 in binary compounds with every addition.
            for (int i = 0; i < ONE_MILLION; i++)
            {
                asSingle += THREE_FIFTHS;
                asDouble += THREE_FIFTHS;
                asDecimal += (decimal) THREE_FIFTHS;
            }

            Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
            Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
            Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
            Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
            Console.ReadLine();
        }
    }


    This outputs the following:

    Three Fifths: 0.6000000000
    Six Hundred Thousand: 600000.0000000000
    Single: 599093.4000000000
    Double: 599999.9999886850
    Decimal: 600000.0000000000
    

    As you can see, even though we are adding from the same source constant, the result of the double is less precise (although it will probably round correctly), and the float is far less precise, to the point where it has been reduced to only two significant digits of accuracy.

  • 2020-11-22 11:55

    If you need binary interop with other languages or platforms, then you might need to use float or double, which are standardized (IEEE 754).
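
    A brief sketch of the difference (my illustration; BitConverter.GetBytes and decimal.GetBits are real .NET APIs):

    using System;

    class InteropDemo
    {
        static void Main()
        {
            // double has a standard IEEE 754 bit pattern that any platform
            // can decode (byte order follows platform endianness).
            byte[] wire = BitConverter.GetBytes(1.5d);
            Console.WriteLine(BitConverter.ToString(wire));

            // decimal's layout (sign, scale, 96-bit integer) is .NET-specific,
            // so other platforms need a custom encoding to consume it.
            int[] parts = decimal.GetBits(1.5m);
            Console.WriteLine(string.Join(", ", parts));
        }
    }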

  • 2020-11-22 11:57

    Use a double or a float when you don't need precision. For example, in a platformer game I wrote, I used a float to store the player's velocities. Obviously I don't need super precision here because I eventually round to an int for drawing on the screen.
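
    A toy sketch of that pattern (the names are hypothetical):

    using System;

    class PlatformerSketch
    {
        static void Main()
        {
            // Hypothetical example: simulate in float, round only for drawing.
            float velocityX = 3.7f;   // pixels per frame
            float positionX = 10f;

            positionX += velocityX;                  // physics stays in float
            int pixelX = (int)Math.Round(positionX); // whole pixels at draw time

            Console.WriteLine(pixelX); // 14
        }
    }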
