Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?


I'm curious as to whether or not there is a real difference between the money datatype and something like decimal(19,4), which is what money uses internally.

12 Answers
  • 2020-11-22 03:48

    Never ever should you use money. It is not precise, and it is pure garbage; always use decimal/numeric.

    Run this to see what I mean:

    DECLARE
        @mon1 MONEY,
        @mon2 MONEY,
        @mon3 MONEY,
        @mon4 MONEY,
        @num1 DECIMAL(19,4),
        @num2 DECIMAL(19,4),
        @num3 DECIMAL(19,4),
        @num4 DECIMAL(19,4)

    SELECT
        @mon1 = 100, @mon2 = 339, @mon3 = 10000,
        @num1 = 100, @num2 = 339, @num3 = 10000

    -- money truncates the intermediate quotient to 4 decimal places;
    -- decimal carries extra scale through the intermediate result
    SET @mon4 = @mon1 / @mon2 * @mon3
    SET @num4 = @num1 / @num2 * @num3

    SELECT @mon4 AS moneyresult,
           @num4 AS numericresult

    Output: moneyresult = 2949.0000, numericresult = 2949.8525
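
    A minimal workaround sketch, continuing the same batch: casting one operand to decimal promotes the whole expression to decimal arithmetic, and the result can then be stored back into the money variable.

    -- casting one operand makes the division happen in decimal
    SET @mon4 = CAST(@mon1 AS DECIMAL(19,4)) / @mon2 * @mon3

    SELECT @mon4 AS castresult  -- 2949.8525, matching the numeric result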

    To some of the people who said that you don't divide money by money:

    Here is one of my queries that calculates correlations; changing its column type to money gives wrong results.

    select t1.index_id, t2.index_id,
           (avg(t1.monret * t2.monret)
               - (avg(t1.monret) * avg(t2.monret)))
               / ((sqrt(avg(square(t1.monret)) - square(avg(t1.monret))))
               * (sqrt(avg(square(t2.monret)) - square(avg(t2.monret))))),
           current_timestamp, @MaxDate
    from Table1 t1
    join Table1 t2 on t1.Date = t2.Date
    group by t1.index_id, t2.index_id
  • 2020-11-22 03:49

    Everything is dangerous if you don't know what you are doing

    Even high-precision decimal types can't save the day:

    declare @num1 numeric(38,22)
    declare @num2 numeric(38,22)
    set @num1 = .0000006
    set @num2 = 1.0

    -- the product's inferred type overflows the 38-digit cap, so its
    -- scale collapses and the tiny value is rounded away
    select @num1 * @num2 * 1000000

    1.000000 <- Should be 0.6000000
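
    A quick sketch of why, using the same variables: SQL_VARIANT_PROPERTY reveals the type SQL Server infers for the product. Precision (38+38+1 = 77) and scale (22+22 = 44) both overflow the 38-digit cap, so the scale is cut down to 6 and 0.0000006 rounds up to 0.000001 before the final multiplication.

    -- inspect the inferred intermediate type: numeric(38,6)
    select sql_variant_property(@num1 * @num2, 'Precision') as result_precision,  -- 38
           sql_variant_property(@num1 * @num2, 'Scale')     as result_scale       -- 6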


    The money types are integers

    The text representations of smallmoney and decimal(10,4) may look alike, but that doesn't make them interchangeable. Do you cringe when you see dates stored as varchar(10)? This is the same thing.

    Behind the scenes, money/smallmoney are just a bigint/int. The decimal point in the text representation of money is visual fluff, just like the dashes in a yyyy-mm-dd date. SQL Server doesn't actually store either internally.
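
    A small sketch to make that concrete; the second line uses the explicit money-to-binary conversion to expose the raw bytes:

    -- 1.23 is stored as 12300 ten-thousandths in an 8-byte integer
    select cast(cast(1.23 as money) * 10000 as bigint)   -- 12300
    select cast(cast(1.23 as money) as binary(8))        -- 0x000000000000300C (12300)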

    Regarding decimal vs money, pick whatever is appropriate for your needs. The money types exist because storing accounting values as integer multiples of 1/10000th of a unit is very common. Also, if you are dealing with actual money and calculations beyond simple addition and subtraction, you shouldn't be doing that at the database level! Do it at the application level with a library that supports Banker's Rounding (IEEE 754).

  • 2020-11-22 03:55

    As a counterpoint to the general thrust of the other answers, see The Many Benefits of Money…Data Type! in SQLCAT's Guide to Relational Engine.

    Specifically, I would point out the following:

    Working on customer implementations, we found some interesting performance numbers concerning the money data type. For example, when Analysis Services was set to the currency data type (from double) to match the SQL Server money data type, there was a 13% improvement in processing speed (rows/sec). To get faster performance within SQL Server Integration Services (SSIS) to load 1.18 TB in under thirty minutes, as noted in SSIS 2008 - world record ETL performance, it was observed that changing the four decimal(9,2) columns with a size of 5 bytes in the TPC-H LINEITEM table to money (8 bytes) improved bulk inserting speed by 20% ... The reason for the performance improvement is because of SQL Server’s Tabular Data Stream (TDS) protocol, which has the key design principle to transfer data in compact binary form and as close as possible to the internal storage format of SQL Server. Empirically, this was observed during the SSIS 2008 - world record ETL performance test using Kernrate; the protocol dropped significantly when the data type was switched to money from decimal. This makes the transfer of data as efficient as possible. A complex data type needs additional parsing and CPU cycles to handle than a fixed-width type.

    So the answer to the question is "it depends". You need to be more careful with certain arithmetical operations to preserve precision but you may find that performance considerations make this worthwhile.

  • 2020-11-22 03:55

    All the previous posts bring valid points, but some don't answer the question precisely.

    The question is: Why would someone prefer money when we already know it is a less precise data type and can cause errors if used in complex calculations?

    You use money when you won't be making complex calculations and can trade that precision away for other needs.

    For example, when you don't have to make those calculations and need to import data from valid currency text strings. This automatic conversion works only with the MONEY data type:

    SELECT CONVERT(MONEY, '$1,000.68')
    

    I know you can write your own import routine. But sometimes you don't want to recreate an import routine that handles every locale-specific currency format worldwide.
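
    A short sketch of the contrast (TRY_CONVERT assumes SQL Server 2012 or later):

    -- only MONEY understands the currency symbol and thousands separator
    SELECT TRY_CONVERT(MONEY, '$1,000.68')          -- 1000.68
    SELECT TRY_CONVERT(DECIMAL(19,4), '$1,000.68')  -- NULL: the conversion fails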

    Another example: when you don't have to make those calculations (you just need to store a value) and want to save 1 byte (money takes 8 bytes and decimal(19,4) takes 9 bytes). In some applications (fast CPU, big RAM, slow IO), such as just reading huge amounts of data, this can be faster too.

  • 2020-11-22 03:56

    I just saw this blog entry: Money vs. Decimal in SQL Server.

    Which basically says that money has a precision issue...

    declare @m money
    declare @d decimal(9,2)

    set @m = 19.34
    set @d = 19.34

    -- money truncates the intermediate quotient to 4 decimal places,
    -- so multiplying back up cannot recover the lost digits
    select (@m/1000)*1000
    select (@d/1000)*1000

    For the money type, you will get 19.30 instead of 19.34. I am not sure if there is an application scenario that divides money into 1000 parts for calculation, but this example does expose some limitations.

  • 2020-11-22 03:58

    You shouldn't use money when you need to do multiplication or division on the value. Money is stored as a scaled integer, whereas decimal is stored with an explicit decimal point and digits. Money is fixed point, so its scale of 4 doesn't change during calculations; intermediate results are truncated to 4 decimal places, which means money drops accuracy in most such cases, while decimal only does so when a result is converted back to its original scale. Because the fixed point is a decimal position (not a position in a base-2 string), values with up to 4 decimal places are represented exactly. So for addition and subtraction, money is fine.

    A decimal is represented in base 10 internally, and the position of its decimal point is likewise defined in base 10, which makes its fractional part exact, just as with money. The difference is that intermediate decimal values can maintain precision of up to 38 digits.

    With a floating point number, the value is stored in binary as if it were an integer, and the position of the decimal (or binary, ahem) point is relative to the bits representing the number. Because it is a binary point, base-10 numbers lose precision right after the decimal point: 1/5th, or 0.2, cannot be represented precisely this way. Neither money nor decimal suffers from this limitation.
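
    A quick sketch of that difference on the classic 0.1 + 0.2 case (variable names are illustrative):

    DECLARE @f1 FLOAT = 0.1, @f2 FLOAT = 0.2
    DECLARE @d1 DECIMAL(19,4) = 0.1, @d2 DECIMAL(19,4) = 0.2

    -- float: 0.1 and 0.2 have no exact base-2 representation
    SELECT CASE WHEN @f1 + @f2 = 0.3e0 THEN 'equal' ELSE 'not equal' END  -- not equal
    -- decimal: the fractional part is exact, so the comparison holds
    SELECT CASE WHEN @d1 + @d2 = 0.3   THEN 'equal' ELSE 'not equal' END  -- equal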

    It is easy enough to convert money to decimal, perform the calculations, and then store the resulting value back into a money field or variable.
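
    A minimal sketch of that round trip, reusing the 19.34 division from the earlier answer (the inline DECLARE initializer assumes SQL Server 2008 or later):

    DECLARE @m MONEY = 19.34

    -- do the intermediate math in decimal, then store back into money
    DECLARE @result MONEY = (CAST(@m AS DECIMAL(19,4)) / 1000) * 1000

    SELECT @result  -- 19.34, rather than the 19.30 from pure money arithmetic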

    From my POV, I want stuff that happens to numbers to just happen without my having to give too much thought to it. If all calculations are going to get converted to decimal anyway, then I'd just want to use decimal. I'd save the money type for display purposes.

    Size-wise I don't see enough of a difference to change my mind. Money takes 4 or 8 bytes (smallmoney vs money), whereas decimal takes 5, 9, 13, or 17 depending on precision. A 9-byte decimal can cover the entire range that the 8 bytes of money can. Index-wise, comparing and searching should be comparable.
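
    A one-line sketch to verify those sizes:

    -- DATALENGTH returns the storage size in bytes
    SELECT DATALENGTH(CAST(1 AS SMALLMONEY)),     -- 4
           DATALENGTH(CAST(1 AS MONEY)),          -- 8
           DATALENGTH(CAST(1 AS DECIMAL(19,4)))   -- 9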
