Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?

时光取名叫无心 2020-11-22 03:45

I'm curious as to whether or not there is a real difference between the money datatype and something like decimal(19,4) (which is what money uses internally, I believe). I'm aware that money is specific to SQL Server. Is there a compelling reason to choose one over the other?

12 Answers
  • 2020-11-22 04:00

    I found a reason to prefer decimal over money where accuracy is concerned.

    DECLARE @dOne   DECIMAL(19,4),
            @dThree DECIMAL(19,4),
            @mOne   MONEY,
            @mThree MONEY,
            @fOne   FLOAT,
            @fThree FLOAT

    SELECT @dOne   = 1,
           @dThree = 3,
           @mOne   = 1,
           @mThree = 3,
           @fOne   = 1,
           @fThree = 3

    -- Divide one by three, then multiply the result back by three
    SELECT (@dOne/@dThree)*@dThree AS DecimalResult,
           (@mOne/@mThree)*@mThree AS MoneyResult,
           (@fOne/@fThree)*@fThree AS FloatResult
    

    DecimalResult    MoneyResult    FloatResult
    ---------------  -------------  -----------
    1.000000         0.9999         1

    Just test it and make your decision.

  • 2020-11-22 04:04

    Well, I like MONEY! It's a byte cheaper than DECIMAL, and the computations perform quicker because (under the covers) addition and subtraction operations are essentially integer operations. @SQLMenace's example—which is a great warning for the unaware—could equally be applied to INTegers, where the result would be zero. But that's no reason not to use integers—where appropriate.
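
    For illustration, here is what that same round trip looks like with plain INTs (a minimal sketch):

    DECLARE @iOne INT = 1, @iThree INT = 3

    -- Integer division truncates, so the round trip comes back as 0
    SELECT (@iOne/@iThree)*@iThree AS IntResult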

    So, it's perfectly 'safe' and appropriate to use MONEY when what you are dealing with is MONEY and use it according to mathematical rules that it follows (same as INTeger).

    Would it have been better if SQL Server promoted division and multiplication of MONEY values into DECIMALs (or FLOATs?) Possibly, but they didn't choose to do this; nor did they choose to promote INTegers to FLOATs when dividing them.

    MONEY has no precision issue; that DECIMALs get to have a larger intermediate type used during calculations is just a 'feature' of using that type (and I'm not actually sure how far that 'feature' extends).

    To answer the specific question, a "compelling reason"? Well, if you want absolute maximum performance in a SUM(x) where x could be either DECIMAL or MONEY, then MONEY will have an edge.

    Also, don't forget its smaller cousin, SMALLMONEY: just 4 bytes, but it maxes out at 214,748.3647, which is pretty small for money and so is not often a good fit.
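
    That ceiling is easy to hit; here is a quick sketch of the overflow behaviour (the variable is hypothetical):

    DECLARE @s SMALLMONEY
    SET @s = 214748.3647    -- the maximum SMALLMONEY value
    -- SET @s = 214748.3648 would raise an arithmetic overflow error
    SELECT @s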

    To prove the point around using larger intermediate types, if you assign the intermediate explicitly to a variable, DECIMAL suffers the same problem:

    DECLARE @a DECIMAL(19,4)
    DECLARE @b DECIMAL(19,4)
    DECLARE @c DECIMAL(19,4)
    DECLARE @d DECIMAL(19,4)

    SELECT @a = 100, @b = 339, @c = 10000

    -- The intermediate is rounded to four decimal places on assignment
    SET @d = @a/@b

    SET @d = @d*@c

    SELECT @d
    

    Produces 2950.0000. (Okay, so at least DECIMAL rounds where MONEY truncates, just as an integer would.)
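
    A hypothetical variant that declares the intermediate with a wider scale avoids most of the loss, which suggests the variable's limited scale, not DECIMAL itself, is the culprit:

    DECLARE @a DECIMAL(19,4),
            @b DECIMAL(19,4),
            @c DECIMAL(19,4),
            @d DECIMAL(19,10)   -- wider scale for the intermediate

    SELECT @a = 100, @b = 339, @c = 10000

    SET @d = @a/@b              -- 0.2949852507 rather than 0.2950

    SELECT @d*@c                -- roughly 2949.8525, as expected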

  • 2020-11-22 04:06

    We've just come across a very similar issue, and I'm now very much a +1 for never using Money except in top-level presentation. We have multiple tables (effectively a sales voucher and a sales invoice), each of which contains one or more Money fields for historical reasons, and we need to perform a pro-rata calculation to work out how much of the total invoice tax is relevant to each line on the sales voucher. Our calculation is

    vat proportion = total invoice vat x (voucher line value / total invoice value)
    

    This results in a real-world money / money calculation which causes scale errors on the division, which then multiply up into an incorrect vat proportion. When these values are subsequently added, we end up with a sum of the vat proportions that does not add up to the total invoice value. Had either of the values in the brackets been a decimal (I'm about to cast one of them as such), the vat proportion would be correct.

    When the brackets weren't there originally, this used to work; I guess that, because of the larger values involved, it was effectively simulating a higher scale. We added the brackets because the multiplication was being done first, which in some rare cases blew the precision available for the calculation, but this has now caused this much more common error.
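
    A minimal sketch of the failure mode with hypothetical figures (the names and values are illustrative, not our real data):

    DECLARE @lineValue    MONEY = 100.00,    -- voucher line value
            @invoiceTotal MONEY = 339.00,    -- total invoice value
            @invoiceVat   MONEY = 1000.00    -- total invoice vat

    -- money / money keeps only four decimal places in the ratio,
    -- so the pro-rata vat drifts
    SELECT @invoiceVat * (@lineValue/@invoiceTotal) AS money_proportion

    -- casting one operand to DECIMAL forces a higher-scale division
    SELECT @invoiceVat * (CAST(@lineValue AS DECIMAL(19,4))/@invoiceTotal) AS decimal_proportion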

  • 2020-11-22 04:09

    SQLMenace said money is inexact. But you don't multiply/divide money by money! How much is 3 dollars times 50 cents? 150 dollar-cents? You multiply/divide money by scalars, which should be decimal.

    DECLARE @mon1 MONEY,
            @mon4 MONEY,
            @num1 DECIMAL(19,4),
            @num2 DECIMAL(19,4),
            @num3 DECIMAL(19,4),
            @num4 DECIMAL(19,4)

    SELECT @mon1 = 100,
           @num1 = 100, @num2 = 339, @num3 = 10000

    -- Each whole expression is evaluated before assignment,
    -- so the division keeps its full intermediate scale
    SET @mon4 = @mon1/@num2*@num3
    SET @num4 = @num1/@num2*@num3

    SELECT @mon4 AS moneyresult,
           @num4 AS numericresult

    This produces the correct result:

    moneyresult           numericresult
    --------------------- ---------------------------------------
    2949.8525             2949.8525

    money is good as long as you don't need more than four decimal digits, and you make sure your scalars (which do not represent money) are decimals.

  • 2020-11-22 04:11

    I want to give a different view of MONEY vs. NUMERIC, largely based on my own expertise and experience... My point of view here favours MONEY, because I have worked with it for a considerably long time and never really used NUMERIC much...

    MONEY Pro:

    • Native Data Type. It uses a native data type (integer), the same as a CPU register (32 or 64 bit), so the calculation carries no unnecessary overhead; it is smaller and faster... MONEY needs 8 bytes and NUMERIC(19,4) needs 9 bytes (12.5% bigger)...

      MONEY is faster as long as it is used for what it was meant to be (as money). How fast? My simple SUM test on 1 million rows shows MONEY at 275 ms and NUMERIC at 517 ms... That is almost twice as fast... Why a SUM test? See the next Pro point, and the benchmark sketch at the end of this answer.

    • Best for Money. MONEY is best for storing money and doing operations on it, for example in accounting. A single report can run millions of additions (SUM) and a few multiplications after the SUM operation is done. For very big accounting applications it is almost twice as fast, and that is extremely significant...
    • Low Precision of Money. Money in real life doesn't need to be very precise. I mean, many people may care about 1 US cent, but how about 0.01 of a cent? In fact, in my country, banks no longer care about cents (the digits after the decimal separator); I don't know about US banks or other countries...

    MONEY Con:

    • Limited Precision. MONEY has only four digits of precision after the decimal separator, so it has to be converted before doing operations such as division... But then again, money doesn't need to be that precise; it is meant to be used as money, not just as a number...

    But... and it's a big but: even if your application involves real money but does not use it in lots of SUM operations, as in accounting, and instead uses lots of divisions and multiplications, then you should not use MONEY...
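
    To reproduce the SUM comparison mentioned above, a minimal benchmark sketch along these lines can be used (the temp table, row count, and values are illustrative, and timings will vary by hardware):

    -- Populate a temp table with 1 million identical values in both types
    CREATE TABLE #t (m MONEY, d DECIMAL(19,4))

    INSERT INTO #t (m, d)
    SELECT TOP (1000000) 1.2345, 1.2345
    FROM sys.all_objects a CROSS JOIN sys.all_objects b

    -- Compare the CPU/elapsed times reported for the two aggregations
    SET STATISTICS TIME ON
    SELECT SUM(m) AS money_sum   FROM #t   -- integer arithmetic under the covers
    SELECT SUM(d) AS decimal_sum FROM #t   -- packed-decimal arithmetic
    SET STATISTICS TIME OFF

    DROP TABLE #t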

  • 2020-11-22 04:13

    I realise that WayneM has stated he knows that money is specific to SQL Server. However, he is asking if there are any reasons to use money over decimal (or vice versa), and I think one obvious reason still ought to be stated: using decimal means one less thing to worry about if you ever have to change your DBMS, which can happen.

    Make your systems as flexible as possible!
