Difference between numeric, float and decimal in SQL Server

半阙折子戏 2020-11-22 04:27

What are the differences between numeric, float and decimal datatypes and which should be used in which situations?

For any kind of financial transaction (e.g. for a salary field), which one is preferred and why?

8 Answers
  •  情话喂你
    2020-11-22 05:20

    The case for Decimal

    What is the underlying need?

    It arises from the fact that computers ultimately represent numbers internally in binary format, and that inevitably leads to rounding errors.

    Consider this:

    0.1 (decimal, or "base 10") = .00011001100110011... (binary, or "base 2")
    

    The ellipsis [...] means the expansion never terminates: if you look carefully, there is an infinitely repeating pattern ('0011').

    So, at some point the computer has to round that value. This leads to errors that accumulate as the inexactly stored numbers are used over and over again.
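
    A minimal T-SQL sketch (not part of the original answer) of how that error accumulates: adding 0.1 ten times with FLOAT versus DECIMAL. The variable names are illustrative only.

        DECLARE @f FLOAT         = 0,
                @d DECIMAL(10,2) = 0,
                @i INT           = 0;

        WHILE @i < 10
        BEGIN
            SET @f = @f + 0.1;   -- 0.1 cannot be stored exactly in binary
            SET @d = @d + 0.1;   -- 0.1 is stored exactly in base 10
            SET @i = @i + 1;
        END;

        SELECT @f     AS float_sum,    -- close to, but not exactly, 1
               @f - 1 AS float_error,  -- a tiny non-zero residue
               @d     AS decimal_sum;  -- exactly 1.00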

    Say that you want to store financial amounts (numbers that may have a fractional part). First of all, you obviously cannot use integers (they have no fractional part). From a purely mathematical point of view, the natural tendency would be to use a float. But in a computer, a float keeps only a limited number of significant digits (the "mantissa", or significand), so most decimal fractions, such as 0.1, cannot be represented exactly. That leads to rounding errors.
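
    A related sketch, again not from the original answer: with FLOAT, even a simple equality check on an amount can fail, which is exactly what you do not want for money. The values are assumptions chosen for illustration.

        DECLARE @a FLOAT = 0.1,
                @b FLOAT = 0.2;

        -- 0.1 + 0.2 in binary floating point is not exactly 0.3
        SELECT CASE WHEN @a + @b = 0.3
                    THEN 'equal'
                    ELSE 'not equal'   -- this is the branch that comes back
               END AS float_comparison;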

    To overcome this, computers offer specific datatypes that avoid the binary rounding error described above for decimal numbers. These are the datatypes that should be used to represent financial amounts. They typically go by the name of Decimal; that is the case in C#, for example, or DECIMAL in most databases.
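
    To make that concrete, here is a sketch of what a column for financial amounts might look like in SQL Server. The table and column names are made up for the example, and DECIMAL(19, 4) is just one common choice of precision and scale.

        -- Hypothetical table: Amount is stored exactly, in base 10
        CREATE TABLE dbo.Payments
        (
            PaymentId INT IDENTITY(1,1) PRIMARY KEY,
            Amount    DECIMAL(19, 4) NOT NULL  -- 19 total digits, 4 after the decimal point
        );

        INSERT INTO dbo.Payments (Amount) VALUES (0.10), (1234.56);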
