What are the differences between the numeric, float and decimal datatypes, and which should be used in which situations?
What is the underlying need?
It arises from the fact that computers ultimately represent numbers internally in binary format, which inevitably leads to rounding errors.
Consider this:
0.1 (decimal, or "base 10") = 0.00011001100110011... (binary, or "base 2")
The ellipsis [...] means the expansion never terminates: if you look at it carefully, the pattern '0011' repeats infinitely.
So, at some point, the computer has to round that value. Errors then accumulate whenever such inexactly stored numbers are used repeatedly.
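To see this concretely, here is a minimal C# sketch (the class and variable names are just for illustration) that prints the value a double actually stores for 0.1, using the "G17" format to show enough digits:

```csharp
using System;

class InexactTenth
{
    static void Main()
    {
        // The literal 0.1 is rounded to the nearest representable
        // IEEE 754 double before it is ever stored.
        double tenth = 0.1;

        // "G17" prints enough digits to reveal the value actually stored.
        Console.WriteLine(tenth.ToString("G17")); // 0.10000000000000001
    }
}
```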
Say that you want to store financial amounts, which are numbers that may have a fractional part. Obviously, you cannot use plain integers (integers don't have a fractional part).
From a purely mathematical point of view, the natural tendency would be to use a float. But floats store the significant digits of a number - the "mantissa" - in a limited number of binary bits. That leads to the rounding errors described above.
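For example, here is a small C# sketch (the 10-cent amount is a made-up figure) showing how the error grows when an inexactly stored value is added many times:

```csharp
using System;

class FloatAccumulation
{
    static void Main()
    {
        // Add a 10-cent amount 1000 times using a binary floating-point type.
        double total = 0.0;
        for (int i = 0; i < 1000; i++)
        {
            total += 0.10;
        }

        // The stored 0.10 is slightly off, and the error accumulates:
        Console.WriteLine(total.ToString("G17")); // roughly 99.9999999999986, not exactly 100
        Console.WriteLine(total == 100.0);        // False
    }
}
```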
To overcome this, computers offer specific datatypes that store decimal numbers in a base-10 representation, which avoids the binary rounding problem. These are the datatypes that should be used to represent financial amounts. They typically go by the name of Decimal - that's the case in C#, for example - or DECIMAL in most databases.
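For comparison, here is a sketch of the same accumulation using C#'s decimal type, which keeps a base-10 significand, so the result comes out exact. (An equivalent column in a database would be declared as DECIMAL with an appropriate precision and scale.)

```csharp
using System;

class DecimalAccumulation
{
    static void Main()
    {
        // decimal stores a base-10 significand, so 0.10 is represented exactly.
        decimal total = 0m;
        for (int i = 0; i < 1000; i++)
        {
            total += 0.10m;
        }

        Console.WriteLine(total);         // 100.00
        Console.WriteLine(total == 100m); // True
    }
}
```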