What are the differences between the numeric, float and decimal data types, and which should be used in which situations?
For any kind of financial transaction (e.g. a salary field), which one is preferred and why?
Guidelines from MSDN: Using decimal, float, and real Data
The default maximum precision of numeric and decimal data types is 38. In Transact-SQL, numeric is functionally equivalent to the decimal data type. Use the decimal data type to store numbers with decimals when the data values must be stored exactly as specified.
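As a minimal sketch of that (the variable name is illustrative; numeric(10, 2) would behave identically to decimal(10, 2)):

```sql
-- decimal(10, 2): up to 10 total digits, exactly 2 after the decimal point.
-- The value is stored exactly as written, not as a binary approximation.
DECLARE @price decimal(10, 2) = 19.99;
SELECT @price AS stored_value;   -- returns exactly 19.99
```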
The behavior of float and real follows the IEEE 754 specification on approximate numeric data types. Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types. Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons.
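To illustrate why the guideline warns against = comparisons on float, here is a small sketch (variable names are made up for the example):

```sql
DECLARE @f float         = 0.1;
DECLARE @d decimal(3, 1) = 0.1;

-- float accumulates binary rounding error, so the sum is not exactly 0.3;
-- decimal arithmetic stays exact.
SELECT CASE WHEN @f + @f + @f = 0.3 THEN 'equal' ELSE 'not equal' END AS float_check,   -- 'not equal'
       CASE WHEN @d + @d + @d = 0.3 THEN 'equal' ELSE 'not equal' END AS decimal_check; -- 'equal'
```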
Decimal is an exact type with a fixed, declared precision and scale, while float is an approximate type whose precision depends on how many bits you give it.
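A short sketch of what that means in practice (the values are illustrative): decimal keeps a fixed number of digits after the decimal point regardless of the value's magnitude, while float keeps a roughly constant number of significant digits:

```sql
SELECT CAST(12345.678   AS decimal(9, 2)) AS dec_rounded,  -- 12345.68 (always 2 decimal places)
       CAST(0.123456789 AS decimal(9, 2)) AS dec_tiny,     -- 0.12
       CAST(0.123456789 AS float)         AS flt_tiny;     -- ~0.123456789 (about 15 significant digits)
```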
EDIT (failed to read the entire question): In SQL Server, float(53) is a double-precision (64-bit) floating-point number and is what a plain float declaration gives you by default; real is a synonym for float(24), a single-precision (32-bit) floating-point number. Double precision is a good combination of precision and simplicity for a lot of calculations. You can create a very high-precision number with decimal -- up to 38 digits, which takes 17 bytes (136 bits) of storage -- but you also have to be careful to define your precision and scale so that they can hold all your intermediate calculations to the necessary number of digits.
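To make those declarations concrete, here is a sketch (the table and column names are made up): float(1) through float(24) are stored as float(24) (4 bytes, about 7 digits), float(25) through float(53) as float(53) (8 bytes, about 15 digits), and real is a synonym for float(24):

```sql
CREATE TABLE dbo.PrecisionDemo
(
    Single32 real,             -- synonym for float(24): 32-bit, ~7 significant digits
    Double64 float,            -- defaults to float(53): 64-bit, ~15 significant digits
    Exact38  decimal(38, 10)   -- exact; choose precision/scale large enough for intermediate results
);
```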