What are the differences between the numeric, float and decimal datatypes, and which should be used in which situations?
For any kind of financial transaction (e.g. a salary field), which one is preferred, and why?
Decimal is an exact numeric type with a precision and scale that you fix when you declare it, while float is an approximate binary floating-point type. (In SQL Server, decimal and numeric are synonyms, so everything said about decimal here applies to numeric as well.)
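Here is a minimal T-SQL sketch of what that means in practice (the exact digits shown for the float result depend on how your client displays it, but the point is that it is not exactly 1):

    -- Sum 0.1 ten times into a float and into a decimal(10, 2).
    -- float stores a binary approximation of 0.1, so the rounding error
    -- accumulates; decimal stores the value exactly at the declared scale.
    DECLARE @f float = 0, @d decimal(10, 2) = 0, @i int = 0;
    WHILE @i < 10
    BEGIN
        SET @f = @f + 0.1;
        SET @d = @d + 0.1;
        SET @i = @i + 1;
    END
    SELECT @f AS float_sum,      -- not exactly 1 (e.g. 0.9999999999999999)
           @d AS decimal_sum;    -- exactly 1.00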
EDIT (failed to read entire question): float(53) is a double-precision (64-bit) floating-point number in SQL Server, and it is what plain float defaults to; float(24) (aka real) is a single-precision (32-bit) floating-point number. Double precision is a good combination of precision and simplicity for a lot of calculations. You can create a very high-precision number with decimal -- up to 38 digits -- but you also have to be careful that you define your precision and scale so that they can hold all of your intermediate calculations to the necessary number of digits.
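As a sketch of that precision/scale caveat (the variable names are just illustrative; the rounding described in the comments is the documented behaviour of decimal arithmetic when the result type would exceed the 38-digit maximum):

    -- Multiplying two decimals: the result type is computed as
    -- precision p1 + p2 + 1 and scale s1 + s2. Here that exceeds the
    -- 38-digit maximum, so the scale is silently reduced (down to 6),
    -- and the trailing decimal digits of the true result are lost.
    DECLARE @a decimal(38, 10) = 1.0000000001,
            @b decimal(38, 10) = 1.0000000001;
    SELECT @a * @b AS decimal_product;   -- comes back as 1.000000 (scale 6)

    -- float(53), the default float, keeps roughly 15 significant digits,
    -- but only as a binary approximation.
    DECLARE @x float(53) = 1.0000000001;
    SELECT @x * @x AS float_product;     -- approximately 1.0000000002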