Why, when I save a value of, say, 40.54 in SQL Server to a column of type Real, does it come back as something more like 40.53999878999 instead of 40.54? I've seen this
To add a clarification: a floating-point number stored in a computer behaves as described in the other answers here because it is stored in binary format. Unless its value can be expressed exactly as a sum of powers of two (within the precision of the mantissa and the range of the exponent), it cannot be represented exactly.
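For instance, here is a sketch in Python (whose `float` is an IEEE 754 double): passing the literal 40.54 to `Decimal` expands the exact binary value that is actually stored, which is not 40.54 itself.

```python
from decimal import Decimal

# Decimal(float) expands the float's exact binary value in decimal digits,
# showing that the double nearest to 40.54 is slightly off from 40.54.
print(Decimal(40.54))
print(Decimal(40.54) == Decimal("40.54"))  # False: the stored binary value differs
```

The long decimal expansion printed on the first line is the exact value of the nearest representable binary fraction.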
Some systems, on the other hand, store fractional numbers in decimal (SQL Server's DECIMAL and NUMERIC data types, and Oracle's NUMBER data type, for example). Their internal representation is therefore exact for any number that can be written with a finite number of decimal digits, within the declared precision. But numbers that have no finite decimal expansion, such as 1/3, still cannot be represented exactly.
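As an illustration, a sketch using Python's `decimal` module (which behaves like the SQL DECIMAL types in this respect): decimal arithmetic is exact for decimal fractions but still has to round a value like 1/3.

```python
from decimal import Decimal

# Decimal fractions are exact in a decimal representation...
print(Decimal("0.1") + Decimal("0.2"))   # exactly 0.3
print(0.1 + 0.2)                         # binary floats: 0.30000000000000004
# ...but 1/3 has no finite decimal expansion, so it is rounded
# to the context precision, and multiplying back by 3 misses 1:
third = Decimal(1) / Decimal(3)
print(third * 3)                         # 0.999... rather than exactly 1
```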
Floating-point numbers use binary fractions, which don't correspond exactly to decimal fractions.
For money, it's better to either store the number of cents as an integer or use a decimal number type. For example, DECIMAL(8,2) stores 8 significant digits, of which 2 are decimals (xxxxxx.xx), i.e. cent precision.
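A minimal sketch of the integer-cents approach (the prices here are made-up examples): all arithmetic stays in integers, so no binary rounding can occur.

```python
# Keep money as integer cents; convert to dollars only for display.
price_cents = 4054                       # $40.54, stored exactly as an int
total_cents = 3 * price_cents            # arithmetic on integers is exact
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")         # prints $121.62
```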
In a nutshell, it's for pretty much the same reason that one-third cannot be exactly expressed in decimal. Have a look at David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
Have a look at "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Floating point numbers in computers don't represent decimal fractions exactly. Instead, they represent binary fractions. Most fractional numbers don't have an exact representation as a binary fraction, so there is some rounding going on. When such a rounded binary fraction is translated back to a decimal fraction, you get the effect you describe.
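The effect in the question can be reproduced outside the database. SQL Server's REAL is a 4-byte IEEE 754 single, so round-tripping 40.54 through single precision (a sketch in Python using the standard `struct` module) shows that the stored value is not exactly 40.54:

```python
import struct

# Pack 40.54 into a 4-byte IEEE 754 single (the format of SQL Server's REAL),
# then unpack it to see the value that was actually stored.
stored = struct.unpack("f", struct.pack("f", 40.54))[0]
print(stored)           # the nearest single-precision value, not 40.54 itself
print(stored == 40.54)  # False
```

Exactly how many "wrong" digits you see depends on how the client formats the value when converting it back to decimal.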
For storing money values, SQL databases normally provide a DECIMAL type that stores exact decimal digits. This format is slightly less efficient for computers to deal with, but it is quite useful when you want to avoid decimal rounding errors.