Why is the data stored in a Float datatype considered to be an approximate value?
Question: I've never understood why a float datatype is considered an approximation while a decimal datatype is considered exact. I'm looking for a good explanation, thanks.

Answer 1: Well, you're right - it's misleading to make such a blanket statement. To understand it completely you need to grasp two things. First, decimal is intended for storing decimal values exactly, with a fixed number of decimal places - typically money, where the decimal places are cents, for example. That's a very specific use case. It's
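To make the difference concrete, here is a minimal sketch using Python's built-in float and decimal.Decimal as stand-ins - the question doesn't name a particular database or language, so this is only an illustrative assumption, not the exact types being asked about:

```python
from decimal import Decimal

# Binary float: 0.1 has no exact binary representation, so the stored
# value is only an approximation and the error shows up in arithmetic.
total = 0.1 + 0.1 + 0.1
print(total == 0.3)        # False
print(f"{total:.20f}")     # 0.30000000000000004441

# Decimal: the value is stored as decimal digits, so 0.1 is represented
# exactly and money-style sums behave as expected.
exact = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(exact == Decimal("0.3"))  # True
```

The same idea carries over to SQL float/decimal columns: a fixed-point decimal keeps the exact decimal digits you wrote, while a binary float keeps the nearest representable binary fraction.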