What is the relationship between digits of significance and precision loss in floating point numbers?
So I have been trying to wrap my head around the relation between the number of significant digits in a floating point number and the relative loss of precision, but I just can't seem to make sense of it. I was reading an article earlier that said to do the following:

- Set a float to a value of 2147483647. You will see that its value is actually 2147483648.
- Subtract 64 from the float and you will see that the operation is correct.
- Subtract 65 from the float and you will see that you actually now have 2147483520, meaning that it actually subtracted 128.
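Here is a minimal C sketch of that experiment, assuming IEEE-754 single-precision floats, the default round-to-nearest-even mode, and that float arithmetic is actually carried out in single precision (e.g., x86-64/SSE):

```c
#include <stdio.h>

int main(void) {
    /* 2147483647 (2^31 - 1) is not representable as a 32-bit float:
       floats just below 2^31 are spaced 128 apart, so the value
       rounds to the nearest representable float, 2147483648. */
    float f = 2147483647.0f;
    printf("stored value: %.1f\n", f);          /* 2147483648.0 */

    /* 2147483648 - 64 = 2147483584 falls exactly halfway between the
       representable floats 2147483520 and 2147483648; under
       round-to-nearest-even it rounds back to 2147483648. */
    printf("minus 64:     %.1f\n", f - 64.0f);  /* 2147483648.0 */

    /* 2147483648 - 65 = 2147483583 is closer to 2147483520, so the
       result drops by a full 128 even though we only subtracted 65. */
    printf("minus 65:     %.1f\n", f - 65.0f);  /* 2147483520.0 */

    return 0;
}
```

The spacing comes from the 24-bit significand: between 2^30 and 2^31 consecutive floats differ by 2^30 / 2^23 = 128, so every result in that range gets rounded to a multiple of 128.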