How is float type precision calculated, and does it make sense?
Question: I have a problem understanding the precision of the float type. MSDN states that the precision is from 6 to 9 digits, but I noticed that the precision depends on the size of the number:

```csharp
float smallNumber = 1.0000001f;
Console.WriteLine(smallNumber); // 1.0000001

float bigNumber = 100000001f;
Console.WriteLine(bigNumber);   // 100000000
```

smallNumber is more precise than bigNumber. I understand IEEE 754, but I don't understand how MSDN calculates the precision, and does it make sense?

Also, you can play with the
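For example, the spacing between adjacent float values can be inspected directly. Here is a minimal sketch (assuming .NET Core 3.0 or later, where `MathF.BitIncrement` is available; the class and variable names are illustrative):

```csharp
using System;

class FloatSpacing
{
    static void Main()
    {
        // The gap between adjacent floats (one ULP) grows with magnitude,
        // which is why absolute precision depends on the size of the number.
        float small = 1.0f;
        float big = 100000000f;

        // MathF.BitIncrement returns the next representable float upward.
        Console.WriteLine(MathF.BitIncrement(small) - small); // ~1.19E-07
        Console.WriteLine(MathF.BitIncrement(big) - big);     // 8

        // Near 1e8 adjacent floats are 8 apart, so 100000001f cannot be
        // stored exactly and rounds to the nearest representable value.
    }
}
```

This shows that near 1.0 the representable values are about 1.2E-07 apart, while near 1e8 they are 8 apart, matching the observed behavior above.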