I understand that floating-point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program that demonstrates this.
Try summing a VERY big number and a VERY small one. The small one will be consumed by rounding, and the result will be the same as the large number.
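A minimal sketch of that idea (values chosen for illustration): once the small addend falls below the precision available at the large number's magnitude, the addition has no effect.
double big = 1e20;
double small = 1.0;
Console.WriteLine(big + small == big);   // True: the 1.0 is lost to rounding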
double is accurate to about 15-16 significant digits. You need values that demand more precision than that before a handful of floating-point operations will expose the problem.
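To see roughly where that limit sits (a quick sketch; the real cutoff is double's 53-bit mantissa, about 9.0e15 for whole numbers):
double a = 1e15 + 1;   // 16 significant digits still fit exactly
double b = 1e16 + 1;   // beyond double's precision: the + 1 is lost
Console.WriteLine(a == 1e15);   // False: the extra 1 survived
Console.WriteLine(b == 1e16);   // True: the extra 1 was rounded away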
Here's an example, based on a prior question, that demonstrates float arithmetic not working out exactly the way you would expect.
float f = (13.45f * 20);        // the product is rounded when stored into the float
int x = (int)f;                 // cast applied to the stored (rounded) value
int y = (int)(13.45f * 20);     // cast applied directly, with no intermediate store
Console.WriteLine(x == y);      // prints False
In this case, false is printed to the screen. Why? Because of where the math is performed versus where the cast to int happens. For x, the multiplication is performed and the result is stored in f before being cast to an integer; storing into the float rounds the value. For y, the result of the calculation is never stored before the cast, so it keeps whatever extra precision the intermediate had. (In x, some precision is lost between the calculation and the cast; that is not the case for y.)
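One way to make the two paths visible, without depending on how the JIT treats intermediates, is to force one result through float and hold the other in double (a sketch; the split comes from 13.45f actually being slightly less than 13.45):
float stored = 13.45f * 20;            // rounded to float: exactly 269
double unstored = (double)13.45f * 20; // kept in double: about 268.9999961853

Console.WriteLine((int)stored);    // 269
Console.WriteLine((int)unstored);  // 268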
For an explanation of what's specifically happening in the float math, see this question and answer: Why differs floating-point precision in C# when separated by parantheses and when separated by statements?