Jon's answer is of course correct. None of the answers, however, has said how you can ensure that floating-point arithmetic is done with the precision guaranteed by the specification and no more.
C# automatically truncates any float back to its canonical 32-bit or 64-bit representation under the following circumstances:
- You put in a redundant explicit cast. `x + y` might have `x` and `y` as higher-precision numbers that are then added, but `(double)((double)x + (double)y)` ensures that everything is truncated to 64-bit precision before and after the math happens.
- Any store to an instance field of a class, a static field, an array element, or a dereferenced pointer always truncates. (Stores to locals, parameters, and temporaries are not guaranteed to truncate; they can be enregistered. Fields of a struct might be on the short-term pool, which can also be enregistered.) Both circumstances are illustrated in the sketch after this list.
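To make that concrete, here is a minimal sketch of both techniques; the class and member names are my own invention, not anything from the specification or the CLR:

```csharp
// Minimal sketch; the class and member names here are illustrative only.
static class Truncation
{
    // Technique 1: a redundant explicit cast forces the operands and the
    // result back to true 64-bit precision before and after the addition.
    public static double AddWithCasts(double x, double y)
        => (double)((double)x + (double)y);

    // Technique 2: a store to a static field (or instance field of a class,
    // array element, or dereferenced pointer) always truncates. A local
    // would not be guaranteed to, because it can be enregistered.
    private static double s_scratch;

    public static double AddWithFieldStore(double x, double y)
    {
        s_scratch = x + y;  // the store to the static field truncates
        return s_scratch;
    }
}
```

The field-store version also forces a round trip through memory, which hints at why insisting on truncation costs performance.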
These guarantees are not made by the language specification, but implementations should respect these rules. The Microsoft implementations of C# and the CLR do.
It is a pain to write code that ensures floating-point arithmetic is predictable in C#, but it can be done. Note that doing so will likely slow down your arithmetic.
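For example (a hypothetical quadratic evaluation, not code from the question), making every intermediate result predictable means casting each one explicitly, which is exactly the tedium and extra truncation work referred to above:

```csharp
// Sketch only: evaluates a*x*x + b*x + c with every intermediate result
// explicitly truncated to 64-bit precision.
static double EvaluateQuadraticPredictably(double a, double b, double c, double x)
{
    double xx  = (double)(x * x);   // the cast truncates; the local alone is not guaranteed to
    double axx = (double)(a * xx);
    double bx  = (double)(b * x);
    return (double)((double)(axx + bx) + c);
}
```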
Complaints about this awful situation should be addressed to Intel, not Microsoft; they're the ones who designed chips that make doing predictable arithmetic slower.
Also, note that this is a frequently asked question. You might consider closing this as a duplicate of:
- Why differs floating-point precision in C# when separated by parantheses and when separated by statements?
- Why does this floating-point calculation give different results on different machines?
- Casting a result to float in method returning float changes result
- (.1f+.2f==.3f) != (.1f+.2f).Equals(.3f) Why?
- Coercing floating-point to be deterministic in .NET?
- C# XNA Visual Studio: Difference between "release" and "debug" modes?
- C# - Inconsistent math operation result on 32-bit and 64-bit
- Rounding Error in C#: Different results on different PCs
- Strange compiler behavior with float literals vs float variables