I have a float holding a very important value, which has to be VERY exact. The problem is that I'm changing the value of the float only ever with + and - (no division).
Float and double values are stored in binary (base 2). Therefore, they cannot accurately represent numbers like 0.3 that have no finite-length representation in binary. Similarly, a decimal, which is stored in base 10, cannot accurately represent numbers like 1/3 that have no finite-length representation in decimal.

You need an arbitrary-precision arithmetic library.
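To illustrate the symmetry described above, here is a small Python sketch (Python's `decimal` module plays the role of C#'s `decimal` type here; the behavior of binary floats is the same in both languages):

```python
from decimal import Decimal, getcontext

# Binary floats cannot represent 0.3 exactly, so the two sides below
# are two slightly different binary approximations:
print(0.1 + 0.2 == 0.3)         # False
print(f"{0.3:.20f}")            # shows the binary approximation actually stored

# A base-10 decimal represents 0.3 exactly, but not 1/3, which has
# no finite base-10 expansion and gets truncated to the working precision:
getcontext().prec = 28
third = Decimal(1) / Decimal(3)
print(third)                    # 0.3333... cut off after 28 digits
print(third * 3 == Decimal(1))  # False: the truncation error remains
```

Neither base is "more exact" in general; each can only represent fractions whose denominators factor into its base (2 for float, 2 and 5 for decimal).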
The problem is that some fractional numbers cannot be exactly represented as a float. Consider using the decimal data type instead: since you only use + and -, you shouldn't have that problem, because decimal uses base 10 internally.
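A quick sketch of why this works (shown in Python, whose `decimal` type behaves analogously to C#'s `decimal` for this purpose):

```python
from decimal import Decimal

# With binary floats, repeated addition of 0.1 accumulates rounding error:
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)             # False on IEEE-754 doubles

# With a base-10 decimal type, + and - on decimal literals are exact,
# because 0.1 has a finite base-10 representation:
dtotal = Decimal("0")
for _ in range(10):
    dtotal += Decimal("0.1")
print(dtotal == Decimal("1"))   # True
```

Note that addition and subtraction stay exact only as long as the values fit within the type's precision; division can still produce non-terminating results.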
Just like an int variable can only hold integers in a certain range, a float can only hold certain values, and 0.05 is not one of them.

If you set an int variable to (say) 3.4, it won't actually hold the value 3.4; it will hold that value converted to a representable int value: 3. Similarly, if you set a float variable to 0.05, it won't get that exact value; it will instead get that value converted to the closest value representable as a float. This is what you are seeing.
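You can inspect the closest representable value directly. A Python sketch (converting a binary float to `Decimal` reveals the exact value the float actually stores; IEEE-754 doubles are assumed):

```python
from decimal import Decimal

# Decimal(float) preserves the float's exact stored value, digit for digit:
print(Decimal(0.05))
# 0.05000000000000000277555756156289135105907917022705078125

# It therefore differs from the true decimal value 0.05:
print(Decimal(0.05) == Decimal("0.05"))  # False
```

The same effect explains why printing a float usually looks clean: the formatter rounds the stored approximation back to a short decimal, hiding the trailing error.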
Floating-point variables in many (if not most) languages only hold an imprecise approximation of the actual value. You can solve the issue in C# by using the decimal data type. See this SO question.