I know this has been discussed time and time again, but I can't seem to get even the most simple example of a one-step division of doubles to produce the expected, unrounded result.
It has nothing to do with how 'simple' or 'small' the double values are. Strictly speaking, neither 0.7 nor 0.025 is stored as exactly that number in memory: each gets rounded to the nearest representable binary double, so performing calculations on them can produce surprising results if you're after exact precision.
So yes: either use decimal, or round the result.
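The original post doesn't show which language it uses, so here is a small Python sketch of the same effect (Python's `float` is an IEEE 754 double, and its `decimal.Decimal` plays the role of a decimal type); the specific values 0.7 and 0.025 are taken from the discussion above:

```python
from decimal import Decimal

# Neither 0.7 nor 0.025 is exactly representable as a binary double,
# so the quotient lands just below 28 instead of exactly on it.
print(0.7 / 0.025)          # 27.999999999999996

# Option 1: round the double result to a sensible number of digits.
print(round(0.7 / 0.025, 10))   # 28.0

# Option 2: use a decimal type, which stores 0.7 and 0.025 exactly.
print(Decimal("0.7") / Decimal("0.025"))   # 28
```

Note that the `Decimal` values are built from strings: writing `Decimal(0.7)` would first round 0.7 to a binary double and carry the error along.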