If you run the following C# code, the result is a bit bizarre:
decimal x = (276m / 304m) * 304m;   // note the m suffixes: with plain int literals, 276/304 is integer division and yields 0
double y = (276d / 304d) * 304d;
Console.WriteLine("decimal x = " + x);   // prints a value just under 276
Console.WriteLine("double y = " + y);    // prints exactly 276
276/304 = 69/76 is a recurring "decimal" in both base 10 and base 2, because the denominator 76 = 4 × 19 has a prime factor, 19, that divides neither base. So the result gets rounded off, and multiplying by the denominator may not give back the original numerator. A more commonly cited example of this situation is 1/3 × 3 = 0.33333333 × 3 = 0.99999999.
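Here is a minimal C# sketch of the same effect with 1/3 (assuming the statements run inside a console app; the variable name is mine):

decimal third = 1m / 3m;         // rounded off after ~28 digits of 3s
Console.WriteLine(third * 3m);   // prints 0.999... rather than 1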
That the double version gives the exact answer is just a coincidence: the rounding error in the multiplication happens to cancel out the rounding error in the division.
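A hedged illustration of that cancellation, using the same numbers (the variable name q is mine):

double q = 276d / 304d;                 // 69/76, rounded to the nearest double
Console.WriteLine(q * 304d);            // prints 276: the multiplication rounds back to the exact value
Console.WriteLine(q * 304d == 276.0);   // True here, but only by luck; don't rely on it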
If this result is confusing, it may be because you've heard that "double has rounding errors and decimal is exact". But decimal is only exact at representing decimal fractions like 0.1 (which is 0.0 0011 0011... recurring in binary). When you have a factor of 19 in the denominator, it doesn't help you.
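To see both halves of that claim, compare a true decimal fraction with one involving 19 (a sketch; the exact digits printed may vary):

Console.WriteLine(0.1m + 0.2m == 0.3m);   // True: decimal represents 0.1 exactly
Console.WriteLine(0.1 + 0.2 == 0.3);      // False: double cannot represent 0.1 exactly
Console.WriteLine((1m / 19m) * 19m);      // prints a value slightly under 1: 1/19 recurs in base 10 too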
Well, mathematically 0.99999... == 1. Have a look at http://en.wikipedia.org/wiki/0.999... I know that programmatically it poses some problems, but it's not purely a floating-point issue.
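One standard way to see it: let x = 0.999...; then 10x = 9.999..., so 10x - x = 9, i.e. 9x = 9 and x = 1 exactly.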
Well, floating-point arithmetic simply isn't exact; it works to a fixed, finite precision.
See for example: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm
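The classic demonstration, shown here in C# (assuming a console app; on .NET Core 3.0 and later the first line prints the shortest round-trippable string, older runtimes may print 0.3):

Console.WriteLine(0.1 + 0.2);          // 0.30000000000000004 on modern .NET
Console.WriteLine(0.1 + 0.2 == 0.3);   // False: both sides carry tiny binary rounding errors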