Is my understanding correct that Ruby BigDecimal
types (even with varying precision and scale lengths) should calculate accurately, or should I anticipate floating point errors?
There are two common pitfalls when working with floating point arithmetic.
The first problem is that Ruby floating points have fixed precision. In practice this will be 1) no problem for you, 2) disastrous, or 3) something in between. Consider the following:
# float
1.0e+25 - 9999999999999999900000000.0
#=> 0.0
# bigdecimal
require "bigdecimal"
BigDecimal("1.0e+25") - BigDecimal("9999999999999999900000000.0")
#=> 0.1e9 (that is, 100000000)
A precision difference of 100 million! Pretty serious, right?
Except the precision error is only about 0.000000000000001% of the original number. It really is up to you to decide whether that is a problem. The problem goes away with BigDecimal, though,
because it has arbitrary precision: your only limit is the memory available to Ruby.
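As a sketch of that difference (the numbers here are just illustrative, not from the example above):

```ruby
require "bigdecimal"

# A Float silently drops digits beyond its roughly 15-17 significant
# decimal digits:
(10**20 + 1).to_f.to_i          #=> 100000000000000000000 (the +1 is lost)

# A BigDecimal keeps every digit; only available memory limits it:
(BigDecimal(10**20) + 1).to_i   #=> 100000000000000000001
```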
The second problem is that floating points cannot express all fractions accurately. In particular, they have problems with decimal fractions, because floats in Ruby (and most other languages) are binary floating points. For example, the decimal fraction 0.2
is an eternally-repeating binary fraction (0.001100110011...
). This can never be stored accurately in a binary floating point, no matter what the precision is.
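You can see the stored approximation directly (a quick check using only Ruby's bundled bigdecimal library):

```ruby
require "bigdecimal"

# The Float 0.2 actually holds the nearest representable binary
# fraction, not 0.2 itself:
format("%.20f", 0.2)   #=> "0.20000000000000001110"

# The error is invisible when printed normally, but it breaks exact
# arithmetic:
0.1 + 0.2 == 0.3                                            #=> false
BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3")  #=> true
```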
This can make a big difference when you're rounding numbers. Consider:
# float
(0.29 * 50).round
#=> 14 # not correct (0.29 * 50 is stored as slightly less than 14.5)
# bigdecimal
(BigDecimal("0.29") * 50).round
#=> 15 # correct
A BigDecimal
can describe decimal fractions precisely. However, there are fractions that cannot be described precisely with a decimal fraction either. For example, 1/9
is an eternally-repeating decimal fraction (0.1111111111111...
).
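A sketch of what that means for BigDecimal: the repeating expansion has to be cut off somewhere, and the 20-digit precision below is an arbitrary choice:

```ruby
require "bigdecimal"

# div lets you pick how many significant digits to keep:
one_ninth = BigDecimal("1").div(9, 20)   # 0.11111111111111111111

# Multiplying back by 9 gives 0.99999999999999999999, not exactly 1:
one_ninth * 9 == 1   #=> false
```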
Again, this will bite you when you round a number. Consider:
# bigdecimal
(BigDecimal("1") / 9 * 9 / 2).round
#=> 0 # not correct
In this case, using decimal floating points will still give a rounding error.
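Increasing the precision only pushes the error further out; it never disappears. A sketch with 100 significant digits (an arbitrary choice) still rounds the wrong way:

```ruby
require "bigdecimal"

# 1/9 kept to 100 significant digits is 0.111...1 (100 ones); times 9
# that is 0.999...9 (100 nines), and half of that is 0.4999...95,
# which still rounds down to 0 instead of up to 1:
x = BigDecimal("1").div(9, 100) * 9
x.div(2, 110).round   #=> 0
```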
Some conclusions:
- Use BigDecimal if you need to represent decimal fractions (such as money amounts) exactly, or if the fixed precision of Float is not enough for your numbers.
- Neither Float nor BigDecimal can represent every fraction exactly (1/9, for example), so rounding can still surprise you.
- BigDecimal
also works well if you need arbitrary precision floating points, and don't really care if they are decimal or binary floating points.