I just read on MDN that one of the quirks of JS's handling of numbers, due to everything being "double-precision 64-bit format IEEE 754 values", is that when you do arithmetic such as 0.1 + 0.2, the result isn't exactly what you'd expect.
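You can see the quirk for yourself in any JS console:

0.1 + 0.2; // returns 0.30000000000000004
0.1 + 0.2 === 0.3; // returns false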
You need a bit of error control.
Make a little double comparing method:
int CompareDouble(Double a, Double b) {
    Double epsilon = 0.00000001; // maximum error allowed
    if ((a < b + epsilon) && (a > b - epsilon)) {
        return 0;
    }
    else if (a < b + epsilon) {
        return -1;
    }
    else {
        return 1;
    }
}
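Since the question is about JavaScript, here's a sketch of the same comparison in JS (the helper name compareDouble is mine):

function compareDouble(a, b) {
    var epsilon = 0.00000001; // maximum error allowed
    if ((a < b + epsilon) && (a > b - epsilon)) {
        return 0; // equal, within tolerance
    }
    return a < b ? -1 : 1;
}

compareDouble(0.1 + 0.2, 0.3); // returns 0, i.e. "equal"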
Understanding rounding errors in floating point arithmetic is not for the faint-hearted! Basically, calculations are done as though there were infinitely many bits of precision available. The result is then rounded according to rules laid down in the relevant IEEE specifications.
This rounding can throw up some funky answers:
Math.floor(Math.log(1000000000) / Math.LN10) == 8 // true
This is an entire order of magnitude out. That's some rounding error!
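The culprit is the intermediate division, which on a typical engine comes out just below 9 (the exact digits may vary):

Math.log(1000000000) / Math.LN10; // e.g. 8.999999999999998
Math.floor(8.999999999999998); // 8
Math.round(Math.log(1000000000) / Math.LN10); // 9 - rounding is safer than flooring here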
For any floating point architecture, there is a number that represents the smallest interval between distinguishable numbers. It is called EPSILON.
It is part of the ECMAScript 2015 standard as Number.EPSILON. In environments that don't provide it yet, you can calculate it as follows:
function epsilon() {
    if ("EPSILON" in Number) {
        return Number.EPSILON;
    }
    var eps = 1.0;
    // Halve epsilon until we can no longer distinguish
    // 1 + (eps / 2) from 1
    do {
        eps /= 2.0;
    }
    while (1.0 + (eps / 2.0) != 1.0);
    return eps;
}
You can then use it, something like this:
function numericallyEquivalent(n, m) {
    var delta = Math.abs(n - m);
    return (delta < epsilon());
}
Or, since rounding errors can accumulate alarmingly, you may want to use delta / 2 or delta * delta rather than delta.
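To see the accumulation, sum 0.1 ten times; the drift grows with the number of operations:

var sum = 0;
for (var i = 0; i < 10; i++) {
    sum += 0.1;
}
sum; // returns 0.9999999999999999, not 1
numericallyEquivalent(sum, 1); // returns true - still within epsilon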
There are libraries that seek to solve this problem, but if you don't want to include one of those (or can't for some reason, like working inside a GTM variable), then you can use this little function I wrote:
Usage:
var a = 194.1193;
var b = 159;
a - b; // returns 35.11930000000001
doDecimalSafeMath(a, '-', b); // returns 35.1193
Here's the function:
function doDecimalSafeMath(a, operation, b, precision) {
    function decimalLength(numStr) {
        var pieces = numStr.toString().split(".");
        if (!pieces[1]) return 0;
        return pieces[1].length;
    }
    // Figure out what we need to multiply by to make everything a whole number
    precision = precision || Math.pow(10, Math.max(decimalLength(a), decimalLength(b)));
    // Round, because the scaling multiplication can itself introduce float error
    a = Math.round(a * precision);
    b = Math.round(b * precision);
    // Figure out which operation to perform
    // (guard toLowerCase so a function can be passed as the operation).
    var operator;
    switch (typeof operation === "string" ? operation.toLowerCase() : operation) {
        case '-':
            operator = function(a, b) { return a - b; };
            break;
        case '+':
            operator = function(a, b) { return a + b; };
            break;
        case '*':
        case 'x':
            // Both operands were scaled, so the product carries the multiplier twice.
            precision = precision * precision;
            operator = function(a, b) { return a * b; };
            break;
        case '÷':
        case '/':
            // The multipliers cancel out in a division.
            precision = 1;
            operator = function(a, b) { return a / b; };
            break;
        // Let us pass in a function to perform other operations.
        default:
            operator = operation;
    }
    var result = operator(a, b);
    // Remove our multiplier to put the decimal back.
    return result / precision;
}
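Since the default case accepts a function, you can pass your own operator as well. This modulo example is my own illustration, not part of the original usage:

doDecimalSafeMath(0.1, function(a, b) { return a % b; }, 0.03); // returns 0.01
0.1 % 0.03; // returns something like 0.009999999999999998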
In situations like these you would typically rather make use of an epsilon estimation.
Something like (pseudo code)
if (abs(((.2 + .1) * 10) - 3) > epsilon)
where epsilon is something like 0.00000001, or whatever precision you require.
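In actual JavaScript, that pseudo code might look like this:

var epsilon = 0.00000001; // or whatever precision you require
Math.abs(((0.2 + 0.1) * 10) - 3) > epsilon; // returns false, so treat them as equal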
Have a quick read of Comparing floating point numbers.
(Math.floor(( 0.1+0.2 )*1000))/1000
This will reduce the precision of float numbers, but it solves the problem if you are not working with very small values. For example:
.1 + .2; // returns 0.30000000000000004

After the proposed operation you will get 0.3. But any value between 0.30000000000000000 and 0.30000000000000999 will also be considered 0.3.
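A generalised form of the same trick might look like this (the helper name truncateTo is mine; note that Math.floor truncates toward negative infinity, so use it with care on negative values):

function truncateTo(value, decimals) {
    var factor = Math.pow(10, decimals);
    return Math.floor(value * factor) / factor;
}

truncateTo(0.1 + 0.2, 3); // returns 0.3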
1.2 + 1.1 may be ok but 0.2 + 0.1 may not be ok.
This is a problem in virtually every language that is in use today. The problem is that 1/10 cannot be accurately represented as a binary fraction just like 1/3 cannot be represented as a decimal fraction.
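You can see the inexact stored value by asking for more digits than usual:

(0.1).toFixed(20); // returns "0.10000000000000000555"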
The workarounds include rounding to only the number of decimal places that you need. You can work with strings, which are accurate:
(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true
or you can convert it to numbers after that:
+(0.2 + 0.1).toFixed(4) === 0.3 // true
or using Math.round:
Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true
where X is some power of 10 (e.g. 100 or 10000), depending on what precision you need.
Or you can use cents instead of dollars when counting money:
cents = 1499; // $14.99
That way you only work with integers and you don't have to worry about decimal and binary fractions at all.
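A minimal sketch of the cents approach (converting back to dollars only for display):

var priceCents = 1499; // $14.99
var taxCents = 120; // $1.20
var totalCents = priceCents + taxCents; // 1619, exact integer arithmetic
(totalCents / 100).toFixed(2); // returns "16.19" - format only at the display step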
The situation of representing numbers in JavaScript may be a little bit more complicated than it used to be. It used to be the case that we had only one numeric type in JavaScript: the double-precision 64-bit IEEE 754 Number.
This is no longer the case: not only are there more numeric types in JavaScript today, more are on the way, including a proposal to add arbitrary-precision integers to ECMAScript, and hopefully arbitrary-precision decimals will follow - see this answer for details.
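That integer proposal has since shipped in modern engines as BigInt; it gives exact integer arithmetic at any size, though it does not help with decimal fractions like 0.1:

2 ** 53 + 1 === 2 ** 53; // returns true - Number silently drops the 1
2n ** 53n + 1n; // returns 9007199254740993n - BigInt keeps it exact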
Another relevant answer with some examples of how to handle the calculations.