A thought struck me as I was writing a piece of JavaScript code that processed some floating point values. What is the decimal point symbol in JavaScript? Is it always a period, or can it vary with the locale?
According to the specification, a DecimalLiteral is defined as:
DecimalLiteral ::
    DecimalIntegerLiteral . DecimalDigits_opt ExponentPart_opt
    . DecimalDigits ExponentPart_opt
    DecimalIntegerLiteral ExponentPart_opt
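For example, each alternative in that grammar corresponds to a familiar literal form (a quick sketch; the variable names are mine):

// The three DecimalLiteral alternatives from the grammar above:
var a = 10.5; // DecimalIntegerLiteral . DecimalDigits
var b = .5;   // . DecimalDigits
var c = 1e3;  // DecimalIntegerLiteral ExponentPart
// Note: (10,5) is the comma operator and evaluates to 5; a comma
// never forms part of a numeric literal.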
As for the argument to parseFloat: numberString becomes the longest prefix of trimmedString that satisfies the syntax of a StrDecimalLiteral, i.e. the first parseable numeric literal found at the start of the input. Only the . character can be used to specify a floating-point number. If you're accepting inputs from different locales, use a string replace:
function parseLocalNum(num) {
    // Swap the locale's decimal comma for a period, then convert strictly.
    return +(num.replace(",", "."));
}
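For example, with some hypothetical inputs (note that replace with a string pattern only swaps the first comma, and this sketch does not handle thousands separators such as "1.234,5"):

parseLocalNum("1,5");    // 1.5
parseLocalNum("1.5");    // 1.5 (no comma, unchanged)
parseLocalNum("1,5ABC"); // NaN, courtesy of the unary +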
The function uses the unary + operator instead of parseFloat because it seems to me that you want to be strict about the input: parseFloat("1ABC") would be 1, whereas the unary +"1ABC" returns NaN. That makes it much easier to validate the input; using parseFloat amounts to guessing that the input is in the correct format.
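A quick side-by-side comparison:

parseFloat("1ABC"); // 1, parses the longest numeric prefix and ignores the rest
+"1ABC";            // NaN, the whole string must be a valid number
+"1,5";             // NaN, the comma is not a valid decimal separator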
Use:

theNumber.toLocaleString();

to get a properly formatted string with the right decimal and thousands separators.
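For example, in environments that support the ECMAScript Internationalization API you can also pass an explicit locale (the exact output depends on the host's locale data):

var n = 1234.5;
n.toLocaleString();        // e.g. "1,234.5" under an en-US default locale
n.toLocaleString("de-DE"); // "1.234,5", comma as the decimal separator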
As far as I'm aware, JavaScript itself only knows about the . separator for decimals. At least one person whose judgement I trust on JS things concurs: http://www.merlyn.demon.co.uk/js-maths.htm#DTS