I want to count the number of decimal places (ignoring trailing zeros) in a Float (or NSDecimalNumber). For example:
1.45000 => 2
5.98 => 2
1.00 => 0
0.8 => 1
This is actually really hard, because most decimal fractions cannot be represented exactly in binary floating point. For example, the nearest 64-bit IEEE 754 double to 5.98 is
5.980000000000000426325641456060111522674560546875
Presumably in this case you want the answer to be 2.
The easiest thing to do is to convert the value to a string formatted to 15 significant figures (for a double-precision type) and inspect the output. It's not particularly fast, but it is reliable. For a 32-bit floating-point type, use 7 significant figures.
That said, if you can use a decimal type from the get-go then do that.
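A minimal sketch of that string-inspection approach, assuming Foundation is available; fractionalDigits(of:) is a name made up here for illustration:

import Foundation

func fractionalDigits(of value: Double) -> Int {
    // "%.15g" formats to 15 significant figures, which rounds away
    // binary noise such as 5.980000000000000426…; use "%.7g" for Float.
    let s = String(format: "%.15g", value)   // e.g. "5.98", "1.45", "1"
    // Whole numbers have no "."; exponent form ("1e-05") would need
    // separate handling, so bail out on both here.
    guard let dot = s.firstIndex(of: "."), !s.contains("e") else { return 0 }
    // Count the digits after the point, dropping any trailing zeros.
    return s[s.index(after: dot)...].reversed().drop(while: { $0 == "0" }).count
}

fractionalDigits(of: 5.98)  // 2
fractionalDigits(of: 1.00)  // 0
fractionalDigits(of: 0.8)   // 1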
What about this approach? According to the documentation, both Float and Double conform to BinaryFloatingPoint. So:
public extension Numeric where Self: BinaryFloatingPoint {
    /// Returns the number of decimal digits in the value's printed representation (0 for whole numbers).
    var numberOfDecimals: Int {
        // Whole numbers would otherwise print as "1.0" and count as 1.
        guard self != self.rounded(.towardZero) else { return 0 }
        // Work with the absolute value so the minus sign doesn't skew the count.
        let value = abs(self)
        let integerString = String(Int(value))
        // Avoid conversion issues: describe the value as its concrete type.
        let stringNumber: String
        if value is Double {
            stringNumber = String(Double(value))
        } else {
            stringNumber = String(Float(value))
        }
        // Everything after the integer digits and the decimal point is fractional.
        return stringNumber.count - integerString.count - 1
    }
}
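A quick check against the examples in the question:

(1.45 as Double).numberOfDecimals  // 2
(5.98 as Double).numberOfDecimals  // 2
(1.00 as Double).numberOfDecimals  // 0
Float(0.8).numberOfDecimals        // 1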
Here is a fairly simple solution that works in any region and on any device, because Swift's string conversion rounds away binary representation errors for you automatically:
extension Double {
    func decimalCount() -> Int {
        // Whole numbers have no fractional digits.
        if self == Double(Int(self)) {
            return 0
        }
        // Work with the magnitude so the minus sign doesn't skew the count.
        let value = self.magnitude
        let integerString = String(Int(value))
        let doubleString = String(value)
        // Everything after the integer digits and the decimal point is fractional.
        return doubleString.count - integerString.count - 1
    }
}
Edit: it works the same way for Double and Float.
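Checked against the question's examples:

(1.45).decimalCount()  // 2
(5.98).decimalCount()  // 2
(1.00).decimalCount()  // 0
(0.8).decimalCount()   // 1

One caveat: this string-counting trick assumes the value prints in plain decimal notation. Very small or very large magnitudes fall back to exponent form (e.g. 1e-07) and would need separate handling.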
Doing this with Decimal is fairly straightforward, provided you correctly create your Decimal. Decimals are stored as significand * 10^exponent. The significand is normalized to the smallest integer possible: for 1230 the significand is 123 and the exponent is 1; for 1.23 the significand is also 123 and the exponent is -2. That leads us to:
extension Decimal {
    var significantFractionalDecimalDigits: Int {
        // A negative exponent means the significand carries fractional digits.
        return max(-exponent, 0)
    }
}
However, you must be very careful constructing your Decimal. If you construct it from a Double, you will already have applied binary rounding errors. So for example:
let n = Decimal(0.111) // 0.11100000000000002048 because you passed a Double
n.significantFractionalDecimalDigits // 20
vs.
let n = Decimal(string: "0.111")!
n.significantFractionalDecimalDigits // 3 what you meant
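If you are handed a Double anyway, one possible workaround (my suggestion, not part of the original answer) is to round-trip through the Double's description, which in modern Swift is the shortest string that parses back to the same value:

import Foundation

let d = 0.111
// d.description is "0.111", the shortest round-trip string,
// so parsing it recovers the decimal the programmer most likely meant.
let viaString = Decimal(string: d.description)!
viaString.significantFractionalDecimalDigits  // 3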
Keep in mind of course that Decimal has a maximum number of significant digits, so it may still apply rounding.
let n = Decimal(string: "12345678901235678901234567890.1234567890123456789")!
n.significantFractionalDecimalDigits // 9 ("should" be 19)
And if you're walking down this road at all, you really must read the Floating Point Guide and the canonical StackOverflow question: Is floating point math broken?