I am getting the following unexpected result when I do arithmetic with small numbers in Python:
>>> 1. - 1.e-17
1.0
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
The precision of floats is higher near 0 than it is near 1.
Float density halves each time you cross a power of two: there are just as many floats between 0.5 and 1.0 as between 1.0 and 2.0, squeezed into half the distance, so the gap between consecutive floats doubles.
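You can see the doubling directly with math.ulp (also Python 3.9+), which returns the gap between a float and the next float up:
>>> import math
>>> math.ulp(0.5)   # gap just above 0.5
1.1102230246251565e-16
>>> math.ulp(1.0)   # twice as big past the power of two at 1.0
2.220446049250313e-16
>>> math.ulp(2.0)   # and twice as big again past 2.0
4.440892098500626e-16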
Here is the next float after 1., in the direction of 0.:
>>> import math
>>> math.nextafter(1., 0.)
0.9999999999999999
>>> format(math.nextafter(1., 0.), ".32f") # let's see more decimal places
'0.99999999999999988897769753748435'
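Subtracting this neighbour from 1. gives the size of the gap just below 1.; this particular subtraction happens to be exact, and the gap is 2**-53:
>>> 1. - math.nextafter(1., 0.)   # the gap just below 1.0
1.1102230246251565e-16
>>> 2**-53                        # same thing: one unit in the last place below 1.0
1.1102230246251565e-16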
The mathematically correct value of 1 - 10^-17 is 0.99999999999999999 (seventeen nines); I'll call this number n. Like almost all real numbers, n can't be represented exactly as a float.
0.99999999999999999 # n
0.00000000000000001 # distance between n and 1, i.e. 10^-17
0.00000000000000010102230246251565... # distance between n and nextafter(1., 0.)
So you see, 1 - 10^-17 is about 10 times further from nextafter(1., 0.) than it is from 1. itself.
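As a quick sanity check of that ratio, using the gap of 2**-53 computed above (rounded for readability, since the intermediate float arithmetic is only approximate):
>>> gap = 1. - math.nextafter(1., 0.)   # 2**-53, the gap just below 1.0
>>> round((gap - 1e-17) / 1e-17, 1)     # distance to the neighbour vs. distance to 1.0
10.1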
When the expression 1. - 1.e-17 is evaluated by the interpreter, it gives you back the closest possible result, which is 1. exactly. It wouldn't make sense to return any other float; every other float would be even further away from the "real" result (pardon the pun).
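You can verify this with exact rational arithmetic from the standard library: the true result of 1 - 10^-17 rounds to 1.0 when converted back to a float:
>>> from fractions import Fraction
>>> exact = Fraction(1) - Fraction(10) ** -17   # the exact mathematical result, n
>>> exact
Fraction(99999999999999999, 100000000000000000)
>>> float(exact)                                # correctly rounded to the nearest float
1.0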
Note: math.nextafter is available in Python 3.9+. In earlier versions you can use numpy.nextafter similarly.
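A minimal sketch of the NumPy fallback (wrapped in float() so it prints as a plain Python float rather than a NumPy scalar):
>>> import numpy as np
>>> float(np.nextafter(1., 0.))
0.9999999999999999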
Related question -> Increment a Python floating point value by the smallest possible amount