Question
I have a numpy 2D array (medium/large sized, say 500x500). I want to find the eigenvalues of its element-wise exponent. The problem is that some of the values are quite negative (-800, -1000, etc.), and their exponents underflow (they are so close to zero that numpy treats them as zero). Is there any way to use arbitrary precision in numpy?
The way I dream it:
import numpy as np
np.set_precision('arbitrary') # <--- Missing part
a = np.array([[-800.21,-600.00],[-600.00,-1000.48]])
ex = np.exp(a) ## Currently warns about underflow
eigvals, eigvecs = np.linalg.eig(ex)
I have searched for a solution with gmpy and mpmath to no avail. Any idea will be welcome.
Answer 1:
SymPy can calculate arbitrary precision:
from sympy import exp, N, S
from sympy.matrices import Matrix
data = [[S("-800.21"),S("-600.00")],[S("-600.00"),S("-1000.48")]]
m = Matrix(data)
ex = m.applyfunc(exp).applyfunc(lambda x:N(x, 100))
vecs = ex.eigenvects()
print(vecs[0][0])  # first eigenvalue
print(vecs[1][0])  # second eigenvalue
print(vecs[0][2])  # first eigenvector
print(vecs[1][2])  # second eigenvector
output:
-2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807463648841185335e-261
2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807466621962539464e-261
[[-0.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999994391176386872]
[ 1]]
[[1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000560882361313]
[ 1]]
You can change the 100 in N(x, 100) to another precision, but when I tried 1000, the eigenvector calculation failed.
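If your data starts out as a numpy array, as in the question, a minimal bridge into this SymPy pipeline might look like the sketch below (a sketch, not from the original answer): each entry is re-parsed from its decimal string so SymPy never works from a rounded double. Be warned that the symbolic eigenvects() will not scale to a 500x500 matrix.
import numpy as np
from sympy import S, N, exp
from sympy.matrices import Matrix

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]])
# Rebuild each entry from its decimal string representation.
m = Matrix(a.shape[0], a.shape[1], [S(repr(float(x))) for x in a.flat])
ex = m.applyfunc(exp).applyfunc(lambda x: N(x, 100))
print(ex.eigenvects()[0][0])  # first eigenvalue to 100 digits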
Answer 2:
On 64-bit systems, there's a numpy.float128 dtype. (I believe there's a float96 dtype on 32-bit systems, as well.) While numpy.linalg.eig doesn't support 128-bit floats, scipy.linalg.eig (sort of) does.
However, none of this is going to matter in the long run. Any general eigenvalue solver is iterative rather than exact, so you're not gaining anything by keeping the extra precision! np.linalg.eig works for matrices of any size, but it never returns an exact solution.
If you're always solving 2x2 matrices, it's trivial to write your own solver that should be more exact. I'll show an example of this at the end...
Regardless, forging ahead into pointlessly precise memory containers:
import numpy as np
import scipy as sp
import scipy.linalg
a = np.array([[-800.21,-600.00],[-600.00,-1000.48]], dtype=np.float128)
ex = np.exp(a)
print(ex)
eigvals, eigvecs = sp.linalg.eig(ex)
# And to test...
check1 = ex.dot(eigvecs[:,0])
check2 = eigvals[0] * eigvecs[:,0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')
However, you'll notice that what you get is identical to just doing np.linalg.eig(ex.astype(np.float64)). In fact, I'm fairly sure that's what scipy is doing internally, while numpy raises an error rather than downcasting silently. I could be quite wrong, though...
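One way to test that suspicion (a sketch, not from the original answer; whether scipy.linalg.eig accepts float128 input at all has varied between versions) is to run the same matrix through both paths and compare. If scipy merely downcasts, the two spectra should agree exactly, up to ordering:
import numpy as np
import scipy.linalg

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]], dtype=np.float128)
ex = np.exp(a)

vals_hi, _ = scipy.linalg.eig(ex)                  # float128 input
vals_lo, _ = np.linalg.eig(ex.astype(np.float64))  # explicit downcast

# If scipy merely downcasts, these should match bit-for-bit.
print(np.sort_complex(vals_hi.astype(np.complex128)))
print(np.sort_complex(vals_lo))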
If you don't want to use scipy, one workaround is to rescale things after the exponentiation but before solving for the eigenvalues: cast to "normal" floats, solve, then recast as float128 afterwards and divide the eigenvalues by the scale factor. This works because the eigenvalues of c*A are just c times the eigenvalues of A.
E.g.
import numpy as np
a = np.array([[-800.21,-600.00],[-600.00,-1000.48]], dtype=np.float128)
ex = np.exp(a)
factor = 1e300
ex_rescaled = (ex * factor).astype(np.float64)
eigvals, eigvecs = np.linalg.eig(ex_rescaled)
eigvals = eigvals.astype(np.float128) / factor
# And to test...
check1 = ex.dot(eigvecs[:,0])
check2 = eigvals[0] * eigvecs[:,0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')
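A variant of the same rescaling idea (a sketch, not from the original answer) stays in plain float64 throughout: shift in the exponent domain before calling np.exp, since exp(a_ij) = exp(s) * exp(a_ij - s) entry-wise, and the eigenvalues of c*M are c times those of M. The scale factor exp(s) itself underflows in float64, so carry it in log form:
import numpy as np

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]])

shift = a.max()                 # -600.0: the largest entry becomes exp(0) = 1
ex_shifted = np.exp(a - shift)  # now comfortably within float64 range
vals_shifted, vecs = np.linalg.eig(ex_shifted)  # vecs are unaffected by scaling

# True eigenvalues are vals_shifted * exp(shift), which underflows,
# so report signs and natural-log magnitudes instead.
print(np.sign(vals_shifted), np.log(np.abs(vals_shifted)) + shift)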
Finally, if you're only solving 2x2 or 3x3 matrices, you can write your own solver, which will return an exact value for those shapes of matrices.
import numpy as np
def quadratic(a, b, c):
    # Roots of a*x**2 + b*x + c = 0; scimath.sqrt returns complex
    # results for negative discriminants instead of NaN.
    sqrt_part = np.lib.scimath.sqrt(b**2 - 4*a*c)
    root1 = (-b + sqrt_part) / (2 * a)
    root2 = (-b - sqrt_part) / (2 * a)
    return root1, root2

def eigvals(matrix_2x2):
    # Eigenvalues are the roots of the characteristic polynomial
    # x**2 - trace*x + det.
    vals = np.zeros(2, dtype=matrix_2x2.dtype)
    a, b, c, d = matrix_2x2.flatten()
    vals[:] = quadratic(1.0, -(a + d), (a*d - b*c))
    return vals

def eigvecs(matrix_2x2, vals):
    a, b, c, d = matrix_2x2.flatten()
    vecs = np.zeros_like(matrix_2x2)
    if (b == 0.0) and (c == 0.0):
        # Already diagonal: the standard basis vectors are eigenvectors.
        vecs[0, 0], vecs[1, 1] = 1.0, 1.0
    elif c != 0.0:
        vecs[0, :] = vals - d
        vecs[1, :] = c
    elif b != 0:
        vecs[0, :] = b
        vecs[1, :] = vals - a
    return vecs

def eig_2x2(matrix_2x2):
    vals = eigvals(matrix_2x2)
    vecs = eigvecs(matrix_2x2, vals)
    return vals, vecs
a = np.array([[-800.21,-600.00],[-600.00,-1000.48]], dtype=np.float128)
ex = np.exp(a)
eigvals, eigvecs = eig_2x2(ex)
# And to test...
check1 = ex.dot(eigvecs[:,0])
check2 = eigvals[0] * eigvecs[:,0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')
This one evaluates the eigenvalues in closed form, so it's as exact as float128 arithmetic allows, but it only works for 2x2 matrices. It's the only approach here that actually benefits from the extra precision, however!
Answer 3:
As far as I know, numpy does not support higher than double precision (float64), which is the default if not specified.
Try using this: http://code.google.com/p/mpmath/
List of features (among others):
Arithmetic:
- Real and complex numbers with arbitrary precision
- Unlimited exponent sizes / magnitudes
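Recent versions of mpmath also ship an arbitrary-precision eigensolver (mpmath.eig, added after this answer was written), so the whole computation can stay at high precision. A minimal sketch, assuming a sufficiently new mpmath:
from mpmath import mp, mpf, matrix, exp, eig

mp.dps = 50  # working precision: 50 significant digits

# Build the matrix from strings so the inputs are never truncated
# to float64 on the way in.
a = matrix([[mpf('-800.21'), mpf('-600.00')],
            [mpf('-600.00'), mpf('-1000.48')]])
ex = a.apply(exp)  # element-wise exp; no underflow at this precision
E, ER = eig(ex)    # eigenvalues (list) and right eigenvectors (matrix)
print(E[0], E[1])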
Answer 4:
I have no experience with numpy in particular, but writing your numbers with an explicit decimal point and a set number of zeros may help: for example, use 1.0000 instead of 1. In normal Python scripts where I've had this problem, this has helped, so unless your issue is caused by an oddity in numpy and has nothing to do with Python, this should help.
Good luck!
Answer 5:
So you need 350 digits of precision. You will not get that with IEEE floating-point numbers (which is what numpy uses). You can get it with the bc program:
$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale=350
e(-800)
.<hundreds of zeros>00366
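For completeness, the same computation is easy to reproduce in Python with the standard-library decimal module, which agrees with the bc output above:
from decimal import Decimal, getcontext

getcontext().prec = 350     # 350 significant digits
print(Decimal(-800).exp())  # roughly 3.66E-348, matching bc's trailing ...366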
Source: https://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-algebra