I prepare a matrix of random numbers, calculate its inverse, and matrix-multiply it with the original matrix. In theory, this gives the identity matrix. How can I get numpy to recognize that the result is (close to) the identity matrix?
Your problem can be reduced to a common float-comparison problem. The correct way to compare such arrays would be:
EPS = 1e-8  # for example
(numpy.abs(numpy.dot(A, A_inv) - E) < EPS).all()
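A minimal, self-contained sketch of that check (the size and the 1e-8 tolerance are arbitrary assumptions; scale the tolerance to the conditioning of your matrices):
import numpy

size = 100          # arbitrary example size
EPS = 1e-8          # tolerance; adjust for your use case

A = numpy.random.randint(0, 10, (size, size)).astype(float)
A_inv = numpy.linalg.inv(A)
E = numpy.eye(size)

# element-wise check that every entry of A * A_inv is within EPS of the identity
print((numpy.abs(numpy.dot(A, A_inv) - E) < EPS).all())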
If you want to kill off the tiny numbers, you can round your answer with
m = np.round(m, decimals=10)
or check whether the product differs only negligibly from the identity:
np.abs(A * A.I - E).mean() < 1e-10
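A minimal sketch of how those two snippets fit together (the 5x5 size is an arbitrary assumption; numpy.matrix is used here so that A.I gives the inverse):
import numpy as np

A = np.matrix(np.random.randint(0, 10, (5, 5)))
E = np.eye(5)

product = A * A.I                           # should be numerically close to the identity
print(np.round(product, decimals=10))       # rounds away the tiny residue
print(np.abs(A * A.I - E).mean() < 1e-10)   # or compare against a tolerance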
I would implement this with the numpy.matrix class.
import numpy
size = 100
A = numpy.matrix(numpy.random.randint(0,10,(size,)*2))
E = numpy.eye(size)
print(A * A.I)
print(numpy.abs(A * A.I - E).mean() < 1e-10)
While getting True would be didactically appealing, it would also be divorced from the realities of floating-point computation.
When dealing with floating point, one necessarily has to be prepared not only for inexact results, but for all manner of other numerical issues that arise.
I highly recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
In your particular case, to ensure that A * inv(A) is close enough to the identity matrix, you could compute a matrix norm of numpy.dot(A, A_inv) - E and ensure that it is small enough.
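For instance, a minimal sketch using the Frobenius norm (the 1e-8 threshold is an arbitrary assumption and should be scaled to the size and conditioning of your matrices):
import numpy

size = 100
A = numpy.random.randint(0, 10, (size, size))
A_inv = numpy.linalg.inv(A)
E = numpy.eye(size)

# Frobenius norm of the residual; a small value means A_inv is a good inverse
residual = numpy.linalg.norm(numpy.dot(A, A_inv) - E)
print(residual < 1e-8)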
As a side note, you don't have to use a loop to populate A and E. Instead, you could just use
A = numpy.random.randint(0, 10, (size,size))
E = numpy.eye(size)
I agree with most of the points already made. However, I would suggest that rather than looking at the individual off-diagonal elements, you take their RMS sum; this reflects, in some sense, the "energy" that leaked into the off-diagonal terms as a result of imperfect calculations. If you then divide this RMS value by the sum of the diagonal terms, you get a metric of just how well the inverse worked. For example, the following code:
import numpy
import matplotlib.pyplot as plt
from numpy import mean, sqrt

N = 1000
R = numpy.zeros(N)
for size in range(50, N, 50):
    # build a random integer matrix A and the matching identity E
    A = numpy.zeros((size, size))
    E = numpy.zeros((size, size))
    for i in range(size):
        for j in range(size):
            A[i][j] += numpy.random.randint(10)
            if i == j:
                E[i][j] = 1
    A_inv = numpy.linalg.inv(A)
    # RMS of the residual, normalised by the sum of the diagonal (= size)
    D = numpy.dot(A, A_inv) - E
    S = sqrt(mean(D**2))
    R[size] = S / size
    print("size:", size, "; rms is", S / size)

plt.plot(range(50, N, 50), R[range(50, N, 50)])
plt.ylabel('RMS fraction')
plt.show()
This shows that the RMS error is fairly stable with the size of the array, all the way up to 950x950 (though it does slow down a bit). However, it is never "exact", and there are some outliers (presumably when the matrix is more nearly singular, which can happen with random matrices).
Example plot (every time you run it, it will look a bit different):