I'm about to write some code that computes the determinant of a square (n x n) matrix using the Laplace algorithm (i.e. the recursive algorithm), as described in Wikipedia's Laplace expansion article.
Are you sure that your minor method returns a new object and not a reference to your original matrix object? I used your exact determinant method and implemented a minor method for your class, and it works fine for me.
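If minor shares storage with the original (for example, by handing back the same underlying list instead of allocating fresh storage), every write to the "minor" silently edits the original and the recursion produces garbage. Here is a minimal, hypothetical illustration of that aliasing pitfall using plain lists:

    row = [1, 2, 3]
    alias = row        # same list object, not a copy
    copy_ = row[:]     # a genuine new list
    alias[0] = 99
    print(row)         # [99, 2, 3]  -- the "original" changed too
    print(copy_)       # [1, 2, 3]   -- the copy is unaffected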
Below is a quick-and-dirty implementation of your matrix class, since I don't have your implementation. For brevity I have chosen to implement it for square matrices only, which in this case shouldn't matter since we are dealing with determinants. Pay attention to the det method, which is the same as yours, and to the minor method (the rest of the methods are there to facilitate the implementation and testing):
class matrix:
    def __init__(self, n):
        # store an n x n matrix as a flat list in row-major order
        self.data = [0.0 for i in range(n * n)]
        self.dim = n

    @classmethod
    def rand(cls, n):
        import random
        a = cls(n)
        for i in range(n):
            for j in range(n):
                a[i, j] = random.random()
        return a

    @classmethod
    def eye(cls, n):
        a = cls(n)
        for i in range(n):
            a[i, i] = 1.0
        return a

    def __repr__(self):
        n = self.dim
        return '\n'.join(str(self.data[i * n: i * n + n]) for i in range(n))

    def __getitem__(self, key):
        i, j = key
        assert i < self.dim and j < self.dim
        return self.data[self.dim * i + j]

    def __setitem__(self, key, val):
        i, j = key
        assert i < self.dim and j < self.dim
        self.data[self.dim * i + j] = float(val)

    def minor(self, i, j):
        # build a brand-new (n-1) x (n-1) matrix; the original is never modified
        n = self.dim
        assert i < n and j < n
        a = matrix(self.dim - 1)
        for k in range(n):
            for l in range(n):
                if k == i or l == j:
                    continue
                K = k if k < i else k - 1
                L = l if l < j else l - 1
                a[K, L] = self[k, l]
        return a

    def det(self, i=0):
        # recursive Laplace expansion along row i
        n = self.dim
        if n == 1:
            return self[0, 0]
        d = 0
        for j in range(n):
            d += ((-1) ** (i + j)) * self[i, j] * self.minor(i, j).det()
        return d

    def __mul__(self, v):
        # scalar multiplication (used to scale the random test matrices)
        n = self.dim
        a = matrix(n)
        for i in range(n):
            for j in range(n):
                a[i, j] = v * self[i, j]
        return a

    __rmul__ = __mul__
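In case it helps to see the math next to the code: det above is a direct transcription of the Laplace (cofactor) expansion along row i from the Wikipedia page,

    \det(A) = \sum_{j=0}^{n-1} (-1)^{i+j} \, a_{ij} \, \det(M_{ij})

where M_{ij} is the (n-1) x (n-1) minor obtained by deleting row i and column j, which is exactly what minor(i, j) constructs.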
Now for testing:
import numpy as np
a = matrix(3)
# same matrix from the Wikipedia page
a[0,0] = 1
a[0,1] = 2
a[0,2] = 3
a[1,0] = 4
a[1,1] = 5
a[1,2] = 6
a[2,0] = 7
a[2,1] = 8
a[2,2] = 9
print(a.det()) # prints 0.0
# trying with numpy the same matrix
A = np.array(a.data).reshape([3,3])
print(np.linalg.det(A)) # prints -9.51619735393e-16
The residual in numpy's case comes from the fact that it calculates the determinant through (Gaussian) elimination rather than the Laplace expansion. You can also compare the results on random matrices to see that the difference between your determinant function and numpy's doesn't grow beyond float precision:
import numpy as np
a = 10*matrix.rand(4)
A = np.array( a.data ).reshape([4,4])
print((np.linalg.det(A) - a.det()) / a.det()) # varies between zero and 1e-14
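If you want more evidence than a single draw, a small loop along the same lines (a sketch reusing the matrix class above, not part of your code) lets you confirm that the relative error stays at the level of machine epsilon for every sample:

    import numpy as np

    # hypothetical sanity check: repeat the comparison on several random matrices
    for _ in range(10):
        a = 10 * matrix.rand(4)
        A = np.array(a.data).reshape([4, 4])
        rel_err = abs(np.linalg.det(A) - a.det()) / abs(a.det())
        print(rel_err)  # should stay in the 1e-16 .. 1e-14 range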