SciPy and NumPy have, between them, three different functions for finding the eigenvectors of a given square matrix; these are:

1. numpy.linalg.eig(a)
2. scipy.linalg.eig(a)
3. scipy.sparse.linalg.eig(A, k)
The special behaviour of the third one has to do with the Lanczos algorithm, which works very well with sparse matrices. The documentation of scipy.sparse.linalg.eig
says it uses a wrapper for ARPACK, which in turn uses "the Implicitly Restarted Arnoldi Method (IRAM) or, in the case of symmetric matrices, the corresponding variant of the Lanczos algorithm." (1).
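For illustration, here is a minimal sketch of that sparse routine in use. Note that in current SciPy the ARPACK wrapper is exposed as scipy.sparse.linalg.eigs (with eigsh for symmetric/Hermitian matrices), and the matrix size, density, and k below are arbitrary choices:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A random sparse 1000x1000 matrix with ~1% nonzero entries (arbitrary example).
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)

# Ask ARPACK for the 6 eigenvalues of largest magnitude and their eigenvectors;
# k must be strictly smaller than the matrix dimension.
vals, vecs = spla.eigs(A, k=6, which="LM")
print(vals)
```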
Now, the Lanczos algorithm has the property that it works better for large eigenvalues (in fact, it converges fastest to the eigenvalues of largest magnitude):
In practice, this simple algorithm does not work very well for computing very many of the eigenvectors because any round-off error will tend to introduce slight components of the more significant eigenvectors back into the computation, degrading the accuracy of the computation. (2)
So, whereas the Lanczos-based routine only approximates a few eigenpairs, I guess the other two methods use algorithms that compute the eigenvalues exactly (up to round-off error), and seemingly all of them at once; that probably depends on the underlying algorithms, too.
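To make that contrast concrete, here is a small sketch (the 200x200 size and k=3 are arbitrary; eigsh is the ARPACK routine's symmetric/Lanczos variant): the dense routine returns all eigenvalues, the ARPACK routine only the k requested ones.

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(42)
M = rng.standard_normal((200, 200))
S = (M + M.T) / 2                      # symmetric, so all eigenvalues are real

# Dense LAPACK routine: every eigenvalue of S, in ascending order.
all_vals = np.linalg.eigvalsh(S)

# ARPACK (Lanczos variant for symmetric input): only the 3 largest eigenvalues.
top_vals = spla.eigsh(S, k=3, which="LA", return_eigenvectors=False)

print(all_vals[-3:])                   # largest three from the full spectrum
print(np.sort(top_vals))               # should agree to roughly machine precision
```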
Here's an answer to the non-routine-specific part of your question:
In principle, the NumPy and SciPy linalg.eig() routines should be the same. Both use LAPACK and BLAS routines internally. The implementation in `scipy.sparse`, on the other hand, uses a specific algorithm that works well for sparse matrices (i.e. matrices with mostly zero entries). Do not use it if your matrix is dense.
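As a quick sanity check of that claim, the two dense routines can be compared directly on the same (arbitrary) matrix; the eigenvalues come back in no guaranteed order, so both spectra are sorted before comparing:

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))      # a dense, non-symmetric test matrix

w_np = np.linalg.eigvals(A)            # NumPy's LAPACK-backed eigenvalue routine
w_sp = scipy.linalg.eigvals(A)         # SciPy's LAPACK-backed eigenvalue routine

# Ordering is not guaranteed to match, so sort both spectra before comparing.
print(np.allclose(np.sort_complex(w_np), np.sort_complex(w_sp)))   # expect True
```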
Note that, technically, the eig() in SciPy/NumPy may be different implementations, because both packages can be built against different implementations of LAPACK/BLAS. Common choices here would be the reference LAPACK/BLAS available from Netlib, ATLAS, Intel MKL, or OpenBLAS.
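If you want to know which backend a particular installation was built against, both packages can report it (the exact output format varies across versions and builds):

```python
import numpy as np
import scipy

np.show_config()      # lists the BLAS/LAPACK libraries NumPy was linked against
scipy.show_config()   # the same information for SciPy
```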