Code:
import numpy
from matplotlib.mlab import PCA

file_name = "store1_pca_matrix.txt"
# the delimiter argument was truncated in the original post; None (split on
# any whitespace) is numpy.loadtxt's default and is assumed here
ori_data = numpy.loadtxt(file_name, dtype=float, comments='#', delimiter=None)
This may be due to your input data matrix (the one you are feeding to PCA) being singular.
Even if your data is correct, this can happen because the process runs out of memory. In my case, moving from a 32-bit machine to a 64-bit machine with more memory solved the problem.
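To rule out the common causes before calling PCA, a minimal sanity-check sketch along these lines can help (ori_data is the array from the question; everything else is illustrative):

import numpy as np

def check_pca_input(X):
    # report the usual suspects behind "SVD did not converge"
    X = np.asarray(X, dtype=float)
    print("shape:", X.shape)
    print("contains NaN:", np.isnan(X).any())
    print("contains inf:", np.isinf(X).any())
    # rank < min(rows, cols) means the matrix is singular (rank-deficient)
    print("rank:", np.linalg.matrix_rank(X), "of at most", min(X.shape))

check_pca_input(ori_data)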
I do not have an answer to this question, but I have a reproduction scenario with no NaNs and infs. Unfortunately, the dataset is pretty large (96 MB gzipped).
# Python 2 reproduction (StringIO and urllib2 are Python 2 modules)
import numpy as np
from StringIO import StringIO
from scipy import linalg
import urllib2
import gzip

url = 'http://physics.muni.cz/~vazny/gauss/X.gz'
# download the gzipped matrix and parse it as comma-separated floats
X = np.loadtxt(gzip.GzipFile(fileobj=StringIO(urllib2.urlopen(url).read())), delimiter=',')
linalg.svd(X, full_matrices=False)
which raises:
LinAlgError: SVD did not converge
on:
>>> np.__version__
'1.8.1'
>>> import scipy
>>> scipy.__version__
'0.10.1'
but did not raise an exception on:
>>> np.__version__
'1.8.2'
>>> import scipy
>>> scipy.__version__
'0.14.0'
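For anyone on Python 3, where StringIO and urllib2 no longer exist, a roughly equivalent version of the reproduction would be (a sketch, assuming the URL is still live):

import gzip
import io
import urllib.request

import numpy as np
from scipy import linalg

url = 'http://physics.muni.cz/~vazny/gauss/X.gz'
# fetch the gzipped file and decompress it in memory
raw = urllib.request.urlopen(url).read()
X = np.loadtxt(gzip.GzipFile(fileobj=io.BytesIO(raw)), delimiter=',')
linalg.svd(X, full_matrices=False)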
This can happen when there are inf or NaN values in the data.
Use this to remove the NaN values (note that dropna is a pandas method, so this assumes your data is in a DataFrame):
ori_data.dropna(inplace=True)
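If the data is a plain numpy array instead (as with loadtxt in the question), a rough equivalent is to keep only the rows that contain no NaNs:

import numpy as np

# keep only the rows without any NaN values
ori_data = ori_data[~np.isnan(ori_data).any(axis=1)]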
Following on from @c-chavez's answer, what worked for me was first replacing inf and -inf with NaN, then removing the NaNs. For example:
data = data.replace(np.inf, np.nan).replace(-np.inf, np.nan).dropna()
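The two replace calls can also be collapsed into one by passing a list of values, which pandas' replace accepts:

import numpy as np

data = data.replace([np.inf, -np.inf], np.nan).dropna()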
This happened to me when I accidentally resized an image dataset to shape (0, 64, 3). Try checking the shape of your dataset to make sure none of its dimensions is 0.
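A cheap guard for that case (X here is a placeholder for your dataset):

# SVD cannot run on an empty axis; fail early with a readable message
assert all(dim > 0 for dim in X.shape), "empty dimension in shape %s" % (X.shape,)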