eigenvalue

Numpy - Modal matrix and diagonal Eigenvalues

寵の児 submitted on 2019-12-11 10:40:40
Question: I wrote a simple linear algebra script in Python/NumPy to recover the diagonal matrix of eigenvalues by computing $M^{-1} A M$ (where M is the modal matrix), and it behaves strangely. Here's the code:

import numpy as np

array = np.arange(16)
array = array.reshape(4, -1)
print(array)
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]
 [12 13 14 15]]

eigenvalues, eigenvectors = np.linalg.eig(array)
print(eigenvalues)
[ 3.24642492e+01 -2.46424920e+00  1.92979794e-15 -4.09576009e-16]
print(eigenvectors)
[[-0.11417645 -0
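A likely explanation, with a minimal NumPy sketch (not the asker's code): np.arange(16).reshape(4, 4) has rank 2, so two eigenvalues are numerically zero, and the eigenvectors returned for that repeated zero eigenvalue can be nearly parallel, making the modal matrix M ill-conditioned and its inverse inaccurate. On a well-conditioned matrix the same recipe behaves as expected:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, M = np.linalg.eig(A)   # columns of M are the eigenvectors (modal matrix)
D = np.linalg.inv(M) @ A @ M        # should be (numerically) diagonal
print(np.round(D, 10))              # diag(5, 2) up to rounding
print(eigenvalues)                  # the same values, in the order eig returned them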

Using ARPACK solving eigenvalueproblem, but getting inconsistent results with Matlab

一曲冷凌霜 submitted on 2019-12-11 07:43:23
Question: I'm new to ARPACK. I downloaded a script like the following:

import time
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigs

np.set_printoptions(suppress=True)

n = 30
rstart = 0
rend = n
A = np.zeros(shape=(n, n))

# first row
if rstart == 0:
    A[0, :2] = [2, -1]
    rstart += 1
# last row
if rend == n:
    A[n-1, -2:] = [-1, 2]
    rend -= 1
# other rows
for i in range(rstart, rend):
    A[i, i-1:i+2] = [-1, 2, -1]

A[0, 8] = 30

start_time = time.time()
evals_large, evecs_large = eigs(A, 10
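One likely source of the discrepancy, sketched below (this is not the asker's full script): A[0, 8] = 30 makes the matrix non-symmetric, so some eigenvalues may be complex, and ARPACK and MATLAB can return them in a different order and with different eigenvector scaling. Comparing sorted eigenvalues against a dense reference is a more meaningful check than comparing the raw output:

import numpy as np
from scipy.linalg import eig
from scipy.sparse.linalg import eigs

n = 30
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
A[0, 8] = 30                                  # breaks symmetry

vals_arpack, _ = eigs(A, k=10, which='LM')    # 10 largest-magnitude eigenvalues via ARPACK
vals_dense = eig(A, right=False)              # dense LAPACK reference
top10 = vals_dense[np.argsort(-np.abs(vals_dense))[:10]]

print(np.sort_complex(vals_arpack))
print(np.sort_complex(top10))                 # should agree to numerical precision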

.Internal(La_rs()) returns negative values on some installations but not others

旧时模样 submitted on 2019-12-11 07:38:40
Question: This is a continuation of a previous question: Rfast hd.eigen() returns NAs but base eigen() does not. I have been having a problem with .Internal(La_rs(x)) returning different results on different machines. I suspect it may have something to do with number formatting, because on the same machine, if I save the data as a CSV and re-open it, I don't get negatives anymore. On a Clear Linux install:

> load("input_to_La_rs.Rdata")
> r <- .Internal(La_rs(as.matrix(x), only.values = FALSE))
> sum(r$values < 0)
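For what it's worth, here is a NumPy sketch of what is probably the underlying numerical issue (the data below is illustrative, not the asker's): eigenvalues that are exactly zero in exact arithmetic come back from LAPACK as tiny values of either sign, and the sign can differ between BLAS/LAPACK builds or after a CSV round-trip perturbs the input in the last digits. Testing against a tolerance instead of `values < 0` sidesteps the machine dependence:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 50))
X = B @ B.T                      # positive semi-definite, rank 50: 150 eigenvalues are "zero"

w = np.linalg.eigvalsh(X)
print(w.min())                                    # typically a tiny negative number, not 0.0
tol = np.abs(w).max() * X.shape[0] * np.finfo(float).eps
print(np.sum(w < -tol))                           # 0: nothing meaningfully negative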

How to obtain the eigenvalues after performing Multidimensional scaling?

不羁岁月 submitted on 2019-12-11 02:13:47
Question: I am interested in taking a look at the eigenvalues after performing multidimensional scaling. What function can do that? I looked at the documentation, but it does not mention eigenvalues at all. Here is a code sample:

mds = manifold.MDS(n_components=100, max_iter=3000, eps=1e-9, random_state=seed,
                   dissimilarity="precomputed", n_jobs=1)
results = mds.fit(wordDissimilarityMatrix)
# need a way to get the eigenvalues

Answer 1: I also couldn't find it from reading the documentation. I suspect they
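The answer above is cut off; a common workaround (sketched here, not taken from the thread) is to note that sklearn's manifold.MDS uses the SMACOF algorithm, which never forms an eigendecomposition, so the fitted object has no eigenvalues to expose. If classical (Torgerson) MDS eigenvalues are what is wanted, they can be computed directly from the dissimilarity matrix by double centering; wordDissimilarityMatrix is assumed to be a symmetric (n, n) array:

import numpy as np

def classical_mds_eigenvalues(D):
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    return np.sort(np.linalg.eigvalsh(B))[::-1]

# eigvals = classical_mds_eigenvalues(wordDissimilarityMatrix)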

SymPy could not compute the eigenvalues of this matrix

拜拜、爱过 submitted on 2019-12-10 19:15:52
Question: I want to compute the second eigenvalue of a Laplacian matrix to check whether the corresponding graph is connected, but when I try to use SymPy's eigenvals it frequently throws an error:

MatrixError: Could not compute eigenvalues for Matrix([[1.00000000000000, 0.0, 0.0, 0.0, -1.00000000000000, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.00000000000000, 0.0, 0.0, 0.0, -1.00000000000000, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.00000000000000, 0.0, 0.0, 0.0, 0.0, 0.0, -1.00000000000000,
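The excerpt is cut off, but the stated goal (a connectivity check via the second eigenvalue) does not actually need symbolic computation. A sketch of a numeric alternative, assuming L is the graph Laplacian as a NumPy array: the graph is connected exactly when the second-smallest eigenvalue (the Fiedler value) is positive, and a symmetric numeric solver handles the floating-point Laplacians that SymPy's eigenvals struggles with.

import numpy as np

def is_connected(L, tol=1e-9):
    w = np.linalg.eigvalsh(L)    # eigenvalues in ascending order
    return w[1] > tol            # lambda_2 > 0  <=>  the graph is connected

# Example: path graph on 3 vertices (connected), Laplacian eigenvalues 0, 1, 3
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
print(is_connected(L))           # True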

Eigenvector (Spectral) Decomposition

此生再无相见时 submitted on 2019-12-10 11:29:52
Question: I am trying to find a program in C that will allow me to compute an eigenvalue (spectral) decomposition of a square matrix. I am specifically trying to find code where the highest eigenvalue (and therefore its associated eigenvector) is located in the first column. The reason I need the output in this order is that I am trying to compute eigenvector centrality, so I really only need the eigenvector associated with the highest eigenvalue. Thanks in advance.
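The question asks for C code, so the snippet below is only a sketch of the ordering logic in NumPy (not a C answer): sort the eigenpairs by eigenvalue so that the dominant eigenvector, the one eigenvector centrality needs, lands in the first column.

import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # adjacency matrix of a triangle

w, V = np.linalg.eigh(A)                 # symmetric, so eigh is appropriate
order = np.argsort(w)[::-1]              # descending eigenvalues
w, V = w[order], V[:, order]

centrality = np.abs(V[:, 0])             # dominant eigenvector is now column 0
centrality /= centrality.sum()
print(w[0], centrality)                  # 2.0, [1/3, 1/3, 1/3]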

Matlab Codegen eig() function - strange behaviour

孤街浪徒 submitted on 2019-12-07 23:15:50
Question: First, don't be fooled by the long post; there is not a lot of code, just an observation of results, so there are a few example matrices. This is somewhat related to this question: Matlab Codegen Eig Function - Is this a Bug? I know that the mex/C/C++ translated eig() function may not return the same eigenvectors as the same function in MATLAB, and that's fine, but I am puzzled by the results I'm getting. First, this simple example:

Output
% c = diagonal matrix of eigenvalues
% b = matrix whose
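The excerpt above is cut off, but one general caveat applies to any comparison of eig implementations (shown here in NumPy purely as an illustration, since the question is about MATLAB Coder): eigenvectors are only defined up to a complex scale factor, so two correct implementations can return visibly different eigenvector matrices. Checking the defining relation A*V = V*D is a more robust test than comparing V entry by entry.

import numpy as np

A = np.random.default_rng(1).standard_normal((5, 5))
w, V = np.linalg.eig(A)
D = np.diag(w)
print(np.allclose(A @ V, V @ D))   # True for any valid eigendecomposition,
                                   # regardless of eigenvector scaling or ordering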

The fastest way to calculate eigenvalues of large matrices

爷,独闯天下 submitted on 2019-12-07 17:24:50
Question: Until now I have used numpy.linalg.eigvals to calculate the eigenvalues of square matrices with at least 1000 rows/columns and, in most cases, about a fifth of the entries non-zero (I don't know whether that should be considered a sparse matrix). I found another topic indicating that SciPy can possibly do a better job. However, since I have to calculate the eigenvalues of hundreds of thousands of large matrices of increasing size (possibly up to 20000 rows/columns and yes, I need ALL of their
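For reference, a sketch of the usual options when every eigenvalue of a dense, generally non-symmetric matrix is needed (the matrix built here is only a stand-in for the asker's data): with roughly a fifth of the entries non-zero, sparse ARPACK-style solvers do not help, since they are designed to return only a few eigenvalues; the main wins come from skipping input checks and, if the matrices happen to be symmetric, switching to eigvalsh.

import numpy as np
import scipy.linalg as sla

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.2)   # ~20% non-zero

w1 = np.linalg.eigvals(A)                   # LAPACK geev via NumPy
w2 = sla.eigvals(A, check_finite=False)     # same LAPACK routine, skips the input checks

S = (A + A.T) / 2                           # symmetric case: a much faster driver applies
w3 = sla.eigvalsh(S, check_finite=False)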

Sparse eigenvalues using eigen3/sparse

南笙酒味 submitted on 2019-12-07 06:33:19
Question: Is there an efficient way of finding the eigenvalues and eigenvectors of a real, symmetric, very large (say 10000x10000) sparse matrix in Eigen3? There is an eigenvalue solver for dense matrices, but it doesn't exploit the properties of the matrix, e.g. its symmetry. Furthermore, I don't want to store the matrix in dense form. Or, alternatively, is there a better (and better documented) library for doing this?

Answer 1: Armadillo will do this using eigs_sym. Note that computing all the
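The answer above is cut off. Armadillo's eigs_sym, mentioned in it, and the Spectra library (which is built on top of Eigen) both implement the kind of Lanczos/ARPACK solver this calls for, exploiting symmetry and sparsity to return only the k requested eigenpairs. As a language-neutral illustration of that approach (not Eigen3 code), the SciPy equivalent looks like this:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10000
diagonals = [-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)]
A = sp.diags(diagonals, offsets=[-1, 0, 1], format='csr')   # sparse, real, symmetric

vals, vecs = eigsh(A, k=6, which='LM')   # only the 6 largest-magnitude eigenpairs
print(vals)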

Power iteration

≡放荡痞女 submitted on 2019-12-06 11:45:41
Question: I'm trying to understand power iteration for computing the eigenvalues of a matrix. I followed the algorithm from en.wikipedia.org/wiki/Power_iteration#The_method:

from math import sqrt
from random import random   # needed for the random starting vector

def powerIteration(A):
    b = [random() for i in range(len(A))]
    tmp = [0] * len(A)
    for iteration in range(10000):
        # tmp = A @ b
        for i in range(0, len(A)):
            tmp[i] = 0
            for j in range(0, len(A)):
                tmp[i] += A[i][j] * b[j]
        # Euclidean norm of tmp
        normSq = 0
        for k in range(0, len(A)):
            normSq += tmp[k] * tmp[k]
        norm = sqrt(normSq)
        # normalize: b = tmp / norm (the remaining step of the Wikipedia algorithm)
        for i in range(len(A)):
            b[i] = tmp[i] / norm
    return b
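A quick sanity check for the routine above (using the completed version shown here): run it on a small matrix whose dominant eigenvector is known.

A = [[2.0, 0.0],
     [0.0, 1.0]]
b = powerIteration(A)
print(b)   # converges (up to sign) to the dominant eigenvector, roughly [1.0, 0.0]
# The dominant eigenvalue can then be estimated with the Rayleigh quotient b·(A b),
# since b is already normalized.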