eigenvalue

Computing N smallest eigenvalues of Sparse Matrix in Python

匆匆过客 submitted on 2019-12-04 22:36:58
Question: I'd like to find the N smallest eigenvalues of a sparse matrix in Python. I've tried using the scipy.sparse.linalg.eigen.arpack package, but it is very slow at computing the smallest eigenvalues. I read somewhere that there is a shift-invert mode, but when I try using it, I receive an error message telling me that shift-invert mode is not yet supported. Any ideas as to how I should proceed? Answer 1: SciPy versions: Comparing the documentation of scipy.sparse.linalg.eigs from SciPy v0.9 with
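A minimal sketch of the shift-invert approach, assuming a SciPy version recent enough for eigsh to accept the sigma keyword; the tridiagonal Laplacian here is just a stand-in test matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Build a sparse symmetric test matrix (1-D Laplacian).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-invert mode: sigma=0 turns "smallest eigenvalues of A" into
# "largest eigenvalues of inv(A)", which ARPACK converges on quickly.
vals = eigsh(A, k=4, sigma=0, which="LM", return_eigenvectors=False)

print(np.sort(vals))
```

Without sigma, asking ARPACK directly for the smallest eigenvalues (which="SM") tends to converge very slowly, which matches the slowness described in the question.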

low RAM consuming c++ eigen solver

两盒软妹~` submitted on 2019-12-04 11:41:36
I'm a newbie in C++ programming, but I have a task: compute the eigenvalues and eigenvectors (the standard eigenproblem Ax = λx) of symmetric (and Hermitian) matrices, for very large matrices of size binomial(L, L/2), where L is about 18-22. Right now I'm testing on a machine with about 7.7 GB of RAM available, but eventually I'll have access to a PC with 64 GB of RAM. I started with Lapack++. Initially my project only needed to solve this problem for symmetric real matrices. This library was great: very quick, with low RAM consumption. It has an option to compute the eigenvectors and place them into the input
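Before choosing a solver, it is worth checking whether a dense matrix of this size fits in RAM at all. A back-of-the-envelope estimate (sketched in Python, since the arithmetic is language-independent) shows that even the 64 GB machine only covers the dense L=18 case:

```python
from math import comb

# Dense storage for a symmetric matrix of size binomial(L, L/2) using
# 8-byte doubles; a packed/triangular scheme roughly halves this.
for L in (18, 20, 22):
    n = comb(L, L // 2)
    dense_gib = n * n * 8 / 2**30
    print(f"L={L}: n={n}, dense={dense_gib:.1f} GiB, packed={dense_gib/2:.1f} GiB")
```

For L=20 the dense matrix already needs roughly 254 GiB, so beyond L=18 an out-of-core or iterative (matrix-free) eigensolver is likely required, not just a bigger machine.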

Eigen Values and Eigen Vectors Matlab

谁说我不能喝 submitted on 2019-12-04 05:37:38
Question: I have a matrix A: A = [124.6, 95.3, 42.7; 95.3, 55.33, 2.74; 42.7, 2.74, 33.33] The eigenvalues and vectors: [V,D] = eig(A). How do I show the eigenvectors are mutually perpendicular? I understand that if the dot product of two eigenvectors is zero, they are perpendicular, but how would you compute this in MATLAB? I tried the following code: transpose(diag(D)) * diag(D) % gives 4.1523e+04 Also, how can I verify that the definition of eigenvalues and eigenvectors holds: A e_i - L_i e_i
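Note that diag(D) extracts the eigenvalues, not the eigenvectors, which is why the attempted dot product gives a large number. Both checks are easy to express numerically; here is the same idea sketched in NumPy rather than MATLAB:

```python
import numpy as np

A = np.array([[124.6, 95.3, 42.7],
              [95.3, 55.33, 2.74],
              [42.7, 2.74, 33.33]])

# eigh handles symmetric matrices: eigenvalues in w, eigenvectors in columns of V.
w, V = np.linalg.eigh(A)

# Mutually perpendicular (orthonormal) eigenvectors: V' * V is the identity.
print(np.round(V.T @ V, 10))

# Definition check: A e_i - lambda_i e_i should be numerically zero.
for i in range(3):
    print(np.max(np.abs(A @ V[:, i] - w[i] * V[:, i])))
```

The MATLAB equivalents would be V' * V and A * V - V * D, checked against the identity and the zero matrix respectively.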

What is the difference between 'eig' and 'eigs'?

风流意气都作罢 submitted on 2019-12-04 00:27:04
Question: I've searched a lot for this, but I can't find any answer about how the two methods 'eig' and 'eigs' differ. What is the difference between the eigenvalues and eigenvectors returned by them? Answer 1: They use different algorithms, tailored to different problems and different goals. eig is a good, fast, general-purpose eigenvalue/eigenvector solver. It is appropriate when your matrix is of a realistic size that fits well in memory, and when you need all of the eigenvalues/vectors. Sparse matrices
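The same split exists in Python, which makes the trade-off easy to demonstrate: numpy.linalg.eigvalsh is a dense full-spectrum solver, while scipy.sparse.linalg.eigs uses the same kind of iterative Arnoldi/Lanczos machinery as MATLAB's eigs to find only a few eigenvalues:

```python
import numpy as np
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = A + A.T  # symmetric, so all eigenvalues are real

# Dense solver: all 100 eigenvalues at once.
w_all = np.sort(np.linalg.eigvalsh(A))

# Iterative (ARPACK) solver: only the k=3 eigenvalues of largest magnitude.
w_top = eigs(A, k=3, which="LM", return_eigenvectors=False)
print(np.sort(w_top.real))
```

For a small matrix like this the dense solver wins; eigs pays off when the matrix is large and sparse, or only available as a matrix-vector product.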

how to implement eigenvalue calculation with MapReduce/Hadoop?

冷暖自知 submitted on 2019-12-03 12:25:38
Question: It should be possible, because PageRank is a form of eigenvalue problem, and that is part of why MapReduce was introduced. But there seem to be problems in an actual implementation: for example, does every slave computer have to maintain a copy of the matrix? Answer 1: PageRank solves the dominant-eigenvector problem by iteratively finding the steady-state discrete flow condition of the network. If the NxM matrix A describes the link weight (amount of flow) from node n to node m, then p_{n+1} = A . p_{n} In the limit where p has converged to a
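The iteration in the answer is power iteration, which is what makes it MapReduce-friendly: each step is a single matrix-vector product that can be sharded row-wise across workers. A serial sketch (the 3-node link matrix is made up for illustration):

```python
import numpy as np

# Column-stochastic link matrix: A[m, n] = flow from node n to node m.
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# Power iteration: repeatedly apply p <- A p until p stops changing.
p = np.full(3, 1.0 / 3.0)
for _ in range(200):
    p_next = A @ p
    p_next /= p_next.sum()  # keep p a probability vector
    if np.allclose(p_next, p, atol=1e-12):
        break
    p = p_next

print(p)  # converges to the steady state [0.4, 0.2, 0.4]
```

In a MapReduce setting, the map phase emits partial products A[m, n] * p[n] keyed by m, and the reduce phase sums them into p_next[m]; only the vector p, not the whole matrix, needs to be visible to every worker.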

What is the fastest way to calculate first two principal components in R?

拈花ヽ惹草 submitted on 2019-12-03 07:08:11
Question: I am using princomp in R to perform PCA. My data matrix is huge (10K x 10K, with each value up to 4 decimal points). It takes ~3.5 hours and ~6.5 GB of physical memory on a Xeon 2.27 GHz processor. Since I only want the first two components, is there a faster way to do this? Update: In addition to speed, is there a memory-efficient way to do this? It takes ~2 hours and ~6.3 GB of physical memory to calculate the first two components using svd(,2,). Answer 1: You sometimes get access to so-called
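The underlying trick is language-independent: compute only a truncated decomposition of the centered matrix instead of the full factorization. A sketch in Python using scipy.sparse.linalg.svds (in R, packages such as irlba offer the analogous truncated SVD):

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 50))  # small stand-in for the 10K x 10K matrix

# Center the columns, then compute only the top-2 singular triplets.
Xc = X - X.mean(axis=0)
U, s, Vt = svds(Xc, k=2)

# Principal-component scores (up to sign); note svds returns singular
# values in ascending order, so the leading component is the last one.
scores = U * s
print(scores.shape)
```

Because only k=2 triplets are ever formed, both runtime and memory scale with k rather than with the full rank of the matrix.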

What is the fastest way to calculate first two principal components in R?

筅森魡賤 submitted on 2019-12-02 20:42:12
I am using princomp in R to perform PCA. My data matrix is huge (10K x 10K, with each value up to 4 decimal points). It takes ~3.5 hours and ~6.5 GB of physical memory on a Xeon 2.27 GHz processor. Since I only want the first two components, is there a faster way to do this? Update: In addition to speed, is there a memory-efficient way to do this? It takes ~2 hours and ~6.3 GB of physical memory to calculate the first two components using svd(,2,). Dirk Eddelbuettel: You sometimes get access to so-called 'economical' decompositions which allow you to cap the number of eigenvalues /

Eigen Values and Eigen Vectors Matlab

半城伤御伤魂 submitted on 2019-12-02 10:03:38
I have a matrix A: A = [124.6, 95.3, 42.7; 95.3, 55.33, 2.74; 42.7, 2.74, 33.33] The eigenvalues and vectors: [V,D] = eig(A). How do I show the eigenvectors are mutually perpendicular? I understand that if the dot product of two eigenvectors is zero, they are perpendicular, but how would you compute this in MATLAB? I tried the following code: transpose(diag(D)) * diag(D) % gives 4.1523e+04 Also, how can I verify that the definition of eigenvalues and eigenvectors holds: A e_i - L_i e_i = 0 The above equation: for i equal 1 to 3. For a real, symmetric matrix all eigenvalues are real

eigenvalue and eigenvectors in python vs matlab

醉酒当歌 submitted on 2019-12-01 09:42:45
I have noticed there is a difference in how MATLAB and NumPy calculate the eigenvalues and eigenvectors of a matrix: MATLAB returns real-valued results, while NumPy returns complex-valued eigenvalues and eigenvectors. For example, for the matrix: A = [1 -3 3; 3 -5 3; 6 -6 4] NumPy: w, v = np.linalg.eig(A) w array([ 4. +0.00000000e+00j, -2. +1.10465796e-15j, -2. -1.10465796e-15j]) v array([[-0.40824829+0.j , 0.24400118-0.40702229j, 0.24400118+0.40702229j], [-0.40824829+0.j , -0.41621909-0.40702229j, -0.41621909+0.40702229j], [-0.81649658+0.j , -0.66022027+0.j , -0.66022027-0.j ]]) MATLAB: [E, D] = eig(A) E -0
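The imaginary parts NumPy reports here are on the order of 1e-15, i.e. pure floating-point round-off, not genuinely complex eigenvalues. One way to confirm this, and to check that both libraries' (possibly different) eigenvector bases are legitimate, is:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])

w, v = np.linalg.eig(A)

# real_if_close strips imaginary parts when they are all within a
# tolerance of zero, recovering the real spectrum.
w_clean = np.real_if_close(w)
print(np.sort(w_clean))  # eigenvalues are 4, -2, -2

# MATLAB's and NumPy's eigenvectors may differ in order, scaling, and
# sign, but every returned pair must still satisfy A v = lambda v.
residual = A @ v - v * w
print(np.max(np.abs(residual)))
```

So the two results are not in conflict: they are different (equally valid) eigenbases of the same matrix, and the complex entries vanish up to round-off.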

eigenvalue and eigenvectors in python vs matlab

混江龙づ霸主 submitted on 2019-12-01 07:26:34
Question: I have noticed there is a difference in how MATLAB and NumPy calculate the eigenvalues and eigenvectors of a matrix: MATLAB returns real-valued results, while NumPy returns complex-valued eigenvalues and eigenvectors. For example, for the matrix: A = [1 -3 3; 3 -5 3; 6 -6 4] NumPy: w, v = np.linalg.eig(A) w array([ 4. +0.00000000e+00j, -2. +1.10465796e-15j, -2. -1.10465796e-15j]) v array([[-0.40824829+0.j , 0.24400118-0.40702229j, 0.24400118+0.40702229j], [-0.40824829+0.j , -0.41621909-0.40702229j, -0