Could we get different solutions for eigenVectors from a matrix?

Submitted by 有些话、适合烂在心里 on 2019-11-27 09:37:47

Eigenvectors are NOT unique, for a variety of reasons. Change the sign, and an eigenvector is still an eigenvector for the same eigenvalue. In fact, multiply by any constant, and an eigenvector is still that. Different tools can sometimes choose different normalizations.

If an eigenvalue is of multiplicity greater than one, then the eigenvectors are again not unique, as long as they span the same subspace.

As woodchips points out (+1), eigenvectors are unique only up to scaling (and, for a repeated eigenvalue, up to a change of basis within its eigenspace). This fact is readily apparent from the definition: an eigenvector/eigenvalue pair solves the eigenvalue equation A*v = k*v, where A is the matrix, v is the eigenvector, and k is the eigenvalue.
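To see this concretely, here is a quick numerical check (a minimal sketch in Python/NumPy rather than the MATLAB/Mathematica used below): any nonzero scalar multiple of an eigenvector still satisfies the eigenvalue equation.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]   # an eigenvector for eigvals[0]
k = eigvals[0]

# v satisfies A @ v == k * v ...
print(np.allclose(A @ v, k * v))    # True

# ... and so does any nonzero multiple of it, e.g. -3.7 * v
w = -3.7 * v
print(np.allclose(A @ w, k * w))    # True
```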

Let's consider a much simpler example than your (horrendous looking) question:

M = [1, 2, 3; 4, 5, 6; 7, 8, 9];
[EigVec, EigVal] = eig(M);

Matlab yields:

EigVec =
-0.2320   -0.7858    0.4082
-0.5253   -0.0868   -0.8165
-0.8187    0.6123    0.4082

while Mathematica yields:

EigVec = 
0.2833    -1.2833    1
0.6417    -0.1417    -2
1         1          1

From the Matlab documentation:

"For eig(A), the eigenvectors are scaled so that the norm of each is 1.0.".

Mathematica, on the other hand, is clearly scaling the eigenvectors so that the final element is unity.

Even just eyeballing the outputs I've given, you can start to see the relationships emerge (in particular, compare the third eigenvector from both outputs).
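The relationship can be checked numerically. The sketch below uses Python/NumPy (whose `eig`, like MATLAB's, returns unit-norm eigenvectors) and rescales each column so its last element is 1, reproducing Mathematica-style output; this assumes no eigenvector has a zero last element, and the column order is whatever the solver happens to return.

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

eigvals, V = np.linalg.eig(M)   # columns of V have unit norm

# Rescale each column so its last element is 1 (Mathematica-style).
# This only works when no column has a zero last element.
V_scaled = V / V[-1, :]

for i in range(3):
    v = V_scaled[:, i]
    print(v[-1])                               # 1.0 for every column
    print(np.allclose(M @ v, eigvals[i] * v))  # still an eigenvector: True
```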

By the way, I suggest you edit your question to have a more simple input matrix M, such as the one I've used here. This will make it much more readable for anyone who visits this page in the future. It is actually not that bad a question, but the way it is currently formatted will likely cause it to be down-voted.

veeresh

I completely agree with Colin T Bowers that Mathematica normalizes so that the last value of each eigenvector becomes one. If you want MATLAB to produce eigenvector results like Mathematica's, you can tell MATLAB to rescale each eigenvector so that its last value is 1, using the following normalization step.

M = [1, 2, 3; 4, 5, 6; 7, 8, 9];
[EigVec, EigVal] = eig(M);
sf = 1./EigVec(end,:);              % scale factor: reciprocal of the last element of each eigenvector
sf = repmat(sf, size(EigVec,1), 1); % repeat the scale factor for every element in the vector
Normalize_EigVec = EigVec.*sf;

Normalize_EigVec =

    0.2833   -1.2833    1.0000
    0.6417   -0.1417   -2.0000
    1.0000    1.0000    1.0000

As Rody points out, the normalization Mathematica uses makes the last element unity. Other eig variants, such as the QZ algorithm (which you have to use in MATLAB Coder, for instance, since the Cholesky-based algorithm isn't supported), don't normalize the way MATLAB does for [V, lam] = eig(C). Ex: [V, lam] = eig(C, eye(size(C)), 'qz');

From the documentation http://www.mathworks.com/help/techdoc/ref/eig.html

Note: For eig(A), the eigenvectors are scaled so that the norm of each is 1.0. For eig(A,B), eig(A,'nobalance'), and eig(A,B,flag), the eigenvectors are not normalized. Also note that if A is symmetric, eig(A,'nobalance') ignores the nobalance option since A is already balanced.

For [V, lam] = eig(C), the eigenvectors are scaled so that the norm of each is 1.0, which is what we need here. MATLAB does that for the Cholesky formulation, so how does one re-normalize the eigenvectors produced by QZ so they have that same scale? Like so:

W = V;  % keep a copy of the unnormalized eigenvectors
for i = 1:size(V,2)                     % for each column
    V(:,i) = V(:,i) / norm(V(:,i), 2);  % normalize column i to unit length
end

This finds the length of each vector and divides its elements by that length, scaling the vector to unit norm. Mathematica does essentially the same kind of rescaling, except it makes the last element 1 instead of normalizing to unit length. http://www.fundza.com/vectors/normalize/
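The same column-wise unit normalization can be sketched in Python/NumPy (the input here is a made-up example: unit-norm eigenvectors deliberately rescaled by arbitrary factors to stand in for an unnormalized solver's output):

```python
import numpy as np

# Fake "unnormalized" eigenvectors: take unit-norm columns
# and rescale them by arbitrary per-column factors.
M = np.array([[2.0, 0.0],
              [0.0, 3.0]])
_, V = np.linalg.eig(M)
V = V * np.array([5.0, -0.25])

# Normalize each column to unit Euclidean length,
# mirroring the MATLAB loop above.
V_unit = V / np.linalg.norm(V, axis=0)

print(np.linalg.norm(V_unit, axis=0))  # both column norms are now 1
```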

Note that the vectors and values are not necessarily in the same order, so you may still need to sort them. MATLAB's Cholesky algorithm produces the items in a sort order like so:

lam = diag(lam);                 % extract the eigenvalues from the diagonal
[sorted_lam, index] = sort(lam); % sort eigenvalues in ascending order
for cont = 1:length(sorted_lam)
    sorted_V(:,cont) = V(:,index(cont)); % reorder the eigenvectors to match
end
V = sorted_V;
lam = diag(sorted_lam);          % rebuild the diagonal eigenvalue matrix

And even after doing this, the signs may not point in the same direction (an eigenvector is still an eigenvector if it is multiplied by -1). Note that the same sorting has to be applied to lam (the eigenvalues), or those will be out of order.

The typical convention is to flip the signs of a column if the first element in the column is negative.
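A minimal sketch of that convention in Python/NumPy (the function name is my own; it assumes the eigenvectors are the columns of V):

```python
import numpy as np

def fix_signs_first_element(V):
    """Flip each column whose first element is negative,
    so every column starts with a nonnegative entry."""
    V = V.copy()
    for i in range(V.shape[1]):
        if V[0, i] < 0:
            V[:, i] = -V[:, i]
    return V

V = np.array([[-0.6,  0.8],
              [ 0.8,  0.6]])
# Column 0 starts with -0.6, so it gets flipped; column 1 is untouched.
print(fix_signs_first_element(V))
```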

Another thing you could do is flip a column's sign if most of its elements are negative:

% FLIP A COLUMN'S SIGN IF MOST OF ITS ELEMENTS ARE NEGATIVE
W = sorted_V;
for i = 1:size(W,2)             % for each column in W
    A = W(:,i);
    s = sign(A);
    inegatif = sum(s(:)==-1);   % count the negative elements
    if inegatif > 1             % more than one of the three elements
        W(:,i) = -W(:,i);
    end
end

But this only really helps if the elements aren't close to 0, because when they are, a different algorithm might find the value on the other side of the zero instead. Still, it's better than nothing.

One final thing: for the B value (the input matrix for the generalized eigenvalue problem), I am using eye(size(C)). Is there an optimum way to select B to improve this algorithm and make it give answers closer to those of Cholesky, or more accurate ones? You can use any real matrix of the same size as B, including A again or A' (where A is the input matrix), but what is a good choice? Maybe A'? I noticed that for some inputs a 3x3 matrix of -1s seems to give close to the same answers as 'chol'.
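For what it's worth, the reason B = eye(size(C)) is a natural default can be checked numerically: for invertible B, the generalized problem C*v = lambda*B*v has the same eigenvalues as the ordinary problem for B\C, so with B = I it reduces to the standard eigenvalues of C. A sketch in Python/NumPy (an illustration of the identity, not a recommendation for a particular B):

```python
import numpy as np

C = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.eye(2)

# For invertible B, C*v = lambda*B*v is equivalent to the
# ordinary eigenvalue problem (B^-1 * C)*v = lambda*v.
w_gen = np.linalg.eigvals(np.linalg.solve(B, C))
w_std = np.linalg.eigvals(C)

print(np.sort(w_gen.real))  # [1. 3.]
print(np.sort(w_std.real))  # [1. 3.]
```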

https://www.mathworks.com/help/matlab/ref/eig.html?searchHighlight=eig&s_tid=doc_srchtitle#inputarg_B
