Is this the right way of projecting the training set into the eigenspace? MATLAB


Question


I have computed PCA using the following:

function [signals, V] = pca2(data)
% data: M-by-N matrix, one vectorised training image per row
[M, N] = size(data);
% subtract off the mean for each dimension (column)
data = bsxfun(@minus, data, mean(data, 1));
% construct the covariance matrix Y
Y = data' * data / (M - 1);
% keep the 10 leading eigenvectors (reduce to 10 dimensions)
[V, D] = eigs(Y, 10);
% project the centred data onto the eigenvectors
signals = data * V;
end
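
For context, a hypothetical call could look like the sketch below (trainImgs is a made-up cell array of equally sized grayscale training images, not something from my actual code):

% Hypothetical usage sketch: build an M-by-N matrix with one vectorised
% training image per row, then hand it to pca2.
M = numel(trainImgs);                     % number of training images
N = numel(trainImgs{1});                  % pixels per image
data = zeros(M, N);
for k = 1:M
    data(k, :) = double(trainImgs{k}(:))';   % one image per row
end
[signals, V] = pca2(data);                % signals is M-by-10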

My question is:

Is "signals" the projection of the training set into the eigenspace?

I saw in "Amir Hossein"'s code that the "centered image vectors", i.e. "data" in the code above, need to be projected into the "facespace" by multiplying by the eigenspace basis. I don't really understand why the projection is done using centered image vectors. Isn't "signals" enough for classification?
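
To make concrete what I mean by that projection step, here is a rough sketch (testImg and mn are placeholder names I am assuming, not part of the code above):

% Rough sketch of the projection being discussed: centre a new image with
% the training mean, then multiply by the eigenvector basis V from pca2.
% testImg is a hypothetical new image of the same size as the training images.
mn = mean(data, 1);                    % 1-by-N mean over the training rows
x  = double(testImg(:))' - mn;         % centred image vector, 1-by-N
w  = x * V;                            % 1-by-10 coordinates in the facespace
% w can then be compared (e.g. by Euclidean distance) against the rows of
% "signals" to classify the new image.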


Answer 1:


By "signals", I assume you mean to ask why we subtract the mean from the raw vector form of the image.

If you think about PCA, it is trying to give you the directions along which the data varies most. However, since your images contain pixels that probably take only positive values, the data will always lie in the positive region, which will mislead your first and most important eigenvector in particular. You can search for more about the second moment matrix. I will share a rough Paint sketch that explains it; sorry about my drawing.

Please ignore the size of the stars.

Stars: your data

Red line: eigenvectors

As you can easily see in 2D, centering the data gives a better direction for your principal component. If you skip this step, your first eigenvector will be biased toward the mean and give poorer results.
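
A tiny 2-D example with made-up data shows the effect (the numbers here are just illustrative assumptions, not from the question):

% Made-up 2-D illustration: all samples are positive, so without centring
% the leading eigenvector points roughly at the mean, while centring
% recovers the direction of largest spread.
rng(0);                                              % reproducible toy data
X = [randn(200,1)*3 + 10, randn(200,1)*0.5 + 10];    % elongated, all-positive cloud
n = size(X, 1);

% leading eigenvector of the UNcentred second-moment matrix
[Vr, Dr] = eig(X' * X / (n - 1));
[~, idx1] = max(diag(Dr));
v_raw = Vr(:, idx1);       % roughly [0.7; 0.7], pointing towards the mean

% leading eigenvector of the centred covariance matrix
Xc = bsxfun(@minus, X, mean(X, 1));
[Vc, Dc] = eig(Xc' * Xc / (n - 1));
[~, idx2] = max(diag(Dc));
v_ctr = Vc(:, idx2);       % close to [1; 0] up to sign, the true spread direction

disp([v_raw v_ctr])        % compare the two leading eigenvectors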



Source: https://stackoverflow.com/questions/21427303/is-this-the-right-way-of-projecting-the-training-set-into-the-eigespace-matlab
