selection of features using PCA

伪装坚强ぢ 2020-12-30 15:47

I am doing unsupervised classification. For this I have 8 features (Variance of Green, Std. dev. of Green, Mean of Red, Variance of Red, Std. dev. of Red, Mean of Hue, Vari…

3 Answers
  • 2020-12-30 16:36

    From the pcacov docs:

    COEFF is a p-by-p matrix, with each column containing coefficients for one principal component. The columns are in order of decreasing component variance.

    Since explained shows that only the first component contributes a significant amount to the explained variance, you should look at the first column of PC to see which original features it weights most heavily:

    0.0038
    0.0755
    0.7008 <---
    0.0007 
    0.0320 
    0.7065 <---
    0.0026 
    0.0543 
    

    It turns out, in your example, that the 3rd and 6th features (indicated with <--- ) are the main contributors to the first principal component. You could say that these features are the most important ones.

    Similarly, based on the fact that the 1st, 4th and 7th features only get large weights in some of the last columns of PC, one could conclude that they are relatively unimportant.
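    If you want to do this ranking programmatically, a minimal sketch (using the PC matrix quoted above, i.e. the pcacov output) could be:

    %# Rank the original features by the magnitude of their loadings
    %#  on the first principal component
    [~, order] = sort(abs(PC(:,1)), 'descend');
    order(1:2)'    %# gives 6 and 3 for the loadings above
    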

    However, for this sort of per-feature analysis, PCA might not be the best fit; you could derive such information from the standard deviations of the original features just as well.
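    For example, a quick sketch (assuming C is the 8-by-8 covariance matrix you passed to pcacov) that ranks the features by their standard deviation:

    %# Rank the original features by their standard deviation
    %#  (square roots of the diagonal of the covariance matrix)
    [~, order] = sort(sqrt(diag(C)), 'descend')
    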

  • 2020-12-30 16:44

    Your problem is the same as the COLUMNSELECT problem discussed by Mahoney and Drineas in "CUR matrix decompositions for improved data analysis".

    They first compute the leverage scores for each dimension and then select 3 of them at random, using the leverage scores as weights. Alternatively, you can simply select the ones with the largest scores. Here's the script for your problem:

    I first took a real natural image from the web and resized it to the dimensions you asked for (the image itself is omitted here).

    %# Example data from a real image of size 179x8
    %# You can skip it for your own data
    features = im2double(rgb2gray(imread('img.png')));
    
    %# m samples, n dimensions
    [m,n] = size(features);
    

    Then compute the centered data:

    %# Remove the mean of each feature (center the columns)
    features = features - repmat(mean(features,1), size(features,1), 1);
    

    I use the SVD to compute the PCA since it gives you both the principal components and the coefficients. Since the samples are in rows here, the columns of V hold the principal directions (one row of V per original feature). Check the second page of this paper for the relationship.

    %# Compute the SVD
    [U,S,V] = svd(features);
    

    The key idea here is that we want the dimensions carrying most of the variation, under the assumption that there is some noise in the data. So we keep only the dominant components, e.g. those representing 95% of the variation.

    %# Compute the number of components needed to represent
    %#  95% of the variation (squared singular values ~ variance)
    coverage = cumsum(diag(S).^2);
    coverage = coverage ./ max(coverage);
    [~, nEig] = max(coverage > 0.95);
    

    Then the leverage scores are computed from the first nEig principal components. That is, for each original feature we take the squared norm of its first nEig coefficients (its row in V):

    %# Compute the squared norm of each feature's coefficients in the new space
    norms = zeros(n,1);
    for i = 1:n
        norms(i) = norm(V(i,1:nEig))^2;
    end
    

    Then, we can sort the leverage scores:

    %# Get the largest 3
    [~, idx] = sort(norms, 'descend');
    idx(1:3)'
    

    and get the indices of the features with the largest leverage scores:

    ans =
       6     8     5
    

    You can check the paper for more details.
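    If you want the randomized COLUMNSELECT flavor from the paper instead, i.e. sampling features with probability proportional to their leverage scores, a rough sketch (assuming the Statistics Toolbox function randsample is available) would be:

    %# Sample 3 features at random, weighted by their leverage scores
    %#  (sampling with replacement, as a simple approximation of COLUMNSELECT)
    probs = norms ./ sum(norms);
    idxRandom = randsample(n, 3, true, probs)
    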

    But keep in mind that this PCA-based technique is most useful when you have many dimensions. In your case the search space is very small, so my advice is to search it exhaustively and take the best selection, as @amit recommends.

  • 2020-12-30 16:46

    PCA actually generates a set of new features, each of which is a linear combination of the original features.

    Thus, the vector you get cannot be translated directly into the original features you would need to choose in order to get this variance; it just defines a new feature based on the originals.
    In your case, you get:

    New_Feature = 0.0038*F1 + 0.0755*F2 + 0.7008*F3 + ... + 0.0543*F8
    

    This New_Feature alone gives you 94.9471% of the explained variance, despite the dimensionality reduction.
    (And if you do the same for the next principal components and use them as well, you will obviously capture even more of the variance.)
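    For completeness, a small sketch of how you would compute these new features yourself (assuming X is a hypothetical name for your 179-by-8 data matrix and PC is the coefficient matrix returned by pcacov):

    %# Center each feature and project onto the principal components;
    %#  scores(:,1) is New_Feature evaluated for every sample
    Xc = X - repmat(mean(X,1), size(X,1), 1);
    scores = Xc * PC;
    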

    If you need a subset of the original features, rather than new derived ones, I would use other methods instead of PCA.

    Genetic algorithms are usually pretty good for subset selection, and since your set only contains 8 features, you could also consider a brute-force search: there are only 2^8 = 256 possible subsets, so in many cases it is feasible to try them all and see which one gives you the best performance.
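    For instance, a minimal brute-force sketch (evaluateSubset is a hypothetical function that should run your clustering on the selected columns of your data matrix X and return a score to maximize):

    %# Try every non-empty subset of the 8 features and keep the best one
    bestScore  = -Inf;
    bestSubset = [];
    for mask = 1:2^8-1
        subset = find(bitget(mask, 1:8));       %# feature indices in this subset
        score  = evaluateSubset(X(:, subset));  %# hypothetical scoring function
        if score > bestScore
            bestScore  = score;
            bestSubset = subset;
        end
    end
    bestSubset
    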
