selection of features using PCA

伪装坚强ぢ 2020-12-30 15:47

I am doing unsupervised classification. For this I have 8 features (Variance of Green, Std. dev. of Green, Mean of Red, Variance of Red, Std. dev. of Red, Mean of Hue, Vari…

3 Answers
  •  生来不讨喜
     2020-12-30 16:44

    Your problem is the same as the COLUMNSELECT problem discussed by Mahoney and Drineas in "CUR matrix decompositions for improved data analysis".

    They first compute a leverage score for each dimension and then select 3 of them at random, using the leverage scores as sampling weights. Alternatively, you can simply take the dimensions with the largest scores, which is what the script below does (a sketch of the randomized variant appears after the result further down). Here is the script for your problem:

    I first took a real nature image from the web and resized it to the dimensions you mention. The image is shown below:

    [image: img.png]

    %# Example data from real image of size 179x8
    %# You can skip it for your own data
    features = im2double(rgb2gray(imread('img.png')));
    
    %# m samples, n dimensions
    [m,n] = size(features);
    

    Then, center the data by removing the mean of each feature:

    %# Remove the mean of each feature (column-wise), so the data is centered
    features = bsxfun(@minus, features, mean(features,1));
    

    I use the SVD to compute the PCA since it gives you both the principal components and the coefficients. Here the samples are in rows and the dimensions in columns, so the right singular vectors in V are the principal components (if the samples were in columns, U would hold them instead). Check the second page of this paper for the relationship between PCA and the SVD.

    %# Compute the SVD
    [U,S,V] = svd(features);
    
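    If the Statistics Toolbox is available, you can sanity-check this against the built-in pca. This cross-check is just an illustration I am adding, not part of the original script, and it assumes pca with the 'Centered' option:

    %# Optional sanity check (assumes the Statistics Toolbox's pca is available).
    %# For already-centered data, pca's loadings should equal V up to a sign
    %# flip per component.
    coeff = pca(features, 'Centered', false);
    discrepancy = zeros(1, size(coeff,2));
    for j = 1:size(coeff,2)
        sgn = sign(coeff(:,j)' * V(:,j));        %# align the arbitrary sign
        discrepancy(j) = norm(coeff(:,j) - sgn*V(:,j));
    end
    disp(discrepancy)   %# should be numerically zero
    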

    The key idea is to keep the dimensions that carry most of the variation, under the assumption that the rest is mostly noise. So we keep only the dominant singular vectors, e.g. enough of them to account for 95% of the variation.

    %# Compute the number of singular vectors accounting for
    %#  95% of the variation (variance ~ squared singular values)
    coverage = cumsum(diag(S).^2);
    coverage = coverage ./ max(coverage);
    [~, nEig] = max(coverage > 0.95);
    

    Then the leverage scores are computed from the top nEig right singular vectors: the score of dimension i is the squared norm of the i-th row of V restricted to those nEig columns.

    %# Compute the squared norm of each dimension in the reduced space (leverage scores)
    norms = zeros(n,1);
    for i = 1:n
        norms(i) = norm(V(i,1:nEig))^2;
    end
    
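    As a side note (not in the original script), the loop above can be replaced by a vectorized one-liner that produces the same values:

    %# Vectorized equivalent of the loop above (same values as norms)
    norms = sum(V(:,1:nEig).^2, 2);
    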

    Then, we can sort the leverage scores:

    %# Get the 3 dimensions with the largest leverage scores
    [~, idx] = sort(norms, 'descend');
    idx(1:3)'
    

    and get the indices of the vectors with the largest leverage scores:

    ans =
       6     8     5
    
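    For completeness, here is a rough sketch of the randomized variant mentioned at the top, where 3 dimensions are sampled with probability proportional to their leverage scores rather than taken deterministically. The sampling loop below is my own simple illustration, not the exact procedure from the paper:

    %# Randomized COLUMNSELECT sketch: draw 3 distinct dimensions with
    %# probability proportional to their leverage scores (without replacement)
    p = norms ./ sum(norms);
    selected = zeros(1,3);
    remaining = 1:n;
    for t = 1:3
        w = p(remaining) ./ sum(p(remaining));  %# renormalize over what is left
        c = find(rand <= cumsum(w), 1);         %# inverse-CDF draw
        selected(t) = remaining(c);
        remaining(c) = [];
    end
    selected
    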

    You can check the paper for more details.

    But keep in mind that PCA-based techniques pay off when you have many dimensions. In your case the search space is tiny (only nchoosek(8,3) = 56 subsets of 3 features), so my advice is to search it exhaustively and keep the best-scoring subset, as @amit recommends.
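
    A minimal sketch of that exhaustive search, assuming you score each 3-feature subset by clustering it with kmeans and taking the mean silhouette value. The scoring criterion and the number of clusters K are my assumptions, since the question does not specify them; any cluster-quality measure can be plugged in:

    %# Exhaustive search over all 3-feature subsets (assumes the Statistics
    %# Toolbox for kmeans/silhouette; K = 2 clusters is just a guess)
    K = 2;
    subsets = nchoosek(1:n, 3);
    bestScore = -Inf;
    bestSubset = [];
    for s = 1:size(subsets,1)
        cols = subsets(s,:);
        labels = kmeans(features(:,cols), K, 'Replicates', 5);
        score = mean(silhouette(features(:,cols), labels));
        if score > bestScore
            bestScore = score;
            bestSubset = cols;
        end
    end
    bestSubset
    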
