Weka's PCA is taking too long to run

隐瞒了意图╮ · 2021-01-31 12:44

I am trying to use Weka for feature selection using PCA algorithm.

My original feature space contains ~9000 attributes, in 2700 samples.
I tried to reduce the dimensionality with Weka's PCA, but it is taking far too long to run.

3 Answers
  • 2021-01-31 13:00

    It looks like you're using the default configuration for PCA; judging by the long runtime, it is likely doing far more work than you need.

    Take a look at the options for PrincipalComponents.

    1. I'm not sure whether -D normalizes the data for you or whether you have to do it yourself. Either way, you want your data normalized (centered about the mean), so I would do that manually first.
    2. -R sets the proportion of variance you want accounted for. The default is 0.95. The correlations in your data might not be strong, so try setting it lower, e.g. 0.8.
    3. -A sets the maximum number of attributes to include. I presume the default is all of them. Again, try setting it lower.

    I suggest first starting out with very lax settings (e.g. -R=0.1 and -A=2) then working your way up to acceptable results.
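    The centering mentioned in point 1 can be done by hand before handing the data to Weka. A minimal plain-Java sketch (this is not the Weka API; the class and method names here are my own, and in a real pipeline you could use Weka's Center or Standardize filter instead):

    ```java
    import java.util.Arrays;

    public class CenterData {
        // Subtract each column's mean so every attribute is centered about zero.
        static double[][] center(double[][] x) {
            int m = x.length, n = x[0].length;
            double[] mean = new double[n];
            for (double[] row : x)
                for (int j = 0; j < n; j++) mean[j] += row[j];
            for (int j = 0; j < n; j++) mean[j] /= m;
            double[][] out = new double[m][n];
            for (int i = 0; i < m; i++)
                for (int j = 0; j < n; j++) out[i][j] = x[i][j] - mean[j];
            return out;
        }

        public static void main(String[] args) {
            double[][] data = {{1, 2}, {3, 4}, {5, 6}};
            // prints [[-2.0, -2.0], [0.0, 0.0], [2.0, 2.0]]
            System.out.println(Arrays.deepToString(center(data)));
        }
    }
    ```

    After centering, each column sums to zero, which is the precondition PCA's covariance computation expects.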

  • 2021-01-31 13:13

    For the construction of your covariance matrix, you can use the same formula MATLAB uses. It is faster than the Apache library.

    Here Matrix is an m x n data matrix (m --> #databaseFaces).
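    The formula MATLAB's cov documents is the sample covariance C = Xc' * Xc / (m - 1), where Xc is the mean-centered data matrix. Assuming that is the formula meant here, a self-contained plain-Java sketch (class and method names are my own):

    ```java
    public class CovarianceSketch {
        // Sample covariance of an m x n matrix (rows = samples), normalized
        // by m - 1 as MATLAB's cov does. Exploits symmetry: only the upper
        // triangle is accumulated, then mirrored.
        static double[][] cov(double[][] x) {
            int m = x.length, n = x[0].length;
            double[] mean = new double[n];
            for (double[] row : x)
                for (int j = 0; j < n; j++) mean[j] += row[j];
            for (int j = 0; j < n; j++) mean[j] /= m;
            double[][] c = new double[n][n];
            for (double[] row : x)
                for (int j = 0; j < n; j++) {
                    double dj = row[j] - mean[j];
                    for (int k = j; k < n; k++)
                        c[j][k] += dj * (row[k] - mean[k]);
                }
            for (int j = 0; j < n; j++)
                for (int k = j; k < n; k++) {
                    c[j][k] /= (m - 1);
                    c[k][j] = c[j][k];
                }
            return c;
        }
    }
    ```

    This is O(m * n^2) with no library overhead; for n ~ 9000 attributes the resulting n x n matrix is still large, which is why the answer below reduces n first.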

  • After digging into the Weka code, the bottleneck is creating the covariance matrix and then calculating its eigenvectors. Even switching to a sparse matrix implementation (I used COLT's SparseDoubleMatrix2D) did not help.

    The solution I came up with was to first reduce the dimensionality using a fast method (I used an information-gain ranker plus filtering based on document frequency), and then run PCA on the reduced feature set to reduce it further.

    The code is more complex, but it essentially comes down to this:

    // Stage 1: rank attributes by information gain and keep the top candidates.
    Ranker ranker = new Ranker();
    InfoGainAttributeEval ig = new InfoGainAttributeEval();
    Instances instances = SamplesManager.asWekaInstances(trainSet);
    ig.buildEvaluator(instances);
    int[] firstAttributes = ranker.search(ig, instances);
    int[] candidates = Arrays.copyOfRange(firstAttributes, 0, FIRST_SIZE_REDUCTION);
    instances = reduceDimensions(instances, candidates);
    // Stage 2: run PCA on the reduced feature set.
    PrincipalComponents pca = new PrincipalComponents();
    pca.setVarianceCovered(var);
    ranker = new Ranker();
    ranker.setNumToSelect(numFeatures);
    AttributeSelection selection = new AttributeSelection();
    selection.setEvaluator(pca);
    selection.setSearch(ranker);
    selection.SelectAttributes(instances);
    instances = selection.reduceDimensionality(instances);
    

    However, this method scored worse than a greedy information-gain ranker alone when I cross-validated for estimated accuracy.
