When using multiple classifiers - How to measure the ensemble's performance? [SciKit Learn]

说谎 2021-02-06 06:28

I have a classification problem (predicting whether a sequence belongs to a class or not), for which I decided to use multiple classification methods, in order to help filter out…

2 Answers
  • 2021-02-06 06:58

    You can use a linear regression for stacking. For each fold of 10-fold cross-validation, you can split the data into:

    • 8 folds for training
    • 1 fold for validation
    • 1 fold for testing

    Optimise the hyper-parameters of each algorithm using the training and validation sets, then stack your predictions by fitting a linear regression - or a logistic regression - over the validation set. Your final model will be p = a_0 + a_1 p_1 + … + a_K p_K, where K is the number of classifiers, p_k is the probability given by model k, and a_k is the weight of model k. You can also use the predicted labels directly if a model doesn't give you probabilities.
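    A minimal sketch of this blending step, assuming two illustrative base models (a random forest and an SVM) and synthetic data: the base models are fit on the training split, their positive-class probabilities on the validation split become the features p_1, p_2, and a logistic regression learns the weights a_k on top of them.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder data standing in for the sequence features.
    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Example base models; substitute your own tuned classifiers.
    base_models = [
        RandomForestClassifier(random_state=0),
        SVC(probability=True, random_state=0),
    ]
    for m in base_models:
        m.fit(X_train, y_train)

    # Columns are p_1 ... p_K: each model's probability of the positive class
    # on the validation set (never on the data the base models were fit on).
    P_val = np.column_stack(
        [m.predict_proba(X_val)[:, 1] for m in base_models]
    )

    # The meta-model learns the weights a_k of the blend.
    blender = LogisticRegression().fit(P_val, y_val)
    print(blender.coef_)
    ```

    The blended prediction for new data is then `blender.predict_proba(P_new)`, where `P_new` stacks the base models' probabilities the same way.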

    If your models are of the same kind, you can optimise the models' parameters and the weights at the same time.

    If you have obvious differences in the data, you can create separate bins with different parameters for each. For example, one bin could hold short sequences and the other long sequences, or different types of proteins.

    You can use whatever metric you want, as long as it makes sense, just as for non-blended algorithms.

    You may want to look at the 2007 BellKor solution to the Netflix challenge, in particular the section on blending. In 2008 and 2009 they used more advanced techniques, which may also be interesting for you.

  • 2021-02-06 07:02

    To evaluate the performance of the ensemble, follow the same approach you would normally use. First create the 10-fold partition of the data set; then, for each fold, train every member of the ensemble on that fold's training portion, measure the ensemble's accuracy on the held-out portion, and repeat over the remaining folds before averaging. The key difference is that you do not run a separate k-fold cross-validation for each individual algorithm when evaluating the ensemble. The important thing is not to let the ensemble see the test data, either directly or by letting one of its algorithms see it.
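    A minimal sketch of this, assuming an illustrative soft-voting ensemble and synthetic data: `cross_val_score` retrains the whole ensemble from scratch on each fold's training portion and scores it on the held-out portion, so no member ever sees that fold's test data.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (
        ExtraTreesClassifier,
        RandomForestClassifier,
        VotingClassifier,
    )
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Placeholder data standing in for the sequence features.
    X, y = make_classification(n_samples=300, random_state=0)

    # Example ensemble; the members are assumptions, not the asker's models.
    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("et", ExtraTreesClassifier(n_estimators=50, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",  # average the members' predicted probabilities
    )

    # One 10-fold evaluation of the ensemble as a whole, not of its members.
    scores = cross_val_score(ensemble, X, y, cv=10, scoring="accuracy")
    print(scores.mean(), scores.std())
    ```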

    Note also that RF and Extra Trees are already ensemble algorithms in their own right.

    An alternative approach (again making sure the ensemble never sees the test data) is to take the probabilities and/or labels output by your classifiers and feed them into another classifier (say a DT, RF, SVM, or whatever) that produces a prediction by combining the best guesses from these other classifiers. This is termed "stacking".
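    scikit-learn ships this pattern directly as `StackingClassifier` (available since version 0.22): the base classifiers' predictions become the inputs of a final estimator. A sketch with illustrative base models and synthetic data:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder data standing in for the sequence features.
    X, y = make_classification(n_samples=300, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("svc", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(),
        cv=5,  # the final estimator is trained on out-of-fold predictions
    )
    stack.fit(X_train, y_train)
    print(stack.score(X_test, y_test))
    ```

    The `cv` parameter matters here: it ensures the meta-classifier is trained on predictions the base models made for data they did not see, which is the same leakage concern raised above.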
