When using multiple classifiers - How to measure the ensemble's performance? [SciKit Learn]

说谎  2021-02-06 06:28

I have a classification problem (predicting whether a sequence belongs to a class or not), for which I decided to use multiple classification methods, in order to help filter out…

2 Answers
  •  春和景丽  2021-02-06 06:58

    You can use linear regression for stacking. In each round of a 10-fold split, you can partition the data into (see the sketch after this list):

    • 8 folds for training
    • 1 fold for validation
    • 1 fold for testing
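
    Here is a minimal sketch of that split using scikit-learn's KFold; X and y are placeholder arrays standing in for your sequence features and labels:

        import numpy as np
        from sklearn.model_selection import KFold

        X = np.random.rand(200, 5)            # placeholder features
        y = np.random.randint(0, 2, 200)      # placeholder binary labels

        kf = KFold(n_splits=10, shuffle=True, random_state=0)
        folds = [test_idx for _, test_idx in kf.split(X)]

        # One arrangement of the 10 folds; rotate these roles across rounds.
        val_idx, test_idx = folds[0], folds[1]
        train_idx = np.concatenate(folds[2:])  # the remaining 8 folds
        X_train, y_train = X[train_idx], y[train_idx]
        X_val, y_val = X[val_idx], y[val_idx]
        X_test, y_test = X[test_idx], y[test_idx]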

    Optimise the hyper-parameters of each algorithm using the training and validation sets, then stack your predictions by fitting a linear regression (or a logistic regression) over the validation set. Your final model will be p = a_0 + a_1 p_1 + … + a_K p_K, where K is the number of classifiers, p_k is the probability given by model k, and a_k is the weight of model k. You can also use the predicted outcomes directly if a model doesn't give you probabilities.
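
    Continuing the sketch above, the stacking step might look like this (the two base models are illustrative assumptions, not prescribed choices):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.linear_model import LogisticRegression

        base_models = [
            RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(probability=True, random_state=0),
        ]
        for m in base_models:
            m.fit(X_train, y_train)

        # Column k holds p_k, model k's class-1 probability on the validation fold.
        P_val = np.column_stack([m.predict_proba(X_val)[:, 1] for m in base_models])

        # The blender learns the intercept a_0 and the weights a_1 ... a_K.
        blender = LogisticRegression()
        blender.fit(P_val, y_val)

        # Apply the blend to the untouched test fold.
        P_test = np.column_stack([m.predict_proba(X_test)[:, 1] for m in base_models])
        p_ensemble = blender.predict_proba(P_test)[:, 1]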

    If your models are all of the same type, you can optimise the model parameters and the blending weights at the same time.
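
    In scikit-learn, one way to do that joint optimisation is StackingClassifier combined with GridSearchCV; the parameter grid below is only an illustrative assumption:

        from sklearn.ensemble import StackingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV

        stack = StackingClassifier(
            estimators=[("rf", RandomForestClassifier(random_state=0))],
            final_estimator=LogisticRegression(),
            cv=5,
        )
        search = GridSearchCV(
            stack,
            param_grid={
                "rf__n_estimators": [100, 300],           # base-model parameter
                "final_estimator__C": [0.1, 1.0, 10.0],   # blend regularisation
            },
            cv=5,
        )
        search.fit(X_train, y_train)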

    If there are obvious differences in your data, you can use different bins with different parameters for each: for example, one bin for short sequences and another for long sequences, or separate bins for different types of proteins.

    You can use whatever metric you want, as long as it makes sense, just as you would for non-blended algorithms.
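
    For instance, continuing the sketch above, you could score the blended test-fold predictions with any standard scikit-learn metric, e.g. ROC AUC or accuracy:

        from sklearn.metrics import roc_auc_score, accuracy_score

        print("ensemble ROC AUC: ", roc_auc_score(y_test, p_ensemble))
        print("ensemble accuracy:", accuracy_score(y_test, blender.predict(P_test)))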

    You may want to look at the 2007 BellKor solution to the Netflix Prize, in particular the section on blending. In 2008 and 2009 they used more advanced techniques, which may also be interesting for you.
