Are the k-fold cross-validation scores from scikit-learn's `cross_val_score` and `GridSearchCV` biased if we include transformers in the pipeline?

别跟我提以往 2021-01-12 17:13

Data pre-processors such as StandardScaler should be used to fit_transform the train set and only transform (not fit) the test set. I expect the same fit/transform process to apply inside cross-validation: the scaler should be fit on the training folds of each split and only transform the held-out fold. Do `cross_val_score` and `GridSearchCV` handle this correctly when the transformer is part of the pipeline, or are the reported scores biased?
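
For concreteness, a minimal sketch of that convention on a single train/test split (the breast-cancer dataset here is only an illustrative choice):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    sc = StandardScaler()
    X_train_sc = sc.fit_transform(X_train)  # fit the scaler on the train set only
    X_test_sc = sc.transform(X_test)        # reuse the training statistics on the test set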

2 Answers
  • 2021-01-12 17:33

    Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test

    A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:

    A model is trained using k-1 of the folds as training data; the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy). The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small.
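
    As a minimal sketch of that per-fold procedure (the dataset and LogisticRegression are only illustrative choices):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X, y = load_breast_cancer(return_X_y=True)
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        # train on k-1 folds, evaluate on the held-out fold
        model = LogisticRegression(max_iter=5000)
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    print(sum(scores) / len(scores))  # the reported CV score is the average over the folds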

    Moreover, if your dataset is imbalanced to begin with, you should balance it, for example with SMOTE, by oversampling the minority class, or by under-sampling the majority class. Such resampling should itself happen inside the cross-validation, as in the sketch below.
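
    A hedged sketch, assuming the separate imbalanced-learn package is available; its Pipeline applies the sampler only to the training folds of each split:

    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    pipe = Pipeline([
        ('smote', SMOTE(random_state=0)),            # oversample the minority class
        ('clf', LogisticRegression(max_iter=5000)),
    ])
    print(cross_val_score(pipe, X, y, cv=5))         # SMOTE is fit on the training folds only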

  • 2021-01-12 17:42

    No, sklearn doesn't do fit_transform on the entire dataset when the transformer is inside the pipeline.

    To check this, I subclassed StandardScaler to print the size of the dataset sent to it.

    from sklearn.preprocessing import StandardScaler

    class StScaler(StandardScaler):
        def fit_transform(self, X, y=None):
            print(len(X))  # report how many rows the scaler is fitted on
            return super().fit_transform(X, y)
    

    If you now replace StandardScaler in your code with this class, you'll see that the dataset size passed in the first case is actually bigger.

    But why does the accuracy remain exactly the same? I think this is because LogisticRegression is not very sensitive to feature scale. If we instead use a classifier that is very sensitive to scale, for example KNeighborsClassifier, you'll find that the accuracy between the two cases starts to vary.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_sc = StScaler().fit_transform(X)         # scaler fitted on the full dataset at once
    knn = KNeighborsClassifier(n_neighbors=1)
    print(cross_val_score(knn, X_sc, y, cv=5))
    

    Outputs:

    569
    [0.94782609 0.96521739 0.97345133 0.92920354 0.9380531 ]
    

    And the 2nd case,

    from sklearn.pipeline import Pipeline

    pipe = Pipeline([
        ('sc', StScaler()),                    # refit inside each training fold
        ('knn', KNeighborsClassifier(n_neighbors=1))
    ])
    print(cross_val_score(pipe, X, y, cv=5))
    

    Outputs:

    454
    454
    456
    456
    456
    [0.95652174 0.97391304 0.97345133 0.92920354 0.9380531 ]
    

    Not a big change accuracy-wise, but a change nonetheless.
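
    The question also mentions GridSearchCV; it evaluates the pipeline the same way, refitting the transformer on the training folds of every split. A minimal sketch, reusing X and y from above (the parameter grid is only illustrative):

    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    grid_pipe = Pipeline([
        ('sc', StandardScaler()),
        ('knn', KNeighborsClassifier())
    ])
    # the scaler is refit on the training folds of every split, so the
    # held-out fold never leaks into the fitted statistics
    search = GridSearchCV(grid_pipe, {'knn__n_neighbors': [1, 3, 5]}, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)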
