Question
I am using recursive feature elimination with cross-validation (RFECV) together with GridSearchCV and a RandomForestClassifier, both with and without a pipeline.
My code with the pipeline is as follows.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.pipeline import Pipeline

X = df[my_features_all]
y = df['gold_standard']

# get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# cross-validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# this is the classifier used for feature selection
clf_featr_sele = RandomForestClassifier(random_state=42, class_weight="balanced")
rfecv = RFECV(estimator=clf_featr_sele, step=1, cv=k_fold, scoring='roc_auc')

# note: 'auto' was a valid max_features value in older scikit-learn; it has been removed in recent versions
param_grid = {'n_estimators': [200, 500],
              'max_features': ['auto', 'sqrt', 'log2'],
              'max_depth': [3, 4, 5]}

# you can have a different classifier as your final classifier
clf = RandomForestClassifier(random_state=42, class_weight="balanced")
CV_rfc = GridSearchCV(estimator=clf, param_grid=param_grid, cv=k_fold,
                      scoring='roc_auc', verbose=10, n_jobs=5)

pipeline = Pipeline([('feature_sele', rfecv), ('clf_cv', CV_rfc)])
pipeline.fit(x_train, y_train)
The results (with the pipeline) are:
Optimal features: 29
Best hyperparameters: {'max_depth': 3, 'max_features': 'auto', 'n_estimators': 500}
Best score: 0.714763
My code without a pipeline is as follows.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV

X = df[my_features_all]
y = df['gold_standard']

# get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# cross-validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

clf = RandomForestClassifier(random_state=42, class_weight="balanced")
rfecv = RFECV(estimator=clf, step=1, cv=k_fold, scoring='roc_auc')

# the estimator__ prefix routes each parameter to the RandomForestClassifier inside RFECV
param_grid = {'estimator__n_estimators': [200, 500],
              'estimator__max_features': ['auto', 'sqrt', 'log2'],
              'estimator__max_depth': [3, 4, 5]}

CV_rfc = GridSearchCV(estimator=rfecv, param_grid=param_grid, cv=k_fold,
                      scoring='roc_auc', verbose=10, n_jobs=5)
CV_rfc.fit(x_train, y_train)
The results (without the pipeline) are:
Optimal features: 4
Best hyperparameters: {'max_depth': 3, 'max_features': 'auto', 'n_estimators': 500}
Best score: 0.756835
Even though the concept behind both approaches is similar, I get different results and different selected features (as shown in the results above). However, I get the same hyperparameter values.
I am just wondering why this difference happens. Which approach (with or without the pipeline) is the more suitable one for this task?
I am happy to provide more details if needed.
Answer 1:
In the with-pipeline case, feature selection (RFECV) is carried out with the fixed base model (RandomForestClassifier(random_state=42, class_weight="balanced")) before GridSearchCV is applied to the final estimator. The feature subset is therefore chosen once and stays the same throughout the hyperparameter search.
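To see which features the pipeline's RFECV step kept, you can inspect the fitted step directly. A minimal sketch, assuming `pipeline` has been fitted as in the question and that `x_train` is still a pandas DataFrame (the variable name `fitted_rfecv` is just for illustration):

fitted_rfecv = pipeline.named_steps['feature_sele']   # the fitted RFECV step
print(fitted_rfecv.n_features_)                       # number of selected features (29 here)
print(x_train.columns[fitted_rfecv.support_])         # names of the selected columns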
In the without-pipeline case, the estimator for each hyperparameter combination is used for feature selection (RFECV), because the estimator__ prefix in param_grid routes the parameters to the random forest inside RFECV. Hence it is more time consuming, and the selected feature subset can differ from one combination to the next. This is why the two approaches report different optimal feature counts and best scores even though they agree on the best hyperparameters.
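In this case the feature subset that goes with the best hyperparameters can be read off the refitted RFECV. A minimal sketch, assuming `CV_rfc` has been fitted as in the question with GridSearchCV's default refit=True (the variable name `best_rfecv` is illustrative):

best_rfecv = CV_rfc.best_estimator_           # the RFECV refitted with the best hyperparameters
print(best_rfecv.n_features_)                 # number of selected features (4 here)
print(x_train.columns[best_rfecv.support_])   # names of the selected columns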
Source: https://stackoverflow.com/questions/55671530/why-do-i-get-different-values-with-pipline-and-without-pipline-in-sklearn-in-pyt