Avoiding overfitting with H2OGradientBoostingEstimator


Question


The gap between training and cross-validation ROC AUC with H2OGradientBoostingEstimator remains large despite my best attempts to close it with min_split_improvement.

Using the same data with GradientBoostingClassifier(min_samples_split=10) results in no overfitting, but I can find no direct analogue of min_samples_split in H2O (min_rows limits the observations per leaf, which corresponds to min_samples_leaf rather than to a minimum split size). A sketch of that baseline follows the data preparation below.

Prepare Data

import pandas as pd
from sklearn.datasets import make_classification

# Synthetic binary classification data: 10,000 rows, 40 features (25 informative)
X, y = make_classification(n_samples=10000, n_features=40,
                           n_clusters_per_class=10,
                           n_informative=25,
                           random_state=12, shuffle=False)

features = ["x%02d" % (i) for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df["y"] = y
nfolds = 5

import h2o
h2o.init()

# H2O expects the target to be a factor for classification
h2of = h2o.H2OFrame(df)
h2of["y"] = h2of["y"].asfactor()

Run modeling

def print_h2o_auc(m):
    # Training AUC vs. mean cross-validated AUC; the CV metrics summary keeps
    # its metric names in an unnamed (empty-string) column.
    xv = m.cross_validation_metrics_summary().as_data_frame()
    xv_auc = float(xv.set_index("").loc["auc", "mean"])
    print("{m} train: {a:.2%} xv: {x:.2%}".format(
        m=m.model_id, a=m.auc(), x=xv_auc))

from h2o.estimators.gbm import H2OGradientBoostingEstimator

# Sweep min_split_improvement over several orders of magnitude
for msi in [0.00001, 0.0001, 0.001, 0.01, 0.1]:
    m = H2OGradientBoostingEstimator(
        model_id="gbm %g" % (msi),
        ntrees=100, max_depth=3, min_rows=100, min_split_improvement=msi,
        nfolds=nfolds, fold_assignment="stratified",
        keep_cross_validation_predictions=True, seed=1)
    m.train(x=features, y="y", training_frame=h2of)
    print_h2o_auc(m)

Prints

gbm 1e-05 train: 84.35% xv: 77.12%
gbm 0.0001 train: 84.35% xv: 77.12%
gbm 0.001 train: 82.71% xv: 76.53%
gbm 0.01 train: 68.06% xv: 65.49%
gbm 0.1 train: 50.00% xv: 50.00%

In other words, the gap between training and cross-validation AUC remains significant (even though it narrows as min_split_improvement grows, until the model stops learning entirely at 0.1).

What else can I try to reduce overfitting?
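For reference, H2O's GBM exposes further regularization levers beyond min_split_improvement: learning-rate shrinkage (paired with a larger tree budget), row and column subsampling, and early stopping on the cross-validated metric. Below is a hedged sketch combining them; the specific values are illustrative assumptions, not tested settings.

# Illustrative values only: shrinkage + subsampling + early stopping
m = H2OGradientBoostingEstimator(
    model_id="gbm regularized",
    ntrees=500, max_depth=3, min_rows=100,
    learn_rate=0.05,        # shrinkage; compensated by a larger ntrees budget
    sample_rate=0.8,        # row subsampling per tree
    col_sample_rate=0.8,    # column subsampling per split
    stopping_rounds=5,      # halt once AUC stops improving
    stopping_metric="AUC", stopping_tolerance=1e-4,
    nfolds=nfolds, fold_assignment="stratified",
    keep_cross_validation_predictions=True, seed=1)
m.train(x=features, y="y", training_frame=h2of)
print_h2o_auc(m)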

Source: https://stackoverflow.com/questions/53656780/avoiding-overfitting-with-h2ogradientboostingestimator
