Over-Sampling Class Imbalance Train/Test Split “Found input variables with inconsistent numbers of samples” Solution?

Submitted by 旧城冷巷雨未停 on 2020-01-06 02:25:46

Question


I am trying to follow the article below to perform over-sampling for imbalanced classification. My class ratio is about 8:1.

https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets/notebook

I am confused about the pipeline and coding structure.

  • Should you over-sample after the train/test split?
    • If so, how do you deal with the fact that the target label is dropped from X? I tried keeping the label in X, performed the over-sampling, then dropped the label from X_train/X_test and swapped the new training set into my pipeline. However, I get the error "Found input variables with inconsistent numbers of samples" because the shapes no longer match: the over-sampled training DataFrame is roughly doubled, with a 50/50 label distribution, while y_train still has the original length.

I understand the issue, but how does one solve it when wanting to perform over-sampling to reduce class imbalance?


    import pandas as pd
    from scipy.stats import randint as sp_randint
    from sklearn.model_selection import train_test_split, StratifiedKFold, RandomizedSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler, MaxAbsScaler
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.ensemble import RandomForestClassifier

    X = df  # keep the label in X for now so the training set can later be split by class
    #X = df.drop("label", axis=1)
    y = df["label"]

    X_train,\
    X_test,\
    y_train,\
    y_test = train_test_split(X,\
                              y,\
                              test_size=0.2,\
                              random_state=11,\
                              shuffle=True,\
                              stratify=y)

    target_count = df.label.value_counts()
    print('Class 1:', target_count[1])
    print('Class 0:', target_count[0])
    print('Proportion:', round(target_count[1] / target_count[0], 2), ': 1')

    target_count.plot(kind='bar', title='Count (target)');

    # Class counts in the training set (value_counts sorts the majority class first)
    count_class_1, count_class_0 = X_train.label.value_counts()

    # Divide the training set by class
    df_class_1 = X_train[X_train['label'] == 1]   # majority class
    df_class_0 = X_train[X_train['label'] == 0]   # minority class

    # Over-sample the minority class with replacement up to the majority count
    df_class_0_over = df_class_0.sample(count_class_1, replace=True)
    df_test_over = pd.concat([df_class_1, df_class_0_over], axis=0)

    print('Random over-sampling:')
    print(df_test_over.label.value_counts())

    Random over-sampling:
    1    12682
    0    12682

    df_test_over.label.value_counts().plot(kind='bar', title='Count (target)')

    # drop label for new X_train and X_test
    X_train_OS = df_test_over.drop("label", axis=1)
    X_test = X_test.drop("label", axis=1)

    print(X_train_OS.shape)
    print(X_test.shape)

    print(y_train.shape)
    print(y_test.shape)

    (25364, 9)
    (3552, 9)
    (14207,)
    (3552,)

    cat_transformer = Pipeline(steps=[
        ('cat_imputer', SimpleImputer(strategy='constant', fill_value='missing')),
        ('cat_ohe', OneHotEncoder(handle_unknown='ignore'))])

    num_transformer = Pipeline(steps=[
        ('num_imputer', SimpleImputer(strategy='constant', fill_value=0)),
        ('num_scaler', StandardScaler())])

    text_transformer_0 = Pipeline(steps=[
        ('text_bow', CountVectorizer(lowercase=True,\
                                     token_pattern=SPLIT_PATTERN,\
                                     stop_words=stopwords))])
    # SelectKBest()
    # TruncatedSVD()

    text_transformer_1 = Pipeline(steps=[
        ('text_bow', CountVectorizer(lowercase=True,\
                                     token_pattern=SPLIT_PATTERN,\
                                     stop_words=stopwords))])
    # SelectKBest()
    # TruncatedSVD()

    FE = ColumnTransformer(
        transformers=[
            ('cat', cat_transformer, CAT_FEATURES),
            ('num', num_transformer, NUM_FEATURES),
            ('text0', text_transformer_0, TEXT_FEATURES[0]),
            ('text1', text_transformer_1, TEXT_FEATURES[1])])

    pipe = Pipeline(steps=[('feature_engineer', FE),
                         ("scales", MaxAbsScaler()),
                         ('rand_forest', RandomForestClassifier(n_jobs=-1, class_weight='balanced'))])

    random_grid = {"rand_forest__max_depth": [3, 10, 100, None],\
                  "rand_forest__n_estimators": sp_randint(10, 100),\
                  "rand_forest__max_features": ["auto", "sqrt", "log2", None],\
                  "rand_forest__bootstrap": [True, False],\
                  "rand_forest__criterion": ["gini", "entropy"]}

    strat_shuffle_fold = StratifiedKFold(n_splits=5,\
      random_state=123,\
      shuffle=True)

    cv_train = RandomizedSearchCV(pipe, param_distributions=random_grid, cv=strat_shuffle_fold)
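    # NOTE: this fit raises "Found input variables with inconsistent numbers of samples":
    # X_train_OS has 25364 rows after over-sampling, but y_train still has the original 14207.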
    cv_train.fit(X_train_OS, y_train)

    from sklearn.metrics import classification_report, confusion_matrix
    preds = cv_train.predict(X_test)
    print(confusion_matrix(y_test, preds))
    print(classification_report(y_test, preds))


Answer 1:


The problem you are having here is solved very easily (and arguably more elegantly) by SMOTE. It is easy to use and lets you keep the X_train, X_test, y_train, y_test syntax from train_test_split, because it performs the over-sampling on X and y at the same time.

from imblearn.over_sampling import SMOTE

X_train, X_test, y_train, y_test = train_test_split(X,y)
sm = SMOTE(random_state=42)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)
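
Because SMOTE interpolates numeric feature vectors, with a mixed text/categorical dataset like the one in the question it is often easier to resample after the feature-engineering step. A minimal sketch of that idea, assuming the FE ColumnTransformer and the X_train/X_test/y_train variables from the question, uses imblearn's pipeline so the over-sampling is applied only to the training data during fitting and cross-validation:

from imblearn.pipeline import Pipeline as ImbPipeline
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

# Over-sample inside the pipeline, after the features have been made numeric,
# so the test data and validation folds are never resampled.
pipe_smote = ImbPipeline(steps=[
    ('feature_engineer', FE),             # ColumnTransformer from the question
    ('smote', SMOTE(random_state=42)),
    ('rand_forest', RandomForestClassifier(n_jobs=-1))])

pipe_smote.fit(X_train, y_train)
preds = pipe_smote.predict(X_test)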



Answer 2:


So I believe I solved my own question. The problem was how I was splitting the data. I normally follow the standard X_train, X_test, y_train, y_test pattern with train_test_split, but that was causing the row-count mismatch between X_train and y_train when over-sampling, so I split the full DataFrame first instead and everything appears to be working. Please let me know if anyone has any recommendations. Thanks!

features = df_
target = df_l["label"]

train_set, test_set = train_test_split(features, test_size=0.2,\
                          random_state=11,\
                          shuffle=True)

print(train_set.shape)
print(test_set.shape)

(11561, 10)
(2891, 10)

count_class_1, count_class_0 = train_set.label.value_counts()

# Divide by class
df_class_1 = train_set[train_set['label'] == 1]
df_class_0 = train_set[train_set['label'] == 0]

df_class_0_over = df_class_0.sample(count_class_1, replace=True)
df_train_OS = pd.concat([df_class_1, df_class_0_over], axis=0)

print('Random over-sampling:')
print(df_train_OS.label.value_counts())

1    10146
0    10146

df_train_OS.label.value_counts().plot(kind='bar', title='Count (target)');

X_train_OS = df_train_OS.drop("label", axis=1)
y_train_OS = df_train_OS["label"]
X_test = test_set.drop("label", axis=1)
y_test = test_set["label"]

print(X_train_OS.shape)
print(y_train_OS.shape)
print(X_test.shape)
print(y_test.shape)

(20295, 9)
(20295,)
(2891, 9)
(2891,)
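
With the shapes now consistent, the rest of the pipeline from the question can be reused unchanged. A minimal sketch, assuming pipe, cv_train, and the metric imports from the question are still in scope: fit on the over-sampled training data and evaluate on the untouched, still-imbalanced test split.

# Fit the question's randomized search on the over-sampled training data
cv_train.fit(X_train_OS, y_train_OS)

# Evaluate on the original test split, which was never over-sampled
preds = cv_train.predict(X_test)
print(confusion_matrix(y_test, preds))
print(classification_report(y_test, preds))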


Source: https://stackoverflow.com/questions/55814015/over-sampling-class-imbalance-train-test-split-found-input-variables-with-incon
