GridSearchCV/RandomizedSearchCV with LSTM

Submitted by 邮差的信 on 2019-12-04 05:37:44

Question


I am stuck trying to tune hyperparameters for an LSTM via RandomizedSearchCV.

My code is below:

X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.wrappers.scikit_learn import KerasClassifier
from keras.initializers import RandomNormal
from sklearn.model_selection import TimeSeriesSplit, RandomizedSearchCV
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

def create_model(activation_1='relu', activation_2='relu',
                 neurons_input = 1, neurons_hidden_1=1,
                 optimizer='Adam' , 
                 #input_shape = (X_train.shape[1], X_train.shape[2])
                 #input_shape=(X_train.shape[0],X_train.shape[1]) #input shape should be timesteps, features
                                                                 ):

  model = Sequential()
  model.add(LSTM(neurons_input, activation=activation_1, input_shape=(X_train.shape[1], X_train.shape[2]),
                  kernel_initializer=RandomNormal(mean=0.0, stddev=0.05, seed=42), 
                  bias_initializer=RandomNormal(mean=0.0, stddev=0.05, seed=42)))

  model.add(Dense(2, activation='sigmoid'))

  model.compile (loss = 'sparse_categorical_crossentropy', optimizer=optimizer)
  return model


clf=KerasClassifier(build_fn=create_model, epochs=10, verbose=0)

param_grid = {
    'clf__neurons_input':   [20, 25, 30, 35],
    'clf__batch_size': [40,60,80,100], 
    'clf__optimizer': ['Adam', 'Adadelta']}



pipe = Pipeline([
    ('oversample', SMOTE(random_state=12)),
    ('clf', clf)
    ])

my_cv = TimeSeriesSplit(n_splits=5).split(X_train)

rs_keras = RandomizedSearchCV(pipe, param_grid, cv=my_cv, scoring='f1_macro', 
                              refit='f1_macro', verbose=3,n_jobs=1, random_state=42)
rs_keras.fit(X_train, y_train)

I keep getting this error:

Found array with dim 3. Estimator expected <= 2.

which makes sense, as both GridSearchCV and RandomizedSearchCV expect an array of shape [n_samples, n_features]. Does anyone have experience with, or a suggestion for, how to deal with this limitation?

Thank you.

Here is the full traceback of the error:

Traceback (most recent call last):

  File "<ipython-input-2-b0be4634c98a>", line 1, in <module>
    runfile('Scratch/prediction_lstm.py', wdir='/Simulations/2017-2018/Scratch')

  File "\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "Scratch/prediction_lstm.py", line 204, in <module>
    rs_keras.fit(X_train, y_train)

  File "Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 722, in fit
    self._run_search(evaluate_candidates)

  File "\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 1515, in _run_search
    random_state=self.random_state))

  File "\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py", line 711, in evaluate_candidates
    cv.split(X, y, groups)))

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 917, in __call__
    if self.dispatch_one_batch(iterator):

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 759, in dispatch_one_batch
    self._dispatch(tasks)

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 716, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 182, in apply_async
    result = ImmediateResult(func)

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 549, in __init__
    self.results = batch()

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in __call__
    for func, args, kwargs in self.items]

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in <listcomp>
    for func, args, kwargs in self.items]

  File "\Anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 528, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)

  File "\Anaconda3\lib\site-packages\imblearn\pipeline.py", line 237, in fit
    Xt, yt, fit_params = self._fit(X, y, **fit_params)

  File "\Anaconda3\lib\site-packages\imblearn\pipeline.py", line 200, in _fit
    cloned_transformer, Xt, yt, **fit_params_steps[name])

  File "\Anaconda3\lib\site-packages\sklearn\externals\joblib\memory.py", line 342, in __call__
    return self.func(*args, **kwargs)

  File "\Anaconda3\lib\site-packages\imblearn\pipeline.py", line 576, in _fit_resample_one
    X_res, y_res = sampler.fit_resample(X, y, **fit_params)

  File "\Anaconda3\lib\site-packages\imblearn\base.py", line 80, in fit_resample
    X, y, binarize_y = self._check_X_y(X, y)

  File "\Anaconda3\lib\site-packages\imblearn\base.py", line 138, in _check_X_y
    X, y = check_X_y(X, y, accept_sparse=['csr', 'csc'])

  File "\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 756, in check_X_y
    estimator=estimator)

  File "\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 570, in check_array
    % (array.ndim, estimator_name))

ValueError: Found array with dim 3. Estimator expected <= 2.

Answer 1:


This problem is not due to scikit-learn. RandomizedSearchCV does not check the shape of the input; it is the job of each individual transformer or estimator to verify that the data it receives has the correct shape. As you can see from the stack trace, the error is raised by imblearn, because SMOTE requires the data to be 2-D in order to work.
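
To see where the check happens, here is a minimal sketch (not part of the original post, using a small random array as stand-in data): SMOTE's fit_resample runs sklearn's check_X_y, which rejects anything that is not 2-D.

import numpy as np
from imblearn.over_sampling import SMOTE

X_3d = np.random.rand(100, 1, 8)        # (samples, timesteps, features)
y = np.random.randint(0, 2, size=100)   # binary labels

try:
    SMOTE(random_state=12).fit_resample(X_3d, y)
except ValueError as e:
    print(e)  # Found array with dim 3. Estimator expected <= 2.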

To avoid that, you can reshape the data manually after SMOTE and before passing it to the LSTM. There are multiple ways to achieve this.

1) Pass 2-D data (without the explicit reshaping you currently do in the following lines):

X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))

to your pipeline, and after the SMOTE step but before your clf step, reshape the data into 3-D and then pass it to clf.

2) Pass your current 3-D data to the pipeline and transform it into 2-D to be used with SMOTE. SMOTE will then output new, oversampled 2-D data, which you then reshape back into 3-D.
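
As a rough illustration of option 2 (this snippet is not from the original post; it does the round trip manually, outside the pipeline, just to show the shapes involved):

from imblearn.over_sampling import SMOTE

# X_train is 3-D here: (samples, timesteps, features)
n_samples, n_timesteps, n_features = X_train.shape

# Flatten to 2-D so SMOTE accepts it
X_2d = X_train.reshape(n_samples, n_timesteps * n_features)
X_res, y_res = SMOTE(random_state=12).fit_resample(X_2d, y_train)

# Reshape the oversampled data back to 3-D for the LSTM
X_res = X_res.reshape(X_res.shape[0], n_timesteps, n_features)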

I think the better option is 1. Even within that, you can either:

  • write a custom class to transform the data from 2-D to 3-D, like the following (a minimal sketch of such a class is given after this list):

    pipe = Pipeline([
        ('oversample', SMOTE(random_state=12)),

        # Check out custom scikit-learn transformers
        # You need to implement your reshape logic in the "transform()" method
        ('reshaper', CustomReshaper()),
        ('clf', clf)
    ])
    
  • or use the Reshape layer that Keras already provides. I am using Reshape below.
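
A minimal sketch of such a custom transformer for the first option (CustomReshaper is a hypothetical name used for illustration; it is not a class that ships with scikit-learn):

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class CustomReshaper(BaseEstimator, TransformerMixin):
    """Reshape the 2-D (samples, features) output of SMOTE
    into 3-D (samples, 1, features) for the LSTM."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X = np.asarray(X)
        return X.reshape(X.shape[0], 1, X.shape[1])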

So the modified code would be (see the comments):

# Remove the following two lines, so the data is 2-D while going to "RandomizedSearchCV". 

#    X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
#    X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))


from keras.layers import Reshape

def create_model(activation_1='relu', activation_2='relu',
                 neurons_input = 1, neurons_hidden_1=1,
                 optimizer='Adam' ,):

  model = Sequential()

  # Add this before LSTM. The tuple denotes the last two dimensions of input
  model.add(Reshape((1, X_train.shape[1])))
  model.add(LSTM(neurons_input, 
                 activation=activation_1, 

                 # Since the data is 2-D, the following needs to be changed from "X_train.shape[1], X_train.shape[2]"
                 input_shape=(1, X_train.shape[1]),
                 kernel_initializer=RandomNormal(mean=0.0, stddev=0.05, seed=42), 
                 bias_initializer=RandomNormal(mean=0.0, stddev=0.05, seed=42)))

  model.add(Dense(2, activation='sigmoid'))

  model.compile (loss = 'sparse_categorical_crossentropy', optimizer=optimizer)
  return model
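
With the reshape handled inside the model, the rest of the setup from the question can stay as it is. Putting the pieces together (a sketch using the same parameter values as in the question, now fed the 2-D X_train):

clf = KerasClassifier(build_fn=create_model, epochs=10, verbose=0)

param_grid = {
    'clf__neurons_input': [20, 25, 30, 35],
    'clf__batch_size': [40, 60, 80, 100],
    'clf__optimizer': ['Adam', 'Adadelta']}

pipe = Pipeline([
    ('oversample', SMOTE(random_state=12)),
    ('clf', clf)
    ])

my_cv = TimeSeriesSplit(n_splits=5).split(X_train)

# X_train is 2-D now, so SMOTE no longer raises the dimension error;
# the Reshape layer restores the (1, features) shape inside the model.
rs_keras = RandomizedSearchCV(pipe, param_grid, cv=my_cv, scoring='f1_macro',
                              refit='f1_macro', verbose=3, n_jobs=1, random_state=42)
rs_keras.fit(X_train, y_train)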


Source: https://stackoverflow.com/questions/55774632/gridsearchcv-randomizedsearchcv-with-lstm
