How to use a Keras RNN model to forecast for future dates or events?

Asked by 傲寒, 2020-11-29 05:05

Here is my code for training the complete model and saving it:

num_units = 2
activation_function = 'sigmoid'
optimizer = 'adam'
loss_function = 'mea


        
1 Answer
  • Answered 2020-11-29 05:15

    Well, you need a stateful=True model, so you can feed it one prediction after another to get the next, keeping the model aware that each input is not a new sequence but a continuation of the previous one.

    Fixing the code and training

    I see in the code that there is an attempt to make your y be a shifted x (a good option for predicting the next steps). But there is also a big problem in the preprocessing here:

    training_set = df_train.values
    training_set = min_max_scaler.fit_transform(training_set)
    
    x_train = training_set[0:len(training_set)-1]
    y_train = training_set[1:len(training_set)]
    x_train = np.reshape(x_train, (len(x_train), 1, 1))
    

    Data for LSTM layers must be shaped as (number_of_sequences, number_of_steps, features).

    So, you're clearly creating sequences of only 1 step, meaning that your LSTM is not learning sequences at all. (There is no sequence with only one step.)

    Assuming that your data is a single unique sequence with 1 feature, it should definitely be shaped as (1, len(x_train), 1).

    Naturally, y_train should also have the same shape.

    This, in turn, will require that your LSTM layers use return_sequences=True - the only way to make y have a length in steps. Also, for a good prediction, you may need a more complex model (because now it will be truly learning).

    Once this is done, you train your model until you get a satisfactory result.
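    A minimal sketch of that corrected shaping and training model (the synthetic series, the 16-unit layer size, and the single feature are illustrative assumptions, not taken from the original code):

    ```python
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, LSTM, Dense

    # hypothetical scaled series: 11 values, 1 feature (stand-in for training_set)
    training_set = np.linspace(0.0, 1.0, 11).reshape(-1, 1)

    x_train = training_set[:-1]  # steps 0..9
    y_train = training_set[1:]   # steps 1..10 (x shifted by one step)

    # one sequence, many steps, 1 feature: (1, len, 1)
    x_train = x_train.reshape(1, -1, 1)
    y_train = y_train.reshape(1, -1, 1)

    model = Sequential([
        Input(shape=(None, 1)),
        LSTM(16, return_sequences=True),  # return_sequences so y can have steps
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x_train, y_train, epochs=2, verbose=0)
    ```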


    Predicting the future

    For predicting the future, you will need stateful=True LSTM layers.

    Before anything else, you reset the model's states: model.reset_states() - necessary every time you input a new sequence into a stateful model.

    Then, first you predict the entire X_train (this is needed for the model to understand at which point of the sequence it is; in technical terms: to create a state).

    predictions = model.predict(X_train) #this creates states
    

    And finally you create a loop where you start with the last step of the previous prediction:

    future = []
    currentStep = predictions[:,-1:,:] #last step from the previous prediction
    
    for i in range(future_pred_count):
        currentStep = model.predict(currentStep) #get the next step
        future.append(currentStep) #store the future steps    
    
    #after processing a sequence, reset the states for safety
    model.reset_states()
    

    Example

    The code below does this with a 2-feature sequence, a shifted future-step prediction, and a method slightly different from this answer, but based on the same principle.

    I created two models: one with stateful=False, for training without needing to reset states every time (never forget to reset states when you start a new sequence), and the other with stateful=True, copying the weights from the trained model, for predicting the future.
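    A sketch of that two-model setup (the unit count, the names train_model/pred_model, the placeholder data, and the version-guarded reset helper are illustrative assumptions, not the notebook's exact code):

    ```python
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, LSTM, Dense

    # stateless model: trained on whole sequences, state resets handled automatically
    train_model = Sequential([
        Input(shape=(None, 1)),
        LSTM(16, return_sequences=True),
        Dense(1),
    ])
    # ... compile and fit train_model here ...

    # stateful copy: same architecture, fixed batch size of 1
    pred_model = Sequential([
        Input(batch_shape=(1, None, 1)),
        LSTM(16, return_sequences=True, stateful=True),
        Dense(1),
    ])
    pred_model.set_weights(train_model.get_weights())  # transfer trained weights

    def reset(m):
        # reset recurrent states (the method name differs across Keras versions)
        if hasattr(m, "reset_states"):
            m.reset_states()
        else:
            for layer in m.layers:
                if hasattr(layer, "reset_state"):
                    layer.reset_state()
                elif hasattr(layer, "reset_states"):
                    layer.reset_states()

    # warm up the states on the known data, then step into the future
    reset(pred_model)
    x_train = np.zeros((1, 10, 1))  # placeholder for the real training data
    predictions = pred_model.predict(x_train, verbose=0)
    currentStep = predictions[:, -1:, :]  # last step of the known sequence

    future = []
    for _ in range(5):
        currentStep = pred_model.predict(currentStep, verbose=0)
        future.append(currentStep)
    reset(pred_model)
    ```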

    https://github.com/danmoller/TestRepo/blob/master/TestBookLSTM.ipynb
