The output of my regression NN with LSTMs is wrong even with low val_loss

Submitted by 烂漫一生 on 2020-06-17 09:41:00

Question


The Model

I am currently working on a stack of LSTMs and trying to solve a regression problem. The architecture of the model is as below:

import tensorflow as tf

comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(units=128),
    tf.keras.layers.Dense(units=64),
    tf.keras.layers.Dense(units=32),
    tf.keras.layers.Dense(units=1)
])

comp_lstm.compile(optimizer='adam', loss='mae')
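(As an aside on input shapes, a sketch with dummy data rather than the original dataset: Keras LSTM layers consume 3-D tensors of shape `(batch, timesteps, features)`, so a batch of rows each holding `hist2 = 128` lagged values of a single variable needs an explicit trailing feature axis.)

```python
import numpy as np

# Hypothetical batch: 32 samples, each a row of 128 lagged values of one
# variable (the per-sample shape the wrangling below produces).
batch = np.random.rand(32, 128)

# LSTMs consume (batch, timesteps, features); a univariate series needs a
# feature axis of size 1 appended.
lstm_input = batch[..., np.newaxis]
print(lstm_input.shape)  # (32, 128, 1)
```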

When I train the model, the log shows reasonably good loss and val_loss figures:

Epoch 6/20
200/200 [==============================] - 463s 2s/step - loss: 1.3793 - val_loss: 1.3578
Epoch 7/20
200/200 [==============================] - 461s 2s/step - loss: 1.3791 - val_loss: 1.3602

I then check the model's output with the code below:

idx = np.random.randint(len(val_X))
sample_X, sample_y = [[val_X[idx,:]]], [[val_y[idx]]]
test = tf.data.Dataset.from_tensor_slices(([sample_X], [sample_y]))
prediction = comp_lstm.predict(test)
print(f'The actual value was {sample_y} and the model predicted {prediction}')

And the output is:

The actual value was [[21.3]] and the model predicted [[2.7479606]]

The next few times I ran it, I got these values:

The actual value was [[23.1]] and the model predicted [[0.8445232]]
The actual value was [[21.2]] and the model predicted [[2.5449793]]
The actual value was [[22.5]] and the model predicted [[1.2662419]]

I am not sure why this is happening: the val_loss is low, but the sampled predictions are wildly off from the actual values.
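As a quick sanity check, the MAE implied by just the four (actual, predicted) pairs printed above is nowhere near the reported val_loss of ~1.36:

```python
import numpy as np

# The four (actual, predicted) pairs from the prediction runs above.
actual = np.array([21.3, 23.1, 21.2, 22.5])
predicted = np.array([2.7479606, 0.8445232, 2.5449793, 1.2662419])

# Mean absolute error over these samples, for comparison with val_loss.
mae = np.mean(np.abs(actual - predicted))
print(round(mae, 2))  # 20.17
```

So either these samples are unrepresentative, or the inputs fed to `predict` are not shaped the way the validation pipeline shapes them.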


The Data Wrangling

The data wrangling used to produce train_X, val_X, etc. is shown below:

hist2 = 128 

features2 = np.array(list(map(list,[df["scaled_temp"].shift(x) for x in range(1, hist2+1)]))).T.tolist()
df_feat2 = pd.DataFrame([pd.Series(x) for x in features2], index = df.index)
df_trans2 = df.join(df_feat2).drop(columns=['scaled_temp']).iloc[hist2:]
df_trans2 = df_trans2.sample(frac=1)
target = df_trans2['T (degC)'].values
feat2 = df_trans2.drop(columns = ['T (degC)']).values

The shape of feat2 is (44435, 128), while the shape of target is (44435,).
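The shift-based feature construction above can be sketched on a toy series (the values and the `lag_k` column names below are illustrative, not from the original data):

```python
import numpy as np
import pandas as pd

# Toy stand-in for df["scaled_temp"]: build `hist` lagged copies of the
# series, one column per lag, mirroring the shift-based wrangling above.
s = pd.Series([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], name="scaled_temp")
hist = 3

lags = pd.concat({f"lag_{k}": s.shift(k) for k in range(1, hist + 1)}, axis=1)

# The first `hist` rows contain NaNs introduced by the shifts, so they are
# dropped, just as .iloc[hist2:] does in the original code.
feat = lags.iloc[hist:].to_numpy()

# Each row holds the `hist` values preceding that timestamp, newest first:
print(feat.shape)  # (3, 3)
print(feat[0])     # [0.3 0.2 0.1]
```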

The df["scaled_temp"] column (scaled with a standard scaler) is shown below:

Date Time
2020-04-23T21:14:07.546476Z   -0.377905
2020-04-23T21:17:32.406111Z   -0.377905
2020-04-23T21:17:52.670373Z   -0.377905
2020-04-23T21:18:55.010392Z   -0.377905
2020-04-23T21:19:57.327291Z   -0.377905
                                 ...   
2020-06-08T09:13:06.718934Z   -0.889968
2020-06-08T09:14:09.170193Z   -0.889968
2020-06-08T09:15:11.634954Z   -0.889968
2020-06-08T09:16:14.087139Z   -0.889968
2020-06-08T09:17:16.549216Z   -0.889968
Name: scaled_temp, Length: 44563, dtype: float64

The df['T (degC)'] column is shown below:

Date Time
2020-05-09T07:30:30.621001Z    24.0
2020-05-11T15:56:30.856851Z    21.3
2020-05-27T05:02:09.407266Z    28.3
2020-05-02T09:33:03.219329Z    20.5
2020-05-31T03:20:04.326902Z    22.4
                               ... 
2020-05-31T01:47:45.982819Z    23.1
2020-05-27T08:03:21.456607Z    27.2
2020-05-04T21:58:36.652251Z    20.9
2020-05-17T18:42:39.681050Z    22.5
2020-05-04T22:07:58.350329Z    21.1
Name: T (degC), Length: 44435, dtype: float64

The dataset creation process is as below:

train_X, val_X = feat2[:int(feat2.shape[0]*0.95), :], feat2[int(feat2.shape[0]*0.95):, :]
train_y, val_y = target[:int(target.shape[0]*0.95)], target[int(target.shape[0]*0.95):]
train = tf.data.Dataset.from_tensor_slices(([train_X], [train_y])).batch(BATCH_SIZE).repeat()
val = tf.data.Dataset.from_tensor_slices(([val_X], [val_y])).batch(BATCH_SIZE).repeat()
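One shape detail worth noting here (a numpy sketch with dummy arrays, since the behavior is purely about the leading axis): `from_tensor_slices` slices its arguments along axis 0, so wrapping an array in a list first prepends an axis of size 1 and yields a single dataset element holding the whole array.

```python
import numpy as np

# Dummy stand-ins for the wrangled arrays (shapes only, not real data).
train_X = np.random.rand(100, 128)

# Wrapping the array in a list prepends an axis of size 1, so slicing
# along axis 0 would produce one element of shape (100, 128) rather than
# 100 elements of shape (128,).
wrapped_X = np.array([train_X])
print(wrapped_X.shape)  # (1, 100, 128)
print(train_X.shape)    # (100, 128)
```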

So I am not sure why this is happening.

Source: https://stackoverflow.com/questions/62341688/the-output-of-my-regression-nn-with-lstms-is-wrong-even-with-low-val-loss
