Input Shape Error in Second Layer (but not First) of Keras LSTM


Question


EDITED for conciseness.

I am trying to build an LSTM model, working off the documentation example at

https://keras.io/layers/recurrent/

from keras.models import Sequential
from keras.layers import LSTM

The following three lines of code (plus comment) are taken directly from the documentation link above:

model = Sequential()
model.add(LSTM(32, input_dim=64, input_length=10))

# for subsequent layers, no need to specify the input size:
model.add(LSTM(16))

ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2

I get the error above after executing the second model.add() statement, before I've exposed the model to any data or even compiled it.

What am I doing wrong here? Any help is much appreciated. FYI, I'm using Keras 1.2.1. EDIT: I just upgraded to the current 1.2.2 and am still having the same issue.


Answer 1:


Thanks to patyork for answering this on GitHub:

"the second LSTM layer is not getting the 3D input that it expects (with a shape of (batch_size, timesteps, features)). This is because the first LSTM layer has (by fortune of default values) return_sequences=False, meaning it only outputs the last feature set at time t-1, which is of shape (batch_size, 32), i.e. 2 dimensions that don't include time."

So, to offer a code example of using a stacked LSTM for many-to-one (return_sequences=False) sequence classification: make sure to use return_sequences=True on every layer except the last, like this:

model = Sequential()
model.add(LSTM(32, input_dim=64, input_length=10, return_sequences=True))  # outputs (batch, 10, 32)
model.add(LSTM(24, return_sequences=True))                                 # outputs (batch, 10, 24)
model.add(LSTM(16, return_sequences=True))                                 # outputs (batch, 10, 16)
model.add(LSTM(1,  return_sequences=False))                                # outputs (batch, 1)

model.compile(optimizer='RMSprop', loss='categorical_crossentropy')

(no errors)
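
As a quick sanity check (a sketch using made-up random inputs, not part of the original answer), the compiled stack now accepts a 3D batch and returns one value per sequence:

import numpy as np

X = np.random.random((8, 10, 64))  # (batch, timesteps, features), matching input_length=10 and input_dim=64
print(model.predict(X).shape)      # (8, 1): one many-to-one prediction per input sequence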



Source: https://stackoverflow.com/questions/42331396/input-shape-error-in-second-layer-but-not-first-of-keras-lstm
