LSTM after embedding of an N-dimensional sequence

Submitted by 偶尔善良 on 2019-12-12 01:27:42

Question


I have a 2-dimensional input sequence train_seq with shape (100000, 200, 2), i.e. 100000 training examples, a sequence length of 200, and 2 features.

The sequences are text, so each element is one word from a vocabulary of 5000 words. Hence, I want to use an embedding layer prior to my LSTM.

from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

MAX_SEQUENCE_LENGTH = 200
EMBEDDING_SIZE = 64
MAX_FEATURES = 5000
NUM_CATEGORIES = 5

model_input = Input(shape=(MAX_SEQUENCE_LENGTH, 2))

x = Embedding(output_dim=EMBEDDING_SIZE, input_dim=MAX_FEATURES, input_length=(MAX_SEQUENCE_LENGTH, 2))(model_input)

x_lstm = LSTM(64)(x)

x = Dense(128, activation='relu', name='lstm')(x_lstm)

output = Dense(NUM_CATEGORIES, activation='sigmoid')(x)

model = Model(inputs=[model_input], outputs=[output])

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

However, I'm not able to build the model; I get the following error:

ValueError: Input 0 is incompatible with layer lstm: expected ndim=3, found ndim=4

By looking at the model summary without the LSTM, I can see that the output shape of my embedding is (None, 200, 2, 64):

Layer (type)                 Output Shape              Param #   
=================================================================
merged_input (InputLayer)    (None, 200, 2)            0         
_________________________________________________________________
embedding (Embedding)        (None, 200, 2, 64)        196096    
_________________________________________________________________

Note that this architecture works when the input sequence is 1-dimensional. Can an LSTM receive a 2-dimensional sequence? How do I tell the LSTM layer that the input shape is (None, 200, 2, 64)?

Any help would be appreciated.


Answer 1:


First of all, don't define an input layer; you don't need one. In general, the Embedding layer is used like this:

from keras.models import Sequential
from keras.layers import Embedding, LSTM

model = Sequential()
model.add(Embedding(MAX_FEATURES, EMBEDDING_SIZE, input_length=MAX_SEQUENCE_LENGTH))
model.add(LSTM(64))
(...)

The same is true for functional-style definitions; give it a try.
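A minimal functional-style sketch of the same idea, assuming a plain 1-dimensional token sequence (one word id per timestep, no second feature axis) and reusing the constants from the question:

from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

# One integer word id per timestep
model_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
# Embedding maps (None, 200) -> (None, 200, 64), the 3-D input the LSTM expects
x = Embedding(input_dim=MAX_FEATURES, output_dim=EMBEDDING_SIZE)(model_input)
x = LSTM(64)(x)
x = Dense(128, activation='relu')(x)
output = Dense(NUM_CATEGORIES, activation='sigmoid')(x)
model = Model(inputs=model_input, outputs=output)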




Answer 2:


The solution is to add the input shape to the LSTM layer:

x_lstm = LSTM(64, input_shape=(MAX_SEQUENCE_LENGTH,2))(x)

followed by a Flatten layer:

x = Flatten()(x_lstm)
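If the goal is instead to keep both features per timestep, a sketch of one common alternative (an assumption on my part, not taken from either answer) is to collapse the last two axes of the embedding output with a Reshape layer, so the LSTM receives the 3-D tensor it expects:

from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense, Reshape

model_input = Input(shape=(MAX_SEQUENCE_LENGTH, 2))
# (None, 200, 2) -> (None, 200, 2, 64)
x = Embedding(input_dim=MAX_FEATURES, output_dim=EMBEDDING_SIZE)(model_input)
# Merge the two embedded features per timestep: (None, 200, 2, 64) -> (None, 200, 128)
x = Reshape((MAX_SEQUENCE_LENGTH, 2 * EMBEDDING_SIZE))(x)
x_lstm = LSTM(64)(x)
x = Dense(128, activation='relu')(x_lstm)
output = Dense(NUM_CATEGORIES, activation='sigmoid')(x)
model = Model(inputs=model_input, outputs=output)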


Source: https://stackoverflow.com/questions/54120817/lstm-after-embedding-of-a-n-dimensional-sequence
