Keras LSTM autoencoder with embedding layer


Question


I am trying to build a text LSTM autoencoder in Keras. I want to use an embedding layer, but I'm not sure how to implement this. The code looks like this.

from keras.layers import Input, Embedding, LSTM, RepeatVector
from keras.models import Model

inputs = Input(shape=(timesteps, input_dim))
embedding_layer = Embedding(numfeats + 1,
                            EMBEDDING_DIM,
                            weights=[data_gen.get_embedding_matrix()],
                            input_length=maxlen,
                            trainable=False)

embedded_sequence = embedding_layer(inputs)
encoded = LSTM(num_units)(embedded_sequence)

decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(???, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)

sequence_autoencoder.compile(loss='binary_crossentropy', optimizer='adam')

I am not sure how to decode the output into the target sequence (which is obviously the input sequence).


Answer 1:


There is no way to implement an inverse embedding layer in the decoder, because the embedding lookup (mapping discrete word indices to vectors) is not differentiable with respect to the indices. There are a couple of workarounds:

  1. Build the autoencoder on the output of the embedding layer, so that it reconstructs a sequence of vectors with the same dimension as the embeddings. Then use nearest-neighbour search (or another lookup) to map each reconstructed vector back to a word.

  2. Build an asymmetric autoencoder, using TimeDistributed and Dense layers to project the LSTM output down to the embedding dimension (a sketch combining both ideas follows at the end of this answer).

Hopefully this helps.
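
To make this concrete, here is a minimal sketch, not the original poster's code: maxlen, LATENT_DIM, vocab_size and the random placeholder embedding_matrix are assumptions you would replace with your own values and pretrained vectors. The decoder LSTM is followed by TimeDistributed(Dense(EMBEDDING_DIM)) so the model reconstructs the embedded input (option 2), and a nearest-neighbour lookup turns the reconstructed vectors back into words (option 1).

import numpy as np
from keras.layers import Input, Embedding, LSTM, RepeatVector, TimeDistributed, Dense
from keras.models import Model

# Hypothetical sizes and a random placeholder embedding matrix --
# substitute your own lengths, dimensions and pretrained vectors.
maxlen, EMBEDDING_DIM, LATENT_DIM, vocab_size = 30, 100, 64, 5000
embedding_matrix = np.random.rand(vocab_size + 1, EMBEDDING_DIM)

inputs = Input(shape=(maxlen,))
embedded = Embedding(vocab_size + 1, EMBEDDING_DIM,
                     weights=[embedding_matrix],
                     input_length=maxlen,
                     trainable=False)(inputs)

encoded = LSTM(LATENT_DIM)(embedded)                        # encoder
decoded = RepeatVector(maxlen)(encoded)                     # repeat the code for each timestep
decoded = LSTM(LATENT_DIM, return_sequences=True)(decoded)  # decoder
decoded = TimeDistributed(Dense(EMBEDDING_DIM))(decoded)    # project back to embedding dimension

autoencoder = Model(inputs, decoded)
autoencoder.compile(loss='mse', optimizer='adam')           # targets are the embedded inputs

# Nearest-neighbour "inverse embedding": map each reconstructed vector
# to the word id whose embedding is closest to it.
def nearest_words(pred_seq):                                # pred_seq: (maxlen, EMBEDDING_DIM)
    return [int(np.argmin(np.linalg.norm(embedding_matrix - v, axis=1)))
            for v in pred_seq]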




Answer 2:


You can first convert the words to embeddings and pass those to fit() as the target:

expected_output = np.array([[embedding_matrix[word_index] for word_index in encoded_sequence] for encoded_sequence in padded_sequences])
history = lstm_autoencoder.fit(padded_sequences, expected_output, epochs=15, verbose=1)
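
To make the shapes concrete, here is a tiny hypothetical example (toy sizes and a random placeholder embedding_matrix): fit() receives the zero-padded integer word ids as input and the corresponding embedding vectors as the target.

import numpy as np

# Toy example: 2 sequences, maxlen 4, embedding dim 3 (all hypothetical).
embedding_matrix = np.random.rand(10, 3)              # row i = embedding of word id i
padded_sequences = np.array([[1, 4, 2, 0],
                             [3, 5, 0, 0]])           # zero-padded integer word ids
expected_output = np.array([[embedding_matrix[word_index] for word_index in encoded_sequence]
                            for encoded_sequence in padded_sequences])
print(padded_sequences.shape, expected_output.shape)  # (2, 4) (2, 4, 3)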


Source: https://stackoverflow.com/questions/44731059/keras-lstm-autoencoder-with-embedding-layer
