Question
With pretrained embeddings, we can specify them as weights in Keras' embedding layer. To use multiple embeddings, would specifying multiple embedding layers be suitable? i.e.
embedding_layer1 = Embedding(len(word_index) + 1,
                             EMBEDDING_DIM,
                             weights=[embedding_matrix_1],
                             input_length=MAX_SEQUENCE_LENGTH,
                             trainable=False)
embedding_layer2 = Embedding(len(word_index) + 1,
                             EMBEDDING_DIM,
                             weights=[embedding_matrix_2],
                             input_length=MAX_SEQUENCE_LENGTH,
                             trainable=False)
model.add(embedding_layer1)
model.add(embedding_layer2)
Stacking them sequentially like this would merge them into a single combined representation, which is not what I am after.
Answer 1:
Here is an example of using multiple embedding layers through multiple inputs by leveraging Keras' functional API. It's from a Kaggle competition, so you'll have to read through the code. The network is fed a dictionary with a key for each data input. It's quite clever, and I was able to build a separate model using this framework that performed well.
deep-learning-support-9663
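A minimal sketch of the multiple-input approach the answer describes, using the functional API with `tf.keras` (the vocabulary size, sequence length, and random embedding matrices below are stand-ins for illustration, not values from the question):

```python
import numpy as np
from tensorflow.keras.layers import Input, Embedding, Concatenate, Flatten, Dense
from tensorflow.keras.models import Model

# Hypothetical sizes standing in for len(word_index) + 1, etc.
VOCAB_SIZE = 100
EMBEDDING_DIM = 8
MAX_SEQUENCE_LENGTH = 10

# Random stand-ins for the two pretrained embedding matrices
embedding_matrix_1 = np.random.rand(VOCAB_SIZE, EMBEDDING_DIM)
embedding_matrix_2 = np.random.rand(VOCAB_SIZE, EMBEDDING_DIM)

# One named input per embedding, so the model can be fed a dictionary
input_1 = Input(shape=(MAX_SEQUENCE_LENGTH,), name="tokens_1")
input_2 = Input(shape=(MAX_SEQUENCE_LENGTH,), name="tokens_2")

emb_1 = Embedding(VOCAB_SIZE, EMBEDDING_DIM,
                  weights=[embedding_matrix_1],
                  trainable=False)(input_1)
emb_2 = Embedding(VOCAB_SIZE, EMBEDDING_DIM,
                  weights=[embedding_matrix_2],
                  trainable=False)(input_2)

# Concatenate along the feature axis instead of stacking layers sequentially
merged = Concatenate()([emb_1, emb_2])
output = Dense(1, activation="sigmoid")(Flatten()(merged))

model = Model(inputs=[input_1, input_2], outputs=output)
```

The model can then be called as `model.predict({"tokens_1": x1, "tokens_2": x2})`, which keeps the two pretrained spaces separate until the explicit `Concatenate` merge.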
Source: https://stackoverflow.com/questions/49805424/multiple-embedding-layers-in-keras