Question
Is nn.Embedding() essential for an LSTM to learn?
I am using an LSTM in PyTorch for NER (named entity recognition); an example of a similar task is here: https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
Code-wise, my code is almost identical to the code in that tutorial. The only difference is that I am using word2vec instead of nn.Embedding(): I removed the nn.Embedding() layer and fed the word2vec feature vectors to the forward function directly. With that change, the network does not learn.
Hence my question: is nn.Embedding() essential for an LSTM to learn?
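Roughly, my setup looks like this (a simplified sketch; the dimensions and tensors are placeholders):

```python
import torch
import torch.nn as nn

# Placeholder: a batch of 1 sentence with 4 words, each a 300-dim
# word2vec vector, standing in for the real word2vec lookups.
word2vec_features = torch.randn(1, 4, 300)

# The LSTM consumes the pre-computed vectors directly,
# with no nn.Embedding layer in front of it.
lstm = nn.LSTM(input_size=300, hidden_size=64, batch_first=True)
output, (hidden, cell) = lstm(word2vec_features)
print(output.shape)  # torch.Size([1, 4, 64])
```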
Answer 1:
nn.Embedding provides an embedding layer for you. This means that the layer takes your word token ids and converts them to word vectors.
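For illustration, a minimal sketch (the vocabulary size and dimensions here are made up):

```python
import torch
import torch.nn as nn

# A made-up vocabulary of 10 words, each mapped to a 5-dimensional vector.
embedding = nn.Embedding(num_embeddings=10, embedding_dim=5)

# A batch with one sentence of 4 token ids.
token_ids = torch.tensor([[1, 4, 7, 2]])

# The layer looks up one vector per token id.
word_vectors = embedding(token_ids)
print(word_vectors.shape)  # torch.Size([1, 4, 5])
```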
You can learn the weights for your nn.Embedding layer during the training process, or you can alternatively load pre-trained embedding weights. When you want to use a pre-trained word2vec (embedding) model, you just load the pre-trained weights into the nn.Embedding layer.
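A minimal sketch of that, using nn.Embedding.from_pretrained (the weight matrix below is random and just stands in for real word2vec weights):

```python
import torch
import torch.nn as nn

# Random matrix standing in for a real (vocab_size x dim)
# word2vec weight matrix.
pretrained_weights = torch.randn(10, 5)

# freeze=True keeps the vectors fixed during training;
# set freeze=False if you want to fine-tune them.
embedding = nn.Embedding.from_pretrained(pretrained_weights, freeze=True)
```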
You can load a word2vec embedding layer using the gensim library; a sketch follows below.
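This sketch assumes a word2vec file in the standard binary format at a placeholder path, and the gensim 4.x KeyedVectors API:

```python
import torch
import torch.nn as nn
from gensim.models import KeyedVectors

# Placeholder path; any word2vec file in the standard binary format works.
kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

# Copy the (vocab_size x dim) weight matrix into an nn.Embedding layer.
weights = torch.FloatTensor(kv.vectors)
embedding = nn.Embedding.from_pretrained(weights)

# Token ids must follow gensim's vocabulary order ("dog" is a placeholder word).
idx = kv.key_to_index["dog"]
vector = embedding(torch.tensor([idx]))
```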
I hope this helps.
Source: https://stackoverflow.com/questions/50340016/pytorch-lstm-using-word-embeddings-instead-of-nn-embedding