Embedding in PyTorch

Submitted by 南楼画角 on 2020-06-24 02:58:44

Question


I have checked the PyTorch tutorial and questions similar to this one on Stack Overflow.

I am confused: does the embedding in PyTorch (nn.Embedding) make similar words closer to each other? Do I just need to give it all the sentences, or is it just a lookup table and I need to code the model myself?


Answer 1:


nn.Embedding holds a Tensor of dimension (vocab_size, vector_size), i.e. the size of the vocabulary times the dimension of each embedding vector, plus a method that does the lookup.

When you create an embedding layer, the Tensor is initialised randomly. It is only when you train it that similarity between similar words should emerge, unless you have overwritten the values of the embedding with previously trained vectors, such as GloVe or Word2Vec, but that's another story.

So, once you have defined the embedding layer, and defined and encoded the vocabulary (i.e. assigned a unique number to each word in it), you can use the instance of the nn.Embedding class to get the corresponding embeddings.

For example:

import torch
from torch import nn
embedding = nn.Embedding(1000, 128)  # vocabulary of 1000 words, 128-dimensional vectors
embedding(torch.LongTensor([3, 4]))

will return the embedding vectors corresponding to words 3 and 4 in your vocabulary. As no model has been trained yet, they will be random.
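To make the encoding step concrete, here is a minimal sketch (the three-word vocabulary below is made up purely for illustration) of assigning indices to words and then fetching their vectors:

import torch
from torch import nn

# toy vocabulary: each word is assigned a unique index
vocab = {"cat": 0, "dog": 1, "house": 2}

embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=128)

# encode a sentence as a tensor of word indices
sentence = ["cat", "dog"]
indices = torch.LongTensor([vocab[w] for w in sentence])

vectors = embedding(indices)
print(vectors.shape)  # torch.Size([2, 128]) -- one 128-dimensional vector per word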




Answer 2:


You could treat nn.Embedding as a lookup table where the key is the word index and the value is the corresponding word vector. However, before using it you must specify the size of the lookup table, and you can initialize the word vectors yourself. The following code example demonstrates this.

import torch
import torch.nn as nn

# vocab_size is the number of words in your train, val and test set
# vector_size is the dimension of the word vectors you are using
embed = nn.Embedding(vocab_size, vector_size)

# initialize the word vectors; pretrained_weights is a
# numpy array of size (vocab_size, vector_size) and
# pretrained_weights[i] retrieves the word vector of the
# i-th word in the vocabulary
embed.weight.data.copy_(torch.from_numpy(pretrained_weights))

# Then turn word indices into actual word vectors
vocab = {"some": 0, "words": 1}
word_indexes = [vocab[w] for w in ["some", "words"]]
word_vectors = embed(torch.LongTensor(word_indexes))



Answer 3:


Agh! I think this part is still missing: showing that when you create the embedding layer you automatically get the weights, which you may later replace with nn.Embedding.from_pretrained(weight).

import torch
import torch.nn as nn

embedding = nn.Embedding(10, 4)
print(type(embedding))
print(embedding)

t1 = embedding(torch.LongTensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))  # index 10 would be out of range (valid indices are 0-9)
print(t1.shape)
print(t1)


t2 = embedding(torch.LongTensor([1,2,3]))
print(t2.shape)
print(t2)

# predefined weights
weight = torch.FloatTensor([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
print(weight.shape)
embedding = nn.Embedding.from_pretrained(weight)
# get embeddings for indices 0 and 1
print(embedding(torch.LongTensor([0, 1])))

Output:

<class 'torch.nn.modules.sparse.Embedding'>
Embedding(10, 4)
torch.Size([10, 4])
tensor([[-0.7007,  0.0169, -0.9943, -0.6584],
        [-0.7390, -0.6449,  0.1481, -1.4454],
        [-0.1407, -0.1081,  0.6704, -0.9218],
        [-0.2738, -0.2832,  0.7743,  0.5836],
        [ 0.4950, -1.4879,  0.4768,  0.4148],
        [ 0.0826, -0.7024,  1.2711,  0.7964],
        [-2.0595,  2.1670, -0.1599,  2.1746],
        [-2.5193,  0.6946, -0.0624, -0.1500],
        [ 0.5307, -0.7593, -1.7844,  0.1132],
        [-0.0371, -0.5854, -1.0221,  2.3451]], grad_fn=<EmbeddingBackward>)
torch.Size([3, 4])
tensor([[-0.7390, -0.6449,  0.1481, -1.4454],
        [-0.1407, -0.1081,  0.6704, -0.9218],
        [-0.2738, -0.2832,  0.7743,  0.5836]], grad_fn=<EmbeddingBackward>)
torch.Size([2, 3])

tensor([[0.1000, 0.2000, 0.3000],
        [0.4000, 0.5000, 0.6000]])

And the last part: the Embedding layer weights can be learned with gradient descent.
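For instance, a minimal sketch of a single update step (the MSE loss against a zero target below is made up purely to show gradients flowing into the embedding weights):

import torch
import torch.nn as nn

embedding = nn.Embedding(10, 4)
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)

indices = torch.LongTensor([1, 2, 3])
target = torch.zeros(3, 4)  # dummy target, for illustration only

row_before = embedding.weight[1].detach().clone()

loss = nn.functional.mse_loss(embedding(indices), target)
loss.backward()
optimizer.step()

# only the rows that were looked up receive gradients and get updated
print(torch.allclose(row_before, embedding.weight[1]))  # False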



Source: https://stackoverflow.com/questions/50747947/embedding-in-pytorch
