What does the embedding layer for a network look like?

Submitted by 99封情书 on 2019-12-11 07:19:29

Question


I'm just starting with text classification, and I got stuck at the embedding layer. If I have a batch of sequences encoded as integers, one integer per word, what does the embedding layer look like? Does it have neurons like a normal neural layer?

I've seen keras.layers.Embedding, but after reading the documentation I'm really confused about how it works. I can understand input_dim, but why is output_dim a 2D matrix? How many weights are there in this embedding layer?

I'm sorry if my question isn't explained clearly. I have no experience in NLP, so if this question about word embeddings is common basic knowledge in NLP, please tell me and I will read up on it.


Answer 1:


The embedding layer is just a trainable look-up table: it takes an integer index as input and returns the word embedding associated with that index as output:

index |                            word embeddings
=============================================================================
  0   |  word embedding for the word with index 0 (usually used for padding)
-----------------------------------------------------------------------------
  1   |  word embedding for the word with index 1
-----------------------------------------------------------------------------
  2   |  word embedding for the word with index 2
-----------------------------------------------------------------------------
  .   |
  .   |
  .   |
-----------------------------------------------------------------------------
  N   |  word embedding for the word with index N
-----------------------------------------------------------------------------

It is trainable in the sense that the embedding values are not necessarily fixed and can change during training. The input_dim argument is actually the number of words (or, more generally, the number of distinct elements in the sequences). The output_dim argument specifies the dimension of each word embedding. For example, with output_dim=100 each word embedding would be a vector of size 100. Further, since the input of an embedding layer is a batch of integer sequences (the integers corresponding to the words in a sentence), its output has shape (num_sequences, len_sequence, output_dim), i.e. for each integer in a sequence an embedding vector of size output_dim is returned.
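
To make the shapes concrete, here is a minimal sketch using TensorFlow/Keras (the vocabulary size, embedding size, and the batch of indices are made-up examples, not from the question):

import numpy as np
import tensorflow as tf

vocab_size = 1000  # input_dim: number of distinct word indices
embed_dim = 100    # output_dim: size of each word embedding

layer = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)

# A batch of 2 sequences, each 5 words long, encoded as integer indices.
batch = np.array([[4, 25, 7, 0, 0],
                  [13, 2, 98, 6, 1]])

out = layer(batch)
print(out.shape)  # (2, 5, 100) -> (num_sequences, len_sequence, output_dim)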

As for the number of weights in an embedding layer, it is very easy to calculate: there are input_dim unique indices, and each index is associated with a word embedding of size output_dim. Therefore the number of weights in an embedding layer is input_dim x output_dim.
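
You can verify this directly: the layer holds a single weight matrix of shape (input_dim, output_dim). A small sketch, again assuming TensorFlow/Keras with made-up sizes:

import tensorflow as tf

layer = tf.keras.layers.Embedding(input_dim=1000, output_dim=100)
layer.build(input_shape=(None,))  # creates the weight matrix

matrix = layer.get_weights()[0]   # the look-up table itself
print(matrix.shape)               # (1000, 100)
print(matrix.size)                # 1000 * 100 = 100000 trainable weights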




Answer 2:


Think of a list from which you get objects.

You do object = myList[index]

The embedding layer is similar to this list. But the "object" is a vector of trainable values.

So, your sequence contains indices to get vectors from the embedding.

Word 1 in sequence says: give me the vector for word 1
Word 2 says: give me the vector for word 2, and so on.

In practice, the weights will be a 2D matrix. You get rows from it based on the word indices passed in the sequence.

A sequence like [wordIndex1, wordIndex2, wordIndex3] will become [wordVector1, wordVector2, wordVector3].
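
The same look-up can be illustrated with plain NumPy (toy numbers; in Keras this matrix would be the layer's trainable weight):

import numpy as np

# One row (vector) per word index: here 4 words, 3 dimensions each.
embedding_matrix = np.array([[0.0, 0.0, 0.0],   # index 0 (e.g. padding)
                             [0.1, 0.2, 0.3],   # index 1
                             [0.4, 0.5, 0.6],   # index 2
                             [0.7, 0.8, 0.9]])  # index 3

sequence = np.array([1, 3, 2])        # [wordIndex1, wordIndex2, wordIndex3]
vectors = embedding_matrix[sequence]  # row look-up by index
print(vectors)                        # [wordVector1, wordVector2, wordVector3]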



Source: https://stackoverflow.com/questions/53762243/what-does-the-embedding-layer-for-a-network-looks-like
