word-embedding

What does tf.nn.embedding_lookup function do?

一曲冷凌霜 submitted on 2019-11-26 18:43:28
Question: tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None). I cannot understand what this function does. Is it like a lookup table, i.e. does it return the parameters corresponding to each id in ids? For instance, in the skip-gram model, if we use tf.nn.embedding_lookup(embeddings, train_inputs), does it find the corresponding embedding for each train_input? Answer 1: The embedding_lookup function retrieves rows of the params tensor. The behavior is similar to using indexing with arrays in numpy.
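A minimal sketch of that equivalence, assuming TensorFlow 2.x eager execution (the array values below are made up for illustration):

    import numpy as np
    import tensorflow as tf

    # Hypothetical toy data: a 10-word vocabulary with 4-dimensional embeddings.
    params = np.arange(40, dtype=np.float32).reshape(10, 4)
    ids = np.array([3, 0, 7])

    # embedding_lookup gathers the rows of `params` named by `ids`.
    rows_tf = tf.nn.embedding_lookup(tf.constant(params), ids)

    # Plain numpy fancy indexing returns exactly the same rows.
    rows_np = params[ids]

    print(np.allclose(rows_tf.numpy(), rows_np))  # True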

How does a Keras 1D convolution layer work with word embeddings in a text classification problem? (Filters, kernel size, and all hyperparameters)

假装没事ソ submitted on 2019-11-26 16:21:38
Question: I am currently developing a text classification tool using Keras. It works fine (I got up to 98.7% validation accuracy), but I can't wrap my head around how exactly a 1D convolution layer works with text data. What hyperparameters should I use? I have the following input data:

Maximum words in a sentence: 951 (shorter sentences are padded)
Vocabulary size: ~32000
Number of sentences (for training): 9800
embedding_vecor_length: 32 (how many relations each …
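For intuition, here is a minimal sketch of such a model. Only the sentence length, vocabulary size, and embedding length come from the question; the filter count (64), kernel size (3), and binary output layer are illustrative assumptions:

    from tensorflow.keras import layers, models

    max_len = 951       # maximum words in a sentence (from the question)
    vocab_size = 32000  # vocabulary size (from the question)
    embed_dim = 32      # embedding_vecor_length (from the question)

    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim, input_length=max_len),
        # Each filter slides a window over kernel_size consecutive word
        # vectors, so with kernel_size=3 every filter learns one kind of
        # 3-gram feature over the embedded sentence.
        layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),  # keep each filter's strongest response
        layers.Dense(1, activation="sigmoid"),  # assumes binary labels
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

With this framing, "filters" is how many distinct n-gram detectors the layer learns, and "kernel size" is the n-gram width in words.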

Update only part of the word embedding matrix in Tensorflow

感情迁移 submitted on 2019-11-26 12:57:20
Question: Assuming that I want to update a pre-trained word-embedding matrix during training, is there a way to update only a subset of the word embedding matrix? I have looked into the Tensorflow API page and found this:

    # Create an optimizer.
    opt = GradientDescentOptimizer(learning_rate=0.1)

    # Compute the gradients for a list of variables.
    grads_and_vars = opt.compute_gradients(loss, <list of variables>)

    # grads_and_vars is a list of tuples (gradient, variable).  Do whatever you
    # need to the 'gradient' part, for example cap them, etc.
    capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]

    # Ask the optimizer to apply the capped gradients.
    opt.apply_gradients(capped_grads_and_vars)
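One common workaround (an approach I am assuming here, not something stated in the excerpt) is to split the embedding matrix into a frozen variable and a trainable variable and concatenate them at lookup time, so the optimizer only ever receives gradients for the trainable rows. A minimal sketch, assuming TensorFlow 2.x and made-up sizes:

    import numpy as np
    import tensorflow as tf

    # Hypothetical pre-trained matrix; only the last n_tuned rows should train.
    pretrained = np.random.rand(10000, 128).astype(np.float32)
    n_tuned = 1000

    frozen = tf.Variable(pretrained[:-n_tuned], trainable=False)  # never updated
    tuned = tf.Variable(pretrained[-n_tuned:], trainable=True)    # keeps training

    def lookup(ids):
        # Rebuild the full table each step; gradients flow only into `tuned`,
        # because `frozen` is excluded from the trainable variables.
        embeddings = tf.concat([frozen, tuned], axis=0)
        return tf.nn.embedding_lookup(embeddings, ids)

    vectors = lookup(tf.constant([3, 9500]))  # mixes frozen and trainable rows

The alternative hinted at by the question's snippet is to keep one variable and mask the unwanted rows of the gradient between compute_gradients and apply_gradients, which achieves the same effect at the cost of handling sparse IndexedSlices gradients.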