Question
I am trying to write a language model using word embeddings and recurrent neural networks in TensorFlow 0.9.0 with the tf.nn.dynamic_rnn graph operation, but I don't understand how the input tensor is structured.
Let's say I have a corpus of n words. I embed each word in a vector of length e, and I want my RNN to unroll to t time steps. Assuming I use the default time_major = False parameter, what shape would my input tensor [batch_size, max_time, input_size] have?
Maybe a specific tiny example will make this question clearer. Say I have a corpus consisting of n=8 words that looks like this:
1, 2, 3, 3, 2, 1, 1, 2
If I embed it in vectors of size e=3 with the embeddings 1 -> [10, 10, 10], 2 -> [20, 20, 20], and 3 -> [30, 30, 30], what would my input tensor look like?
I've read the TensorFlow Recurrent Neural Network tutorial, but that doesn't use tf.nn.dynamic_rnn. I've also read the documentation for tf.nn.dynamic_rnn, but I find it confusing. In particular, I'm not sure what "max_time" and "input_size" mean here.
Can anyone give the shape of the input tensor in terms of n, t, and e, and/or an example of what that tensor would look like initialized with data from the small corpus I describe?
TensorFlow 0.9.0, Python 3.5.1, OS X 10.11.5
Answer 1:
In your case, it looks like batch_size = 1, since you're looking at a single example. So max_time is n=8 and input_size is the input depth, in your case e=3. You would therefore construct an input tensor shaped [1, 8, 3]. It's batch-major, so the first dimension (the batch dimension) is 1. If, say, you had another input at the same time with n=6 words, you would combine the two by padding the second example to 8 words (zeros for the last 2 word embeddings), giving an inputs tensor of shape [2, 8, 3].
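To make the layout concrete, here is a small sketch that builds those tensors with NumPy rather than TensorFlow (only the shape and layout matter here; the embedding values and the second example's word ids are illustrative assumptions):

```python
import numpy as np

# Toy embedding table from the question: word id -> vector of length e = 3
embeddings = {
    1: [10, 10, 10],
    2: [20, 20, 20],
    3: [30, 30, 30],
}

# The corpus of n = 8 words from the question
corpus = [1, 2, 3, 3, 2, 1, 1, 2]

# Single-example batch: shape [batch_size=1, max_time=8, input_size=3]
inputs = np.array([[embeddings[w] for w in corpus]], dtype=np.float32)
print(inputs.shape)  # (1, 8, 3)

# A hypothetical second example with n = 6 words, padded to max_time = 8
# with zero vectors for the last two time steps
second = [3, 1, 2, 2, 1, 3]
padded = [embeddings[w] for w in second] + [[0, 0, 0]] * (8 - len(second))
batch = np.array([[embeddings[w] for w in corpus], padded], dtype=np.float32)
print(batch.shape)  # (2, 8, 3)
```

The same [batch, time, depth] array is what you would feed as the inputs argument to tf.nn.dynamic_rnn with time_major=False; you would also pass sequence_length=[8, 6] so the RNN ignores the padded steps.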
Source: https://stackoverflow.com/questions/38111170/how-is-the-input-tensor-for-tensorflows-tf-nn-dynamic-rnn-operator-structured