I am confused about what a dynamic RNN (i.e. dynamic_rnn) is. It returns an output and a state in TensorFlow. What are this output and state?
Dynamic RNNs allow for variable sequence lengths. You might have an input of shape (batch_size, max_sequence_length), but dynamic_rnn lets you run the RNN for only the correct number of time steps on those sequences that are shorter than max_sequence_length.

In contrast, static RNNs unroll the graph to the full, fixed length and run every sequence through all of it. There are cases where you might prefer this, such as when you are padding your inputs to max_sequence_length anyway.
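To make the contrast concrete, here is a minimal sketch of the static variant, assuming TensorFlow 1.x (where tf.nn.static_rnn lives); the shapes and the BasicLSTMCell choice are just illustrative:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, where tf.nn.static_rnn lives

batch_size, max_sequence_length, num_features, num_units = 2, 5, 3, 4

# static_rnn takes a Python list with one (batch, features) tensor per
# time step and unrolls the graph for exactly that many steps.
inputs = tf.placeholder(tf.float32, [None, max_sequence_length, num_features])
inputs_as_list = tf.unstack(inputs, num=max_sequence_length, axis=1)

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.static_rnn(cell, inputs_as_list, dtype=tf.float32)

# outputs is a list of max_sequence_length tensors, one per unrolled step;
# every batch element is pushed through all max_sequence_length steps.
```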
In short, dynamic_rnn is usually what you want for variable-length sequential data. It has a sequence_length parameter, and it is your friend.
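For example, here is a minimal sketch of dynamic_rnn with sequence_length, again assuming TensorFlow 1.x; the toy shapes and the per-example lengths are made up for illustration:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, where tf.nn.dynamic_rnn lives

batch_size, max_sequence_length, num_features, num_units = 2, 5, 3, 4

# Padded batch: (batch_size, max_sequence_length, num_features)
inputs = tf.placeholder(tf.float32, [None, max_sequence_length, num_features])
# True (unpadded) length of each sequence in the batch
seq_len = tf.placeholder(tf.int32, [None])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
# dynamic_rnn stops stepping through a sequence once it passes its
# sequence_length: outputs beyond that point are zeros, and the returned
# state is the one from the sequence's last real time step.
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   sequence_length=seq_len,
                                   dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, feed_dict={
        inputs: np.random.randn(batch_size, max_sequence_length,
                                num_features).astype(np.float32),
        seq_len: [5, 3],  # second sequence is padding after step 3
    })
    print(out.shape)   # (2, 5, 4)
    print(out[1, 3:])  # all zeros: steps past that sequence's real length
```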
While AlexDelPiero's answer was what I was googling for, the original question was different. You can take a look at this detailed description of LSTMs and the intuition behind them. The LSTM is the most common example of an RNN.
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
The short answer is: the state is an internal detail that is passed from one time step to the next. The output is a tensor containing the output at every time step. You usually pass all of the outputs to the next RNN layer, or keep only the last output if it is the last RNN layer. To get the last output you can use output[:, -1, :].
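As a concrete sketch (again assuming TensorFlow 1.x, an arbitrary BasicLSTMCell, and made-up shapes): for an LSTM the returned state is a (c, h) tuple, and state.h matches output[:, -1, :] when all sequences run the full length:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

batch_size, timesteps, num_features, num_units = 2, 5, 3, 4
inputs = tf.placeholder(tf.float32, [None, timesteps, num_features])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# outputs: (batch_size, timesteps, num_units) -- the output at every step,
#   which is what you would feed to a stacked (next) RNN layer.
# state: the cell's internal state after the final step; for an LSTM it is
#   an LSTMStateTuple (c, h), and h is the output of the last step.
last_output = outputs[:, -1, :]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    h, last = sess.run(
        [state.h, last_output],
        feed_dict={inputs: np.random.randn(batch_size, timesteps,
                                           num_features).astype(np.float32)})
    print(np.allclose(h, last))  # True: h is the last step's output
```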