What's the difference between tensorflow dynamic_rnn and rnn?

Submitted by 被刻印的时光 ゝ on 2019-12-29 11:02:32

Question


There are several classes in tf.nn that relate to RNNs. In the examples I find on the web, tf.nn.dynamic_rnn and tf.nn.rnn seem to be used interchangeably, or at least I cannot figure out why one is used in place of the other. What is the difference?


Answer 1:


From RNNs in Tensorflow, a Practical Guide and Undocumented Features by Denny Britz, published on August 21, 2016:

tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps, you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you are unable to pass in sequences longer than you originally specified (> 200 time steps).

tf.nn.dynamic_rnn solves this. It uses a tf.while_loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.
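The static-vs-dynamic distinction can be sketched outside TensorFlow. Below is a minimal NumPy illustration (the function names `static_rnn`, `dynamic_rnn`, and `rnn_step` are my own, not the TF API): the statically unrolled version bakes a fixed step count into the "graph", so longer sequences fail, while the dynamic version simply loops to whatever length arrives.

```python
import numpy as np

def rnn_step(state, x, W_h, W_x):
    """One vanilla RNN cell step: tanh(h @ W_h + x @ W_x)."""
    return np.tanh(state @ W_h + x @ W_x)

def static_rnn(inputs, W_h, W_x, num_steps=200):
    """Mimics tf.nn.rnn: the loop bound is fixed when the graph is
    built, so sequences longer than num_steps cannot be processed."""
    if len(inputs) > num_steps:
        raise ValueError("sequence longer than the unrolled graph")
    state = np.zeros(W_h.shape[0])
    for t in range(len(inputs)):  # unrolled up to num_steps
        state = rnn_step(state, inputs[t], W_h, W_x)
    return state

def dynamic_rnn(inputs, W_h, W_x):
    """Mimics tf.nn.dynamic_rnn: loops to the actual input length."""
    state = np.zeros(W_h.shape[0])
    for x in inputs:
        state = rnn_step(state, x, W_h, W_x)
    return state

rng = np.random.default_rng(0)
W_h = rng.standard_normal((4, 4)) * 0.1   # hidden-to-hidden weights
W_x = rng.standard_normal((3, 4)) * 0.1   # input-to-hidden weights
seq = rng.standard_normal((300, 3))       # 300 time steps

dynamic_rnn(seq, W_h, W_x)      # works for any sequence length
# static_rnn(seq, W_h, W_x)     # would raise: 300 > 200 unrolled steps
```

In real TF 1.x code the consequence is the same: tf.nn.rnn forces you to pad every batch to the unrolled length, while tf.nn.dynamic_rnn handles variable-length batches directly.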




Answer 2:


They are nearly the same, but there is a slight difference in the structure of the input and output. From the documentation:

tf.nn.dynamic_rnn

This function is functionally identical to the function rnn above, but performs fully dynamic unrolling of inputs.

Unlike rnn, the input inputs is not a Python list of Tensors, one for each frame. Instead, inputs may be a single Tensor where the maximum time is either the first or second dimension (see the parameter time_major). Alternatively, it may be a (possibly nested) tuple of Tensors, each of them having matching batch and time dimensions. The corresponding output is either a single Tensor having the same number of time steps and batch size, or a (possibly nested) tuple of such tensors, matching the nested structure of cell.output_size.
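The input-structure difference the docs describe can be shown with plain NumPy shapes (a sketch of the layouts, not TF code): rnn expects a Python list with one [batch, input] array per time step, while dynamic_rnn takes a single [batch, time, input] array (or [time, batch, input] when time_major=True).

```python
import numpy as np

batch, time_steps, features = 32, 10, 8

# dynamic_rnn-style input: one tensor, batch-major (time_major=False)
dynamic_input = np.zeros((batch, time_steps, features))

# rnn-style input: a Python list of per-time-step tensors
static_input = [dynamic_input[:, t, :] for t in range(time_steps)]

# time_major=True layout puts the time axis first
time_major_input = dynamic_input.transpose(1, 0, 2)

print(len(static_input), static_input[0].shape)   # 10 (32, 8)
print(time_major_input.shape)                     # (10, 32, 8)
```

The output side mirrors this: dynamic_rnn returns one tensor with the same batch and time dimensions as its input, whereas rnn returns a list of per-step outputs.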

For more details, explore the source.



Source: https://stackoverflow.com/questions/39734146/whats-the-difference-between-tensorflow-dynamic-rnn-and-rnn
