How to implement stacked RNNs in TensorFlow?

Submitted by 孤者浪人 on 2021-02-08 08:21:28

Question


I want to implement an RNN using TensorFlow 1.13 on a GPU. Following the official recommendation, I wrote the following code to get a stack of RNN cells:

import tensorflow.keras as tk

lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]
cells = tk.layers.StackedRNNCells(lstm)

However, I receive an error message:

ValueError: ('All cells must have a state_size attribute. received cells:', [<tensorflow.python.keras.layers.cudnn_recurrent.CuDNNLSTM object at 0x13aa1c940>])

How can I correct it?


Answer 1:


This may be a TensorFlow bug, and I would suggest filing an issue on GitHub. However, if you want to bypass the bug, you can use:

import tensorflow as tf
import tensorflow.keras as tk

lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]
stacked_cells = tf.nn.rnn_cell.MultiRNNCell(lstm)

This will work, but it will emit a deprecation warning that you can suppress.
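
For example, one way to silence the warning in TF 1.x is to lower the logging verbosity (note this is a blunt instrument: it hides all TF warnings, not just this one):

import tensorflow as tf

# Suppress WARNING-level messages (including deprecation warnings)
tf.logging.set_verbosity(tf.logging.ERROR)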




Answer 2:


Thanks to @qlzh727. I quote the response here:

StackedRNNCells only works with cells, not layers. The difference between a cell and a layer in an RNN is that a cell only processes one time step within the whole sequence, whereas a layer processes the whole sequence. You can treat an RNN layer as:

for t in whole_time_steps:
    output_t, state_t = cell(input_t, state_{t-1})
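
For reference, a minimal sketch of the cell-based route this describes: LSTMCell objects expose state_size, so StackedRNNCells accepts them, and wrapping the stack in an RNN layer loops it over the time dimension (note the generic LSTMCell does not use the cuDNN kernel):

import tensorflow.keras as tk

# Cells process one time step; the RNN wrapper handles the loop over time.
cells = [tk.layers.LSTMCell(128) for _ in range(2)]
rnn = tk.layers.RNN(tk.layers.StackedRNNCells(cells))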

If you want to stack two LSTM layers together with cuDNN in TF 1.x, you can do:

l1 = tk.layers.CuDNNLSTM(128, return_sequences=True)
l2 = tk.layers.CuDNNLSTM(128)
l1_output = l1(inputs)
l2_output = l2(l1_output)

In TF 2.x, the cuDNN and normal implementations are unified, so you can just change the example above to tf.keras.layers.LSTM(128, return_sequences=True), which will use the cuDNN implementation when it is available.
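
Putting that together, a minimal TF 2.x sketch of two stacked LSTM layers (the input shape here is illustrative): with default arguments, tf.keras.layers.LSTM dispatches to the cuDNN kernel automatically when a GPU is present.

import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 64))  # (time, features), batch implicit
x = tf.keras.layers.LSTM(128, return_sequences=True)(inputs)  # full sequence out
outputs = tf.keras.layers.LSTM(128)(x)  # last time step only
model = tf.keras.Model(inputs, outputs)
model.summary()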



Source: https://stackoverflow.com/questions/55324307/how-to-implement-a-stacked-rnns-in-tensorflow
