This is the code:

X = tf.placeholder(tf.float32, [batch_size, seq_len_1, 1], name='X')
labels = tf.placeholder(tf.float32, [None, alpha_size], name='label')
I encountered a similar problem when I upgraded to v1.2 (tensorflow-gpu). Instead of using [rnn_cell]*3, I created 3 separate rnn_cells (stacked_rnn) in a loop, so that they don't share variables, and fed MultiRNNCell with stacked_rnn; the problem goes away. I'm not sure it is the right way to do it.
stacked_rnn = []
for iiLyr in range(3):
    stacked_rnn.append(tf.nn.rnn_cell.LSTMCell(num_units=512, state_is_tuple=True))
MultiLyr_cell = tf.nn.rnn_cell.MultiRNNCell(cells=stacked_rnn, state_is_tuple=True)
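As a TensorFlow-free sketch of why the loop helps: [cell]*3 repeats one Python object three times, while building the list in a loop yields three distinct objects. (DummyCell below is just a hypothetical stand-in for an LSTM cell; real cells hold trainable variables, which is why sharing the object means sharing weights.)

```python
class DummyCell:
    """Stand-in for an RNN cell; a real cell owns trainable variables."""
    pass

# [cell]*3 copies the *reference*, not the object: all three entries
# point at the same cell, so its variables would be shared.
shared = [DummyCell()] * 3
print(shared[0] is shared[1] is shared[2])  # True

# Building the list in a loop creates three independent cells.
distinct = [DummyCell() for _ in range(3)]
print(distinct[0] is distinct[1])  # False
```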