Tensorflow dynamic RNN (LSTM): how to format input?

Submitted on 2019-12-02 17:21:14

dynamic: because not all features are present each day

You've got the wrong concept of dynamic here. Dynamic RNN in TensorFlow means the graph is unrolled dynamically at execution time, so sequence lengths can vary, but each time step always has the same feature size (using 0 for a missing feature should work fine).

Anyway, what you've got here are sequences of varying length (day1 ... day?) of feature vectors (feature1 ... featureN). First, you need an LSTM cell

cell = tf.contrib.rnn.LSTMCell(size)

so you can then create a dynamically unrolled rnn graph using tf.nn.dynamic_rnn. From the docs:

inputs: The RNN inputs.

If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements.

where max_time refers to the input sequence length. Because we're using dynamic_rnn, the sequence length doesn't need to be fixed at graph-construction time, so your input placeholder could be:

x = tf.placeholder(tf.float32, shape=(batch_size, None, N))

Which is then fed into the rnn like

outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

Meaning your input data should have the shape (batch_size, seq_length, N). If examples in one batch have varying lengths, you should pad them with 0-vectors to the max length and pass the appropriate sequence_length parameter to dynamic_rnn.
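To make the padding step concrete, here's a minimal NumPy sketch (the batch contents, lengths, and N=3 feature size are made up for illustration); the resulting seq_lengths array is what you'd feed as the sequence_length argument of dynamic_rnn:

```python
import numpy as np

# Hypothetical batch: three examples with 4, 2, and 3 "days"
# of N=3 features each (values made up for illustration).
batch = [
    np.ones((4, 3)),
    np.ones((2, 3)),
    np.ones((3, 3)),
]

# True length of each example, to be passed as sequence_length
seq_lengths = np.array([len(seq) for seq in batch])  # [4 2 3]
max_time = seq_lengths.max()

# Pad every example with 0-vectors up to max_time
padded = np.zeros((len(batch), max_time, 3), dtype=np.float32)
for i, seq in enumerate(batch):
    padded[i, :len(seq), :] = seq

print(padded.shape)  # (3, 4, 3) -> (batch_size, max_time, N)
```

With sequence_length set, dynamic_rnn stops updating the state past each example's true length, so the 0-padding doesn't pollute the final states.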

Obviously I've skipped a lot of details, so to fully understand RNNs you should probably read one of the many excellent RNN tutorials, like this one for example.
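Putting the pieces above together, a minimal end-to-end sketch might look like this (hyperparameter values are illustrative; note that on TensorFlow 2.x the 1.x graph-mode API is reached via tf.compat.v1, where LSTMCell lives under tf.nn.rnn_cell instead of tf.contrib.rnn):

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style graph mode
tf.disable_eager_execution()

batch_size, N, size = 4, 3, 16  # illustrative values, not from the question

cell = tf.nn.rnn_cell.LSTMCell(size)

# max_time left as None so each batch can have a different length
x = tf.placeholder(tf.float32, shape=(batch_size, None, N))
seq_len = tf.placeholder(tf.int32, shape=(batch_size,))

# dtype is required when no initial_state is supplied
outputs, state = tf.nn.dynamic_rnn(cell, x, sequence_length=seq_len,
                                   dtype=tf.float32)
```

outputs will have shape (batch_size, max_time, size), with max_time resolved only at session run time.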
