gated-recurrent-unit

How to apply a different dense layer for each timestep in Keras

天涯浪子 submitted on 2021-01-29 02:22:33
Question: I know that applying TimeDistributed(Dense) applies the same dense layer over all the timesteps, but I wanted to know how to apply a different dense layer for each timestep. The number of timesteps is not variable. P.S.: I have seen the following link and can't seem to find an answer.

Answer 1: You can use a LocallyConnected layer. The LocallyConnected layer works as a Dense layer connected to each group of kernel_size time_steps (1 in this case).

from tensorflow import keras
from tensorflow.keras.layers
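Since the answer's code is cut off above, here is a minimal sketch of the LocallyConnected1D approach it describes; the sizes (timesteps, features, units) are illustrative assumptions, not values from the post:

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Input, LocallyConnected1D

timesteps, features, units = 7, 5, 3   # illustrative sizes, not from the post

inputs = Input(shape=(timesteps, features))
# kernel_size=1 gives each timestep its own, unshared weight matrix,
# unlike TimeDistributed(Dense), which shares one set of weights
outputs = LocallyConnected1D(filters=units, kernel_size=1)(inputs)
model = keras.Model(inputs, outputs)

x = np.random.rand(2, timesteps, features).astype("float32")
print(model(x).shape)   # (2, 7, 3): each of the 7 steps used its own weights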

Tensorflow Serving - Stateful LSTM

不问归期 submitted on 2020-01-21 06:36:54
Question: Is there a canonical way to maintain a stateful LSTM, etc. with Tensorflow Serving? Using the Tensorflow API directly this is straightforward, but I'm not certain how best to persist LSTM state between calls after exporting the model to Serving. Are there any examples out there that accomplish this? The samples within the repo are very basic.

Answer 1: From Martin Wicke on the TF mailing list: "There's no good integration for stateful models in the model server yet. As you
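The quoted reply is cut off; as a rough sketch of the workaround commonly suggested for this situation (keeping state on the client side), one can export a model whose recurrent state is an explicit input and output. The sizes and tensor names here are my own assumptions, not code from the thread:

from tensorflow import keras
from tensorflow.keras.layers import Input, LSTM, Dense

features, units = 8, 32   # illustrative sizes, not from the question

x_in = Input(shape=(None, features), name="x")
h_in = Input(shape=(units,), name="h_in")   # previous hidden state
c_in = Input(shape=(units,), name="c_in")   # previous cell state

seq, h_out, c_out = LSTM(units, return_sequences=True, return_state=True)(
    x_in, initial_state=[h_in, c_in])
y = Dense(1, name="y")(seq)

model = keras.Model([x_in, h_in, c_in], [y, h_out, c_out])
# The Serving client stores h_out/c_out from one request and sends them back
# as h_in/c_in with the next request, so the "stateful" part lives client-side.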

Mixing feed forward layers and recurrent layers in Tensorflow?

吃可爱长大的小学妹 submitted on 2020-01-02 02:19:30
Question: Has anyone been able to mix feedforward layers and recurrent layers in Tensorflow? For example: input -> conv -> GRU -> linear -> output. I can imagine one could define their own cell with feedforward layers and no state, which could then be stacked using the MultiRNNCell function, something like: cell = tf.nn.rnn_cell.MultiRNNCell([conv_cell, GRU_cell, linear_cell]). This would make life a whole lot easier...

Answer 1: Can't you just do the following:

rnnouts, _ = rnn(grucell, inputs)
linearout = [tf.matmul(rnnout,
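The answer's code is truncated above; a minimal sketch of the idea it starts to show, written against the TF1-era API the question uses (the tensor sizes and the variable names W and b are my own illustrative assumptions):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

batch, steps, feat, hidden, n_out = 4, 10, 8, 32, 1   # illustrative sizes

x = tf.placeholder(tf.float32, [batch, steps, feat])
# feedforward part: a 1-D convolution over time stands in for "conv"
conv = tf.layers.conv1d(x, filters=16, kernel_size=3, padding="same")

# recurrent part: unstack time so static_rnn gets a list of [batch, feat] tensors
inputs = tf.unstack(conv, axis=1)
grucell = tf.nn.rnn_cell.GRUCell(hidden)
rnnouts, _ = tf.nn.static_rnn(grucell, inputs, dtype=tf.float32)

# linear part: the per-timestep matmul the answer begins to sketch
W = tf.get_variable("W", [hidden, n_out])
b = tf.get_variable("b", [n_out])
linearout = [tf.matmul(rnnout, W) + b for rnnout in rnnouts]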

stock prediction : GRU model predicting same given values instead of future stock price

穿精又带淫゛_ submitted on 2019-12-31 03:06:10
Question: I was just testing this model from a Kaggle post. The model is supposed to predict one day ahead from a given window of past stock prices. After tweaking a few parameters I got a surprisingly good result, as you can see: the mean squared error was 5.193. So overall it looks good at predicting future stocks, right? Well, it turned out to be horrible when I looked closely at the results. As you can see, the model is simply predicting the last value of the given window, which is the current last stock price. So I adjusted
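A quick way to expose this failure mode is to compare the model against a naive persistence baseline ("tomorrow equals today"); if the errors are similar, the model is only echoing the last value. A small illustration on synthetic data (not the Kaggle data):

import numpy as np

rng = np.random.default_rng(0)
y_true = np.cumsum(rng.normal(size=200)) + 100   # synthetic "price" series
y_pred = np.empty_like(y_true)
y_pred[1:] = y_true[:-1]                         # a model that only echoes yesterday
y_pred[0] = y_true[0]

model_mse = np.mean((y_pred[1:] - y_true[1:]) ** 2)
naive_mse = np.mean((y_true[:-1] - y_true[1:]) ** 2)
print(model_mse, naive_mse)   # identical here: such a model adds no information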

How can I complete following GRU based RNN written in tensorflow?

浪尽此生 submitted on 2019-12-25 07:15:53
Question: So far I have written the following code:

import pickle
import numpy as np
import pandas as pd
import tensorflow as tf

# load pickled objects (x and y)
x_input, y_actual = pickle.load(open('sample_input.pickle', 'rb'))
x_input = np.reshape(x_input, (50, 1))
y_actual = np.reshape(y_actual, (50, 1))

# parameters
batch_size = 50
hidden_size = 100

# create network graph
input_data = tf.placeholder(tf.float32, [batch_size, 1])
output_data = tf.placeholder(tf.float32, [batch_size, 1])
cell = tf.nn.rnn
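The snippet cuts off at cell = tf.nn.rnn. A minimal sketch of one way to finish the graph in the same TF1-style API, assuming (my assumption) the 50 rows are meant to be a single sequence of length 50:

cell = tf.nn.rnn_cell.GRUCell(hidden_size)

# treat the 50 placeholder rows as one sequence: [batch=1, time=50, features=1]
rnn_input = tf.reshape(input_data, [1, batch_size, 1])
rnn_output, _ = tf.nn.dynamic_rnn(cell, rnn_input, dtype=tf.float32)
rnn_output = tf.reshape(rnn_output, [batch_size, hidden_size])

# project each hidden state back to a single output value
W_out = tf.get_variable("W_out", [hidden_size, 1])
b_out = tf.get_variable("b_out", [1])
y_pred = tf.matmul(rnn_output, W_out) + b_out

loss = tf.reduce_mean(tf.square(y_pred - output_data))
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)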

Keras GRUCell missing 1 required positional argument: 'states'

女生的网名这么多〃 submitted on 2019-12-11 10:48:56
Question: I am trying to build a 3-layer RNN with Keras. Part of the code is here:

model = Sequential()
model.add(Embedding(input_dim=91, output_dim=128, input_length=max_length))
model.add(GRUCell(units=self.neurons, dropout=self.dropval, bias_initializer=bias))
model.add(GRUCell(units=self.neurons, dropout=self.dropval, bias_initializer=bias))
model.add(GRUCell(units=self.neurons, dropout=self.dropval, bias_initializer=bias))
model.add(TimeDistributed(Dense(target.shape[2])))

Then I
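The error in the title comes from GRUCell being a single-step cell that expects a states argument when called; in a Sequential model the usual fix is the GRU layer, which runs the cell over the whole sequence. A minimal sketch of that variant, with illustrative stand-ins for max_length, self.neurons and target.shape[2]:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, TimeDistributed, Dense

max_length, n_units, n_out = 20, 64, 10   # stand-ins for max_length,
                                          # self.neurons and target.shape[2]

model = Sequential()
model.add(Embedding(input_dim=91, output_dim=128, input_length=max_length))
# GRU runs the cell over every timestep; return_sequences=True keeps the
# per-timestep outputs so the next recurrent layer (and TimeDistributed) work
model.add(GRU(n_units, dropout=0.2, return_sequences=True))
model.add(GRU(n_units, dropout=0.2, return_sequences=True))
model.add(GRU(n_units, dropout=0.2, return_sequences=True))
model.add(TimeDistributed(Dense(n_out)))
model.summary()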

Explanation of GRU cell in Tensorflow?

怎甘沉沦 submitted on 2019-12-10 14:35:58
Question: The following code from Tensorflow's GRUCell unit shows the typical operations used to get an updated hidden state when the previous hidden state is provided along with the current input in the sequence.

def __call__(self, inputs, state, scope=None):
    """Gated recurrent unit (GRU) with nunits cells."""
    with vs.variable_scope(scope or type(self).__name__):  # "GRUCell"
        with vs.variable_scope("Gates"):  # Reset gate and update gate.
            # We start with bias of 1.0 to not reset and not update.
            r, u = array_ops.split(1, 2,
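For reference, here is a small NumPy sketch of the standard GRU equations that this cell implements (the weight names W_*, U_*, b_* are mine, not TensorFlow's):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, W_r, U_r, b_r, W_u, U_u, b_u, W_c, U_c, b_c):
    r = sigmoid(x @ W_r + h_prev @ U_r + b_r)          # reset gate
    u = sigmoid(x @ W_u + h_prev @ U_u + b_u)          # update gate
    c = np.tanh(x @ W_c + (r * h_prev) @ U_c + b_c)    # candidate state
    return u * h_prev + (1.0 - u) * c                  # new hidden state

Initialising the gate biases at 1.0 pushes r and u toward 1 at the start of training, i.e. "do not reset, do not update", which matches the comment in the source.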

Tensorflow RNN input size

夙愿已清 submitted on 2019-12-10 10:57:24
Question: I am trying to use tensorflow to create a recurrent neural network. My code is something like this:

import tensorflow as tf

rnn_cell = tf.nn.rnn_cell.GRUCell(3)
inputs = [tf.constant([[0, 1]], dtype=tf.float32),
          tf.constant([[2, 3]], dtype=tf.float32)]
outputs, end = tf.nn.rnn(rnn_cell, inputs, dtype=tf.float32)

Now, everything runs just fine. However, I am rather confused by what is actually going on. The output dimensions are always the batch size x the size of the rnn cell's hidden state -
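That behaviour is expected: the cell projects whatever input width it receives into its num_units-sized hidden state, so every output is batch_size x num_units regardless of the input feature count. A small illustration of the same point in the current Keras API (my rewrite, not the original TF1 code):

import tensorflow as tf

cell = tf.keras.layers.GRUCell(3)            # hidden state size 3
x = tf.constant([[0.0, 1.0]])                # batch of 1, 2 input features
state = [tf.zeros((1, 3))]                   # initial hidden state

out, new_state = cell(x, state)
print(out.shape)                             # (1, 3): batch_size x num_units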