recurrent-neural-network

Predictions from a model become very small. The loss is either 0 or a positive constant

对着背影说爱祢 submitted on 2020-05-17 06:04:52
Question: I am implementing the following architecture in TensorFlow: a Dual Encoder LSTM (https://i.stack.imgur.com/ZmcsX.png). During the first few iterations the loss remains at 0.6915, but after that, no matter how many iterations I run, the loss keeps varying between -0.0 and a positive constant that depends on the hyperparameters. This is happening because the predictions of my model become very small (close to zero) or very high (close to 1), so the model cannot be trained.
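
Saturated predictions like this usually point to the loss being computed on probabilities that have already collapsed to 0 or 1. A minimal sketch of the standard remedy, assuming the dual encoder produces a raw score per pair (the function and variable names here are illustrative, not from the original post):

    import tensorflow as tf

    def stable_loss(logits, labels):
        # Compute cross-entropy from raw logits rather than from sigmoid
        # outputs: this form stays numerically stable even when
        # sigmoid(logits) would round to exactly 0 or 1.
        per_example = tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.cast(labels, tf.float32), logits=logits)
        return tf.reduce_mean(per_example)

Clipping probabilities with tf.clip_by_value before taking the log is a cruder alternative; working in logit space avoids the saturation entirely.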

TimeDistributed of a KerasLayer in Tensorflow 2.0

醉酒当歌 submitted on 2020-05-15 19:22:05
Question: I'm trying to build a CNN + RNN using a pre-trained model from tensorflow-hub:

    base_model = hub.KerasLayer(
        'https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4',
        input_shape=(244, 244, 3))
    base_model.trainable = False
    model = Sequential()
    model.add(TimeDistributed(base_model, input_shape=(15, 244, 244, 3)))
    model.add(LSTM(512))
    model.add(Dense(256, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    adam = Adam(learning_rate=learning_rate)
    model.compile(loss='categorical_crossentropy', ...)
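
A fix commonly suggested for this setup, when TimeDistributed cannot infer the hub layer's output shape, is to wrap the KerasLayer in an ordinary Keras Model first. A sketch under that assumption, with a placeholder learning rate:

    import tensorflow_hub as hub
    from tensorflow.keras import Input, Model, Sequential
    from tensorflow.keras.layers import Dense, LSTM, TimeDistributed
    from tensorflow.keras.optimizers import Adam

    base_model = hub.KerasLayer(
        'https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4',
        trainable=False)

    # Wrapping the hub layer in a Model gives TimeDistributed a layer
    # whose per-frame output shape it can compute.
    frame = Input(shape=(244, 244, 3))
    frame_encoder = Model(frame, base_model(frame))

    model = Sequential([
        TimeDistributed(frame_encoder, input_shape=(15, 244, 244, 3)),
        LSTM(512),
        Dense(256, activation='relu'),
        Dense(3, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(learning_rate=1e-4),  # placeholder value
                  metrics=['accuracy'])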

What is a dynamic RNN in TensorFlow?

女生的网名这么多〃 submitted on 2020-05-10 07:38:47
Question: I am confused about what a dynamic RNN (i.e. dynamic_rnn) is. It returns an output and a state in TensorFlow. What are this state and output? What is dynamic about a dynamic RNN in TensorFlow? Answer 1: Dynamic RNNs allow for variable sequence lengths. You might have an input of shape (batch_size, max_sequence_length), but dynamic_rnn will run the RNN for the correct number of time steps on those sequences that are shorter than max_sequence_length. In contrast, there are static RNNs, which unroll the graph for a fixed, predetermined number of time steps.
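
A minimal TF1-style sketch of the sequence_length mechanism (the names and sizes here are illustrative):

    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    inputs = tf.placeholder(tf.float32, [None, 4, 8])  # (batch, max_time, features)
    seq_len = tf.placeholder(tf.int32, [None])         # true length per sequence

    cell = tf.nn.rnn_cell.BasicLSTMCell(16)
    # outputs: hidden state at every step, zero-padded past each true length;
    # state: the final state taken at each sequence's own last valid step.
    outputs, state = tf.nn.dynamic_rnn(
        cell, inputs, sequence_length=seq_len, dtype=tf.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(outputs, feed_dict={
            inputs: np.random.rand(2, 4, 8),
            seq_len: [4, 2],   # second sequence stops after 2 of 4 steps
        })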

Input Shape Keras RNN

﹥>﹥吖頭↗ submitted on 2020-04-30 06:26:54
Question: I'm working with time-series data that has a shape of 2000x1001, where 2000 is the number of cases and the first 1000 columns of each case are the time-domain data: displacements in the X direction during a 1 s period, so the timestep is 0.001 s. The last column is the speed, the output value I need to predict from the displacements during that 1 s. How should the input data be shaped for an RNN in Keras? I've gone through some tutorials, but I'm still confused about the input shape for an RNN. Thanks in advance.
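
A sketch of one common way to shape this, assuming the first 1000 columns are the timesteps and the last column is the speed target (layer sizes are placeholders, not from the original post):

    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    data = np.random.rand(2000, 1001)            # stand-in for the real array
    X = data[:, :1000].reshape(2000, 1000, 1)    # (samples, timesteps, features)
    y = data[:, -1]                              # speed, one value per case

    model = Sequential([
        LSTM(32, input_shape=(1000, 1)),  # input_shape = (timesteps, features)
        Dense(1),                         # single regression output
    ])
    model.compile(loss='mse', optimizer='adam')
    model.fit(X, y, epochs=2, batch_size=64)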

LSTM having a systematic offset between predictions and ground truth

∥☆過路亽.° submitted on 2020-04-08 18:28:23
Question: I currently think I'm experiencing a systematic offset in an LSTM model between the predictions and the ground-truth values. What's the best approach to continue from here? The model architecture, along with the predictions and ground-truth values, is shown below. This is a regression problem where the historical data of the target plus 5 other correlated features X are used to predict the target y. Currently the input sequence n_input is of length 256, where the output sequence n
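
One quick diagnostic worth running on such plots (not from the original post, just a common check): compare the error against a one-step-shifted alignment to see whether the network has merely learned to echo the previous target value, which produces exactly this kind of offset.

    import numpy as np

    def offset_diagnostics(y_true, y_pred):
        """Separate a constant bias from a one-step lag in predictions."""
        bias = np.mean(y_pred - y_true)            # constant vertical offset
        mae = np.mean(np.abs(y_pred - y_true))     # error as-is
        # If y_pred[t] is roughly y_true[t-1], aligning the series one step
        # apart will shrink the error noticeably.
        mae_lagged = np.mean(np.abs(y_pred[1:] - y_true[:-1]))
        return bias, mae, mae_lagged

If mae_lagged is much smaller than mae, the model is acting as a persistence forecaster, and differencing the target is a more promising next step than tuning the architecture.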

ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

本小妞迷上赌 submitted on 2020-04-08 00:52:30
Question: I am attempting multi-class classification, and here are the details of my training input and output: train_input.shape = (1, 95000, 360) (an input array of length 95000, each element being an array of length 360) and train_output.shape = (1, 95000, 22) (there are 22 classes).

    model = Sequential()
    model.add(LSTM(22, input_shape=(1, 95000, 360)))
    model.add(Dense(22, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    print(model.summary())
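
The error arises because input_shape excludes the batch dimension, so (1, 95000, 360) makes Keras expect 4-D batches while LSTM needs 3-D input. A sketch of one plausible reframing (an assumption about the intended setup, treating each of the 95000 rows as its own sample with a single timestep):

    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    # Drop the leading batch-of-1 axis: 95000 samples, 1 timestep, 360 features.
    train_input = np.random.rand(95000, 1, 360)   # stand-in data
    train_output = np.random.rand(95000, 22)      # stand-in one-hot labels

    model = Sequential([
        LSTM(22, input_shape=(1, 360)),   # (timesteps, features) only
        Dense(22, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(train_input, train_output, epochs=1, batch_size=256)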

Should I normalize my features before throwing them into RNN?

烂漫一生 submitted on 2020-03-17 12:06:30
Question: I am playing with some demos of recurrent neural networks. I noticed that the scale of my data differs a lot from column to column, so I am considering doing some preprocessing before I throw data batches into my RNN. The close column is the target I want to predict in the future.

         open   high    low     volume  price_change  p_change     ma5    ma10
    0   20.64  20.64  20.37  163623.62         -0.08     -0.39  20.772  20.721
    1   20.92  20.92  20.60  218505.95         -0.30     -1.43  20.780  20.718
    2   21.00  21.15  20.72  269101.41         -0.08     -0.38  20.812  20.755
    3
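
Yes: features on wildly different scales (volume in the hundreds of thousands next to prices around 20) make gradient-based RNN training much harder. A standard sketch, assuming scikit-learn is available (not mentioned in the original post): fit the scaler on the training split only, then reuse its statistics.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    features = np.random.rand(1000, 8)     # stand-in for the dataframe values

    split = int(0.8 * len(features))
    scaler = MinMaxScaler()                # StandardScaler is a common alternative
    train = scaler.fit_transform(features[:split])  # learn min/max on train only
    test = scaler.transform(features[split:])       # apply the same statistics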

Define custom LSTM with multiple inputs

倖福魔咒の submitted on 2020-03-04 04:38:08
Question: Following the tutorial on writing custom layers, I am trying to implement a custom LSTM layer with multiple input tensors. I am providing two vectors, input_1 and input_2, as a list [input_1, input_2], as suggested in the tutorial. The single-input code works, but when I change the code for multiple inputs it throws this error:

    self.kernel = self.add_weight(shape=(input_shape[0][-1], self.units),
    TypeError: 'NoneType' object is not subscriptable

What change do I have to make to get rid of the error?
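
When a layer receives a list of tensors, build() is handed a list of shapes, so each element must be unpacked before indexing its last dimension. A minimal sketch of that pattern (a simplified two-input layer, not the poster's LSTM):

    import tensorflow as tf

    class TwoInputDense(tf.keras.layers.Layer):
        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units

        def build(self, input_shape):
            # input_shape is a list here: one TensorShape per input tensor.
            shape_1, shape_2 = input_shape
            self.kernel_1 = self.add_weight(
                'kernel_1', shape=(int(shape_1[-1]), self.units))
            self.kernel_2 = self.add_weight(
                'kernel_2', shape=(int(shape_2[-1]), self.units))

        def call(self, inputs):
            x1, x2 = inputs   # the layer must be called as layer([x1, x2])
            return tf.matmul(x1, self.kernel_1) + tf.matmul(x2, self.kernel_2)

    layer = TwoInputDense(4)
    out = layer([tf.ones((2, 3)), tf.ones((2, 5))])   # result has shape (2, 4)

The 'NoneType' error typically means the layer was called with a single tensor (or the inputs were passed as separate arguments), so input_shape[0] picks out the batch dimension, which is None, instead of the first input's shape.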