recurrent-neural-network

Keras, cascade multiple RNN models for N-dimensional output

Submitted by 烈酒焚心 on 2020-08-22 15:01:45
Question: I'm having some difficulty chaining together two models in an unusual way. I am trying to replicate the following flowchart: For clarity, at each timestep of Model[0] I am attempting to generate an entire time series from IR[i] (Intermediate Representation) as a repeated input using Model[1]. The purpose of this scheme is that it allows the generation of a ragged 2-D time series from a 1-D input (while also allowing the second model to be omitted when the output for that timestep is not
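
The cascade described above can be sketched in plain Python. This is a hypothetical stand-in (the functions `model_0`, `model_1`, and `cascade` are illustrative names, not trained Keras models): an outer loop emits one intermediate representation per timestep, and an inner loop unrolls each IR into its own, possibly different-length, sequence.

```python
# Sketch of the cascade: Model[0] emits one IR per timestep; Model[1]
# turns each IR (as a repeated input) into its own time series, giving
# a ragged 2-D output from a 1-D input. Stand-in functions, not Keras.

def model_0(x):
    # Stand-in for Model[0]: one IR value per input timestep.
    return [xi * 2.0 for xi in x]

def model_1(ir, length):
    # Stand-in for Model[1]: unrolls a sequence from the repeated IR.
    return [ir + t for t in range(length)]

def cascade(x, lengths):
    irs = model_0(x)
    # Each timestep may request a different inner length -> ragged output.
    return [model_1(ir, n) for ir, n in zip(irs, lengths)]

out = cascade([1.0, 2.0, 3.0], lengths=[2, 0, 3])
# A length of 0 models "second model omitted" for that timestep.
```

A length of 0 at a timestep produces an empty inner sequence, which matches the "second model omitted" case in the flowchart.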

Trying to understand Pytorch's implementation of LSTM

Submitted by 不羁的心 on 2020-07-21 04:59:37
Question: I have a dataset containing 1000 examples where each example has 5 features (a, b, c, d, e). I want to feed 7 examples to an LSTM so it predicts feature (a) of the 8th day. Reading PyTorch's documentation of nn.LSTM() I came up with the following: input_size = 5 hidden_size = 10 num_layers = 1 output_size = 1 lstm = nn.LSTM(input_size, hidden_size, num_layers) fc = nn.Linear(hidden_size, output_size) out, hidden = lstm(X) # Where X's shape is ([7,1,5]) output = fc(out[-1]) output # output's
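
A shape-only walk-through of the snippet above may help (pure Python, no torch required; the variable names mirror the question's). With `batch_first` left at its default of `False`, `nn.LSTM` expects input of shape `(seq_len, batch, input_size)` and returns one hidden state per timestep.

```python
# Trace of the tensor shapes in the question's snippet:
# X: (seq_len=7, batch=1, input_size=5) -> lstm out: (7, 1, 10)
# out[-1] drops the time axis -> (1, 10) -> fc -> (1, 1).

seq_len, batch, input_size = 7, 1, 5      # 7 days, 5 features each
hidden_size, output_size = 10, 1

x_shape = (seq_len, batch, input_size)            # X: ([7, 1, 5])
out_shape = (seq_len, batch, hidden_size)         # out: ([7, 1, 10])
last_step_shape = out_shape[1:]                   # out[-1]: ([1, 10])
fc_out_shape = (last_step_shape[0], output_size)  # output: ([1, 1])

# With 1000 examples, a sliding window of 7 days predicting day 8
# yields this many (X, y) training pairs:
num_windows = 1000 - 7
```

So `output` is a single scalar per batch element: the day-8 prediction for feature (a).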

What's the input of each LSTM layer in a stacked LSTM network?

Submitted by 萝らか妹 on 2020-07-08 03:12:26
Question: I'm having some difficulty understanding the input-output flow of layers in stacked LSTM networks. Let's say I have created a stacked LSTM network like the one below: # parameters time_steps = 10 features = 2 input_shape = [time_steps, features] batch_size = 32 # model model = Sequential() model.add(LSTM(64, input_shape=input_shape, return_sequences=True)) model.add(LSTM(32, input_shape=input_shape)) where our stacked LSTM network consists of 2 LSTM layers with 64 and 32 hidden units
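
Assuming the standard Keras LSTM parameterization (a sketch derived from the model in the question, not from an actual answer), the flow can be checked by hand: layer 1 sees the raw features and, because `return_sequences=True`, emits `(batch, 10, 64)`; layer 2 consumes those 64-d vectors per timestep and returns only its final state, `(batch, 32)`.

```python
# Each LSTM layer has 4 gates; each gate has a kernel over the input,
# a recurrent kernel over its own state, and a bias:
#   params = 4 * (units * input_dim + units * units + units)

def lstm_params(units, input_dim):
    return 4 * (units * input_dim + units * units + units)

# Layer 1: input_dim = 2 raw features  -> outputs (batch, 10, 64)
# Layer 2: input_dim = 64 (layer 1's output) -> outputs (batch, 32)
p1 = lstm_params(64, 2)     # 17152
p2 = lstm_params(32, 64)    # 12416
```

These counts should match what `model.summary()` reports; note that the second layer's `input_shape` argument is ignored, since Keras infers it from the layer below.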

Keras lstm and dense layer

Submitted by ε祈祈猫儿з on 2020-06-28 09:25:06
Question: How does the Dense layer change the output coming from the LSTM layer? How come that from the 50-shaped output of the previous layer I get an output of size 1 from the Dense layer that is used for prediction? Let's say I have this basic model: model = Sequential() model.add(LSTM(50, input_shape=(60,1))) model.add(Dense(1, activation="softmax")) Is the Dense layer taking the values coming from the previous layer and assigning the probability (using the softmax function) of each of the 50 inputs and then taking it out as an
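
A minimal NumPy sketch of what `Dense(1)` does to the LSTM's output (illustrative weights, not the trained model's): it is a learned weighted sum of the 50 values in the last hidden state, not a probability assigned to each of the 50 inputs.

```python
import numpy as np

# Dense(1) computes y = activation(W @ h + b), with kernel W of shape
# (1, 50) and a scalar bias: 50 values in, 1 value out.

rng = np.random.default_rng(0)
h = rng.standard_normal(50)       # last LSTM hidden state, shape (50,)
W = rng.standard_normal((1, 50))  # Dense kernel: one row -> one output
b = np.zeros(1)

z = W @ h + b                     # shape (1,)

# Caveat: softmax over a single unit is always exactly 1.0, whatever z
# is, which is why sigmoid is the usual activation for one output.
softmax_z = np.exp(z) / np.exp(z).sum()
```

The caveat in the comment is worth checking in the original model: `Dense(1, activation="softmax")` will output a constant 1.0 for every input.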

Time series classification - Preparing data

Submitted by ♀尐吖头ヾ on 2020-06-13 10:46:06
Question: Looking for help on preparing input data for time series classification. The data is from a bunch of users who need to be classified. I want to use LSTMs (planning to implement via Keras, with a TensorFlow backend). I have data in two formats. Which is the right way to feed to RNNs for classification? Any help regarding the input shape would be of great help. Format 1:
UserID TimeStamp Duration Label
1 2020:03:01:00:00 10 0
1 2020:03:01:01:00 0 0
1 2020:03:01:02:00 100 0
1 2020:03:01:03:00 15 0
1
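
One common way to go from Format 1 (one row per user-timestamp) to the `(samples, timesteps, features)` array an LSTM expects is to group rows into one ordered sequence per user and pad to a common length. This is a hypothetical sketch using a couple of made-up rows in the excerpt's column layout:

```python
# Rows in Format 1 order: (UserID, TimeStamp, Duration, Label).
rows = [
    (1, "2020:03:01:00:00", 10, 0),
    (1, "2020:03:01:01:00", 0, 0),
    (1, "2020:03:01:02:00", 100, 0),
    (1, "2020:03:01:03:00", 15, 0),
    (2, "2020:03:01:00:00", 5, 1),    # hypothetical second user
    (2, "2020:03:01:01:00", 20, 1),
]

# Group Duration values into one ordered sequence per user; the label
# is per user, so one y entry per sequence.
sequences, labels = {}, {}
for user, ts, duration, label in rows:
    sequences.setdefault(user, []).append([duration])  # 1 feature/step
    labels[user] = label

# Zero-pad so X has shape (n_users, max_timesteps, n_features).
max_len = max(len(s) for s in sequences.values())
X = [s + [[0]] * (max_len - len(s)) for s in sequences.values()]
y = list(labels.values())
```

In Keras terms, `X` here would feed an `input_shape=(max_len, 1)` LSTM, ideally together with masking so the padded steps are ignored.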

What is the connections between two stacked LSTM layers?

Submitted by 回眸只為那壹抹淺笑 on 2020-06-01 05:12:17
Question: The question is like this one, What's the input of each LSTM layer in a stacked LSTM network?, but more about implementation details. For simplicity, consider 4-unit and 2-unit structures like the following: model.add(LSTM(4, input_shape=input_shape, return_sequences=True)) model.add(LSTM(2, input_shape=input_shape)) So I know the output of LSTM_1 has length 4, but how do the next 2 units handle these 4 inputs? Are they fully connected to the next layer of nodes? I guess they are fully connected
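
The guess in the question can be checked arithmetically. Assuming Keras's weight layout (a sketch, not quoted from an answer): the connection is fully connected — each of LSTM_2's 2 units receives all 4 outputs of LSTM_1 at every timestep, through an input kernel of shape `(input_dim, 4 * units)` where the 4 gates are stacked along the last axis.

```python
# LSTM_2 in the snippet above: units=2, fed by LSTM_1's 4 outputs.
units, input_dim = 2, 4

kernel_shape = (input_dim, 4 * units)        # x_t -> the 4 gates
recurrent_kernel_shape = (units, 4 * units)  # h_{t-1} -> the 4 gates
bias_shape = (4 * units,)

total_params = (kernel_shape[0] * kernel_shape[1]
                + recurrent_kernel_shape[0] * recurrent_kernel_shape[1]
                + bias_shape[0])             # 32 + 16 + 8 = 56
```

If the layers were not fully connected, the kernel would need fewer than `input_dim * 4 * units` entries; the 56 parameters here should match `model.summary()` for this layer.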

Default Initialization for Tensorflow LSTM states and weights?

Submitted by 痴心易碎 on 2020-05-26 09:54:27
Question: I am using the LSTM cell in TensorFlow: lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_units) I was wondering how the weights and states are initialized, or rather what the default initializer is for LSTM cells (states and weights) in TensorFlow? And is there an easy way to manually set an initializer? Note: for tf.get_variable() the glorot_uniform_initializer is used, as far as I could find out from the documentation. Answer 1: First of all, there is a difference between the weights of an LSTM (the
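
As a side note on what the default the question mentions actually does: glorot_uniform draws weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). The sketch below computes that bound for a hypothetical LSTM kernel (the sizes are made up for illustration); in the modern API, the initializer can be set explicitly via `tf.keras.layers.LSTM(units, kernel_initializer=...)`.

```python
import math

# glorot_uniform: U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out)).
def glorot_limit(fan_in, fan_out):
    return math.sqrt(6.0 / (fan_in + fan_out))

# Hypothetical LSTM kernel of shape (input_dim + units, 4 * units):
input_dim, units = 5, 10
fan_in = input_dim + units   # concatenated [x_t, h_{t-1}]
fan_out = 4 * units          # the four stacked gates
limit = glorot_limit(fan_in, fan_out)
```

The initial states, by contrast, default to zeros and are a separate question from the weight initializer.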

Predictions from a model become very small. The loss is either 0 or a positive constant

Submitted by 一曲冷凌霜 on 2020-05-17 06:06:39
Question: I am implementing the following architecture in TensorFlow: Dual Encoder LSTM https://i.stack.imgur.com/ZmcsX.png During the first few iterations the loss remains at 0.6915, but after that, as you can see in the output below, no matter how many iterations I run, the loss keeps varying between -0.0 and a positive constant depending on the hyperparameters. This is happening because the predictions of my model become very small (close to zero) or very high (close to 1). So the model cannot be trained.
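
A minimal pure-Python illustration (not the asker's code) of why saturated predictions pin the loss: once a sigmoid output rounds to exactly 0.0 or 1.0 in floating point, a naive binary cross-entropy computed from the probability collapses to 0 or blows up. Computing the loss from logits instead, as `tf.nn.sigmoid_cross_entropy_with_logits` does, stays finite and nonzero for any logit.

```python
import math

# Numerically stable BCE from logits, the standard form:
#   max(x, 0) - x * z + log(1 + exp(-|x|))
def stable_bce_from_logits(logit, label):
    return (max(logit, 0.0) - logit * label
            + math.log1p(math.exp(-abs(logit))))

big = 50.0
p = 1.0 / (1.0 + math.exp(-big))  # rounds to exactly 1.0 in float64
naive = -math.log(p)              # 0.0: the loss looks "stuck"
stable = stable_bce_from_logits(big, 1.0)  # tiny but nonzero
```

This matches the reported symptom (loss stuck at -0.0 or a constant); feeding raw logits to a logits-based loss, rather than pre-squashed probabilities to log, is the usual remedy.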