recurrent-neural-network

Keras fit_generator() - How does batch for time series work?

戏子无情 submitted on 2021-01-21 12:16:12
Question: Context: I am currently working on time series prediction using Keras with the TensorFlow backend and have therefore studied the tutorial provided here. Following this tutorial, I came to the point where the generator for the fit_generator() method is described. The output this generator produces is as follows (samples on the left, targets on the right):

    [[[10. 15.]
      [20. 25.]]] => [[30. 35.]]   -> Batch no. 1: 2 Samples | 1 Target
    ---------------------------------------------
    [[[20. 25.]
      [30. 35.]]] => [[40. 45.]]   -
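The tutorial itself is not linked in this excerpt, but batches of exactly this shape can be reproduced with Keras' TimeseriesGenerator. A minimal sketch, assuming a two-feature series whose values mirror the excerpt above:

    import numpy as np
    from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

    # Two-feature series; values chosen to match the excerpt above.
    data = np.array([[10., 15.], [20., 25.], [30., 35.], [40., 45.], [50., 55.]])

    # length=2: each sample is a window of 2 consecutive timesteps;
    # batch_size=1: each yielded batch holds one (window, target) pair.
    gen = TimeseriesGenerator(data, targets=data, length=2, batch_size=1)

    for i in range(len(gen)):
        x, y = gen[i]
        print(x, "=>", y)  # x shape: (1, 2, 2), y shape: (1, 2)

So "2 Samples" in the excerpt refers to the two timesteps inside one window, while the batch dimension of each yielded pair is 1.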

Difference between 1 LSTM with num_layers = 2 and 2 LSTMs in pytorch

我们两清 submitted on 2021-01-20 16:39:02
Question: I am new to deep learning and am currently working on using LSTMs for language modeling. I was looking at the PyTorch documentation and was confused by it. If I create an nn.LSTM(input_size, hidden_size, num_layers) where hidden_size = 4 and num_layers = 2, I think I will have an architecture something like:

    op0     op1 ....
    LSTM -> LSTM -> h3
    LSTM -> LSTM -> h2
    LSTM -> LSTM -> h1
    LSTM -> LSTM -> h0
    x0      x1 .....

If I do something like

    nn.LSTM(input_size, hidden_size, 1)
    nn.LSTM(input_size, hidden_size
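A minimal sketch of the two constructions being compared (the dimensions are illustrative assumptions, not from the question). Note that in the hand-stacked variant the second module's input size must equal the first one's hidden size, which is likely the source of the confusion in the truncated snippet above:

    import torch
    import torch.nn as nn

    input_size, hidden_size, seq_len, batch = 10, 4, 7, 3
    x = torch.randn(seq_len, batch, input_size)

    # Variant A: one module with two stacked layers.
    stacked = nn.LSTM(input_size, hidden_size, num_layers=2)
    out_a, _ = stacked(x)            # out_a: (seq_len, batch, hidden_size)

    # Variant B: two single-layer modules chained by hand.
    # The second layer consumes the first layer's hidden states,
    # so its input size must be hidden_size, not input_size.
    lstm1 = nn.LSTM(input_size, hidden_size, num_layers=1)
    lstm2 = nn.LSTM(hidden_size, hidden_size, num_layers=1)
    h1, _ = lstm1(x)
    out_b, _ = lstm2(h1)             # same shape as out_a

Variant A also applies dropout between layers (if configured) and returns the final hidden states of both layers in one tensor, whereas Variant B keeps the two layers as independent modules.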

Predicting future values in a multivariate time forecasting LSTM model

好久不见. submitted on 2020-12-15 08:31:07
Question: I am confused about how to predict future results with a multivariate time series LSTM model. I am trying to build a model for stock market prediction, and I have the following data features:

    Date  DailyHighPrice  DailyLowPrice  Volume  ClosePrice

If I train my model on 5 years of data up until today and want to predict tomorrow's ClosePrice, essentially I will need to predict all the data features for tomorrow. This is where I am confused.... Because if all the data features are dependent on one
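One common way to handle the dependency the question raises is recursive (iterative) forecasting: train the model to predict all features for the next step, then feed each prediction back in as input for the step after. A minimal sketch under assumed names (model, window, and the shapes are placeholders, not taken from the question):

    import numpy as np

    def forecast(model, history, steps, window):
        """Recursively predict `steps` future rows of all features.

        model   -- assumed to map (1, window, n_features) -> (1, n_features)
        history -- array of shape (T, n_features), with T >= window
        """
        buf = list(history[-window:])
        preds = []
        for _ in range(steps):
            x = np.array(buf[-window:])[np.newaxis, ...]  # (1, window, n_features)
            y = model.predict(x)[0]                       # next row, all features
            preds.append(y)
            buf.append(y)                                 # feed prediction back in
        return np.array(preds)

Because each step consumes the previous step's prediction, errors compound with the horizon; a direct multi-output model (predicting several steps at once) is the usual alternative.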

logits and labels must be broadcastable error in Tensorflow RNN

泪湿孤枕 submitted on 2020-12-08 05:48:10
Question: I am new to TensorFlow and deep learning. I am trying to see how the loss decreases over 10 epochs in an RNN model I created to read a Kaggle dataset containing credit card fraud data. I am trying to classify the transactions as fraud (1) and not fraud (0). When I try to run the code below, I keep getting this error:

    2018-07-30 14:59:33.237749: W tensorflow/core/kernels/queue_base.cc:277]
    _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue
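The "logits and labels must be broadcastable" error generally means the two tensors handed to the cross-entropy op have mismatched shapes. A minimal sketch of the shape contract in the TF1-style API the question uses (the sizes are illustrative assumptions):

    import tensorflow as tf  # TF 1.x-style graph API

    batch_size, num_classes = 32, 2
    logits = tf.placeholder(tf.float32, [batch_size, num_classes])
    labels = tf.placeholder(tf.float32, [batch_size, num_classes])  # one-hot

    # Both arguments must share the shape [batch_size, num_classes];
    # feeding, e.g., labels of shape [batch_size] here triggers the
    # "logits and labels must be broadcastable" error.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))

If the labels are integer class indices of shape [batch_size] rather than one-hot vectors, tf.nn.sparse_softmax_cross_entropy_with_logits is the matching op instead.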