LSTM

LSTM for time-series prediction failing to learn (PyTorch)

Submitted by 社会主义新天地 on 2021-01-24 20:16:31
Question: I'm currently working on building an LSTM network to forecast time-series data using PyTorch. I tried to share all the code pieces that I thought would be helpful, but please feel free to let me know if there's anything further I can provide. I added some comments at the end of the post regarding what the underlying issue might be. From the univariate time-series data indexed by date, I created 3 date features and split the data into training and validation sets as below. # X_train weekday …
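The asker's code is truncated above, but the usual skeleton for this kind of forecaster is worth spelling out. The sketch below is a minimal stand-in, not the asker's model: the feature count, hidden size, and window length are assumptions. When such a model "fails to learn," the first things to check are whether inputs/targets are scaled, and whether the prediction is taken from the last time step.

```python
# Minimal sketch of a PyTorch LSTM forecaster for a univariate series with
# added date features. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, n_features=4, hidden_size=32):
        super().__init__()
        # batch_first=True so inputs are (batch, seq_len, n_features)
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        # predict from the hidden state of the LAST time step only
        return self.head(out[:, -1, :])

model = Forecaster()
x = torch.randn(8, 30, 4)   # 8 windows of 30 days, 4 features each
y_hat = model(x)
print(y_hat.shape)          # torch.Size([8, 1])
```

If the loss plateaus immediately, scaling the features (e.g. with a standard scaler fit on the training split only) is usually the first fix to try.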

Difference between 1 LSTM with num_layers = 2 and 2 LSTMs in pytorch

Submitted by 我们两清 on 2021-01-20 16:39:02
Question: I am new to deep learning and currently working on using LSTMs for language modeling. I was looking at the PyTorch documentation and was confused by it. If I create a nn.LSTM(input_size, hidden_size, num_layers) where hidden_size = 4 and num_layers = 2, I think I will have an architecture something like:

op0     op1 ....
LSTM -> LSTM -> h3
LSTM -> LSTM -> h2
LSTM -> LSTM -> h1
LSTM -> LSTM -> h0
x0      x1 .....

If I do something like nn.LSTM(input_size, hidden_size, 1) nn.LSTM(input_size, hidden_size …
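The two setups can be compared directly in a few lines. One detail the question's second snippet glosses over: when chaining two one-layer modules by hand, the second module's input_size must equal the first module's hidden_size, because it consumes the first layer's output. The sizes below are small illustrative values.

```python
# Compare nn.LSTM(num_layers=2) with two manually chained one-layer LSTMs.
import torch
import torch.nn as nn

input_size, hidden_size = 10, 4
x = torch.randn(5, 3, input_size)   # (seq_len, batch, input_size)

# Option A: one module with two stacked layers
stacked = nn.LSTM(input_size, hidden_size, num_layers=2)
out_a, (h_a, c_a) = stacked(x)

# Option B: two one-layer modules chained by hand. Note the second layer's
# input size is hidden_size, NOT input_size, since it reads layer 1's output.
lstm1 = nn.LSTM(input_size, hidden_size, num_layers=1)
lstm2 = nn.LSTM(hidden_size, hidden_size, num_layers=1)
out1, _ = lstm1(x)
out_b, _ = lstm2(out1)

print(out_a.shape, out_b.shape)  # both torch.Size([5, 3, 4])
print(h_a.shape)                 # torch.Size([2, 3, 4]): one state per layer
```

Functionally the two are the same stacked architecture; the single num_layers=2 module is just more convenient (and supports inter-layer dropout via the dropout argument), while h_a bundles the final hidden state of every layer.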

Understanding input shape to PyTorch LSTM

Submitted by 心不动则不痛 on 2021-01-19 06:21:32
Question: This seems to be one of the most common questions about LSTMs in PyTorch, but I am still unable to figure out what the input shape to a PyTorch LSTM should be. Even after following several posts (1, 2, 3) and trying out the solutions, it doesn't seem to work. Background: I have encoded text sequences (variable length) in a batch of size 12, and the sequences are padded and packed using the pad_packed_sequence functionality. MAX_LEN for each sequence is 384 and each token (or word) in the sequence …
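A shape-focused sketch of this exact setup may help. The batch size (12) and MAX_LEN (384) come from the question; the embedding dimension is an assumption, since the excerpt is cut off before stating it. The key facts: with batch_first=True the input is (batch, seq_len, features), packing is done with pack_padded_sequence, and pad_packed_sequence unpacks the output back to a padded tensor whose time axis is the longest length in the batch.

```python
# Demonstrate the expected input shape for nn.LSTM with padded/packed batches.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

batch_size, max_len, emb_dim, hidden = 12, 384, 128, 64  # emb_dim is assumed

lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

x = torch.randn(batch_size, max_len, emb_dim)       # (batch, seq, feature)
lengths = torch.randint(1, max_len + 1, (batch_size,))  # true lengths

packed = pack_padded_sequence(x, lengths, batch_first=True,
                              enforce_sorted=False)
packed_out, (h, c) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)

print(out.shape)  # (12, lengths.max(), 64) — padded back to longest in batch
print(h.shape)    # (1, 12, 64) — final hidden state per sequence
```

Without batch_first=True the same tensor must instead be laid out as (seq_len, batch, features), which is the single most common source of this confusion.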

Keras custom data generator giving dimension errors with multi input and multi output( functional api model)

Submitted by 夙愿已清 on 2021-01-18 04:53:32
Question: I have written a generator function with Keras. Before returning X, y from __getitem__ I double-checked the shapes of the X's and y's and they are alright, but the generator is still giving dimension-mismatch errors and warnings. (Colab code to reproduce: https://colab.research.google.com/drive/1bSJm44MMDCWDU8IrG2GXKBvXNHCuY70G?usp=sharing) My training and validation generators are pretty much the same as class ValidGenerator(Sequence): def __init__(self, df, batch_size=64): self.batch_size = batch_size …
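For a multi-input/multi-output functional model, the contract is that __getitem__ returns ([input1, input2, ...], [output1, output2, ...]), with every array's first axis equal to the batch size and the list order matching the order of the model's inputs/outputs. The sketch below mimics the Sequence interface in plain Python (no Keras import) just to make the expected nesting concrete; all sizes are made up, not taken from the asker's notebook.

```python
# Shape sketch of a multi-input / multi-output generator in the nesting Keras
# expects from a Sequence subclass. A stray extra nesting level or a
# transposed axis here is a common cause of dimension-mismatch errors.
import numpy as np

class MultiIOGenerator:
    def __init__(self, n_samples=256, batch_size=64):
        self.n_samples = n_samples
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return self.n_samples // self.batch_size

    def __getitem__(self, idx):
        b = self.batch_size
        x_seq = np.zeros((b, 20, 8))   # e.g. a sequence input (assumed shape)
        x_aux = np.zeros((b, 5))       # e.g. an auxiliary input
        y_main = np.zeros((b, 1))
        y_side = np.zeros((b, 3))
        # ([inputs...], [outputs...]) in the same order as the model defines them
        return [x_seq, x_aux], [y_main, y_side]

gen = MultiIOGenerator()
xs, ys = gen[0]
for a in xs + ys:
    print(a.shape)   # first axis of every array must equal batch_size
```

Checking that every returned array shares the same leading (batch) dimension, as the loop above does, is the quickest way to localize where the mismatch creeps in.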

Predicting future values in a multivariate time forecasting LSTM model

Submitted by 好久不见. on 2020-12-15 08:31:07
Question: I am confused about how to predict future results with a multivariate time-series LSTM model. I am trying to build a model for stock market prediction, and I have the following data features: Date, DailyHighPrice, DailyLowPrice, Volume, ClosePrice. If I train my model on 5 years of data up until today and I want to predict tomorrow's ClosePrice, essentially I will need to predict all the data features for tomorrow. This is where I am confused.... Because if all the data features are dependent on one …
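The asker has identified the core difficulty correctly: to go more than one step ahead, the covariates for future days are unknown. One standard answer is recursive (iterated) forecasting: predict all features one step ahead, append the prediction to the input window, and repeat. The sketch below shows only the recursion mechanics; model_step is a placeholder (a persistence baseline), not a trained model, and the window size is arbitrary.

```python
# Recursive multi-step forecasting sketch for a multivariate series.
import numpy as np

def model_step(window):
    # Placeholder standing in for a trained model that maps a window of all
    # features to next-day values for ALL features. Here: persistence
    # baseline, i.e. "tomorrow looks like today".
    return window[-1]

def forecast(history, horizon, window_size=5):
    window = list(history[-window_size:])
    preds = []
    for _ in range(horizon):
        next_day = model_step(np.asarray(window))  # all features for t+1
        preds.append(next_day)
        window = window[1:] + [next_day]           # slide window forward
    return np.asarray(preds)

history = np.random.rand(100, 5)   # 100 days x 5 features (illustrative)
future = forecast(history, horizon=7)
print(future.shape)                # (7, 5): 7 days ahead, all 5 features
```

The main caveat of this scheme is error accumulation: each step feeds its own (imperfect) prediction back in. The usual alternatives are direct multi-step models (one model per horizon) or a sequence-to-sequence model that emits the whole horizon at once.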

ValueError : Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 18]

Submitted by 女生的网名这么多〃 on 2020-12-15 06:08:52
Question: I'm new to Keras and I'm trying to build a model for personal use / future learning. I've just started with Python and I came up with this code (with the help of videos and tutorials). I have a dataset of 16,324 instances; each instance consists of 18 features and 1 dependent variable. import pandas as pd import os import time import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization from tensorflow.keras …
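The error message in the title says exactly what is wrong: the LSTM layer received 2-D input of shape (None, 18), i.e. (batch, features), but an LSTM needs 3-D input of shape (batch, timesteps, features). The numpy-only sketch below (so it doesn't depend on the asker's truncated model code) shows the reshape that resolves it; treating each row as a length-1 sequence is an assumption about the asker's data.

```python
# Fixing "expected ndim=3, found ndim=2. Full shape received: [None, 18]".
import numpy as np

X = np.zeros((16324, 18))        # (samples, features) — ndim=2, triggers the error
print(X.ndim)                    # 2

# An LSTM expects (samples, timesteps, features). If each instance is one
# time step, insert a length-1 time axis:
X3 = np.expand_dims(X, axis=1)   # equivalently: X.reshape(16324, 1, 18)
print(X3.shape, X3.ndim)         # (16324, 1, 18) 3
```

The model's first layer then needs input_shape=(1, 18) — or, more generally, (timesteps, 18) if the rows are regrouped into longer windows, which is usually what makes an LSTM worthwhile in the first place.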
