lstm

Does a clean and extendable LSTM implementation exist in PyTorch?

Submitted by 孤者浪人 on 2020-06-22 07:30:57
Question: I would like to create an LSTM class myself; however, I don't want to rewrite the classic LSTM functions from scratch. Digging into the PyTorch code, I only find a dirty implementation involving at least 3-4 classes with inheritance: https://github.com/pytorch/pytorch/blob/98c24fae6b6400a7d1e13610b20aa05f86f77070/torch/nn/modules/rnn.py#L323 https://github.com/pytorch/pytorch/blob/98c24fae6b6400a7d1e13610b20aa05f86f77070/torch/nn/modules/rnn.py#L12 https://github.com
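For reference, the classic LSTM cell the asker wants to avoid rewriting can be sketched in a few lines of plain NumPy. This is a minimal illustration of the standard gate equations, not PyTorch's actual internals; the names `W_ih`, `W_hh`, and `lstm_cell_step` are my own, though the [input, forget, cell, output] gate ordering mirrors the layout PyTorch uses for its weight matrices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W_ih, W_hh, b):
    """One time step of a classic LSTM cell.

    Gates are stacked in the order [input, forget, cell, output].
    """
    gates = x @ W_ih.T + h_prev @ W_hh.T + b
    H = h_prev.shape[-1]
    i = sigmoid(gates[..., 0*H:1*H])   # input gate
    f = sigmoid(gates[..., 1*H:2*H])   # forget gate
    g = np.tanh(gates[..., 2*H:3*H])   # candidate cell state
    o = sigmoid(gates[..., 3*H:4*H])   # output gate
    c = f * c_prev + i * g             # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

# smoke test: batch of 2, input size 3, hidden size 4
rng = np.random.default_rng(0)
B, I, H = 2, 3, 4
x = rng.standard_normal((B, I))
h = np.zeros((B, H)); c = np.zeros((B, H))
W_ih = rng.standard_normal((4 * H, I))
W_hh = rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h, c = lstm_cell_step(x, h, c, W_ih, W_hh, b)
print(h.shape)  # (2, 4)
```

Wrapping this step in a loop over time steps gives a single-layer LSTM without any class hierarchy.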

The output of my regression NN with LSTMs is wrong even with low val_loss

Submitted by 亡梦爱人 on 2020-06-17 09:41:47
Question: The Model. I am currently working on a stack of LSTMs and trying to solve a regression problem. The architecture of the model is as below:
comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(units
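One frequent cause of this symptom (an assumption on my part, since the preprocessing code is not shown): if the regression targets were scaled before training, a low val_loss is measured in scaled space, and predictions look "wrong" until the scaling is inverted. A minimal NumPy illustration with made-up numbers:

```python
import numpy as np

# Hypothetical min-max scaling of the targets before training
y = np.array([120.0, 150.0, 300.0, 90.0])       # original targets
y_min, y_max = y.min(), y.max()
y_scaled = (y - y_min) / (y_max - y_min)        # what the model was fit on

pred_scaled = np.array([0.25, 0.75])            # stand-in model outputs
pred = pred_scaled * (y_max - y_min) + y_min    # back to original units
print(pred)  # [142.5 247.5]
```

If the targets were scaled with sklearn, the same inversion is done by the scaler's `inverse_transform`.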

Building CNN + LSTM in Keras for a regression problem. What are proper shapes?

Submitted by 浪子不回头ぞ on 2020-06-17 00:02:34
Question: I am working on a regression problem where I feed a set of spectrograms to a CNN + LSTM architecture in Keras. My data is shaped as (n_samples, width, height, n_channels). The question I have is how to properly connect the CNN to the LSTM layer. The data needs to be reshaped in some way when the convolution output is passed to the LSTM. There are several ideas, such as using the TimeDistributed wrapper in combination with reshaping, but I could not manage to make it work. height = 256 width = 256 n
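The core of the CNN-to-LSTM hand-off is a shape problem: Keras LSTMs expect (batch, timesteps, features), so everything except the batch and time axes of the convolutional feature maps must be flattened into one feature axis. A sketch in plain NumPy with toy shapes of my own choosing (not the asker's real data):

```python
import numpy as np

# Suppose each sample yields a per-time-step CNN feature map of
# shape (time, h, w, ch) -- e.g. via a TimeDistributed Conv2D stack.
batch, time, h, w, ch = 8, 10, 4, 4, 16
conv_out = np.random.rand(batch, time, h, w, ch)

# Flatten the spatial and channel axes into a single feature axis,
# producing the (batch, timesteps, features) layout an LSTM expects.
lstm_in = conv_out.reshape(batch, time, h * w * ch)
print(lstm_in.shape)  # (8, 10, 256)
```

In Keras the same collapse is typically done with `TimeDistributed(Flatten())` (or a `Reshape` layer) between the convolutional block and the LSTM.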

How to apply Monte Carlo Dropout, in tensorflow, for an LSTM if batch normalization is part of the model?

Submitted by 时光毁灭记忆、已成空白 on 2020-06-16 17:44:48
Question: I have a model composed of 3 LSTM layers followed by a batch norm layer and finally a dense layer. Here is the code:
def build_uncomplied_model(hparams):
    inputs = tf.keras.Input(shape=(None, hparams["n_features"]))
    x = return_RNN(hparams["rnn_type"])(hparams["cell_size_1"], return_sequences=True, recurrent_dropout=hparams['dropout'])(inputs)
    x = return_RNN(hparams["rnn_type"])(hparams["cell_size_2"], return_sequences=True)(x)
    x = return_RNN(hparams["rnn_type"])(hparams["cell_size_3"], return
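The idea behind Monte Carlo dropout is framework-agnostic: keep dropout active at inference (in Keras, typically by calling the model with training=True, which is exactly what batch norm complicates, since that flag also switches its statistics) and average T stochastic forward passes. A self-contained NumPy simulation of the averaging step, with a stand-in for the model:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_forward(x, p=0.5):
    # Stand-in for one stochastic pass of a model with dropout enabled;
    # inverted-dropout scaling keeps the expectation equal to x.
    mask = rng.random(x.shape) >= p
    return (x * mask) / (1.0 - p)

x = np.ones((4, 8))
T = 2000
preds = np.stack([noisy_forward(x) for _ in range(T)])
mean = preds.mean(axis=0)   # MC estimate of the prediction
std = preds.std(axis=0)     # per-element predictive uncertainty
print(mean.shape, std.shape)
```

One common workaround (an assumption, not the asker's code) is to make only the Dropout layers stochastic, e.g. via a Dropout subclass whose `call` forces `training=True`, so batch norm keeps using its inference-mode statistics.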

TF 2.0 W Operation was changed … when disabling eager mode and using a callback

Submitted by 我与影子孤独终老i on 2020-06-16 04:10:50
Question: I'm using some LSTM layers from TF 2.0. For training purposes I'm using the LearningRateScheduler callback, and for speed purposes I disable TensorFlow's eager mode (disable_eager_execution). But when I use both of these together, TensorFlow raises a warning: Operation ... was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session. Here

How to perform multiclass multioutput classification using lstm

Submitted by ﹥>﹥吖頭↗ on 2020-06-15 07:09:08
Question: I have a multiclass multioutput classification problem (see https://scikit-learn.org/stable/modules/multiclass.html for details). In other words, my dataset looks as follows:
node_name, timeseries_1, timeseries_2, label_1, label_2
node1, [1.2, ...], [1.8, ...], 0, 2
node2, [1.0, ...], [1.1, ...], 1, 1
node3, [1.9, ...], [1.2, ...], 0, 3
...
So, my label_1 could be either 0 or 1, whereas my label_2 could be either 0, 1, or 2. My current code is as follows:
def create_network():
    model =
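A common setup for two label columns with different numbers of classes (a sketch of the general approach, assuming the Keras functional API with one softmax head per label, not the asker's truncated code) starts with encoding each label column separately:

```python
import numpy as np

def one_hot(labels, n_classes):
    # One-hot encode an integer label vector into (n_samples, n_classes).
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

# The three example rows from the snippet above
label_1 = np.array([0, 1, 0])   # binary
label_2 = np.array([2, 1, 3])   # four classes appear in the sample data
y1 = one_hot(label_1, 2)
y2 = one_hot(label_2, 4)
print(y1.shape, y2.shape)  # (3, 2) (3, 4)
```

The shared LSTM trunk then branches into two Dense softmax outputs, and the model is compiled with one categorical loss per head (`model.fit(X, [y1, y2])` in Keras).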

PyTorch LSTM input dimension

Submitted by ℡╲_俬逩灬. on 2020-06-14 19:59:08
Question: I'm trying to train a simple 2-layer neural network with PyTorch LSTMs and I'm having trouble interpreting the PyTorch documentation. Specifically, I'm not too sure how to go about with the shape of my training data. What I want to do is train my network on a very large dataset through mini-batches, where each batch is, say, 100 elements long. Each data element will have 5 features. The documentation states that the input to the layer should be of shape (seq_len, batch_size, input_size). How
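The layout the PyTorch docs describe can be made concrete with plain NumPy shapes (the sequence length of 20 below is my own choice; the question only fixes the batch size and feature count). Note that `nn.LSTM` also accepts `batch_first=True`, which swaps the first two axes:

```python
import numpy as np

# (seq_len, batch_size, input_size): 100-element batches, 5 features each
seq_len, batch_size, input_size = 20, 100, 5
batch = np.zeros((seq_len, batch_size, input_size))
print(batch.shape)  # (20, 100, 5)

# Data stored as (batch, seq, features) can be moved into the default
# layout by swapping the first two axes (torch.Tensor.permute(1, 0, 2)
# plays this role in PyTorch):
batch_first = np.zeros((batch_size, seq_len, input_size))
seq_first = batch_first.transpose(1, 0, 2)
print(seq_first.shape)  # (20, 100, 5)
```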

How to input a classification time series data into LSTM

Submitted by 前提是你 on 2020-06-13 11:32:28
Question: I want to feed my data into an LSTM network, but I can't find any similar question or tutorial. My dataset is something like:
person 1:
t1 f1 f2 f3
t2 f1 f2 f3
...
tn f1 f2 f3
.
.
.
person K:
t1 f1 f2 f3
t2 f1 f2 f3
...
tn f1 f2 f3
So I have K persons, and for each person I have a matrix-like input. The first column of each row is an incremental timestamp (like a timeline, so t1 < t2) and the other columns are the features of the person at that time. In mathematical terms: I have a (number of examples, number
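Data in this layout can be turned into the 3-D array an LSTM expects by dropping the timestamp column (the row order already encodes time) and stacking the per-person matrices. A sketch with made-up sizes:

```python
import numpy as np

# K persons, each a (n_timesteps, 4) matrix: [timestamp, f1, f2, f3]
K, n_timesteps = 3, 5
persons = [np.hstack([np.arange(n_timesteps)[:, None],   # t1..tn
                      np.random.rand(n_timesteps, 3)])   # f1 f2 f3
           for _ in range(K)]

# Drop the timestamp column and stack into (samples, timesteps, features)
X = np.stack([p[:, 1:] for p in persons])
print(X.shape)  # (3, 5, 3)
```

If the timestamps are irregularly spaced, the time deltas can instead be kept as an extra feature column rather than dropped.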
