lstm

Time series classification - Preparing data

Submitted by ♀尐吖头ヾ on 2020-06-13 10:46:06
Question: Looking for help preparing input data for time series classification. The data comes from a number of users, each of whom needs to be classified. I want to use LSTMs (planned via Keras with a TensorFlow backend). I have the data in two formats. Which is the right way to feed it to an RNN for classification? Any help regarding the input shape would be appreciated. Format 1:

UserID  TimeStamp         Duration  Label
1       2020:03:01:00:00  10        0
1       2020:03:01:01:00  0         0
1       2020:03:01:02:00  100       0
1       2020:03:01:03:00  15        0
1
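Not part of the original question, but a common answer to it: Keras LSTMs expect input of shape (samples, timesteps, features). A minimal numpy sketch of reshaping Format 1 into that layout, assuming one fixed-length Duration sequence per user and one label per user (the example rows for user 2 and the window length are illustrative assumptions):

```python
import numpy as np

# Hypothetical per-user rows: (UserID, Duration, Label), already sorted by TimeStamp.
rows = np.array([
    [1, 10, 0],
    [1, 0, 0],
    [1, 100, 0],
    [1, 15, 0],
    [2, 5, 1],
    [2, 7, 1],
    [2, 3, 1],
    [2, 9, 1],
])

TIMESTEPS = 4  # assumed fixed window length per user

X, y = [], []
for user in np.unique(rows[:, 0]):
    user_rows = rows[rows[:, 0] == user]
    # One sample per user: the Duration sequence as a single feature column.
    X.append(user_rows[:TIMESTEPS, 1].reshape(TIMESTEPS, 1))
    y.append(user_rows[0, 2])  # one label per user

X = np.stack(X).astype("float32")  # shape: (samples, timesteps, features)
y = np.array(y)
print(X.shape)  # (2, 4, 1)
```

An array shaped like this can be fed directly to a Keras LSTM with input_shape=(TIMESTEPS, 1); users with varying sequence lengths would need padding or bucketing first.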

Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input

Submitted by 江枫思渺然 on 2020-06-10 07:32:25
Question: While running the sentdex tutorial script for a cryptocurrency RNN (YouTube tutorial: Cryptocurrency-predicting RNN Model), I encountered an error when attempting to train the model. My TensorFlow version is 2.0.0 and I am running Python 3.6. When attempting to train the model I receive the following error: File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit use_multiprocessing=use_multiprocessing) File "C:\python36-64\lib\site
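Not from the question itself, but a frequent cause of this data-adapter error in TF 2.0 is passing plain Python lists to model.fit, which its adapters cannot handle. A hedged sketch of the usual fix (the sample data is invented for illustration):

```python
import numpy as np

# Hypothetical training data built up as Python lists, as in the tutorial.
train_x = [[0.1, 0.2, 0.3], [0.2, 0.3, 0.4]]
train_y = [0, 1]

# TF 2.0's data adapters fail on nested Python lists; convert to numpy
# arrays before calling model.fit(train_x, train_y).
train_x = np.asarray(train_x, dtype="float32")
train_y = np.asarray(train_y)

print(train_x.shape, train_y.shape)  # (2, 3) (2,)
```

With both inputs as numpy arrays (or a tf.data.Dataset), fit can locate a suitable adapter.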

How to set the variables of LSTMCell as input instead of letting it create them in TensorFlow?

Submitted by 南笙酒味 on 2020-06-09 05:25:07
Question: When I create a tf.contrib.rnn.LSTMCell, it creates its kernel and bias trainable variables during initialisation. How the code looks now: cell_fw = tf.contrib.rnn.LSTMCell(hidden_size_char, state_is_tuple=True). What I want it to look like: kernel = tf.get_variable(...) bias = tf.get_variable(...) cell_fw = tf.contrib.rnn.LSTMCell(kernel, bias, hidden_size, state_is_tuple=True). What I want to do is create those variables myself and pass them to the LSTMCell class when instantiating it as input
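As context for what such externally created variables would have to look like: the cell's kernel is conventionally shaped (input_dim + units, 4 * units) and the bias (4 * units,), with the four gate blocks concatenated. A minimal numpy sketch of one LSTM step driven by externally supplied weights (the i, j, f, o gate ordering here is an assumption and a simplification; it omits details such as forget-gate bias initialisation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, kernel, bias):
    """One LSTM step using externally supplied weights.

    kernel: (input_dim + units, 4 * units), gate blocks in order i, j, f, o
    bias:   (4 * units,)
    """
    z = np.concatenate([x, h], axis=-1) @ kernel + bias
    i, j, f, o = np.split(z, 4, axis=-1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(j)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

input_dim, units = 3, 2
rng = np.random.default_rng(0)
kernel = rng.normal(size=(input_dim + units, 4 * units))  # created by the caller
bias = np.zeros(4 * units)                                # created by the caller

x = rng.normal(size=(1, input_dim))
h = np.zeros((1, units))
c = np.zeros((1, units))
h, c = lstm_step(x, h, c, kernel, bias)
print(h.shape)  # (1, 2)
```

In TF itself this would mean subclassing the cell and overriding its build step to bind these pre-made variables instead of creating new ones; the sketch only fixes the shapes such variables must have.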

What are the connections between two stacked LSTM layers?

Submitted by 回眸只為那壹抹淺笑 on 2020-06-01 05:12:17
Question: This question is like What's the input of each LSTM layer in a stacked LSTM network?, but goes further into implementation details. For simplicity, consider a 4-unit and 2-unit structure like the following: model.add(LSTM(4, input_shape=input_shape, return_sequences=True)) model.add(LSTM(2, input_shape=input_shape)). So I know the output of LSTM_1 has length 4, but how do the next 2 units handle these 4 inputs? Are they fully connected to the next layer of nodes? I guess they are fully connected
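The guess in the question is right: at every timestep, each gate of the second layer is fully connected to the first layer's 4-dimensional output and to its own 2-dimensional recurrent state. A small sketch confirming this via the standard LSTM parameter count, which matches what Keras reports in model.summary() (the 1-feature input to LSTM_1 is an assumption for illustration):

```python
def lstm_param_count(input_dim, units):
    # Each of the 4 gates is fully connected to the layer input and to the
    # recurrent state: (input_dim + units) * units weights plus units biases.
    return 4 * ((input_dim + units) * units + units)

# LSTM_1: raw features -> 4 units (assuming 1 input feature per timestep)
print(lstm_param_count(1, 4))  # 96
# LSTM_2: the 4-dim output of LSTM_1 at each timestep -> 2 units
print(lstm_param_count(4, 2))  # 56
```

Because the connection is dense, the input_shape argument on the second LSTM is ignored; Keras infers its input size (4) from the previous layer's output.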

How to fix 'Input and hidden tensors are not at the same device' in pytorch

Submitted by 一世执手 on 2020-05-30 08:04:43
Question: When I put the model on the GPU, there is an error. It says the input is on the GPU but the hidden state is on the CPU, even though everything has been put on the GPU. I use for m in model.parameters(): print(m.device) # returns cuda:0 to confirm that all of the model's state is on the GPU device. The error is "RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu". Windows 10 Server, PyTorch 1.2.0 + CUDA 9.2, cuDNN 7.6.3 for
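A note on the likely cause: model.parameters() only covers registered parameters, so a hidden state created inside forward with a bare torch.zeros(...) still lives on the CPU even after model.to(device). A hedged sketch of the usual fix, allocating the initial state on the input's device (the model and sizes here are invented for illustration):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size=4, hidden_size=8, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.hidden_size = hidden_size
        self.num_layers = num_layers

    def forward(self, x):
        # Create the initial hidden state on the same device as the input.
        # A plain torch.zeros(...) would stay on the CPU and trigger
        # "Input and hidden tensors are not at the same device".
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size,
                         device=x.device)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(x, (h0, c0))
        return out

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Net().to(device)
x = torch.randn(2, 5, 4, device=device)
print(model(x).shape)  # torch.Size([2, 5, 8])
```

Using device=x.device (or registering the state as a buffer) keeps the hidden tensors wherever the model currently lives, on CPU or GPU alike.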

Tensorflow ValueError: Shapes (?, 1) and (?,) are incompatible

Submitted by 我与影子孤独终老i on 2020-05-29 05:19:32
Question: I am facing this error when running my code with 3 LSTM layers and am not sure how to fix it. Here MAX_SEQUENCE_LENGTH = 250. After running the cost function, I get the error 'ValueError: Shapes (?, 1) and (?,) are incompatible'. # Generate a Tensorflow Graph tf.reset_default_graph() batch_size = 25 embedding_size = 50 lstmUnits = 64 max_label = 2 x = tf.placeholder(tf.int32, [None, MAX_SEQUENCE_LENGTH]) y = tf.placeholder(tf.int32, [None]) number_of_layers = 3 # Embeddings to represent
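The shape pair in this error usually means a rank-2 tensor of shape (batch, 1), such as the output of a final projection, is being compared against labels fed as a flat (batch,) vector like the y placeholder above. A numpy sketch of the mismatch and the usual squeeze fix (TF 1.x raises the incompatibility instead of broadcasting; the tensor names are illustrative):

```python
import numpy as np

batch = 25
logits = np.zeros((batch, 1))  # e.g. output of a final 1-unit projection
labels = np.zeros((batch,))    # labels fed as a flat vector, like y above

# numpy broadcasts (batch, 1) against (batch,) into (batch, batch);
# TF 1.x instead raises "Shapes (?, 1) and (?,) are incompatible".
print((logits - labels).shape)  # (25, 25)

# Fix: drop the trailing unit dimension so both sides are rank 1
# (tf.squeeze(logits, axis=1) in the graph code).
fixed = np.squeeze(logits, axis=1)
print((fixed - labels).shape)   # (25,)
```

Equivalently, the labels could be reshaped to (batch, 1); either way, both operands of the loss must have the same rank.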

Using output from one LSTM as input into another LSTM in TensorFlow

Submitted by 回眸只為那壹抹淺笑 on 2020-05-28 07:25:06
Question: I want to build an LSTM-based neural network which takes two kinds of inputs and predicts two kinds of outputs. A rough structure can be seen in the following figure. Output 2 depends on output 1, and as described in the answer to a similar question here, I have tried to implement this by setting the initial state of LSTM 2 from the hidden states of LSTM 1. I implemented this in TensorFlow using the following code: import tensorflow as tf from tensorflow.keras.layers import Input from
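To make the wiring concrete without the full Keras code: the second recurrent layer simply starts from the first layer's final hidden state instead of zeros, which is what initial_state=... expresses in the Keras functional API. A simplified numpy sketch using a plain tanh RNN as a stand-in for the LSTMs (all weights and sizes here are invented for illustration):

```python
import numpy as np

def run_rnn(seq, W_x, W_h, h0):
    """Minimal tanh RNN as a stand-in for an LSTM: returns all hidden states."""
    h = h0
    states = []
    for x in seq:
        h = np.tanh(x @ W_x + h @ W_h)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
features, units = 3, 4
seq1 = rng.normal(size=(5, features))  # input 1
seq2 = rng.normal(size=(5, features))  # input 2

W_x1, W_h1 = rng.normal(size=(features, units)), rng.normal(size=(units, units))
W_x2, W_h2 = rng.normal(size=(features, units)), rng.normal(size=(units, units))

states1 = run_rnn(seq1, W_x1, W_h1, np.zeros(units))
# The key wiring: RNN 2 starts from RNN 1's final hidden state rather than
# zeros, mirroring lstm2(input2, initial_state=[state_h, state_c]) in Keras.
states2 = run_rnn(seq2, W_x2, W_h2, states1[-1])
print(states2.shape)  # (5, 4)
```

With real LSTMs, both the hidden state h and the cell state c of LSTM 1 (obtained via return_state=True) would be passed as the initial state of LSTM 2.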
