recurrent-neural-network

Format time-series data for short-term forecasting using recurrent neural networks

跟風遠走 submitted on 2019-12-22 12:21:59

Question: I want to forecast day-ahead power consumption using recurrent neural networks (RNNs), but I find the required data format (samples, timesteps, features) for an RNN confusing. Let me explain with an example: I have power_dataset.csv on Dropbox, which contains power consumption from 5 June to 18 June at a 10-minute rate (144 observations per day). Now, to check the performance of an RNN using the rnn R package, I am following these steps: train model M for the usage of 17 June by using data from 5 …
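The question is about the rnn R package, but the (samples, timesteps, features) layout is the same idea everywhere. A minimal numpy sketch, using a synthetic stand-in for the power series (the real CSV is not reproduced here): slide a one-day window over the series so each sample holds 144 timesteps of the single consumption feature.

```python
import numpy as np

# Hypothetical stand-in for power_dataset.csv: 14 days x 144 readings per day
power = np.sin(np.linspace(0, 50, 14 * 144))  # synthetic consumption series

def make_windows(series, timesteps):
    """Slice a 1-D series into (samples, timesteps, features) plus next-step targets."""
    X, y = [], []
    for i in range(len(series) - timesteps):
        X.append(series[i:i + timesteps])
        y.append(series[i + timesteps])
    X = np.array(X)[..., np.newaxis]  # add the single-feature axis
    return X, np.array(y)

X, y = make_windows(power, timesteps=144)  # one day of history per sample
print(X.shape)  # (1872, 144, 1): samples, timesteps, features
```

Each row of `X` is one training sample; the feature axis has size 1 because there is only the consumption value per timestep.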

Forward pass in LSTM network learned by Keras

∥☆過路亽.° submitted on 2019-12-22 10:02:48

Question: I have the following code, from which I am hoping to get a forward pass through a 2-layer LSTM: """ this is a simple numerical example of an LSTM forward pass to allow deep understanding. The LSTM is trying to learn the sin function by learning to predict the next value after a sequence of 3 inputs. Example 1: {0.583, 0.633, 0.681} --> {0.725}; these values correspond to {sin(35.66), sin(39.27), sin(42.92)} --> {sin(46.47)}. Example 2: {0.725, 0.767, 0.801} --> {0.849}; these values correspond to {sin(46.47) …
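For reference, a single-cell LSTM forward step over those three sin inputs can be written in a few lines of numpy. The gate ordering (i, f, o, g) and the random weights below are assumptions for illustration; Keras stores its kernels in its own order, so only the equations, not the layout, carry over.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b, H):
    """One LSTM cell step; the gate order (i, f, o, g) is an assumption here."""
    z = W @ x + U @ h + b            # all four gates in one affine transform
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2*H])            # forget gate
    o = sigmoid(z[2*H:3*H])          # output gate
    g = np.tanh(z[3*H:4*H])          # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

H, D = 4, 1                          # hidden size, input size (both assumed)
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x_t in [0.583, 0.633, 0.681]:    # the three sin inputs from the example
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b, H)
print(h.shape)  # (4,)
```

The final `h` would then be fed to a dense output layer (and, for a 2-layer LSTM, to a second cell's input at each step).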

How to generate a sentence from a feature vector or words?

筅森魡賤 submitted on 2019-12-22 06:38:42

Question: I used the VGG 16-layer Caffe model for image captions, and I have several captions per image. Now I want to generate a sentence from those captions (words). I read in a paper on LSTMs that I should remove the SoftMax layer from the training network and feed the 4096-dimensional feature vector from the fc7 layer directly to the LSTM. I am new to LSTMs and RNNs. Where should I begin? Is there any tutorial showing how to generate a sentence by sequence labeling? Answer 1: AFAIK the master branch of BVLC/caffe does not …
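The overall pattern the paper describes, stripped to its bones: project the image feature into the decoder's initial state, then greedily emit words one at a time, feeding each prediction back in. Everything below (the toy vocabulary, the 8-dim "feature", all weights) is hypothetical and untrained; it only shows the control flow, not a working captioner.

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = ["<start>", "<end>", "a", "dog", "runs"]    # toy vocabulary (assumed)
feat_dim, hidden = 8, len(vocab)

# Hypothetical projection of the fc7-style feature vector into the initial state
W_init = rng.standard_normal((feat_dim, hidden)) * 0.1
W_in = rng.standard_normal((len(vocab), hidden)) * 0.1
W_h = rng.standard_normal((hidden, hidden)) * 0.1
W_out = rng.standard_normal((hidden, len(vocab))) * 0.1

feature = rng.standard_normal(feat_dim)             # stand-in for the 4096-d fc7 vector
h = np.tanh(feature @ W_init)                       # image conditions the decoder
token = vocab.index("<start>")
sentence = []
for _ in range(5):                                  # greedy decoding loop
    x = np.eye(len(vocab))[token]                   # one-hot previous word
    h = np.tanh(x @ W_in + h @ W_h)                 # plain tanh cell for brevity
    token = int(np.argmax(h @ W_out))
    if vocab[token] == "<end>":
        break
    sentence.append(vocab[token])
print(sentence)                                     # untrained weights -> arbitrary words
```

In a real model the tanh cell would be an LSTM and the weights would come from training on caption data; the loop structure stays the same.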

Implementing an RNN with numpy

我只是一个虾纸丫 submitted on 2019-12-22 04:04:35

Question: I'm trying to implement a recurrent neural network with numpy. My current input and output designs are as follows: x is of shape (sequence length, batch size, input dimension); h: (number of layers, number of directions, batch size, hidden size); initial weight: (number of directions, 2 * hidden size, input size + hidden size); weight: (number of layers - 1, number of directions, hidden size, directions * hidden size + hidden size); bias: (number of layers, number of directions, hidden size). I …
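As a baseline for those shapes, a single-layer, single-direction vanilla RNN over input shaped (sequence length, batch size, input dimension) is only a few lines; the multi-layer, multi-direction bookkeeping in the question layers on top of this core loop. The weight shapes below are the simple per-layer form, not the packed layout the question describes.

```python
import numpy as np

def rnn_forward(x, h0, Wx, Wh, b):
    """Vanilla tanh RNN over a sequence shaped (seq_len, batch, input_dim).
    Returns all hidden states, shaped (seq_len, batch, hidden)."""
    h = h0
    outputs = []
    for x_t in x:                     # iterate over the time axis
        h = np.tanh(x_t @ Wx + h @ Wh + b)
        outputs.append(h)
    return np.stack(outputs)

seq_len, batch, input_dim, hidden = 5, 2, 3, 4
rng = np.random.default_rng(1)
x = rng.standard_normal((seq_len, batch, input_dim))
h0 = np.zeros((batch, hidden))
Wx = rng.standard_normal((input_dim, hidden)) * 0.1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
b = np.zeros(hidden)
out = rnn_forward(x, h0, Wx, Wh, b)
print(out.shape)  # (5, 2, 4)
```

Stacking layers means feeding `out` as the `x` of the next layer; a second direction means running the same loop over `x[::-1]` and concatenating.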

In what order are weights saved in an LSTM kernel in TensorFlow?

▼魔方 西西 submitted on 2019-12-22 01:36:58

Question: I looked into the saved weights for an LSTMCell in TensorFlow. It has one big kernel and one bias vector. The dimensions of the kernel are (input_size + hidden_size) * (hidden_size * 4). From what I understand, this encapsulates 4 input-to-hidden affine transforms as well as 4 hidden-to-hidden transforms, so there should be 4 matrices of size input_size * hidden_size and 4 of size hidden_size * hidden_size. Can someone tell me, or point me to the code where TF saves these, so I can break …
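Mechanically, breaking the packed kernel apart is two splits: rows separate the input-to-hidden part from the hidden-to-hidden part, and columns separate the four gates. The sketch below uses a dummy kernel; the gate order along the column axis is an assumption (i, j, f, o, with j the candidate cell update) and should be checked against the cell implementation in your TF version.

```python
import numpy as np

input_size, hidden_size = 3, 5
# Dummy kernel laid out the way TF packs it: (input_size + hidden_size, 4 * hidden_size)
kernel = np.arange((input_size + hidden_size) * 4 * hidden_size, dtype=float)
kernel = kernel.reshape(input_size + hidden_size, 4 * hidden_size)

# Rows: input-to-hidden block on top, hidden-to-hidden block below
W_x, W_h = kernel[:input_size], kernel[input_size:]
# Columns: the four gate blocks (order assumed i, j, f, o --
# verify against your TF version's cell source before relying on it)
gates_x = np.split(W_x, 4, axis=1)
gates_h = np.split(W_h, 4, axis=1)
print([g.shape for g in gates_x])  # [(3, 5), (3, 5), (3, 5), (3, 5)]
```

The bias splits the same way with one `np.split(bias, 4)`.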

Recurrent Neural Network Binary Classification

你说的曾经没有我的故事 submitted on 2019-12-21 22:28:01

Question: I have access to a dataframe of 100 persons and how they performed on a certain motion test. The frame contains about 25,000 rows per person, since each person's performance is tracked approximately every centisecond (10^-2 s). We want to use this data to predict a binary y-label, that is to say, whether someone has a motor problem or not. The columns and some values of the dataset are as follows: 'Person_ID', 'time_in_game', 'python_time', 'permutation_game', 'round', 'level', 'times_level …
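A 25,000-step recording is too long to feed to an RNN in one piece, so a common preprocessing step is to cut each person's recording into fixed-length windows, each inheriting the person's binary label. A minimal sketch with assumed window and stride sizes and dummy data (the real dataframe columns are not reproduced):

```python
import numpy as np

def to_windows(signal, window, stride):
    """Cut one person's long recording into fixed-length overlapping windows
    so each window becomes an RNN sample of shape (window, features)."""
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

# Hypothetical recording: 25,000 timesteps x 3 sensor features for one person
recording = np.zeros((25_000, 3))
samples = to_windows(recording, window=500, stride=250)
labels = np.full(len(samples), 1)     # every window inherits the person's y-label
print(samples.shape)  # (99, 500, 3)
```

One caveat worth keeping: when splitting into train/test sets, split by person, not by window, or windows from the same person leak across the split.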

Understanding Keras prediction output of a rnn model in R

半城伤御伤魂 submitted on 2019-12-21 09:20:25

Question: I'm trying out the Keras package in R by doing this tutorial about forecasting the temperature. However, the tutorial has no explanation of how to predict with the trained RNN model, and I wonder how to do this. To train a model I used the following code, copied from the tutorial: dir.create("~/Downloads/jena_climate", recursive = TRUE) download.file( "https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip", "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip" ) unzip( "~ …

How to determine the maximum batch size for a seq2seq TensorFlow RNN training model

ε祈祈猫儿з submitted on 2019-12-21 05:14:06

Question: Currently I am using the default 64 as the batch size for the seq2seq TensorFlow model. What are the maximum batch size, layer size, etc. I can use with a single Titan X GPU with 12 GB RAM and a Haswell-E Xeon with 128 GB RAM? The input data is converted to embeddings. Here are some relevant parameters I am using; it seems the cell input size is 1024: encoder_inputs: a list of 2D Tensors [batch_size x cell.input_size]. decoder_inputs: a list of 2D Tensors [batch_size x cell.input_size]. tf.app …
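There is no exact formula (in practice people binary-search the batch size until out-of-memory errors stop), but a back-of-envelope activation estimate gives a starting point. The layer count, sequence length, and the factor of 8 activations per step below are all assumptions for illustration, not measured values:

```python
# Rough back-of-envelope activation memory for an unrolled seq2seq stack.
# All numbers are assumptions matching the question: batch 64, cell size 1024.
batch_size = 64
cell_size = 1024
num_layers = 4
seq_len = 50          # encoder + decoder unrolled steps (assumed)
bytes_per_float = 4

# Assume roughly 8 tensors of size (batch, cell) kept per step per layer
# (gates, cell/hidden states, and their gradients -- a coarse approximation).
acts = batch_size * cell_size * 8 * num_layers * seq_len * bytes_per_float
print(f"~{acts / 2**30:.2f} GiB of activations")  # ~0.39 GiB of activations
```

Parameters, optimizer slots, embeddings, and the softmax over the vocabulary come on top of this, and the softmax is often the dominant term for large vocabularies; treat the estimate as a lower bound and scale the batch size empirically.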

What is the upside of using `tf.nn.rnn` instead of `tf.nn.dynamic_rnn` in TensorFlow?

﹥>﹥吖頭↗ submitted on 2019-12-21 02:43:06

Question: What is the upside of using tf.nn.rnn instead of tf.nn.dynamic_rnn? The documentation says: "[dynamic_rnn] is functionally identical to the function rnn above, but performs fully dynamic unrolling of inputs." Is there any case where one might prefer to use tf.nn.rnn instead of tf.nn.dynamic_rnn? Source: https://stackoverflow.com/questions/42356027/what-is-the-upside-of-using-tf-nn-rnn-instead-of-tf-nn-dynamic-rnn-in-tensor

How to use multilayered bidirectional LSTM in Tensorflow?

放肆的年华 submitted on 2019-12-21 01:09:22

Question: I want to know how to use a multilayered bidirectional LSTM in TensorFlow. I have already implemented a bidirectional LSTM, but I want to compare that model with a multi-layer version. How should I add code in this part? x = tf.unstack(tf.transpose(x, perm=[1, 0, 2])) #print(x[0].get_shape()) # Define lstm cells with tensorflow # Forward direction cell lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0) # Backward direction cell lstm_bw_cell = rnn.BasicLSTMCell(n …
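The stacking pattern itself is framework-independent: each layer runs a forward pass and a backward pass, concatenates the two along the feature axis, and the concatenated output feeds the next layer. A numpy sketch with plain tanh cells standing in for the LSTMs (random untrained weights, shapes assumed) makes the wiring explicit:

```python
import numpy as np

def rnn_pass(x, Wx, Wh):
    """Simple tanh RNN over (seq_len, features); returns (seq_len, hidden)."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x_t in x:
        h = np.tanh(x_t @ Wx + h @ Wh)
        hs.append(h)
    return np.stack(hs)

def bidirectional_layer(x, params_fw, params_bw):
    """Run forward and reversed passes and concatenate along the feature axis,
    which is the wiring a stacked bidirectional model repeats per layer."""
    fw = rnn_pass(x, *params_fw)
    bw = rnn_pass(x[::-1], *params_bw)[::-1]    # reverse back to align time
    return np.concatenate([fw, bw], axis=-1)

rng = np.random.default_rng(2)
seq_len, feat, hidden = 6, 3, 4
layer_out = rng.standard_normal((seq_len, feat))
for layer in range(2):                          # two stacked bidirectional layers
    in_dim = layer_out.shape[-1]                # grows to 2*hidden after layer 1
    p_fw = (rng.standard_normal((in_dim, hidden)) * 0.1,
            rng.standard_normal((hidden, hidden)) * 0.1)
    p_bw = (rng.standard_normal((in_dim, hidden)) * 0.1,
            rng.standard_normal((hidden, hidden)) * 0.1)
    layer_out = bidirectional_layer(layer_out, p_fw, p_bw)
print(layer_out.shape)  # (6, 8): each layer outputs 2 * hidden features
```

The key detail the loop makes visible: each layer needs its own forward and backward cells (fresh weights per layer), and the second layer's input width is twice the hidden size because of the concatenation.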