recurrent-neural-network

Stream Output of Predictions in Keras

北城余情 submitted on 2019-12-13 11:37:41
Question: I have an LSTM in Keras that I am training to predict on time-series data. I want the network to output predictions at each timestep, as it will receive a new input every 15 seconds. What I am struggling with is the proper way to train it so that it outputs h_0, h_1, ..., h_t as a constant stream while it receives x_0, x_1, ..., x_t as a stream of inputs. Is there a best practice for doing this?

Answer 1: You can enable statefulness in your LSTM layers by setting stateful=True. This changes the layer so that its hidden state is carried over from one batch to the next rather than being reset, which lets you feed the network one timestep at a time.
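A minimal sketch of this stateful pattern, assuming a single input feature and a hypothetical hidden size of 20 (both placeholders, not taken from the question):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # batch_input_shape pins the batch size to 1 so state can persist between calls
    model = Sequential()
    model.add(LSTM(20, stateful=True, batch_input_shape=(1, 1, 1)))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')

    # feed one timestep at a time; the hidden state carries over across predict() calls
    for x_t in np.random.rand(10, 1, 1, 1):
        h_t = model.predict(x_t)  # one prediction per incoming 15-second input

    model.reset_states()  # clear the state manually when a new sequence starts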

Output for an RNN

谁说我不能喝 submitted on 2019-12-13 07:29:13
Question: I'm taking the Coursera course Neural Networks for Machine Learning, taught by Geoffrey Hinton of the University of Toronto, and there is a quiz question in week 7 for which my answer differs from the right one. The question goes like this: […] One question is how I should get a probability between 0 and 1 if the W_hh weight is negative while the logistic h unit gives values between 0 and 1; given the above, their linear combination will always be negative. A second question would be if we also…
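The resolution to the first puzzle is that the logistic function maps any real input, negative included, into (0, 1). A standalone illustration (not part of the quiz itself):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w_hh, h_prev = -2.5, 0.8   # negative recurrent weight, logistic output in (0, 1)
    z = w_hh * h_prev          # the linear combination is indeed negative (-2.0) ...
    print(sigmoid(z))          # ... yet the unit's output, ~0.119, is still in (0, 1)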

LSTM: adding the encoder's hidden states to the decoder in order to increase performance

十年热恋 submitted on 2019-12-13 05:28:15
Question: I am trying to experiment with transferring the hidden states of an LSTM from an encoder layer to a decoder layer, as demonstrated in the Keras blog. My data is randomly generated sine waves (that is, wavelength and phase are determined randomly, as is the length of the sequence), and the network is trained to receive a number of sine waves and predict their progression. Without transferring the hidden states, my code begins as follows:

    from keras.models import Model
    from keras.layers import …
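For reference, the state-transfer pattern from the Keras blog looks roughly like the sketch below; the latent size and single-feature input shape are placeholder assumptions, not values from the post:

    from keras.models import Model
    from keras.layers import Input, LSTM, Dense

    latent_dim = 32  # placeholder

    # the encoder returns its final hidden and cell states
    encoder_inputs = Input(shape=(None, 1))
    _, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

    # the decoder starts from the encoder's states instead of zeros
    decoder_inputs = Input(shape=(None, 1))
    decoder_outputs = LSTM(latent_dim, return_sequences=True)(
        decoder_inputs, initial_state=[state_h, state_c])
    predictions = Dense(1)(decoder_outputs)

    model = Model([encoder_inputs, decoder_inputs], predictions)
    model.compile(optimizer='adam', loss='mse')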

How can I use MultiRNNCell with cell= ConvLSTMCell?

北战南征 submitted on 2019-12-13 05:08:24
Question: How can I build a multi-layer network with a different number of filters in each layer using cell = ConvLSTMCell() and MultiRNNCell?

Answer 1:

    cell_1 = ConvLSTMCell(...params...)
    cell_2 = ConvLSTMCell(...params...)
    multi_cell = MultiRNNCell([cell_1, cell_2], ...other params...)

Then you can call the TensorFlow dynamic_rnn(...) API with multi_cell and the required parameters.

Source: https://stackoverflow.com/questions/48942122/how-can-i-use-multirnncell-with-cell-convlstmcell
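An illustration with assumed dimensions (64x64 RGB frames, 16 and 32 filters, 10 timesteps, all placeholders) in the TF 1.x contrib API; note the second cell's input channels must match the first cell's output_channels:

    import tensorflow as tf  # TF 1.x, where tf.contrib is available

    cell_1 = tf.contrib.rnn.ConvLSTMCell(
        conv_ndims=2, input_shape=[64, 64, 3], output_channels=16, kernel_shape=[3, 3])
    cell_2 = tf.contrib.rnn.ConvLSTMCell(
        conv_ndims=2, input_shape=[64, 64, 16], output_channels=32, kernel_shape=[3, 3])
    multi_cell = tf.contrib.rnn.MultiRNNCell([cell_1, cell_2])

    # inputs: [batch, time, height, width, channels]
    inputs = tf.placeholder(tf.float32, [None, 10, 64, 64, 3])
    outputs, state = tf.nn.dynamic_rnn(multi_cell, inputs, dtype=tf.float32)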

How can I build an RNN without using nn.RNN

泪湿孤枕 submitted on 2019-12-13 04:04:17
Question: I need to build an RNN (without using nn.RNN) to the following specification: it is a character RNN with one hidden layer and a set of weights consisting of Wxh (from the input layer to the hidden layer), Whh (the recurrent connection in the hidden layer), and Who (from the hidden layer to the output layer). I need to use tanh for the hidden layer and softmax for the output layer. I have implemented the code, using CrossEntropyLoss() as the loss function, which gives me the error RuntimeError…
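A minimal sketch of such a network; the class name and sizes are illustrative, not from the post. One detail worth checking against the reported error: nn.CrossEntropyLoss applies log-softmax internally, so the model should return raw logits, and it expects logits of shape (N, C) with integer class targets of shape (N,):

    import torch
    import torch.nn as nn

    class CharRNN(nn.Module):
        def __init__(self, vocab_size, hidden_size):
            super().__init__()
            self.Wxh = nn.Linear(vocab_size, hidden_size, bias=False)  # input -> hidden
            self.Whh = nn.Linear(hidden_size, hidden_size)             # recurrent connection
            self.Who = nn.Linear(hidden_size, vocab_size)              # hidden -> output

        def forward(self, x, h):
            h = torch.tanh(self.Wxh(x) + self.Whh(h))
            return self.Who(h), h  # raw logits; CrossEntropyLoss adds the softmax

    model = CharRNN(vocab_size=65, hidden_size=128)  # placeholder sizes
    h = torch.zeros(1, 128)
    logits, h = model(torch.zeros(1, 65), h)         # one-hot character input
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([3]))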

Must the input height of a 1D CNN be constant?

末鹿安然 submitted on 2019-12-13 03:38:53
Question: I'm currently doing my honours research project on online/dynamic signature verification. I am using the SVC 2004 dataset (Task 2). I have done the following data processing:

    def load_dataset_normalized(path):
        file_names = os.listdir(path)
        num_of_persons = len(file_names)
        initial_starting_point = np.zeros(np.shape([7]))
        x_dataset = []
        y_dataset = []
        for infile in file_names:
            full_file_name = os.path.join(path, infile)
            file = open(full_file_name, "r")
            file_lines = file.readlines()
            num_of…
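As to the title question: in Keras a Conv1D layer does not require a fixed sequence length; declaring the time axis as None and collapsing it with global pooling is one standard pattern. A sketch assuming 7 features per timestep (a guess based on the snippet above) and a binary genuine/forged label:

    from keras.models import Sequential
    from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

    model = Sequential([
        Conv1D(32, kernel_size=3, activation='relu', input_shape=(None, 7)),  # None: any length
        GlobalMaxPooling1D(),            # reduces the variable time axis to a fixed-size vector
        Dense(1, activation='sigmoid'),  # genuine vs. forged
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')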

High training error at the beginning of training a Convolutional neural network

旧巷老猫 submitted on 2019-12-13 01:09:27
Question: I'm working on training a CNN, and during the training process, especially at the beginning, I get an extremely high training error. After that, the error starts to go down slowly; after approximately 500 epochs it comes close to zero (e.g. 0.006604). I then took the final model and measured its accuracy against the test data, getting about 89.50%. Does that seem normal? I mean, getting a high training error rate at…

Saving and Restoring a trained LSTM in Tensor Flow

三世轮回 submitted on 2019-12-12 10:40:02
Question: I trained an LSTM classifier using a BasicLSTMCell. How can I save my model and restore it for use in later classifications?

Answer 1: I was wondering this myself. As others have pointed out, the usual way to save a model in TensorFlow is to use tf.train.Saver(); however, I believe this saves the values of tf.Variables. I'm not exactly sure whether there are tf.Variables inside the BasicLSTMCell implementation which are saved automatically when you do this, or whether there is perhaps another step that needs to…
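For what it's worth, BasicLSTMCell does create its kernel and bias as ordinary graph variables, so a plain tf.train.Saver picks them up. A minimal TF 1.x sketch with placeholder sizes:

    import tensorflow as tf  # TF 1.x graph-mode API

    inputs = tf.placeholder(tf.float32, [None, 10, 8])   # [batch, time, features], placeholders
    cell = tf.nn.rnn_cell.BasicLSTMCell(32)
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

    saver = tf.train.Saver()  # saves every tf.Variable, including the cell's kernel and bias

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training ...
        saver.save(sess, "./lstm_model.ckpt")

    # later: rebuild the identical graph, then restore instead of re-initializing
    with tf.Session() as sess:
        saver.restore(sess, "./lstm_model.ckpt")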

Predict using data with less time steps (different dimension) using Keras RNN model

你说的曾经没有我的故事 submitted on 2019-12-12 08:55:02
Question: Given the nature of an RNN, we can get an output of predicted probabilities at every timestep (unrolled in time). Suppose I train an RNN with 5 timesteps, each having 6 features, so I have to specify the first layer like this (suppose we use an LSTM layer with 20 nodes as the first layer):

    model.add(LSTM(20, return_sequences=True, input_shape=(5, 6)))

The model works well if I feed it data of the same dimensions. However, I now want to use only the first 3 timesteps of the data to get the…
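One common way around the fixed length, offered here as a sketch rather than the poster's solution, is to declare the time dimension as None so the same weights accept any number of timesteps:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential()
    model.add(LSTM(20, return_sequences=True, input_shape=(None, 6)))  # None: any step count
    model.add(Dense(1, activation='sigmoid'))  # assumed per-timestep probability output
    model.compile(optimizer='adam', loss='binary_crossentropy')

    model.predict(np.random.rand(1, 5, 6))  # shape (1, 5, 1)
    model.predict(np.random.rand(1, 3, 6))  # the same model handles 3 steps: (1, 3, 1)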

RNN: Back-propagation through time when output is taken only at final timestep

血红的双手。 submitted on 2019-12-12 05:59:28
Question: In this blog post on recurrent neural networks, Denny Britz states: "The above diagram has outputs at each time step, but depending on the task this may not be necessary. For example, when predicting the sentiment of a sentence we may only care about the final output, not the sentiment after each word. Similarly, we may not need inputs at each time step." In the case when we take the output only at the final timestep: how will backpropagation change if there are no outputs at each…
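As a sketch of the standard BPTT derivation (notation assumed here, not the blog's): with a loss L defined only on the final output y_T = W_{hy} h_T, the per-step output terms drop out, and the gradient reaches earlier timesteps only through the chain of hidden states:

    \frac{\partial L}{\partial h_T} = W_{hy}^{\top}\,\frac{\partial L}{\partial y_T},
    \qquad
    \frac{\partial L}{\partial h_t} = \left(\frac{\partial h_{t+1}}{\partial h_t}\right)^{\!\top}\frac{\partial L}{\partial h_{t+1}} \quad (t < T)

The weight gradients then accumulate over time exactly as in the full-output case, just driven by this single source of error.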