lstm

Does TensorFlow allow LSTM deconvolution (ConvLSTM2D) as it does for 2D convolution?

Submitted by 徘徊边缘 on 2021-02-10 15:02:13
Question: I am trying to augment a network. For the convolution part I am using ConvLSTM2D from Keras. Is there a way to perform the deconvolution (i.e. an LSTMDeconv2D?) Answer 1: There is Conv3D for that; check out this example used to predict the next frame. Answer 2: It should be possible to combine any model with the TimeDistributed wrapper, so you can create a deconv model and apply it to the output of the LSTM (which is a sequence of vectors) using the TimeDistributed wrapper. An example: first create a…
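A minimal sketch of Answer 2's TimeDistributed idea (the layer sizes and shapes are my own illustration, not from the truncated answer):

```python
# Illustrative sketch: run ConvLSTM2D over a frame sequence, then apply the
# same transposed convolution to every timestep with TimeDistributed.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import ConvLSTM2D, TimeDistributed, Conv2DTranspose

model = Sequential([
    # Input: (batch, time, height, width, channels)
    ConvLSTM2D(16, kernel_size=3, padding='same', return_sequences=True,
               input_shape=(10, 32, 32, 1)),
    # Deconvolve each of the 10 frames independently: 32x32 -> 64x64.
    TimeDistributed(Conv2DTranspose(1, kernel_size=3, strides=2, padding='same')),
])
model.compile(optimizer='adam', loss='mse')
print(model.output_shape)  # (None, 10, 64, 64, 1)
```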

Shape error when passing a custom LSTM

Submitted by 六月ゝ 毕业季﹏ on 2021-02-10 14:21:01
Question: I have been trying to build a custom LSTM layer for further improvement, but an error that looks like an ordinary shape mismatch is raised at the pooling layer after my custom LSTM. My environment is: Windows 10, Keras 2.2.0, Python 3.6. Traceback (most recent call last): File "E:/PycharmProjects/dialogResearch/dialog/classifier.py", line 60, in model = build_model(word_dict, args.max_len, args.max_sents, args.embedding_dim) File "E:\PycharmProjects\dialogResearch\dialog\model\keras_himodel.py", line 177, in build_model l_dense =…
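The post is cut off before the offending code, but a common cause of a shape error at a pooling layer after a recurrent layer is a missing return_sequences=True; a hypothetical sketch:

```python
# Hypothetical reconstruction (the asker's code is truncated): pooling over
# time needs a 3-D (batch, timesteps, features) tensor, so the recurrent layer
# must return the full sequence. With return_sequences=False the LSTM emits a
# 2-D tensor and GlobalMaxPooling1D raises a shape error much like the traceback.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, GlobalMaxPooling1D, Dense

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(50, 128)),  # keep the time axis
    GlobalMaxPooling1D(),   # pools over the 50 timesteps -> (batch, 64)
    Dense(2, activation='softmax'),
])
model.summary()
```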

Can not squeeze dim[1], expected a dimension of 1, got 499

Submitted by 别来无恙 on 2021-02-08 17:24:25
Question: I am trying to build an autoencoder and am stuck at the above error; looking at other posts about it on Stack Exchange didn't help. Here is the error in full: InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 499 [[{{node metrics_12/acc/Squeeze}}]] [[{{node ConstantFoldingCtrl/loss_12/time_distributed_6_loss/broadcast_weights/assert_broadcastable/AssertGuard/Switch_0}}]] I can compile my model. Here it is: Layer (type) Output Shape Param # ==========================…
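The model in the post is truncated, but this squeeze error classically appears when the loss/metric pair does not match the label encoding for sequence targets; a hedged sketch of a working pairing (all sizes and names are illustrative):

```python
# Hedged sketch: the accuracy metric tries to squeeze the targets along
# axis 1, which fails when that axis has size 499. Matching loss and metric
# to the label encoding avoids the squeeze: one-hot sequence targets pair
# with the *categorical* loss and metric.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

n_steps, n_classes = 499, 8                      # illustrative sizes
model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(n_steps, 16)),
    TimeDistributed(Dense(n_classes, activation='softmax')),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

x = np.random.rand(4, n_steps, 16)
y = np.eye(n_classes)[np.random.randint(n_classes, size=(4, n_steps))]  # one-hot
model.fit(x, y, epochs=1, verbose=0)
```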

Understanding Keras LSTM weights

Submitted by 纵饮孤独 on 2021-02-08 09:24:27
Question: I understand how to multiply Dense-layer weights in order to get the predicted output, but how can I interpret the matrices from an LSTM model? Here are some toy examples (never mind the fitting, it's just about matrix multiplication). Dense example: from keras.models import Model from keras.layers import Input, Dense, LSTM import numpy as np np.random.seed(42) X = np.array([[1, 2], [3, 4]]) I = Input(X.shape[1:]) D = Dense(2)(I) linear_model = Model(inputs=[I], outputs=[D]) print('linear_model.predict:…
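For the LSTM part of the question, Keras stores the weights as three arrays whose columns concatenate the four gates; a short sketch of how to pull them apart (layer sizes are illustrative):

```python
# Sketch: Keras stores LSTM weights as kernel (input -> gates),
# recurrent_kernel (hidden state -> gates) and bias, each concatenating the
# input (i), forget (f), candidate (c) and output (o) gates on the last axis.
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

units = 3
inp = Input(shape=(5, 2))
out = LSTM(units)(inp)
model = Model(inp, out)

kernel, recurrent_kernel, bias = model.layers[1].get_weights()
print(kernel.shape)            # (2, 4 * units): input weights for i, f, c, o
print(recurrent_kernel.shape)  # (units, 4 * units): hidden-state weights
print(bias.shape)              # (4 * units,)

# Slicing out a single gate, e.g. the input gate i:
W_i = kernel[:, :units]
U_i = recurrent_kernel[:, :units]
b_i = bias[:units]
```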

Passing the output of a CNN to a BiLSTM

Submitted by 旧巷老猫 on 2021-02-08 06:15:31
Question: I am working on a project in which I have to pass the output of a CNN to a bidirectional LSTM. I created the model as below, but it is throwing an 'incompatible' error. Please let me know where I am going wrong and how to fix it: model = Sequential() model.add(Conv2D(filters = 16, kernel_size = 3, input_shape = (32,32,1))) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2,2), strides=1, padding='valid')) model.add(Activation('relu')) model.add(Conv2D(filters = 32, kernel_size=3))…
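The model in the post is cut off before the error, but a common fix when wiring Conv2D output into a (Bi)LSTM is to reshape the 4-D feature map into a 3-D (batch, time, features) tensor; a hedged sketch continuing the posted layers (the reshape and the classification head are my assumptions):

```python
# Hedged sketch: a (Bi)LSTM wants (batch, time, features), while Conv2D emits
# (batch, height, width, channels). Reshape so the rows act as timesteps.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization, Bidirectional,
                                     Conv2D, Dense, LSTM, MaxPooling2D, Reshape)

model = Sequential([
    Conv2D(filters=16, kernel_size=3, input_shape=(32, 32, 1)),  # -> (30, 30, 16)
    BatchNormalization(),
    MaxPooling2D(pool_size=(2, 2), strides=1, padding='valid'),  # -> (29, 29, 16)
    Activation('relu'),
    Conv2D(filters=32, kernel_size=3),                           # -> (27, 27, 32)
    Reshape((27, 27 * 32)),  # 27 rows become timesteps, 27*32 features per step
    Bidirectional(LSTM(64)),
    Dense(10, activation='softmax'),  # hypothetical 10-class head
])
model.summary()
```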

LSTM Batches vs Timesteps

Submitted by 天涯浪子 on 2021-02-08 05:41:46
Question: I've followed the TensorFlow RNN tutorial to create an LSTM model. However, in the process I've grown confused about the difference, if any, between 'batches' and 'timesteps', and I'd appreciate help in clarifying the matter. The tutorial code (see following) essentially creates 'batches' based on a designated number of steps: with tf.variable_scope("RNN"): for time_step in range(num_steps): if time_step > 0: tf.get_variable_scope().reuse_variables() (cell_output, state) = cell(inputs[:,…
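To make the distinction concrete, a small illustrative example (not the tutorial code): the batch axis indexes independent sequences processed in parallel, while the timestep axis indexes the positions within each sequence that the RNN unrolls over:

```python
# Illustrative sketch: input shape is (batch, timesteps, features); the sizes
# below mirror the question's setup but are otherwise arbitrary.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM

batch_size, num_steps, num_features = 10, 96, 2
x = np.random.rand(batch_size, num_steps, num_features)

model = Sequential([LSTM(8, return_sequences=True,
                         input_shape=(num_steps, num_features))])
y = model.predict(x)
print(y.shape)  # (10, 96, 8): one 8-dim output per timestep of each sequence
```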

How does the shuffle = 'batch' argument of the .fit() method work in the background?

Submitted by 白昼怎懂夜的黑 on 2021-02-07 14:25:30
Question: When I train a model using the .fit() method, the shuffle argument defaults to True. Say my dataset has 100 samples and the batch size is 10. With shuffle = True, Keras first shuffles the samples (so the 100 samples are in a different order) and then starts creating batches on the new order: batch 1: 1-10, batch 2: 11-20, etc. If I set shuffle = 'batch', how is it supposed to work in the background? Intuitively, and using the previous…
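A sketch of the usual reading of shuffle='batch' (my assumption; it is aimed at HDF5-backed arrays, where arbitrary indexing is costly): only whole batch-sized blocks are shuffled, so each batch still holds a contiguous chunk of the dataset:

```python
# Toy illustration of batch-wise shuffling on the question's 100-sample,
# batch-size-10 example.
import numpy as np

n_samples, batch_size = 100, 10
indices = np.arange(n_samples)

blocks = indices.reshape(-1, batch_size)  # 10 blocks of 10 contiguous samples
np.random.shuffle(blocks)                 # permutes the rows (whole blocks) only
print(blocks[0])  # e.g. [30 31 ... 39]: still contiguous, in a new position
```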

Setting the hidden state for each minibatch with different hidden sizes and multiple LSTM layers in Keras

Submitted by 天大地大妈咪最大 on 2021-02-07 08:02:21
Question: I created an LSTM using Keras with TensorFlow as the backend. Before a minibatch with num_steps of 96 is given to training, the hidden state of the LSTM is set to the true values of a previous time step. First, the parameters and data: batch_size = 10 num_steps = 96 num_input = num_output = 2 hidden_size = 8 X_train = np.array(X_train).reshape(-1, num_steps, num_input) Y_train = np.array(Y_train).reshape(-1, num_steps, num_output) X_test = np.array(X_test).reshape(-1, num_steps, num_input) Y_test…
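A minimal sketch of one way to do this in Keras (my reading of the mechanism, not the asker's code): a stateful LSTM keeps two state arrays, [h, c], that can be overwritten before each minibatch via reset_states:

```python
# Sketch using the question's dimensions: inject known hidden/cell states
# before running a minibatch through a stateful LSTM.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM

batch_size, num_steps, num_input, hidden_size = 10, 96, 2, 8
model = Sequential([
    LSTM(hidden_size, stateful=True, return_sequences=True,
         batch_input_shape=(batch_size, num_steps, num_input)),
])

# Set h and c, each of shape (batch_size, hidden_size), to chosen values:
h = np.zeros((batch_size, hidden_size), dtype='float32')
c = np.zeros((batch_size, hidden_size), dtype='float32')
model.layers[0].reset_states(states=[h, c])

out = model.predict(np.random.rand(batch_size, num_steps, num_input),
                    batch_size=batch_size)
print(out.shape)  # (10, 96, 8)
```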