keras-layer

How to apply a different dense layer for each timestep in Keras

天涯浪子 submitted on 2021-01-29 02:22:33

Question: I know that applying TimeDistributed(Dense) applies the same dense layer over all the timesteps, but I wanted to know how to apply a different dense layer for each timestep. The number of timesteps is not variable. P.S.: I have seen the following link and can't seem to find an answer.

Answer 1: You can use a LocallyConnected layer. The LocallyConnected layer works as a Dense layer connected to each block of kernel_size time_steps (1 in this case).

    from tensorflow import keras
    from tensorflow.keras.layers
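The answer's idea can be sketched without Keras at all. Below is a plain-Python sketch (toy weights, assumed shapes) contrasting TimeDistributed(Dense), which shares one weight matrix across all timesteps, with a locally connected layer of kernel_size 1, which holds a separate weight matrix per timestep. In tf.keras the latter is keras.layers.LocallyConnected1D(filters, kernel_size=1); note that the LocallyConnected layers were dropped in Keras 3, so on recent versions you may need an equivalent construction.

```python
def dense(step, W, b):
    # y[j] = sum_i step[i] * W[j][i] + b[j]; W has one row per output unit.
    return [sum(s * w for s, w in zip(step, row)) + bj
            for row, bj in zip(W, b)]

def time_distributed(x, W, b):
    # TimeDistributed(Dense): the SAME (W, b) applied at every timestep.
    return [dense(step, W, b) for step in x]

def locally_connected(x, Ws, bs):
    # LocallyConnected1D with kernel_size=1: a DIFFERENT (W, b) per timestep.
    return [dense(step, W, b) for step, W, b in zip(x, Ws, bs)]

x = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]   # 3 identical timesteps, 2 features

W, b = [[1.0, 1.0]], [0.0]                 # one shared 1-unit dense layer
print(time_distributed(x, W, b))           # [[3.0], [3.0], [3.0]] -- same everywhere

Ws = [[[1.0, 0.0]], [[0.0, 1.0]], [[1.0, 1.0]]]  # one 1-unit layer PER timestep
bs = [[0.0], [0.0], [0.0]]
print(locally_connected(x, Ws, bs))        # [[1.0], [2.0], [3.0]] -- differs per step
```

With identical inputs at every timestep, the shared layer necessarily produces identical outputs, while the per-timestep weights do not — which is exactly the behavior the question asks for.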

keras merge concatenate failed because of different input shape even though input shape are the same

痴心易碎 submitted on 2021-01-28 06:35:39

Question: I am trying to concatenate 4 different layers into one layer to feed into the next part of my model. I am using the Keras functional API; the code is shown below.

    # Concat left side 4 inputs and right side 4 inputs
    print(lc, l1_conv_net, l2_conv_net, l3_conv_net)
    left_combined = merge.Concatenate()([lc, l1_conv_net, l2_conv_net, l3_conv_net])

This error occurs, which says that my input shapes are not the same. However, I also printed the input shapes and they seem to be the same except along
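The error in question is Concatenate's shape check: every input must have an identical shape except along the concatenation axis (axis=-1 by default). A small plain-Python sketch of that rule (not Keras' actual implementation; the shapes below are made up):

```python
def can_concatenate(shapes, axis=-1):
    # Concatenate's rule: all inputs share the same rank, and all
    # dimensions match except along `axis` (None matches anything).
    ndim = len(shapes[0])
    axis = axis % ndim
    for shape in shapes[1:]:
        if len(shape) != ndim:
            return False
        for i, (a, b) in enumerate(zip(shapes[0], shape)):
            if i != axis and a != b and None not in (a, b):
                return False
    return True

# OK: only the last axis differs, and that is the concatenation axis.
print(can_concatenate([(None, 7, 32), (None, 7, 64)]))   # True
# Not OK: axis 1 also differs (7 vs 9), so Concatenate raises.
print(can_concatenate([(None, 7, 32), (None, 9, 64)]))   # False
```

If the mismatch is on a different axis than expected, either pass axis=... to Concatenate or reshape/pad the inputs so the non-concatenation dimensions agree.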

AttributeError: 'NoneType' object has no attribute '_inbound_nodes' Keras

给你一囗甜甜゛ submitted on 2021-01-27 18:51:19

Question:

    from Config import Config
    from FaceDetection.MTCNNDetect import MTCNNDetect
    import cv2
    import tensorflow as tf
    import keras
    from keras import backend as K
    from keras.layers import Input, Lambda, Dense, Dropout, Convolution2D, MaxPooling2D, Flatten, Concatenate, concatenate
    from keras.models import Model
    face_detect = MTCNNDetect(model_path=Config.MTCNN_MODEL)
    from FaceRecognition.TensorflowGraph import FaceRecGraph
    from src.FaceAlignment import AlignCustom
    from FaceRecognition.FaceFeature

Confused about keras Dot Layer. How is the Dot product computed?

前提是你 submitted on 2021-01-27 10:46:55

Question: I have read all the posts about the Dot layer, but none explains how it, and hence its output shape, is computed! It seems so standard though! How exactly are the values computed along a specific axis?

    val = np.random.randint(2, size=(2, 3, 4))
    a = K.variable(value=val)
    val2 = np.random.randint(2, size=(2, 2, 3))
    b = K.variable(value=val)
    print("a")
    print(val)
    print("b")
    print(val2)
    out = Dot(axes = 2)([a,b])
    print(out.shape)
    print("DOT")
    print(K.eval(out))

I get:

    a
    [[[0 1 1 1]
      [1 1 0 0]
      [0 0 1 1]]
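Note that in the snippet above b is built from val, not val2, so both inputs actually share shape (2, 3, 4) — which is why axes=2 works at all (val2's axis-2 size of 3 would not match 4). For 3D inputs, Dot(axes=2) contracts the two axis-2 dimensions: out[b][i][j] = sum over k of x[b][i][k] * y[b][j][k], giving shape (batch, 3, 3) here. A plain-Python sketch of that computation, checked against the first batch element printed in the post:

```python
def dot_axes2(x, y):
    # Keras Dot(axes=2) for 3D inputs x:(B, m, n), y:(B, p, n):
    # out[b][i][j] = sum_k x[b][i][k] * y[b][j][k]  -> shape (B, m, p).
    return [[[sum(xi * yj for xi, yj in zip(row_x, row_y))
              for row_y in ym]
             for row_x in xm]
            for xm, ym in zip(x, y)]

# First batch element of `a` from the post; b == a because of the typo.
a0 = [[0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
print(dot_axes2([a0], [a0]))  # [[[3, 1, 2], [1, 2, 0], [2, 0, 2]]]
```

Each output entry is the dot product of one row of a with one row of b, i.e. a batched matrix product of a with b transposed on its last two axes.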

Multiple Embedding layers for Keras Sequential model

一个人想着一个人 submitted on 2021-01-22 22:56:03

Question: I am using Keras (TensorFlow backend) and am wondering how to add multiple Embedding layers to a Keras Sequential model. More specifically, I have several columns in my dataset with categorical values. I considered one-hot encoding, but the number of categorical items is in the hundreds, which would lead to a large and far too sparse set of columns. Looking for solutions, I found that Keras' Embedding layer appears to solve the problem very elegantly.
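Since Sequential assumes a single input path, multiple embeddings are usually wired with the functional API instead: one Input and one Embedding(input_dim=vocab_size, output_dim=dim) per categorical column, merged with Concatenate. A plain-Python sketch of what those layers compute — the tables and column names below are hypothetical, not from the post:

```python
def embed(table, index):
    # An Embedding layer is just a lookup: row `index` of a learned table.
    return list(table[index])

# One hypothetical weight table per categorical column:
color_table = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # vocab 3, output_dim 2
brand_table = [[1.0], [2.0], [3.0], [4.0]]          # vocab 4, output_dim 1

def featurize(color_id, brand_id):
    # One lookup per column, concatenated into a single feature vector --
    # what Concatenate() does to the per-column Embedding outputs.
    return embed(color_table, color_id) + embed(brand_table, brand_id)

print(featurize(1, 3))  # [0.3, 0.4, 4.0]
```

Each column gets its own table sized to its own vocabulary, so a hundreds-item category costs only vocab_size x output_dim weights instead of hundreds of mostly-zero one-hot columns.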
