Passing Individual Channels of Tensors to Layers in Keras

Submitted by 烂漫一生 on 2019-12-23 02:18:33

Question


I am trying to emulate something equivalent to a SeparableConvolution2D layer for the Theano backend (it already exists for the TensorFlow backend). As a first step, I need to pass ONE channel from a tensor into the next layer. Say I have a 2D convolution layer called conv1 with 16 filters, which produces an output with shape (batch_size, 16, height, width). I need to select the subtensor with shape (:, 0, :, :) and pass it to the next layer. Simple enough, right?
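For reference, the indexing involved can be sketched with plain NumPy, independent of Keras (the sizes below are made up for illustration). Note that integer indexing drops the channel axis, which is why a reshape back to a 4D tensor is needed before the result can feed another 2D convolution layer:

```python
import numpy as np

# Hypothetical sizes, for illustration only
batch_size, channels, height, width = 4, 16, 32, 32
x = np.zeros((batch_size, channels, height, width))

# Integer indexing drops the channel axis...
single = x[:, 0, :, :]
print(single.shape)   # (4, 32, 32)

# ...so the result must be reshaped back to (batch, 1, height, width)
single = single.reshape(batch_size, 1, height, width)
print(single.shape)   # (4, 1, 32, 32)
```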

This is my code:

from keras import backend as K
from keras.layers import Input, Convolution2D

image_input = Input(batch_shape=(batch_size, 1, height, width), name='image_input')

conv1 = Convolution2D(16, 3, 3, name='conv1', activation='relu')(image_input)

conv2_input = K.reshape(conv1[:, 0, :, :], (batch_size, 1, height, width))

conv2 = Convolution2D(16, 3, 3, name='conv2', activation='relu')(conv2_input)

This throws:

Exception: You tried to call layer "conv1". This layer has no information about its expected input shape, and thus cannot be built. You can build it manually via: layer.build(batch_input_shape)

Why does the layer not have the required shape information? I'm using reshape from the Theano backend. Is this the right way to pass individual channels to the next layer?


Answer 1:


I asked this question on the keras-users group and got an answer there:

https://groups.google.com/forum/#!topic/keras-users/bbQ5CbVXT1E

Quoting it:

You need to use a Lambda layer, like: Lambda(lambda x: x[:, 0:1, :, :], output_shape=lambda x: (x[0], 1, x[2], x[3]))

Note that such a manual implementation of a separable convolution would be horribly inefficient. The correct solution is to use the TensorFlow backend.
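The slicing semantics behind that Lambda can be checked with NumPy (sizes are made up for illustration): slicing with 0:1, unlike integer indexing, keeps the channel axis, so no reshape is needed, and the output_shape function simply describes the resulting shape to Keras.

```python
import numpy as np

# Hypothetical sizes, for illustration only
batch_size, channels, height, width = 4, 16, 32, 32
x = np.zeros((batch_size, channels, height, width))

# Slicing with 0:1 keeps the channel axis (length 1)
one_channel = x[:, 0:1, :, :]
print(one_channel.shape)   # (4, 1, 32, 32)

# The output_shape argument maps the input shape to the output shape
output_shape = lambda s: (s[0], 1, s[2], s[3])
print(output_shape(x.shape))   # (4, 1, 32, 32)
```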



Source: https://stackoverflow.com/questions/39110234/passing-individual-channels-of-tensors-to-layers-in-keras
