Question
I'm trying to train a model suggested by this research paper where I set half of the filters of a convolution layer to Gabor filters, and the rest are random weights initialized by default. Normally, if I have to set a layer as not trainable, I set its trainable attribute to False. But here I have to freeze only half of the filters of a layer, and I have no idea how to do so. Any help would be really appreciated. I'm using Keras with the TensorFlow backend.
Answer 1:
How about making two convolutional layers that receive the same input and (nearly) the same parameters? One of them is trainable with random weights at initialization, and the other is non-trainable and holds the Gabor filters.
You could then merge the outputs of the two layers so that the result looks like the output of a single convolutional layer.
Here is an example for demonstration (you need to use the Keras functional API):
from keras.layers import Input, Conv2D, concatenate

n_filters = 32

my_input = Input(shape=...)

# Two parallel convolutions over the same input, each with half the filters
# (note the integer division: Conv2D expects an int filter count)
conv_freezed = Conv2D(n_filters // 2, (3, 3), ...)
conv_trainable = Conv2D(n_filters // 2, (3, 3), ...)

conv_freezed_out = conv_freezed(my_input)
conv_trainable_out = conv_trainable(my_input)

# Concatenate along the channel axis so it looks like one layer's output
conv_out = concatenate([conv_freezed_out, conv_trainable_out])

# set the Gabor weights and freeze the layer (do this before compiling the model)
conv_freezed.set_weights(...)
conv_freezed.trainable = False
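
To fill in the set_weights step, here is a minimal self-contained sketch of one way to build the frozen weight tensor, using OpenCV's cv2.getGaborKernel. The orientation sweep, the sigma/lambda/gamma values, the input shape, and the channel count are all illustrative assumptions, not taken from the question's paper:

import numpy as np
import cv2
from keras.layers import Input, Conv2D, concatenate
from keras.models import Model

n_filters = 32
ksize = 3
n_channels = 3  # assumed number of input channels

# One Gabor kernel per frozen filter, sweeping the orientation theta.
# sigma=1.0, lambd=2.0, gamma=0.5 are illustrative values, not from the paper.
kernels = [cv2.getGaborKernel((ksize, ksize), 1.0,
                              np.pi * i / (n_filters // 2), 2.0, 0.5)
           for i in range(n_filters // 2)]

# Stack into Conv2D's weight layout (rows, cols, in_channels, out_channels),
# replicating each 2D kernel across all input channels.
gabor_weights = np.stack(kernels, axis=-1).astype('float32')   # (3, 3, 16)
gabor_weights = np.repeat(gabor_weights[:, :, np.newaxis, :], n_channels, axis=2)
gabor_biases = np.zeros(n_filters // 2, dtype='float32')

my_input = Input(shape=(64, 64, n_channels))  # assumed input size
conv_freezed = Conv2D(n_filters // 2, (ksize, ksize), padding='same')
conv_trainable = Conv2D(n_filters // 2, (ksize, ksize), padding='same')
conv_out = concatenate([conv_freezed(my_input), conv_trainable(my_input)])

conv_freezed.set_weights([gabor_weights, gabor_biases])
conv_freezed.trainable = False

model = Model(inputs=my_input, outputs=conv_out)
model.compile(optimizer='adam', loss='mse')  # loss/optimizer just for illustration

Note that trainable = False only takes effect when the model is compiled, so set it before compile (or recompile after changing it).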
Source: https://stackoverflow.com/questions/52418749/set-half-of-the-filters-of-a-layer-as-not-trainable-keras-tensorflow