Is it possible to have a non-trainable layer in Keras?

Question


I would like to compute a constant convolution, such as blurring or resampling, that never changes during training.

Can I initialize a convolution kernel to a constant and exclude it from training in Keras?

More specifically, I don't want to use this for the purposes described in the docs. I want to implement a residual network this way: one branch performs a normal trainable convolution, while a parallel branch does something constant, like averaging.
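
For example, something like this sketch using the functional API (the shapes and layer choices are just illustrative):

from keras.layers import Input, Conv2D, AveragePooling2D, Add
from keras.models import Model

inputs = Input(shape=(64, 64, 3))

# Trainable branch: an ordinary convolution
trainable = Conv2D(3, (3, 3), padding='same')(inputs)

# Constant branch: averaging, which has nothing to learn
constant = AveragePooling2D((3, 3), strides=(1, 1), padding='same')(inputs)

outputs = Add()([trainable, constant])
model = Model(inputs, outputs)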


Answer 1:


You should be able to pass a trainable=False argument in your layer definition, or set the layer.trainable = False property after creating the layer. In the latter case you need to recompile the model afterward. See the FAQ here.
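
For example (a minimal sketch; the layer arguments are illustrative):

from keras.layers import Conv2D
from keras.models import Sequential

# Option 1: freeze at construction time
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(64, 64, 3), trainable=False))

# Option 2: freeze an existing layer, then recompile
model.layers[0].trainable = False
model.compile(optimizer='adam', loss='mse')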

You can then set constant weights for the layer by passing a kernel_initializer = initializer argument. More information on initializers can be found here. If you already have a weight matrix defined somewhere, I think you will need to define a custom initializer that sets the weights to your desired values. The bottom of that page shows how to define custom initializers. Something as simple as the following might work, assuming you have my_constant_weight_matrix defined:

def my_init(shape, dtype=None):
    # Custom initializers must accept 'shape' and 'dtype' arguments.
    # The returned matrix is expected to match 'shape'.
    return my_constant_weight_matrix

model.add(Conv2D(..., kernel_initializer=my_init))  # replace '...' with your args
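
Combining the two ideas for your constant averaging branch, here is an untested sketch. It assumes the default channels_last layout, where a Conv2D kernel has shape (height, width, in_channels, filters):

import numpy as np
from keras.layers import Conv2D

def box_blur_init(shape, dtype=None):
    # Build a per-channel box-blur kernel of the requested shape.
    kh, kw, in_ch, filters = shape
    kernel = np.zeros((kh, kw, in_ch, filters))
    for f in range(filters):
        # Each filter averages a kh x kw window of one input channel.
        kernel[:, :, f % in_ch, f] = 1.0 / (kh * kw)
    return kernel

blur_layer = Conv2D(3, (3, 3), padding='same', use_bias=False,
                    kernel_initializer=box_blur_init, trainable=False)

With filters equal to the number of input channels, each channel is blurred independently; use_bias=False keeps the layer a purely fixed convolution.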

That said, I have not verified this myself, and a Google search turns up a number of bug reports about layer freezing not working correctly. Worth a shot, though.



Source: https://stackoverflow.com/questions/45123263/is-it-possible-to-have-non-trainable-layer-in-keras
