Decoder's weights of Autoencoder with tied weights in Keras

无人及你 2020-12-30 08:21

I have implemented a tied-weights autoencoder in Keras and have successfully trained it.

My goal is to use only the decoder part of the autoencoder as the last layers of another network, but in the model summary the tied decoder layers report 0 parameters.

1 Answer
  • 2020-12-30 08:43

    It's been more than 2 years since this question was asked, but this answer might still be relevant for some.

    The function Layer.get_weights() collects its values from self.trainable_weights and self.non_trainable_weights (see keras.engine.base_layer.Layer.weights). In your custom layer, the weights self.W and self.b are never added to either of these collections, which is why the layer reports 0 parameters.
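
    To make the mechanism concrete, here is a minimal sketch (assuming standalone Keras 2.x with the TensorFlow backend): get_weights() and the parameter count only see variables the layer has registered, e.g. via add_weight() in Dense.build, so a tensor stored as a plain attribute never shows up.

    from keras.layers import Dense, Input

    inp = Input(shape=(10,))
    dense = Dense(5)
    dense(inp)                                     # building the layer registers kernel + bias

    print(len(dense.trainable_weights))            # 2 -> kernel and bias are tracked
    print([w.shape for w in dense.get_weights()])  # [(10, 5), (5,)]
    print(dense.count_params())                    # 55 = 10*5 + 5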

    You could tweak your implementation as follows:

    import tensorflow as tf
    from keras import backend as K
    from keras.layers import Dense

    class TiedtDense(Dense):
        def __init__(self, output_dim, master_layer, **kwargs):
            # Keep a reference to the encoder layer whose kernel will be reused.
            self.master_layer = master_layer
            super(TiedtDense, self).__init__(output_dim, **kwargs)

        def build(self, input_shape):
            assert len(input_shape) >= 2
            input_dim = input_shape[-1]
            self.input_dim = input_dim

            # Tie the kernel to the transpose of the master layer's kernel and
            # create a fresh bias; registering both in trainable_weights is what
            # makes them visible to get_weights() and the parameter count.
            self.kernel = tf.transpose(self.master_layer.kernel)
            self.bias = K.zeros((self.units,))
            self.trainable_weights.append(self.kernel)
            self.trainable_weights.append(self.bias)
    

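    With the weights registered, the layer can be wired into a model in the usual way. A hypothetical usage sketch follows (the sizes, activations, and names such as input_dim / latent_dim are my own assumptions, not part of the original question):

    from keras.layers import Input, Dense
    from keras.models import Model

    input_dim, latent_dim = 784, 32

    inputs = Input(shape=(input_dim,))
    encoder = Dense(latent_dim, activation='relu')
    encoded = encoder(inputs)                  # builds the encoder, creating its kernel
    decoded = TiedtDense(input_dim, encoder, activation='sigmoid')(encoded)

    autoencoder = Model(inputs, decoded)
    autoencoder.summary()                      # the decoder layer now lists the tied weights
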
    NOTE: I am excluding the regularizers and constraints for simplicity. If you want those, please refer to keras.engine.base_layer.Layer.add_weight.
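
    For reference, a build variant that uses add_weight for the bias might look like the following (this is my own sketch following the add_weight signature in keras.engine.base_layer.Layer, not code from the question); it would replace the build method in the TiedtDense class above:

    def build(self, input_shape):
        assert len(input_shape) >= 2
        self.input_dim = input_shape[-1]

        # Tied kernel: the transpose of the master layer's kernel (shared, not copied).
        self.kernel = tf.transpose(self.master_layer.kernel)
        self.trainable_weights.append(self.kernel)

        # Free bias: add_weight applies the initializer, wires in the bias
        # regularizer/constraint inherited from Dense, and tracks the variable.
        self.bias = self.add_weight(name='bias',
                                    shape=(self.units,),
                                    initializer=self.bias_initializer,
                                    regularizer=self.bias_regularizer,
                                    constraint=self.bias_constraint)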
