How do I correctly implement a custom activity regularizer in Keras?

Asked 2021-02-14 20:15 · 2 answers · 2173 views

I am trying to implement sparse autoencoders according to Andrew Ng's lecture notes as shown here. It requires that a sparsity constraint be applied on an autoencoder layer by

2 Answers
  • 2021-02-14 20:34

    I corrected some errors:

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import backend as K

    class SparseRegularizer(keras.regularizers.Regularizer):

        def __init__(self, rho=0.01, beta=1):
            """
            rho  : desired average activation of the hidden units
            beta : weight of the sparsity penalty term
            """
            self.rho = rho
            self.beta = beta

        def __call__(self, activation):
            rho = self.rho
            beta = self.beta
            # sigmoid so the activations can be read as probabilities
            activation = tf.nn.sigmoid(activation)
            # average activation of each unit over the batch samples
            rho_bar = K.mean(activation, axis=0)
            # avoid division by zero / log(0)
            rho_bar = K.maximum(rho_bar, 1e-10)
            # KL divergence between the desired rate rho and the observed rate rho_bar
            KLs = rho * K.log(rho / rho_bar) + (1 - rho) * K.log((1 - rho) / (1 - rho_bar))
            return beta * K.sum(KLs)  # sum over the layer units

        def get_config(self):
            return {
                'rho': self.rho,
                'beta': self.beta
            }
    
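    To show how a regularizer like this is wired into a model, here is a minimal sketch assuming TensorFlow 2.x. The input size (784) and bottleneck size (32) are hypothetical values for illustration; the regularizer is attached to the encoding layer through Keras's standard `activity_regularizer` argument.

    ```python
    import numpy as np
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import backend as K

    class SparseRegularizer(keras.regularizers.Regularizer):
        """KL-divergence sparsity penalty, as in the answer above."""

        def __init__(self, rho=0.01, beta=1):
            self.rho = rho
            self.beta = beta

        def __call__(self, activation):
            # squash activations into (0, 1) so they can be read as rates
            activation = tf.nn.sigmoid(activation)
            # mean activation of each unit over the batch, clipped away from 0
            rho_bar = K.maximum(K.mean(activation, axis=0), 1e-10)
            kl = self.rho * K.log(self.rho / rho_bar) + \
                 (1 - self.rho) * K.log((1 - self.rho) / (1 - rho_bar))
            return self.beta * K.sum(kl)

        def get_config(self):
            return {'rho': self.rho, 'beta': self.beta}

    # hypothetical shapes: 784-dim input, 32-unit bottleneck
    inputs = keras.Input(shape=(784,))
    encoded = keras.layers.Dense(
        32, activation='sigmoid',
        activity_regularizer=SparseRegularizer(rho=0.05, beta=3))(inputs)
    decoded = keras.layers.Dense(784, activation='sigmoid')(encoded)
    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')

    # the penalty is added to the loss automatically during training
    x = np.random.rand(8, 784).astype('float32')
    loss = autoencoder.evaluate(x, x, verbose=0)
    ```

    The penalty is added to the model's loss alongside the reconstruction term, so no change to `fit`/`evaluate` is needed.
    
    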
  • 2021-02-14 20:35

    You have defined self.p = -0.9 instead of the 0.05 value that both the original poster and the lecture notes you referred to are using.
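    The sign matters because the KL penalty takes `log(rho / rho_bar)`, which is undefined for a negative `rho`. A small numpy check (a sketch; `0.2` is an arbitrary observed activation rate for illustration) makes the failure visible:

    ```python
    import numpy as np

    def kl_term(rho, rho_bar):
        """Per-unit KL sparsity penalty, same formula as the answer above."""
        return rho * np.log(rho / rho_bar) + \
               (1 - rho) * np.log((1 - rho) / (1 - rho_bar))

    ok = kl_term(0.05, 0.2)    # finite, non-negative penalty
    with np.errstate(invalid='ignore'):
        bad = kl_term(-0.9, 0.2)  # nan: log of a negative ratio
    ```

    With `rho = -0.9` the penalty silently becomes NaN and poisons the training loss, which is why the value must stay in (0, 1), e.g. 0.05.
    
    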
