Keras Custom loss function to pass arguments other than y_true and y_pred

小蘑菇 2020-11-30 10:15

I am writing a Keras custom loss function to which I want to pass the following: y_true, y_pred (these two will be passed automatically anyway), the weights of a layer in the network, and an extra constant coefficient.

2 Answers
  • 2020-11-30 10:52

    New answer

    I think what you're looking for is exactly L2 regularization. Just create a regularizer and add it to the layers:

    from keras.regularizers import l2
    
    # in the target layers (Dense, Conv2D, etc.):
    layer = Dense(units, ..., kernel_regularizer=l2(some_coefficient))
    

    You can use bias_regularizer as well.
    The some_coefficient value is multiplied by the squared values of the weights and added to the loss.
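
    For context, here is a minimal, self-contained sketch of a model compiled with such regularizers (the layer sizes and the 0.01 coefficient are just illustrative):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.regularizers import l2
    
    model = Sequential([
        # 0.01 * sum(W**2) is added to the loss for this layer's kernel and bias
        Dense(64, activation='relu', input_shape=(10,),
              kernel_regularizer=l2(0.01),
              bias_regularizer=l2(0.01)),
        Dense(1)
    ])
    model.compile(loss='mse', optimizer='adam')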

    PS: if val in your code is constant, it should not harm your loss. But you can still use the old answer below for val.

    Old answer

    Wrap the function Keras expects (with two parameters) in an outer function that takes the extra arguments you need:

    from keras import backend as K
    from keras.losses import mean_squared_error as mse
    
    def customLoss(layer_weights, val = 0.01):
    
        def lossFunction(y_true, y_pred):
            loss = mse(y_true, y_pred)
            # penalty built from the layer's weight tensor, scaled by val
            loss += val * K.sum(K.abs(K.sum(K.square(layer_weights), axis=1)))
            return loss
    
        return lossFunction
    
    model.compile(loss=customLoss(weights, 0.03), optimizer=..., metrics=...)
    

    Notice that layer_weights must come directly from the layer as a "tensor", so you can't use get_weights(); you must use someLayer.kernel and someLayer.bias (or the respective attribute name for layers that use different names for their trainable parameters).
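
    As a rough sketch of wiring this together (the layer name 'dense_1' is hypothetical; use whatever layer you want to penalize):

    # grab the layer's kernel as a tensor and pass it into the wrapped loss
    target_layer = model.get_layer('dense_1')
    model.compile(loss=customLoss(target_layer.kernel, 0.03), optimizer='adam')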


    The answer here shows how to deal with the case where your external vars vary with the batches: How to define custom cost function that depends on input when using ImageDataGenerator in Keras?

  • 2020-11-30 11:13

    You can also do this by using a lambda function, as follows:

    model.compile(loss=[lambda y_true, y_pred: Custom_loss(y_true, y_pred, val=0.01)], optimizer=...)

    There are some issues with saving and loading a model compiled this way. A workaround is to save only the weights and restore them with model.load_weights(...).
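
    A minimal sketch of this approach (the body of Custom_loss below is just a placeholder for illustration):

    from keras import backend as K
    
    def Custom_loss(y_true, y_pred, val=0.01):
        # placeholder penalty: plain MSE scaled by a constant factor
        return (1.0 + val) * K.mean(K.square(y_pred - y_true))
    
    model.compile(loss=[lambda y_true, y_pred: Custom_loss(y_true, y_pred, val=0.01)],
                  optimizer='adam')
    
    # Because the lambda doesn't serialize cleanly, save/restore weights only:
    model.save_weights('model_weights.h5')
    model.load_weights('model_weights.h5')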
