Custom loss function in Keras based on the input data

夕颜 2020-12-01 08:51

I am trying to create a custom loss function in Keras. I want to compute the loss based on the input to the network as well as its predicted output.


2 Answers
  • 2020-12-01 09:17

    You could wrap your custom loss with another function that takes the input tensor as an argument:

    def customloss(x):
        def loss(y_true, y_pred):
            # Use x here as you wish
            err = K.mean(K.square(y_pred - y_true), axis=-1)
            return err
    
        return loss
    

    And then compile your model as follows:

    model.compile('sgd', customloss(x))
    

    where x is your input tensor.

    NOTE: Not tested.
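    For illustration, here is a minimal sketch of how x could actually be used inside the wrapped loss, e.g. to add an input-dependent penalty term (the penalty and its 0.1 weight are made up, and whether a symbolic input tensor can be consumed this way depends on the Keras/TensorFlow version):

    from tensorflow.keras import backend as K

    def customloss(x):
        def loss(y_true, y_pred):
            mse = K.mean(K.square(y_pred - y_true), axis=-1)
            # hypothetical extra term: keep predictions close to the first input column
            input_penalty = K.mean(K.square(y_pred - x[:, 0:1]), axis=-1)
            return mse + 0.1 * input_penalty
        return loss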

  • 2020-12-01 09:24

    I have come across 2 solutions to the question you asked.

    1. You can pass your input tensor as an argument to the custom loss wrapper function.
        from tensorflow.keras import backend as K
        from tensorflow.keras.layers import Input, Dense
        from tensorflow.keras.models import Model
        from tensorflow.keras.optimizers import Adam

        def custom_loss(i):
            def loss(y_true, y_pred):
                # i is the symbolic input tensor, captured via the closure
                return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...
            return loss

        def baseline_model():
            # create model
            i = Input(shape=(5,))
            x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)
            o = Dense(1, kernel_initializer='normal', activation='linear')(x)
            model = Model(i, o)
            model.compile(loss=custom_loss(i), optimizer=Adam(lr=0.0005))
            return model
    

    This solution is also mentioned in the accepted answer here

    2. You can pad your labels with extra data columns from the input and write a custom loss. This is helpful if you only need one or a few feature columns from your input.
        import numpy as np

        def custom_loss(data, y_pred):
            # data holds the padded labels: column 0 is the real target,
            # column 1 is the input feature the loss needs
            y_true = data[:, 0:1]
            i = data[:, 1:2]
            return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...


        def baseline_model():
            # create model
            i = Input(shape=(5,))
            x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)
            o = Dense(1, kernel_initializer='normal', activation='linear')(x)
            model = Model(i, o)
            model.compile(loss=custom_loss, optimizer=Adam(lr=0.0005))
            return model


        # assuming Y_true has shape (n_samples, 1)
        model.fit(X, np.append(Y_true, X[:, 0:1], axis=1), batch_size=batch_size, epochs=90, shuffle=True, verbose=1)
    

    This solution is also described here in this thread.
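    To make the padding step concrete: during training Keras passes the whole padded array to the loss as its first argument, so the slices inside custom_loss recover the pieces (a sketch with made-up numbers):

        import numpy as np

        Y_true = np.array([[1.0], [2.0]])          # real targets, shape (2, 1)
        X_col0 = np.array([[0.5], [0.7]])          # input feature the loss needs
        data = np.append(Y_true, X_col0, axis=1)   # shape (2, 2); what the loss receives
        # data[:, 0:1] -> targets, data[:, 1:2] -> the input feature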

    I have only used the 2nd method when I had to use input feature columns in the loss. I have used the first method with scalar arguments, but I believe a tensor input should work as well.
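    As an illustration of the scalar-argument variant of the first method (a sketch; the weight parameter is made up, and model, K and Adam are reused from the snippet above):

        def custom_loss(weight):
            def loss(y_true, y_pred):
                return weight * K.mean(K.square(y_pred - y_true), axis=-1)
            return loss

        model.compile(loss=custom_loss(0.5), optimizer=Adam(lr=0.0005))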
