How to get results from custom loss function in Keras?

Mihai Alexandru-Ionut

Also, I want to know what the function has to return.

Custom metrics can be passed at the compilation step.

The function would need to take (y_true, y_pred) as arguments and return a single tensor value.

But I do not know how to get the value of res for the if and else branches.

You can return the result from the custom_metric function:

from keras import backend as K

def custom_metric(y_true, y_pred):
    # Mean relative error per sample (K.abs takes no axis argument,
    # so the reduction is done with K.mean)
    result = K.mean(K.abs((y_true - y_pred) / y_pred), axis=-1)
    return result
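
The metric can then be passed to compile. A minimal sketch; the optimizer and loss here are placeholders, not something prescribed by the question:

model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=[custom_metric])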

The second step is to use a Keras callback in order to compute the sum of the errors.

The callback can be defined and passed to the fit method.

history = CustomLossHistory()
# x_train and y_train stand in for your training data
model.fit(x_train, y_train, callbacks=[history])

The last step is to create the CustomLossHistory class in order to collect the list of errors you expect.

CustomLossHistory will inherit some default methods from keras.callbacks.Callback.

  • on_epoch_begin: called at the beginning of every epoch.
  • on_epoch_end: called at the end of every epoch.
  • on_batch_begin: called at the beginning of every batch.
  • on_batch_end: called at the end of every batch.
  • on_train_begin: called at the beginning of model training.
  • on_train_end: called at the end of model training.

You can read more in the Keras documentation.

But for this example we only need the on_train_begin and on_batch_end methods.

Implementation

import keras

class CustomLossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        # Reset the list of mapped errors at the start of training
        self.errors = []

    def on_batch_end(self, batch, logs=None):
        # Read the batch loss from the logs and map it to a penalty value
        loss = (logs or {}).get('loss')
        self.errors.append(self.loss_mapper(loss))

    def loss_mapper(self, loss):
        # Map the raw loss to one of four discrete penalty values.
        # Each elif already implies loss is above the previous threshold,
        # so no compound condition (and no bitwise &) is needed.
        if loss <= 0.1:
            return 0
        elif loss <= 0.15:
            return 5 / 3
        elif loss <= 0.2:
            return 5
        else:
            return 2000

After your model is trained, you can access the errors using the following statement.

errors = history.errors
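
Since the stated goal was the sum of the errors, you can aggregate the collected list afterwards, for example (total_error is just an illustrative name):

total_error = sum(history.errors)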

I'll take a leap here and say this won't work because it is not differentiable. The loss needs to be continuously differentiable so you can propagate a gradient through it.

If you want to make this work, you need to find a way to do it without discontinuities. For example, you could try a weighted average over your four discrete values where the weights strongly prefer the closest value.
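
A minimal sketch of that idea using the Keras backend; the penalty values come from the mapping above, while the interval centres and the temperature are assumptions chosen for illustration:

from keras import backend as K

def soft_penalty(loss, temperature=100.0):
    # Discrete penalty values and rough centres of the intervals they cover
    # (centres and temperature are illustrative assumptions)
    values = K.constant([0.0, 5.0 / 3.0, 5.0, 2000.0])
    centres = K.constant([0.05, 0.125, 0.175, 0.25])
    # Softmax weights that strongly prefer the centre closest to the loss
    weights = K.softmax(-temperature * K.abs(loss - centres))
    # Differentiable weighted average over the four discrete values
    return K.sum(weights * values)

The higher the temperature, the closer the weights get to a one-hot selection of the nearest value, so it trades smoothness against fidelity to the original step function.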
