BCEWithLogitsLoss in Keras

Submitted by 蓝咒 on 2020-01-25 00:25:12

Question


How can I implement BCEWithLogitsLoss in Keras and use it as a custom loss function while using TensorFlow as the backend?

I have used BCEWithLogitsLoss in PyTorch, which is defined in torch.nn.
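
For reference, a rough sketch of how it is used in PyTorch (the tensors here are only placeholders):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()        # expects raw logits, applies sigmoid internally
logits = torch.randn(4, 1)                # placeholder model outputs
targets = torch.empty(4, 1).random_(2)    # placeholder 0/1 labels as floats
loss = criterion(logits, targets)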

How can I implement the same in Keras?


Answer 1:


In TensorFlow, you can directly call tf.nn.sigmoid_cross_entropy_with_logits, which works in both TensorFlow 1.x and 2.0.
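
For example, a minimal sketch with placeholder tensors (the values and shapes are made up):

import tensorflow as tf

labels = tf.constant([[1.0], [0.0], [1.0]])    # 0/1 targets
logits = tf.constant([[2.3], [-1.2], [0.4]])   # raw model outputs, no sigmoid applied

# Returns the element-wise loss; reduce it yourself, e.g. with reduce_mean.
per_element = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(per_element)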

If you want to stick to the Keras API, use tf.losses.BinaryCrossentropy and set from_logits=True in the constructor call.
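
For instance, a sketch of passing it straight to model.compile (the one-layer model is only a placeholder):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1)    # no sigmoid activation: the output is a logit
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))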

Unlike PyTorch, there are no explicit per-example weights in the API. Instead, set reduction=tf.keras.losses.Reduction.NONE on the loss, apply your weighting by explicit multiplication, and reduce the loss with tf.reduce_mean.

import tensorflow as tf

# Per-element loss (no reduction); apply the weights and reduce manually.
xent = tf.losses.BinaryCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)
loss = tf.reduce_mean(xent(targets, pred) * weights)
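
As a quick sanity check of the snippet above, it can be run on placeholder tensors (shapes and values are made up):

import tensorflow as tf

targets = tf.constant([[1.0], [0.0], [1.0]])    # 0/1 labels
pred    = tf.constant([[2.3], [-1.2], [0.4]])   # raw logits from the model
weights = tf.constant([0.5, 1.0, 2.0])          # per-example weights

xent = tf.losses.BinaryCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)

# xent(...) returns one loss value per example; weight and average them.
loss = tf.reduce_mean(xent(targets, pred) * weights)
print(loss.numpy())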


Source: https://stackoverflow.com/questions/55683729/bcewithlogitsloss-in-keras
