Loss layer on Keras using two input layers and numpy operations

Submitted by 我是研究僧i on 2019-12-24 02:43:06

Question


I have implemented a loss function that uses numpy and OpenCV methods. The function also uses the input image and the output of the network.

Is it possible to convert the input and output layers to numpy arrays, compute the loss, and use it to optimize the network?
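
For concreteness, a minimal sketch of the kind of loss being described (the function name, the Canny call, and the thresholds are illustrative, not from the original question):

```python
import numpy as np
import cv2

# Illustrative numpy/OpenCV loss: it operates on plain arrays outside the
# computation graph, so no gradient can flow back through it.
def numpy_loss(input_image, predicted_image):
    # cv2.Canny expects an 8-bit image; the thresholds here are arbitrary.
    edges_in = cv2.Canny(input_image.astype(np.uint8), 100, 200)
    edges_pred = cv2.Canny(predicted_image.astype(np.uint8), 100, 200)
    # Mean squared difference of edge maps -- a plain Python float,
    # which Keras cannot differentiate.
    diff = edges_in.astype(np.float32) - edges_pred.astype(np.float32)
    return float(np.mean(diff ** 2))
```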


Answer 1:


No. Gradient descent needs gradients, and a loss computed numerically with numpy/OpenCV cannot be differentiated automatically. Keras requires a symbolic loss built from differentiable operations so that it can backpropagate through it.

Your only options are to reimplement the loss using keras.backend functions, or to switch to another deep learning framework that lets you specify the gradient manually. Either way, the gradient has to be computable somehow.
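
For example, if the loss can be expressed with differentiable operations, it can be rebuilt from keras.backend primitives. Below is a minimal sketch (the toy model architecture and the specific loss terms are assumptions for illustration); a closure over the input tensor gives the loss access to the input image:

```python
import keras.backend as K
from keras.layers import Input, Conv2D
from keras.models import Model

# Toy image-to-image model (architecture is illustrative).
inp = Input(shape=(64, 64, 3))
out = Conv2D(3, (3, 3), padding='same', activation='sigmoid')(inp)
model = Model(inp, out)

def make_loss(input_tensor):
    # Closing over the symbolic input tensor lets the loss use the
    # input image alongside y_true and y_pred.
    def loss(y_true, y_pred):
        # Every operation is a K.* symbolic op, so Keras can
        # differentiate the whole expression automatically.
        reconstruction = K.mean(K.square(y_pred - y_true))
        input_penalty = K.mean(K.abs(y_pred - input_tensor))
        return reconstruction + input_penalty
    return loss

model.compile(optimizer='adam', loss=make_loss(inp))
```

The closure pattern is a common workaround when a Keras loss needs more than y_true and y_pred; anything not expressible as backend ops (e.g. cv2.Canny) still has to be replaced with a differentiable approximation or dropped.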



Source: https://stackoverflow.com/questions/46517118/loss-layer-on-keras-using-two-input-layers-and-numpy-operations
