Question
I have implemented a loss function that uses numpy and OpenCV methods. This function also uses the input image and the output of the network.
Is it possible to convert the input and output layers to numpy arrays, compute the loss, and use it to optimize the network?
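For concreteness, the kind of loss described might look like the following (a hypothetical sketch; the edge-based comparison and the Canny thresholds are illustrative, not from the original post):

```python
import numpy as np
import cv2

def numpy_loss(input_image, predicted_image):
    """Hypothetical numpy/OpenCV loss of the kind described above;
    Keras cannot backpropagate through these calls."""
    # cv2.Canny needs single-channel uint8 images; this cast alone
    # already breaks differentiability.
    edges_in = cv2.Canny(input_image.astype(np.uint8), 100, 200)
    edges_pred = cv2.Canny(predicted_image.astype(np.uint8), 100, 200)
    # Cast to float before subtracting to avoid uint8 wrap-around.
    diff = edges_in.astype(np.float32) - edges_pred.astype(np.float32)
    return float(np.mean(diff ** 2))
```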
Answer 1:
No. Gradient descent needs the gradient of the loss with respect to the network's weights, and a loss that is only computed numerically (with numpy and OpenCV) cannot be differentiated automatically; Keras requires a symbolic loss built from operations it knows how to backpropagate through.
Your only options are to reimplement the loss using keras.backend functions, or to use another deep learning framework that lets you specify the gradient manually; even then, you would still need to compute that gradient somehow.
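As a sketch of the first option, here is what a keras.backend version could look like, assuming the OpenCV edge comparison is replaced by a differentiable proxy (a simple pixel-gradient difference); the function names below are illustrative, not from the original post:

```python
from keras import backend as K

def symbolic_loss(input_image):
    """Illustrative closure-based loss; `input_image` is the model's
    input tensor, so the loss can see it alongside the output."""
    def loss(y_true, y_pred):
        # Differences between horizontally adjacent pixels (axis 2 in a
        # channels-last batch) give a crude, differentiable stand-in
        # for an OpenCV edge detector.
        grad_in = input_image[:, :, 1:, :] - input_image[:, :, :-1, :]
        grad_pred = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        return K.mean(K.square(grad_in - grad_pred))
    return loss

# Usage (hypothetical model): pass the model's input tensor into the
# closure so the loss can see it alongside the network output.
# model.compile(optimizer='adam', loss=symbolic_loss(model.input))
```

The key point is that every operation in the loss is a Keras backend op, so the framework can derive the gradient itself.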
Source: https://stackoverflow.com/questions/46517118/loss-layer-on-keras-using-two-input-layers-and-numpy-operations