How can I implement the Kullback-Leibler loss in TensorFlow?


Question


I need to minimize the KL divergence as a loss in TensorFlow.

I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but it didn't work for me.

I tried to implement it manually:

def kl_divergence(p, q):
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

Is it correct?
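For reference, that element-wise expression is the KL divergence between two Bernoulli distributions with success probabilities p and q, which can be checked numerically outside TensorFlow. A minimal NumPy sketch (the function name bernoulli_kl is my own):

```python
import numpy as np

def bernoulli_kl(p, q):
    # KL(Bernoulli(p) || Bernoulli(q)), element-wise:
    # p*log(p/q) + (1-p)*log((1-p)/(1-q)), for p, q strictly in (0, 1).
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Identical distributions have zero divergence.
print(bernoulli_kl(0.3, 0.3))
# KL(Bern(0.5) || Bern(0.25)) = 0.5*ln(2) + 0.5*ln(2/3) = 0.5*ln(4/3)
print(bernoulli_kl(0.5, 0.25))
```

The formula is only defined when both p and q lie strictly between 0 and 1; values at the boundary produce log(0) or division by zero.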


Answer 1:


What you have there is the cross entropy; the KL divergence should be something like:

def kl_divergence(p, q): 
    return tf.reduce_sum(p * tf.log(p/q))

This assumes that p and q are both 1-D float tensors of the same shape, and that the values of each sum to 1.

It should also work if p and q are equally sized mini-batches of 1-D tensors that obey the above constraints.
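The reduction above can be sanity-checked without TensorFlow; a minimal NumPy sketch of the same discrete KL divergence (the function name discrete_kl is my own):

```python
import numpy as np

def discrete_kl(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i), for probability vectors
    # p and q over the same support, with all q_i > 0.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# KL of a distribution against itself is zero.
print(discrete_kl([0.5, 0.5], [0.5, 0.5]))
# KL([0.5, 0.5] || [0.25, 0.75]) = 0.5*ln(2) + 0.5*ln(2/3) = 0.5*ln(4/3)
print(discrete_kl([0.5, 0.5], [0.25, 0.75]))
```

In current TensorFlow releases, tf.log has been renamed tf.math.log, and ready-made alternatives exist, e.g. tf.keras.losses.KLDivergence, or tfp.distributions.kl_divergence in TensorFlow Probability (the successor to tf.contrib.distributions).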



Source: https://stackoverflow.com/questions/43298450/how-can-i-implement-the-kullback-leibler-loss-in-tensorflow
