Question:
I need to minimize a KL-divergence loss in TensorFlow.
I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but it did not work for me.
I tried to implement it manually:
def kl_divergence(p, q):
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))
Is it correct?
Answer 1:
What you have there is the cross entropy; the KL divergence should be something like:
def kl_divergence(p, q):
    # Element-wise p * log(p/q), summed over the distribution's support.
    return tf.reduce_sum(p * tf.log(p / q))
This assumes that p and q are both 1-D float tensors of the same shape, and that the values of each sum to 1.
It should also work if p and q are equally sized mini-batches of 1-D tensors obeying the above constraints.
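For concreteness, here is a minimal usage sketch assuming the TensorFlow 1.x session API (consistent with the tf.contrib and tf.log calls above). The example probability values and the axis=-1 argument, which lets the same function work row-wise on mini-batches, are illustrative assumptions rather than part of the original answer:

import tensorflow as tf

def kl_divergence(p, q):
    # Sum over the last axis so the function also handles mini-batches
    # of 1-D distributions, one KL value per row. (axis=-1 is an assumption.)
    return tf.reduce_sum(p * tf.log(p / q), axis=-1)

# Two example 1-D distributions; each sums to 1 (values are illustrative only).
p = tf.constant([0.1, 0.4, 0.5])
q = tf.constant([0.3, 0.3, 0.4])

with tf.Session() as sess:
    print(sess.run(kl_divergence(p, q)))  # a single non-negative scalar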
Source: https://stackoverflow.com/questions/43298450/how-can-i-implement-the-kullback-leibler-loss-in-tensorflow