Interpreting scipy.stats.entropy values

Submitted by 天涯浪子 on 2020-06-24 07:47:33

Question


I am trying to use scipy.stats.entropy to estimate the Kullback–Leibler (KL) divergence between two distributions. More specifically, I would like to use the KL divergence as a metric to decide how consistent two distributions are.

However, I cannot interpret the KL values. For example:

import numpy
import scipy.stats

t1 = numpy.random.normal(-2.5, 0.1, 1000)
t2 = numpy.random.normal(-2.5, 0.1, 1000)

scipy.stats.entropy(t1, t2)
= 0.0015539217193737955

Then,

t1 = numpy.random.normal(-2.5, 0.1, 1000)
t2 = numpy.random.normal(2.5, 0.1, 1000)

scipy.stats.entropy(t1, t2)
= 0.0015908295787942181

How can two completely different distributions, with essentially no overlap, have practically the same KL value?

t1 = numpy.random.normal(-2.5, 0.1, 1000)
t2 = numpy.random.normal(25., 0.1, 1000)

scipy.stats.entropy(t1, t2)
= 0.00081111364805590595

This one gives an even smaller KL value (i.e., distance), which I would be inclined to interpret as "more consistent".

Any insights on how to interpret the scipy.stats.entropy values (i.e., the KL divergence) in this context?


Answer 1:


numpy.random.normal(-2.5,0.1,1000) is a sample from a normal distribution: just 1000 numbers in random order. The documentation for entropy says:

pk[i] is the (possibly unnormalized) probability of event i.

So to get a meaningful result, the numbers need to be "aligned" so that the same index corresponds to the same position in both distributions. In your example, t1[0] has no relationship to t2[0]. A sample gives you some actual values drawn from the distribution, but it doesn't directly tell you how probable each value is, which is what the KL divergence needs.
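To make the expected input concrete, here is a minimal sketch (the pk and qk values are made up for illustration) showing that scipy.stats.entropy(pk, qk) normalizes the two vectors and then computes sum(p * log(p / q)), treating index i as the same event in both arrays:

import numpy as np
from scipy import stats

# Two unnormalized probability vectors; index i must refer to
# the same event in both arrays.
pk = np.array([4.0, 3.0, 2.0, 1.0])
qk = np.array([1.0, 2.0, 3.0, 4.0])

# Normalize, then apply the KL definition directly.
p = pk / pk.sum()
q = qk / qk.sum()
print(np.sum(p * np.log(p / q)))   # ≈ 0.4564
print(stats.entropy(pk, qk))       # same value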

The most straightforward way to get aligned values is to evaluate the distributions' probability density functions on a common, fixed set of points. To do this, use scipy.stats.norm (which returns a distribution object that can be manipulated in various ways) instead of np.random.normal (which only returns sampled values). Here's an example:

import numpy as np
from scipy import stats

t1 = stats.norm(-2.5, 0.1)
t2 = stats.norm(-2.5, 0.1)
t3 = stats.norm(-2.4, 0.1)
t4 = stats.norm(-2.3, 0.1)

# common domain on which to evaluate the PDFs
x = np.linspace(-5, 5, 100)

Then:

>>> stats.entropy(t1.pdf(x), t2.pdf(x))
-0.0
>>> stats.entropy(t1.pdf(x), t3.pdf(x))
0.49999995020647586
>>> stats.entropy(t1.pdf(x), t4.pdf(x))
1.999999900414918

You can see that as the distributions move further apart, their KL divergence increases. (In fact, using the parameters from your second example gives a KL divergence of inf: the PDFs overlap so little that q(x) is effectively zero at points where p(x) is not.)
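As a sanity check on those numbers, the KL divergence between two normal distributions has a closed form, KL(N(μ1, σ1²) ‖ N(μ2, σ2²)) = log(σ2/σ1) + (σ1² + (μ1 − μ2)²) / (2σ2²) − 1/2. A short sketch (not part of the original answer) reproducing the 0.5 and 2.0 values above:

import numpy as np

def gaussian_kl(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)) in nats."""
    return (np.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2)
            - 0.5)

print(gaussian_kl(-2.5, 0.1, -2.4, 0.1))  # 0.5, matches entropy(t1.pdf(x), t3.pdf(x))
print(gaussian_kl(-2.5, 0.1, -2.3, 0.1))  # 2.0, matches entropy(t1.pdf(x), t4.pdf(x))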



Source: https://stackoverflow.com/questions/26743201/interpreting-scipy-stats-entropy-values
