is seaborn confidence interval computed correctly?

Submitted by 别来无恙 on 2020-06-10 04:04:07

Question


First, I must admit that my statistics knowledge is rusty at best: even when it was shiny and new, it was not a discipline I particularly liked, which means I had a hard time making sense of it.

Nevertheless, I took a look at how the barplot graphs compute their error bars, and was surprised to find a "confidence interval" (CI) used instead of the (more common) standard deviation. Researching CIs further led me to this Wikipedia article, which seems to say that, basically, a CI is computed as:

lower bound: mean - 1.96 * stdev / sqrt(n)

upper bound: mean + 1.96 * stdev / sqrt(n)

Or, in pseudocode:

import numpy as np

def ci_wp(a):
    """Calculate a confidence interval using Wikipedia's formula."""
    m = np.mean(a)
    s = 1.96 * np.std(a) / np.sqrt(len(a))
    return m - s, m + s

But what we find in seaborn/utils.py is:

def ci(a, which=95, axis=None):
    """Return a percentile range from an array of values."""
    p = 50 - which / 2, 50 + which / 2
    return percentiles(a, p, axis)

Now maybe I'm missing something, but this seems like a completely different calculation from the one proposed by Wikipedia. Can anyone explain this discrepancy?

To give another example, from the comments: why do we get such different results between:

>>> sb.utils.ci(np.arange(100))
array([ 2.475, 96.525])

>>> ci_wp(np.arange(100))
(43.842250270646467, 55.157749729353533)

And to compare with other statistical tools:

import numpy as np
import scipy as sp
import scipy.stats  # makes sp.stats available

def ci_std(a):
    """Interval of +/- one standard deviation around the mean."""
    m = np.mean(a)
    s = np.std(a)
    return m - s, m + s

def ci_sem(a):
    """Interval of +/- one standard error of the mean around the mean."""
    m = np.mean(a)
    s = sp.stats.sem(a)
    return m - s, m + s

Which gives us:

>>> ci_sem(np.arange(100))
(46.598850802411796, 52.401149197588204)

>>> ci_std(np.arange(100))
(20.633929952277882, 78.366070047722118)

Or with a random sample:

rng = np.random.RandomState(10)
a = rng.normal(size=100)
print(sb.utils.ci(a))
print(ci_wp(a))
print(ci_sem(a))
print(ci_std(a))

... which yields:

[-1.9667006   2.19502303]
(-0.1101230745774124, 0.26895640045116026)
(-0.017774461397903049, 0.17660778727165088)
(-0.88762281417683186, 1.0464561400505796)

Why are Seaborn's numbers so radically different from the other results?


Answer 1:


Your calculation using the Wikipedia formula is completely right. Seaborn simply uses another method: bootstrapping (https://en.wikipedia.org/wiki/Bootstrapping_(statistics)). It is well described by Dragicevic [1]:

[It] consists of generating many alternative datasets from the experimental data by randomly drawing observations with replacement. The variability across these datasets is assumed to approximate sampling error and is used to compute so-called bootstrap confidence intervals. [...] It is very versatile and works for many kinds of distributions.

In seaborn's source code, a barplot uses estimate_statistic, which bootstraps the data and then computes the confidence interval on it:

>>> sb.utils.ci(sb.algorithms.bootstrap(np.arange(100)))
array([43.91, 55.21025])

The result is consistent with your calculation.
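
For intuition, here is a minimal, self-contained sketch of the percentile bootstrap (the function name bootstrap_ci and its defaults are my own choices, not seaborn's API): resample the data with replacement many times, take the mean of each resample, and return the percentile range of those means.

import numpy as np

def bootstrap_ci(a, which=95, n_boot=10000, seed=10):
    """Percentile bootstrap CI for the mean (illustrative sketch, not seaborn's code)."""
    rng = np.random.RandomState(seed)
    a = np.asarray(a)
    # Draw n_boot resamples with replacement and keep the mean of each one
    boot_means = np.array([rng.choice(a, size=len(a), replace=True).mean()
                           for _ in range(n_boot)])
    # The CI is the percentile range of the bootstrap distribution of means
    p = 50 - which / 2, 50 + which / 2
    return np.percentile(boot_means, p)

print(bootstrap_ci(np.arange(100)))  # roughly [44, 55], in line with ci_wp

Because the bootstrap distribution of the sample mean is approximately normal here, its percentile range lands close to the mean ± 1.96 · SEM interval from the Wikipedia formula.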

[1] Dragicevic, P. (2016). Fair statistical communication in HCI. In Modern Statistical Methods for HCI (pp. 291-330). Springer, Cham.




Answer 2:


You need to look at the code of percentiles as well. The seaborn ci code you posted simply computes the percentile limits: the interval is centered on the 50th percentile (the median) and spans a 95% range by default. The actual mean, standard deviation, etc. only come into play inside the percentiles routine.
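
To make that concrete, here is a quick check, using np.percentile as a stand-in for seaborn's internal percentiles helper (an assumption on my part), that reproduces the sb.utils.ci output from the question when the formula is applied directly to the raw data:

import numpy as np

a = np.arange(100)
which = 95
p = 50 - which / 2, 50 + which / 2   # (2.5, 97.5)
print(np.percentile(a, p))           # [ 2.475 96.525], same as sb.utils.ci(a)

Applied to raw data this is just the spread of the sample, which is why it only lines up with the other CI calculations after the data has been bootstrapped first, as in the accepted answer.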



Source: https://stackoverflow.com/questions/46125182/is-seaborn-confidence-interval-computed-correctly
