Error of slope using numpy.polyfit and dependent variable


Question


I am having trouble understanding the error estimates on the variables when performing a linear regression. Referring to How to find error on slope and intercept using numpy.polyfit, say I am fitting a straight line with numpy.polyfit (code below).

As mentioned in the question in the link, the square roots of the diagonal entries of the covariance matrix are the estimated standard deviations of the fitted coefficients, so np.sqrt(V[0][0]) is the standard deviation of the slope. My questions are: how should the standard deviation of y be represented? Should it be the uncertainties added in quadrature, i.e., y +/- np.sqrt(np.sqrt(V[0][0])**2 + np.sqrt(V[1][1])**2)? Or could I only represent it by the standard deviation of the residuals (which would be np.sqrt(S/(len(y)-2)), since two coefficients are fitted)? Finally, is it possible to obtain the residuals from the covariance matrix?
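For reference, the usual alternative to adding the diagonal terms in quadrature is full error propagation through the covariance matrix: for a degree-1 fit y = p[0]*x0 + p[1], the Jacobian with respect to the coefficients is J = [x0, 1], so sigma_y(x0) = sqrt(J V J^T), which keeps the off-diagonal covariance V[0][1] that the quadrature sum drops. A minimal sketch of that propagation (the toy data and the names x0, J, sigma_y are illustrative, not from the question):

import numpy as np

# Illustrative data; any equal-length x, y arrays work here
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 0.5 + np.random.normal(scale=0.1, size=x.size)

p, V = np.polyfit(x, y, 1, cov=True)

# Propagate the full 2x2 covariance: sigma_y(x0)^2 = J @ V @ J with
# J = [x0, 1], evaluated at several points x0 at once via einsum
x0 = np.linspace(0.0, 1.0, 5)
J = np.vstack([x0, np.ones_like(x0)]).T        # shape (5, 2)
sigma_y = np.sqrt(np.einsum('ij,jk,ik->i', J, V, J))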

PS: thanks for the heads-up on adding an "answer" to a question.

import numpy as np

# Data for testing
x = np.array([0.24580423, 0.59642861, 0.35879163, 0.37891011, 0.02445137,
       0.23830957, 0.38793433, 0.68054104, 0.83934083, 0.76073689])

y = np.array([0.61502838, 1.01772738, 1.35351035, 1.32799754, 0.23326104,
       0.89275698, 0.689498  , 1.48300835, 2.324673  , 1.52208752])

# cov=True additionally returns V, the 2x2 covariance matrix of the coefficients
p, V = np.polyfit(x, y, 1, cov=True)

# full=True returns diagnostics; S holds the sum of squared residuals
# (*rest is the rank, singular values, and rcond of the fit)
p2, S, *rest = np.polyfit(x, y, 1, full=True)
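Regarding the last sub-question: with recent NumPy's default scaling, the returned covariance is V = inv(X^T X) * S/(n - 2), where X is the design matrix with columns [x, 1] and n = len(y). So the residual sum of squares S can be recovered from V if the design matrix is also known, but the individual residuals cannot. A hedged check of that relationship, continuing from the snippet above (the names X, V_unscaled, S_recovered are mine):

# Design matrix of the degree-1 fit: columns [x, 1]
X = np.vstack([x, np.ones_like(x)]).T

# Unscaled covariance inv(X^T X); polyfit multiplies it by S/(n - 2)
V_unscaled = np.linalg.inv(X.T @ X)
S_recovered = (len(y) - 2) * V[0, 0] / V_unscaled[0, 0]

print(np.isclose(S_recovered, S[0]))   # expected: True on recent NumPy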

Source: https://stackoverflow.com/questions/56644806/error-of-slope-using-numpy-polyfit-and-dependent-variable
