I'm using `scipy.odr` to fit data with uncertainties on both x and y, following this question: Correct fitting with scipy curve_fit including errors in x?
The reason for the discrepancy is that `sd_beta` is scaled by the residual variance, whereas `cov_beta` isn't.
`scipy.odr` is an interface to the ODRPACK FORTRAN library, which is thinly wrapped in `__odrpack.c`. `sd_beta` and `cov_beta` are recovered by indexing into the `work` vector that's used internally by the FORTRAN routine. The indices of their first elements in `work` are stored in variables named `sd` and `vcv` (see here).
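As an aside, you can poke at this from Python: scipy's `Output` object documents optional `work` and `work_ind` attributes, present when the fit is run with full output (which `ODR.run` requests by default). What follows is only a hedged sketch: I'm assuming `work_ind` carries an `'sd'` key and that the stored offset is directly usable; ODRPACK's index-reporting routine uses 1-based FORTRAN indices, so you may need to subtract 1 depending on the scipy version.

```python
# Sketch: inspect the internal work vector of an existing fit result.
# Assumes `output` came from ODR(...).run() with full output (the default),
# and that work_ind has an 'sd' entry -- both assumptions to verify.
p = len(output.beta)
ofs = output.work_ind['sd']           # offset of SD's first element in work
sd_slice = output.work[ofs:ofs + p]   # try `ofs - 1` if this doesn't match
print(sd_slice)                       # expected to equal output.sd_beta
```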
From the ODRPACK documentation (p.85):
> `WORK(SDI)` is the first element of a $p \times 1$ array `SD` containing the standard deviations $\hat\sigma_{\beta_K}$ of the function parameters $\beta$, i.e., the square roots of the diagonal entries of the covariance matrix, where
>
> $$\mathtt{WORK(SDI-1+K)} = \mathtt{SD(K)} = \hat V_\beta^{1/2}(K, K) = \hat\sigma_{\beta_K}$$
>
> for $K = 1, \ldots, p$.
>
> `WORK(VCVI)` is the first element of a $p \times p$ array `VCV` containing the values of the covariance matrix of the parameters $\beta$ prior to scaling by the residual variance, where
>
> $$\mathtt{WORK(VCVI-1+I+(J-1)*(NP))} = \mathtt{VCV(I, J)} = \hat\sigma^{-2} \hat V_\beta(I, J)$$
>
> for $I = 1, \ldots, p$ and $J = 1, \ldots, p$.
In other words, `np.sqrt(np.diag(output.cov_beta * output.res_var))` will give you the same result as `output.sd_beta`.
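For concreteness, here's a minimal, self-contained sketch (a made-up straight-line fit; any model would do) checking that the rescaled diagonal of `cov_beta` reproduces `sd_beta`:

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

# Straight-line model in ODR's (beta, x) convention.
def linear(beta, x):
    return beta[0] * x + beta[1]

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Uncertainties on both x and y.
data = RealData(x, y, sx=0.2, sy=0.5)
output = ODR(data, Model(linear), beta0=[1.0, 0.0]).run()

# cov_beta is the covariance matrix *before* scaling by the residual
# variance; rescaling it recovers sd_beta.
rescaled = np.sqrt(np.diag(output.cov_beta * output.res_var))
print(output.sd_beta)
print(rescaled)
assert np.allclose(output.sd_beta, rescaled)
```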
I've opened a bug report here.