I am fitting data points using a logistic model. Since I sometimes have data with an error on ydata, I first used curve_fit and its sigma argument to include my individual standard deviations in the fit.
Now I have switched to leastsq, because I also needed some goodness-of-fit estimation that curve_fit could not provide. Everything works well, but now I am missing the possibility to weight the least squares the way "sigma" does with curve_fit.
Does anyone have a code example of how I could weight the least squares in leastsq as well?
Thanks, Woodpicker
I just found that it is possible to combine the best of both worlds and get the full leastsq() output from curve_fit() as well, using the full_output option:
popt, pcov, infodict, errmsg, ier = curve_fit(func, xdata, ydata, sigma=SD, full_output=True)
This gives me infodict, which I can use to calculate all my goodness-of-fit statistics, while still letting me use curve_fit's sigma option at the same time...
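For context, here is a minimal sketch of how the goodness-of-fit numbers can be pulled out of infodict. The logistic model func, the data arrays and SD below are made up purely so the sketch runs on its own; substitute your own.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical logistic model -- replace with your own func.
def func(x, a, b, x0):
    return a / (1.0 + np.exp(-b * (x - x0)))

# Made-up data with per-point standard deviations SD.
rng = np.random.default_rng(0)
xdata = np.linspace(-5.0, 5.0, 50)
SD = np.full_like(xdata, 0.05)
ydata = func(xdata, 1.0, 1.5, 0.3) + rng.normal(0.0, SD)

popt, pcov, infodict, errmsg, ier = curve_fit(
    func, xdata, ydata, p0=(1.0, 1.0, 0.0), sigma=SD, full_output=True)

# With sigma given, infodict['fvec'] holds the weighted residuals at the
# solution, so their sum of squares is the chi-squared of the fit.
chisq = np.sum(infodict['fvec'] ** 2)
dof = len(ydata) - len(popt)      # degrees of freedom
red_chisq = chisq / dof           # reduced chi-squared as a goodness-of-fit measure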
Assuming your data are in arrays x, y with yerr, and the model is f(p, x), just define the error function to be minimized as (y - f(p, x)) / yerr.
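A minimal sketch of that approach with leastsq, assuming a logistic model f(p, x) and made-up arrays x, y, yerr (adapt the model and the initial guess p0 to your own data):

import numpy as np
from scipy.optimize import leastsq

# Hypothetical logistic model f(p, x) -- adapt to your own parametrisation.
def f(p, x):
    a, b, x0 = p
    return a / (1.0 + np.exp(-b * (x - x0)))

# Weighted residuals: dividing each residual by its standard deviation
# reproduces what curve_fit's sigma argument does.
def residuals(p, x, y, yerr):
    return (y - f(p, x)) / yerr

# Made-up data, just so the sketch runs; use your own x, y, yerr.
rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 50)
yerr = np.full_like(x, 0.05)
y = f((1.0, 1.5, 0.3), x) + rng.normal(0.0, yerr)

p0 = (1.0, 1.0, 0.0)              # initial parameter guess
popt, cov_x, infodict, errmsg, ier = leastsq(
    residuals, p0, args=(x, y, yerr), full_output=True)

# Note: cov_x is the raw covariance from leastsq; to compare it with
# curve_fit's pcov (absolute_sigma=False) it has to be scaled by the
# reduced chi-squared, i.e. sum(infodict['fvec']**2) / (len(y) - len(popt)).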
The scipy.optimize.curve_fit docs say:
pcov : 2d array
The estimated covariance of popt. The diagonals provide the variance of the parameter estimate. To compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov)). How the sigma parameter affects the estimated covariance depends on absolute_sigma argument, as described above.
And the section on
absolute_sigma : bool, optional
If True, sigma is used in an absolute sense and the estimated parameter covariance pcov reflects these absolute values.
If False, only the relative magnitudes of the sigma values matter. The returned parameter covariance matrix pcov is based on scaling sigma by a constant factor. This constant is set by demanding that the reduced chisq for the optimal parameters popt when using the scaled sigma equals unity. In other words, sigma is scaled to match the sample variance of the residuals after the fit. Mathematically, pcov(absolute_sigma=False) = pcov(absolute_sigma=True) * chisq(popt)/(M-N)
So, you could just leave absolute_sigma at its default value (False) and then use
perr = np.sqrt(np.diag(pcov))
fitStdErr0 = perr[0]
fitStdErr1 = perr[1]
...
to get the one-standard-deviation error of each fit parameter (perr is a 1D numpy array). You can then pick out the members you need (and combine them in whatever way is most representative of your data).
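Putting the pieces together, here is a hedged sketch of that recipe. Only func, xdata, ydata and SD are carried over from the question; the concrete model and numbers are made up so the snippet runs on its own.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical logistic model and synthetic data, only to make the sketch runnable.
def func(x, a, b, x0):
    return a / (1.0 + np.exp(-b * (x - x0)))

rng = np.random.default_rng(2)
xdata = np.linspace(-5.0, 5.0, 50)
SD = np.full_like(xdata, 0.05)
ydata = func(xdata, 1.0, 1.5, 0.3) + rng.normal(0.0, SD)

# Default absolute_sigma=False: pcov is rescaled so the reduced chi-squared is 1.
popt, pcov = curve_fit(func, xdata, ydata, p0=(1.0, 1.0, 0.0), sigma=SD)
perr = np.sqrt(np.diag(pcov))     # one-standard-deviation error of each parameter
fitStdErr0, fitStdErr1 = perr[0], perr[1]

# If SD really are absolute measurement uncertainties, pass absolute_sigma=True
# so that pcov is left unscaled.
popt_abs, pcov_abs = curve_fit(func, xdata, ydata, p0=(1.0, 1.0, 0.0),
                               sigma=SD, absolute_sigma=True)
perr_abs = np.sqrt(np.diag(pcov_abs))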
Source: https://stackoverflow.com/questions/16510227/python-scipy-implementing-optimize-curve-fit-s-sigma-into-optimize-leastsq