How to set the initial guess for curve_fit so that it finds the global optimum, not just a local optimum?


Question


I am trying to fit a power-law function in order to find the best-fit parameters. However, I find that if the initial guess of the parameters is different, the "best fit" output is different. Only when I happen to pick the right initial guess do I get the global optimum rather than a local one. Is there any way to find the appropriate initial guess? My code is listed below. Please feel free to make any input. Thanks!

import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline

# power law function
def func_powerlaw(x,a,b,c):
    return a*(x**b)+c

test_X = [1.0,2,3,4,5,6,7,8,9,10]
test_Y =[3.0,1.5,1.2222222222222223,1.125,1.08,1.0555555555555556,1.0408163265306123,1.03125, 1.0246913580246915,1.02]

predict_Y = []
for x in test_X:
    predict_Y.append(2*x**-2+1)

If I go with the default initial guess, p0 = [1, 1, 1]:

popt, pcov = curve_fit(func_powerlaw, test_X[1:], test_Y[1:], maxfev=2000)


plt.figure(figsize=(10, 5))
plt.plot(test_X, func_powerlaw(test_X, *popt),'r',linewidth=4, label='fit: a=%.4f, b=%.4f, c=%.4f' % tuple(popt))
plt.plot(test_X[1:], test_Y[1:], '--bo')
plt.plot(test_X[1:], predict_Y[1:], '-b')
plt.legend()
plt.show()

The resulting fit, shown below, is not the best fit.

If I change the initial guess to p0 = [0.5,0.5,0.5]

popt, pcov = curve_fit(func_powerlaw, test_X[1:], test_Y[1:], p0=np.asarray([0.5,0.5,0.5]), maxfev=2000)

I can get the best fit.

--------------------- Updated on 10.7.2018 ---------------------

As I need to run thousands or even millions of power-law fits, using @James Phillips's method is too expensive. So what other method is appropriate besides curve_fit, for example sklearn, np.linalg.lstsq, etc.?


Answer 1:


Here is example code using the scipy.optimize.differential_evolution genetic algorithm with your data and equation. This scipy module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, and so it requires bounds within which to search; in this example, those bounds are based on the data maximum and minimum values. For other problems you might need to supply different search bounds if you know what range of parameter values to expect.

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

# power law function
def func_power_law(x,a,b,c):
    return a*(x**b)+c

# data as numpy arrays so the model and error functions can use vectorized math
test_X = numpy.array([1.0, 2, 3, 4, 5, 6, 7, 8, 9, 10])
test_Y = numpy.array([3.0, 1.5, 1.2222222222222223, 1.125, 1.08, 1.0555555555555556, 1.0408163265306123, 1.03125, 1.0246913580246915, 1.02])


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func_power_law(test_X, *parameterTuple)
    return numpy.sum((test_Y - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(test_X)
    minX = min(test_X)
    maxY = max(test_Y)
    minY = min(test_Y)
    maxXY = max(maxX, maxY)

    parameterBounds = []
    parameterBounds.append([-maxXY, maxXY]) # search bounds for a
    parameterBounds.append([-maxXY, maxXY]) # search bounds for b
    parameterBounds.append([-maxXY, maxXY]) # search bounds for c

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data
fittedParameters, pcov = curve_fit(func_power_law, test_X, test_Y, geneticParameters)

print('Parameters', fittedParameters)

modelPredictions = func_power_law(test_X, *fittedParameters) 

absError = modelPredictions - test_Y

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(test_Y))
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(test_X, test_Y,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(test_X), max(test_X))
    yModel = func_power_law(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)



Answer 2:


There is no simple answer: if there were, it would be implemented in curve_fit, and then it would not have to ask you for a starting point. One reasonable approach is to fit the homogeneous model y = a*x**b first. Assuming positive y (which is usually the case when you work with a power law), this can be done in a rough and quick way: on the log-log scale, log(y) = log(a) + b*log(x) is a linear regression that can be solved with np.linalg.lstsq. This gives candidates for log(a) and for b; the candidate for c with this approach is 0.

test_X = np.array([1.0,2,3,4,5,6,7,8,9,10])
test_Y = np.array([3.0,1.5,1.2222222222222223,1.125,1.08,1.0555555555555556,1.0408163265306123,1.03125, 1.0246913580246915,1.02])

rough_fit = np.linalg.lstsq(np.stack((np.ones_like(test_X), np.log(test_X)), axis=1), np.log(test_Y), rcond=None)[0]
p0 = [np.exp(rough_fit[0]), rough_fit[1], 0]
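
As a minimal sketch (assuming the func_powerlaw defined in the question), the rough estimate is then handed to curve_fit as the starting point:

from scipy.optimize import curve_fit

def func_powerlaw(x, a, b, c):
    return a*(x**b) + c

# use the rough log-log estimate as the starting guess
popt, pcov = curve_fit(func_powerlaw, test_X, test_Y, p0=p0, maxfev=2000)
print(popt)  # should land close to a=2, b=-2, c=1 for this data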

The result is the good fit you see in the second picture.

By the way, it's better to make test_X a NumPy array from the start. Otherwise, you slice X[1:] first, this gets NumPy-fied as an array of integers, and then an error is thrown with negative exponents. (And I suppose the purpose of the 1.0 was to make it a float array? That is what the dtype parameter, e.g. dtype=float, is for.)
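
For example (a small illustrative snippet, not part of the original answer):

import numpy as np

# a float array from the start: the slice test_X[1:] then stays float,
# so raising it to a negative power works without error
test_X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)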




Answer 3:


In addition to the very fine answers from "Welcome to Stack Overflow" that "there is no easy, universal approach" and from James Phillips that "differential evolution often helps find good starting points (or even good solutions!) if somewhat slower than curve_fit()", allow me to give a separate answer that you may find helpful.

First, the fact that curve_fit() defaults the starting parameter values to all 1 is a soul-crushingly bad idea. There is no possible justification for this behavior; you and everyone else should treat the existence of default values for the parameters as a serious error in the implementation of curve_fit() and pretend this bug does not exist. NEVER believe these defaults are reasonable.

From a simple plot of the data, it should be obvious that a=1, b=1, c=1 are very, very bad starting values. The function decays, so b < 0. In fact, if you had started with a=1, b=-1, c=1 you would have found the correct solution.

It may also have helped to place sensible bounds on the parameters. Even setting bounds on c of (-100, 100) may have helped. As with the sign of b, I think you could have seen that boundary from a simple plot of the data. When I try this for your problem, bounds on c do not help if the initial value is b=1, but they do for b=0 or b=-5.
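
A rough sketch of both suggestions combined (a negative starting value for b plus bounds on c), assuming the func_powerlaw and data from the question:

import numpy as np
from scipy.optimize import curve_fit

def func_powerlaw(x, a, b, c):
    return a*(x**b) + c

x = np.asarray(test_X[1:], dtype=float)
y = np.asarray(test_Y[1:], dtype=float)

# start with a decaying power law (b < 0) and keep c in a sensible range
popt, pcov = curve_fit(func_powerlaw, x, y, p0=[1, -1, 1],
                       bounds=([-np.inf, -np.inf, -100], [np.inf, np.inf, 100]))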

More importantly, although you print the best-fit parameters popt in the plot, you do not print the uncertainties or correlations between the variables held in pcov, and thus your interpretation of the results is incomplete. If you had looked at these values, you would have seen that starting with b=1 leads not only to bad values but also to huge uncertainties in the parameters and very, very high correlation. That is the fit telling you that it found a poor solution. Unfortunately, the pcov returned by curve_fit is not very easy to unpack.
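
For reference, the usual way to pull one-sigma uncertainties and a correlation matrix out of pcov looks like this (an illustrative snippet, not part of the original answer):

import numpy as np

# one-sigma standard errors are the square roots of the diagonal of pcov
perr = np.sqrt(np.diag(pcov))
for name, value, err in zip(['a', 'b', 'c'], popt, perr):
    print('%s = %.4f +/- %.4f' % (name, value, err))

# normalizing the covariance matrix gives the correlation matrix
corr = pcov / np.outer(perr, perr)
print(corr)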

Allow me to recommend lmfit (https://lmfit.github.io/lmfit-py/) (disclaimer: I'm a lead developer). Among other features, this module forces you to give non-default starting values, and it makes it easy to produce a more complete report. For your problem, even starting with a=1, b=1, c=1 would have given a more meaningful indication that something was wrong:

import numpy as np
from lmfit import Model

mod = Model(func_powerlaw)
params = mod.make_params(a=1, b=1, c=1)
# pass the data as arrays so x**b works inside the model function
ret = mod.fit(np.array(test_Y[1:]), params, x=np.array(test_X[1:]))
print(ret.fit_report())

which would print out:

[[Model]]
    Model(func_powerlaw)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 1318
    # data points      = 9
    # variables        = 3
    chi-square         = 0.03300395
    reduced chi-square = 0.00550066
    Akaike info crit   = -44.4751740
    Bayesian info crit = -43.8835003
[[Variables]]
    a: -1319.16780 +/- 6892109.87 (522458.92%) (init = 1)
    b:  2.0034e-04 +/- 1.04592341 (522076.12%) (init = 1)
    c:  1320.73359 +/- 6892110.20 (521839.55%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(a, c) = -1.000
    C(b, c) = -1.000
    C(a, b) =  1.000

That is a = -1.3e3 +/- 6.8e6 -- not very well defined! In addition all parameters are completely correlated.

Changing the initial value for b to -0.5:

params = mod.make_params(a=1, b=-0.5, c=1) ## Note !
ret = mod.fit(np.array(test_Y[1:]), params, x=np.array(test_X[1:]))
print(ret.fit_report())

gives

[[Model]]
    Model(func_powerlaw)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 31
    # data points      = 9
    # variables        = 3
    chi-square         = 4.9304e-32
    reduced chi-square = 8.2173e-33
    Akaike info crit   = -662.560782
    Bayesian info crit = -661.969108
[[Variables]]
    a:  2.00000000 +/- 1.5579e-15 (0.00%) (init = 1)
    b: -2.00000000 +/- 1.1989e-15 (0.00%) (init = -0.5)
    c:  1.00000000 +/- 8.2926e-17 (0.00%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(a, b) = -0.964
    C(b, c) = -0.880
    C(a, c) =  0.769

which is somewhat better.

In short, initial values always matter, and the result is not only the best-fit values but also the uncertainties and correlations.



Source: https://stackoverflow.com/questions/52356128/how-to-set-up-the-initial-value-for-curve-fit-to-find-the-best-optimizing-not-j
