Question
from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))                        # 1 <= x <= 5, 0 <= y <= 2
min_test = minimize(f, [1, 0.1], bounds=bnds)
print(min_test.x)
My function f(X) has a local minimum at x=2.557, y=1, which I should be able to find. The code shown above only returns x=1. I have tried different tolerances and all three methods that support bounds: L-BFGS-B, TNC and SLSQP.
This is the thread I have been looking at so far:
Scipy.optimize: how to restrict argument values
How can I fix this?
I am using Spyder (Python 3.6).
Answer 1:
You just encountered the problem with local optimization: it strongly depends on the start (initial) values you pass in. If you supply [2, 1], it will find the correct minimum.
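A minimal check (a sketch reusing the exact f and bnds from the question, changing only the starting point):

from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))
res = minimize(f, [2, 1], bounds=bnds)  # start close to the interior minimum
print(res.x)                            # roughly [2.557, 1.0]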
Common solutions are:
use your optimization in a loop with random starting points inside your boundaries
import numpy as np
from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 3), (0, 2))
for i in range(100):
    # draw a random starting point inside the bounds for each restart
    x_init = np.random.uniform(low=bnds[0][0], high=bnds[0][1])
    y_init = np.random.uniform(low=bnds[1][0], high=bnds[1][1])
    min_test = minimize(f, [x_init, y_init], bounds=bnds)
    print(min_test.x, min_test.fun)
use an algorithm that can break free of local minima; I can recommend scipy's basinhopping()
use a global optimization algorithm and use its result as the initial value for a local algorithm. Recommendations are NLopt's DIRECT or the MADS algorithms (e.g. NOMAD). There is also another one in scipy, shgo, that I have not tried yet (a minimal sketch follows this list).
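To make the last suggestion concrete, here is a minimal sketch with scipy.optimize.shgo (available since SciPy 1.2), applied to the f from the question; its result could then seed a local solver:

from scipy.optimize import shgo

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))
res = shgo(f, bnds)        # global search within the bounds
print(res.x, res.fun)      # roughly [2.557, 1.0] and the value there
# res.xl and res.funl list all the local minima shgo found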
Answer 2:
Try scipy.optimize.basinhopping. It simply repeats your minimize procedure multiple times from perturbed starting points, collecting multiple local minima; the lowest one found is returned as the candidate global minimum (a heuristic, not a guarantee).
from scipy import optimize
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds}  # f and bnds as defined in the question
res = optimize.basinhopping(f, [1, 0.1], niter=100, minimizer_kwargs=minimizer_kwargs)
print(res.x, res.fun)
Source: https://stackoverflow.com/questions/52438263/scipy-optimize-gets-trapped-in-local-minima-what-can-i-do