Non-convex optimizer

Backend · Open · 2 answers · 1993 views
Asked by -上瘾入骨i on 2021-01-05 13:46

I use Python 2.7 and need to find the maximum of a multivariate scalar function.

In other words I have this function:

def myFun(a, b, c, d, e, f):
    # compute and return a scalar value from the six inputs
    ...


        
2 Answers
  • 2021-01-05 14:29

    You might want to try CVXPY (http://www.cvxpy.org/en/latest), a Python modeling layer for convex optimization that can use CVXOPT (a convex solver) as a backend. If your problem turns out to be convex, that pairing works well; for a genuinely non-convex problem, you'll need one of the solvers below.
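
    To make the convex route concrete, here is a minimal CVXPY sketch (an illustration only; the objective and constraint are made up, and the problem must be convex for CVXPY to accept it):

    import cvxpy as cp

    # maximize a concave objective subject to a convex constraint
    x = cp.Variable(2)
    prob = cp.Problem(cp.Maximize(2*x[0] - cp.sum_squares(x)),
                      [x[1] >= 1])
    prob.solve()          # optionally: prob.solve(solver=cp.CVXOPT)
    print(prob.value, x.value)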

    There are a bunch of non-convex solvers in Python, many of which are listed here: https://scicomp.stackexchange.com/questions/83/is-there-a-high-quality-nonlinear-programming-solver-for-python. However, it seems you are actually asking about a continuous solver, which may be local or global, and which can handle expensive functions.

    Personally, I'd suggest mystic (https://pypi.python.org/pypi/mystic). True, I'm the author, but it's been decently funded for about a decade, and it can address highly-constrained non-convex problems that are essentially unsolvable with other packages. It can also handle fundamentally convex optimization with nonlinear constraints. mystic is built for massively parallel computing as well, so you can leverage parallelism in your optimizations at several levels. If you have enough resources, mystic can do an ensemble optimization, which you can think of as a grid search (where you pick the initial distribution of points), except that instead of evaluating fixed points on a grid, mystic launches fast local solvers from those points in parallel (there's a sketch of the pattern after the examples below).

    Here's one of the nearly 100 examples that come with mystic:

    '''
        Maximize: f = 2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2

        Subject to: x[0]**3 - x[1] == 0
                    x[1] >= 1
    '''
    

    Two solutions are presented (one using a fast local solver, and one a global solver):

    def objective(x):
        return 2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2
    
    equations = """
    x0**3 - x1 == 0.0
    """
    bounds = [(None, None),(1.0, None)]
    
    # with penalty='penalty' applied, solution is:
    xs = [1,1]; ys = -1.0
    
    # build a soft penalty function and a hard constraints transform
    # from the symbolic equation string
    from mystic.symbolic import generate_conditions, generate_penalty
    pf = generate_penalty(generate_conditions(equations), k=1e4)
    from mystic.symbolic import generate_constraint, generate_solvers, solve
    cf = generate_constraint(generate_solvers(solve(equations)))
    
    # inverted objective, used in solving for the maximum
    _objective = lambda x: -objective(x)
    
    
    if __name__ == '__main__':
    
      from mystic.solvers import diffev2, fmin_powell
      from mystic.math import almostEqual
    
      # global solver: differential evolution
      result = diffev2(_objective, x0=bounds, bounds=bounds, constraints=cf, penalty=pf, npop=40, ftol=1e-8, gtol=100, disp=False, full_output=True)
      assert almostEqual(result[0], xs, rel=2e-2)
      assert almostEqual(result[1], ys, rel=2e-2)
    
      # local solver: Powell's method
      result = fmin_powell(_objective, x0=[-1.0,1.0], bounds=bounds, constraints=cf, penalty=pf, disp=False, full_output=True)
      assert almostEqual(result[0], xs, rel=2e-2)
      assert almostEqual(result[1], ys, rel=2e-2)
    

    Here's another:

    """
        Fit linear and quadratic polynomial to noisy data:
                   y(x) ~ a + b * x   --or--   y(x) ~ a + b * x + c * x**2
        where:
                   0 >= x >= 4
                   y(x) = y0(x) + yn
                   y0(x) = 1.5 * exp(-0.2 * x) + 0.3
                   yn = 0.1 * Normal(0,1)
    """
    

    With solution(s):

    from numpy import polyfit, linspace, exp
    from numpy.random import normal
    from mystic.math import polyeval
    from mystic import reduced
    
    # Create clean data.
    x = linspace(0, 4.0, 100)
    y0 = 1.5 * exp(-0.2 * x) + 0.3
    
    # Add a bit of noise.
    noise = 0.1 * normal(size=100) 
    y = y0 + noise
    
    # 'reduced' collapses the vector of residuals into a scalar cost
    # (here, the sum of absolute errors)
    @reduced(lambda x,y: abs(x)+abs(y))
    def objective(coeffs, x, y):
        return polyeval(coeffs, x) - y
    
    bounds = [(None, None), (None, None), (None, None)]
    args = (x, y)
    
    # reference 'solution' from numpy's least-squares polyfit (the objective
    # above is a sum of absolute errors, so it only approximately agrees;
    # hence the loose tolerances below)
    xs = polyfit(x, y, 2)
    ys = objective(xs, x, y)
    
    
    if __name__ == '__main__':
    
      from mystic.solvers import diffev2, fmin_powell
      from mystic.math import almostEqual
    
      result = diffev2(objective, args=args, x0=bounds, bounds=bounds, npop=40, ftol=1e-8, gtol=100, disp=False, full_output=True)
      assert almostEqual(result[0], xs, tol=1e-1)
      assert almostEqual(result[1], ys, rel=1e-1)
    
      result = fmin_powell(objective, args=args, x0=[0.0,0.0,0.0], bounds=bounds, disp=False, full_output=True)
      assert almostEqual(result[0], xs, tol=1e-1)
      assert almostEqual(result[1], ys, rel=1e-1)
    

    For parallel computing, mystic can leverage pathos and pyina (see: https://github.com/uqfoundation for both): you just pass in the hierarchical configuration of map functions you want to use for parallel execution. It's pretty easy. It may not be the fastest on a stock problem, but it is (in my opinion) the best choice for problems that you can't solve otherwise (due to size or complexity).
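
    As a rough illustration of the ensemble pattern mentioned above, here's a hand-rolled multistart sketch: the fmin_powell one-liner from the first example, launched from several random starting points via a pathos process pool. This is not mystic's built-in ensemble machinery, just the idea behind it; objective, bounds, cf, and pf are the names from the first example.

    from numpy.random import uniform
    from pathos.pools import ProcessPool
    from mystic.solvers import fmin_powell

    def solve_from(x0):
        # one local Powell solve of the inverted objective from a single start
        return fmin_powell(lambda x: -objective(x), x0=list(x0), bounds=bounds,
                           constraints=cf, penalty=pf, disp=False, full_output=True)

    # sample a handful of starting points in [-2, 2] x [1, 3]
    starts = [uniform([-2.0, 1.0], [2.0, 3.0]) for _ in range(8)]

    # launch the local solves in parallel, then keep the best one
    results = ProcessPool().map(solve_from, starts)
    best = min(results, key=lambda r: r[1])
    best_x, best_y = best[0], -best[1]   # argmax and maximum value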

  • 2021-01-05 14:33

    I had to perform a non-convex optimization recently, and I used the L-BFGS-B method from scipy.optimize. It worked well enough for my purposes. The scipy.optimize documentation has guidance on choosing an optimizer for your problem. L-BFGS-B can also approximate the gradient for you by finite differences, which helps if your function is too complex to differentiate by hand.
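
    For reference, a minimal sketch of that approach (the toy myFun below is a stand-in for the real, expensive function; scipy minimizes, so you negate the function to find a maximum, and with jac=None the gradient is approximated by finite differences):

    import numpy as np
    from scipy.optimize import minimize

    def myFun(a, b, c, d, e, f):
        # stand-in for the asker's complex scalar function
        return -((a - 1)**2 + (b - 2)**2 + (c + 1)**2 + d**2 + e**2 + f**2)

    # the optimizer wants a single parameter vector, so unpack it into the
    # six arguments, and negate so that minimizing finds the maximum
    neg = lambda v: -myFun(*v)

    x0 = np.zeros(6)                            # starting guess
    res = minimize(neg, x0, method='L-BFGS-B')  # gradient approximated internally
    best_x, best_val = res.x, -res.fun          # argmax and maximum value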
