OpenMDAO: Solver converging to non-optimal point


Question


I'm trying to understand the limitations of the OpenMDAO optimization algorithms. In particular, I set up the following trivial example:

from openmdao.api import Problem, ScipyOptimizeDriver, ExecComp, IndepVarComp, ExplicitComponent

class AddComp(ExplicitComponent):

    def setup(self):
        self.add_input("x")
        self.add_input("y")
        self.add_output("obj")

    def compute(self, inputs, outputs):
        outputs['obj'] = inputs["x"] + inputs["y"]

# build the model
prob = Problem()
indeps = prob.model.add_subsystem('indeps', IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)

prob.model.add_subsystem("simple", AddComp())

prob.model.connect('indeps.x', 'simple.x')
prob.model.connect('indeps.y', 'simple.y')

# setup the optimization
prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

prob.model.add_design_var('indeps.x', lower=-50, upper=50)
prob.model.add_design_var('indeps.y', lower=-50, upper=50)
prob.model.add_objective('simple.obj')

prob.setup()
prob.run_driver()
# minimum value
print(prob['simple.obj'])
# location of the minimum
print(prob['indeps.x'])
print(prob['indeps.y'])

The printout from this is:

Optimization terminated successfully.    (Exit mode 0)
            Current function value: -1.0
            Iterations: 1
            Function evaluations: 1
            Gradient evaluations: 1
Optimization Complete
-----------------------------------
[-1.]
[ 3.]
[-4.]

However, the optimal solution is of course x = y = -50. Why is this solution not found?

For some reason I had the idea that the driver should find the correct solution for convex problems, but I realize that is a crude summary of solver limitations. Could someone point me to an explanation of which problems can be solved by which methods?


Answer 1:


What's going on here is that OpenMDAO isn't providing a useful objective gradient to the optimizer: partials must be declared explicitly, and any partial that is not declared is assumed to be zero, so SLSQP sees a zero gradient and terminates after a single iteration.

Adding the following to the setup method of the component will declare the constant values of the partials (we don't need a compute_partials method in this case because the objective is a linear function of the inputs, and the partials are therefore constant).

self.declare_partials(of='obj', wrt='x', val=1.0)
self.declare_partials(of='obj', wrt='y', val=1.0)
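
For reference, here is the component from the question with those declarations added to its setup method (assembled directly from the code shown above):

class AddComp(ExplicitComponent):

    def setup(self):
        self.add_input("x")
        self.add_input("y")
        self.add_output("obj")
        # obj = x + y is linear in both inputs, so the partials are the
        # constant 1.0 and never need to be recomputed.
        self.declare_partials(of='obj', wrt='x', val=1.0)
        self.declare_partials(of='obj', wrt='y', val=1.0)

    def compute(self, inputs, outputs):
        outputs['obj'] = inputs["x"] + inputs["y"]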

Alternatively, you can just tell OpenMDAO to compute all partials for the component via finite difference or complex-step:

self.declare_partials(of='*', wrt='*', method='cs')

Here method='cs' uses complex-step differentiation and method='fd' uses finite differences.
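
Whichever form of declaration you use, a practical way to catch missing or incorrect partials (a suggestion beyond the original answer) is OpenMDAO's check_partials, which compares the component's derivatives against finite-difference estimates:

# Verification sketch (not part of the original answer): run the model once,
# then compare declared/approximated partials against a finite-difference check.
prob.setup()
prob.run_model()
prob.check_partials(compact_print=True)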

With either of these changes, the expected optimum is found:

Optimization terminated successfully.    (Exit mode 0)
            Current function value: -99.99999999983521
            Iterations: 7
            Function evaluations: 7
            Gradient evaluations: 7
Optimization Complete
-----------------------------------
[-100.]
[-50.]
[-50.]


Source: https://stackoverflow.com/questions/51027185/openmdao-solver-converging-to-non-optimal-point
