Question
I am interested in exploring Multi-Fidelity (MF) optimization methods. I am trying to figure out how well OpenMDAO will support this work. I don't want to get too deep into OpenMDAO code unless I am sure it can do the job.
One of the simpler MF approaches is AMMF (the Approximation and Model Management Framework). It optimizes the low-fidelity model within a trust region, applying first-order corrections so that the corrected model matches the high-fidelity model's value and gradient at the trust-region center. The basic algorithm is as follows:
X = X_0
while not converged:
    y_hf      = high_fidelity(X)
    y_lf      = low_fidelity(X)
    grad_y_hf = grad_high_fidelity(X)
    grad_y_lf = grad_low_fidelity(X)
    set_low_to_high_transfer_function(y_hf, y_lf, grad_y_hf, grad_y_lf)
    optimize on the corrected transfer function within |X_new - X| < rho
    adjust rho based on the Lagrange multipliers at the subproblem solution
    X = X_opt_solution
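For concreteness, the transfer-function step above can use a first-order additive correction (one common choice in AMMF; the question does not fix the form). A minimal sketch, with hypothetical function names and no OpenMDAO dependency:

```python
import numpy as np

def make_corrected_model(low_fidelity, x0, y_hf, y_lf, grad_y_hf, grad_y_lf):
    """First-order additive correction: the returned model matches the
    high-fidelity value and gradient at x0, and falls back to a shifted
    low-fidelity model away from x0."""
    x0 = np.asarray(x0, dtype=float)
    dy = y_hf - y_lf                                       # value mismatch at x0
    dg = np.asarray(grad_y_hf, float) - np.asarray(grad_y_lf, float)  # slope mismatch

    def corrected(x):
        return low_fidelity(x) + dy + dg @ (np.asarray(x, dtype=float) - x0)

    return corrected
```

By construction, `corrected(x0) == y_hf` and the gradient of `corrected` at `x0` equals `grad_y_hf`, which is exactly the consistency the trust-region convergence theory requires.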
I am thinking that in OpenMDAO, high_fidelity() and low_fidelity() can be Groups with appropriate drivers, and that I can call their linearize methods to get the gradients.
Then the optimize stage is a basic OpenMDAO optimization problem, like those in the tutorials, operating on a group that contains the low-fidelity analysis plus an additional component that corrects its output.
Then the outer loop could be implemented in pure Python; I cannot really see where OpenMDAO would be needed there. Maybe someone can comment on that?
Questions:
Is this a sensible way to implement AMMF with OpenMDAO?
One problem I can see with having the loop in pure Python is with the recorders. I would need an additional iteration variable for the outer loop. How could I configure the recorder to be aware of this additional loop?
To assess convergence and adjust the trust region, you need the Lagrange multipliers. For gradient-based methods, these are usually computed at the optimization solution. Do the OpenMDAO optimizer wrappers have a way of extracting those values, or would I have to write my own spaghetti code to retrieve them from an optimization output file?
Thanks for the help!
If I get this working I will be happy to pass it along as a tutorial for the OpenMDAO 1.x documentation. I personally think more complicated examples will be helpful for the community.
Answer 1:
Setting up something like this in OpenMDAO as a single monolithic model is not really possible. By far the simplest approach to take would be to use three separate Problem instances.
- High Fidelity Problem
- Low Fidelity Problem
- Low-High-Transfer-Function Problem (optional)
You would use the first two problem instances to compute analysis results and gradients. The third problem is likely optional, since I expect it's going to be much simpler and could be implemented by hand as a simple function. I would write a standard Python script that loops the way your pseudo-code suggests.
So you would use OpenMDAO to make building the analysis models and getting derivatives from them much simpler. But then build a custom python script around that to implement your AMMF.
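A schematic of what that custom script might look like. Everything here is a hypothetical stand-in: the toy quadratics replace the calls that would run the two OpenMDAO Problems, the subproblem is solved with a golden-section search on the trust-region interval, and a simple actual-vs-predicted ratio rule is used in place of the Lagrange-multiplier trust-region update from the question:

```python
import numpy as np

# Hypothetical toy models standing in for the two OpenMDAO Problems;
# in a real script each call would run a Problem and query its outputs
# and total derivatives.
def high_fidelity(x):      return (x - 2.0) ** 2
def grad_high_fidelity(x): return 2.0 * (x - 2.0)
def low_fidelity(x):       return 0.5 * (x - 1.0) ** 2
def grad_low_fidelity(x):  return x - 1.0

def minimize_on_interval(f, lo, hi, tol=1e-9):
    """Golden-section search: minimize f on [lo, hi]."""
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - gr * (b - a), a + gr * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def ammf(x, rho=1.0, tol=1e-6, max_iter=50):
    for _ in range(max_iter):
        y_hf, g_hf = high_fidelity(x), grad_high_fidelity(x)
        if abs(g_hf) < tol:                 # unconstrained convergence test
            return x
        y_lf, g_lf = low_fidelity(x), grad_low_fidelity(x)
        x0, dy, dg = x, y_hf - y_lf, g_hf - g_lf
        # first-order additive correction: matches hf value and slope at x0
        corrected = lambda z: low_fidelity(z) + dy + dg * (z - x0)
        x_new = minimize_on_interval(corrected, x0 - rho, x0 + rho)
        # simple actual-vs-predicted ratio update, in place of the
        # Lagrange-multiplier rule sketched in the question
        predicted = y_hf - corrected(x_new)
        ratio = (y_hf - high_fidelity(x_new)) / predicted if predicted else 0.0
        rho = 2.0 * rho if ratio > 0.75 else (0.5 * rho if ratio < 0.25 else rho)
        if ratio > 0.0:
            x = x_new
    return x
```

With these toy models the loop walks from the low-fidelity optimum toward the high-fidelity optimum at x = 2; in a real setup the four model functions would wrap `run` and derivative calls on the two Problems.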
As for Lagrange multipliers, none of our current optimizers expose those as results, so you'll either have to roll your own optimizer or modify the source code of one of ours. It's not an OpenMDAO wrapper issue; it's that the underlying Python wrappers don't expose that information yet.
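One workaround, if the optimizer won't report them, is to estimate the multipliers yourself at the returned optimum from the KKT stationarity condition grad_f + J_activeᵀ·lam = 0 for the active constraints. This is a generic sketch (not an OpenMDAO feature); the gradient and active-constraint Jacobian would come from the same derivative machinery used elsewhere:

```python
import numpy as np

def estimate_multipliers(grad_f, active_con_jac):
    """Least-squares estimate of Lagrange multipliers at a solution x*,
    from KKT stationarity: grad_f + J_active^T @ lam = 0.

    grad_f         : (n,) objective gradient at x*
    active_con_jac : (m, n) Jacobian of the active constraints at x*
    returns        : (m,) multiplier estimates
    """
    J = np.atleast_2d(np.asarray(active_con_jac, dtype=float))
    lam, *_ = np.linalg.lstsq(J.T, -np.asarray(grad_f, dtype=float), rcond=None)
    return lam
```

For example, minimizing f = x0 + x1 subject to x0² + x1² ≤ 2 gives the solution (-1, -1), where grad_f = (1, 1) and the active-constraint Jacobian is (-2, -2); the recovered multiplier is 0.5. The quality of the estimate depends on how tightly the optimizer satisfied the KKT conditions, so it is best treated as approximate when driving the trust-region update.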
It can be tempting to try to build a full and very complex process entirely in OpenMDAO. Sometimes that is the right call. But in this case the assembly of your high-level algorithm is very simple, and you don't need to compute derivatives across it either. So there isn't a strong need to implement the top process directly in OpenMDAO. Rather, use OpenMDAO as a tool to make this part:
Calculate y_hf = high_fidelity(X)
Calculate y_lf = low_fidelity(X)
Calculate grad_y_hf = grad_high_fidelity(X)
Calculate grad_y_lf = grad_low_fidelity(X)
easier, and do a more traditional type of coding to implement the top level algorithm.
Source: https://stackoverflow.com/questions/35369053/implementing-ammf-within-openmdao