Question
I have a non-linear optimization problem with constraints. It can be solved in Microsoft Excel with the Solver add-in, but I am having trouble replicating that in C#.
My problem is shown in the following spreadsheet. I am solving the classic Ax = b problem, with the caveat that all components of x must be non-negative. So instead of using standard linear algebra, I use Solver with the non-negative constraint, minimizing the sum of the squared differences between Ax and b, and get a reasonable solution. I have tried to replicate this in C# using either Microsoft Solver Foundation or Solver SDK, but I can't get anywhere with either one: with MSF I can't figure out how to define the goal, and with Solver SDK I always get back the status "optimal" and a solution of all 0s, which is definitely not even a local minimum.
Here is my code for Solver SDK:
static double[][] A = new double[][]
{
    new double[] { 1, 0, 0, 0, 0 },
    new double[] { 0.760652602, 1, 0, 0, 0 },
    new double[] { 0.373419404, 0.760537565, 1, 0, 0 },
    new double[] { 0.136996731, 0.373331934, 0.760422587, 1, 0 },
    new double[] { 0.040625222, 0.136953801, 0.373244464, 0.76030755, 1 }
};

static double[][] b = new double[][]
{
    new double[] { 2017159 },
    new double[] { 1609660 },
    new double[] { 837732.8125 },
    new double[] { 330977.3125 },
    new double[] { 87528.38281 }
};
static void Main(string[] args)
{
    using(Problem problem = new Problem(Solver_Type.Minimize, 5, 0))
    {
        problem.VarDecision.LowerBound.Array = new double[] { 0.0, 0.0, 0.0, 0.0, 0.0 };
        problem.VarDecision.UpperBound.Array = new double[] { Constants.PINF, Constants.PINF, Constants.PINF, Constants.PINF, Constants.PINF };
        problem.Evaluators[Eval_Type.Function].OnEvaluate += new EvaluateEventHandler(SumOfSquaredErrors);
        problem.ProblemType = Problem_Type.OptNLP;
        problem.Solver.Optimize();
        Optimize_Status status = problem.Solver.OptimizeStatus;
        Console.WriteLine(status.ToString());
        foreach(double x in problem.VarDecision.FinalValue.Array)
        {
            Console.WriteLine(x);
        }
    }
}

static Engine_Action SumOfSquaredErrors(Evaluator evaluator)
{
    // Copy the current decision variables into an n x 1 column vector.
    double[][] x = new double[evaluator.Problem.Variables[0].Value.Array.Length][];
    for(int i = 0; i < x.Length; i++)
    {
        x[i] = new double[1] { evaluator.Problem.Variables[0].Value.Array[i] };
    }
    double[][] b_calculated = MatrixMultiply(A, x);
    double sum_sq = 0.0;
    for(int i = 0; i < b_calculated.Length; i++)
    {
        sum_sq += Math.Pow(b_calculated[i][0] - b[i][0], 2);
    }
    evaluator.Problem.FcnObjective.Value[0] = sum_sq;
    return Engine_Action.Continue;
}

static double[][] MatrixMultiply(double[][] left, double[][] right)
{
    if(left[0].Length != right.Length)
    {
        throw new ArgumentException("Inner dimensions do not match.");
    }
    // The result has left.Length rows and right[0].Length columns.
    double[][] sum = new double[left.Length][];
    for(int i = 0; i < sum.Length; i++)
    {
        sum[i] = new double[right[0].Length];
    }
    for(int i = 0; i < sum.Length; i++)
    {
        for(int j = 0; j < sum[0].Length; j++)
        {
            for(int k = 0; k < right.Length; k++)
            {
                sum[i][j] += left[i][k] * right[k][j];
            }
        }
    }
    return sum;
}
I don't have any code for Microsoft Solver Foundation because I don't think the goal function can be written in a single line, and MSF doesn't allow for delegates the way Solver SDK does.
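For reference, this is a standard non-negative least-squares (NNLS) problem, so the expected answer can be cross-checked independently of any solver library. The sketch below is plain Python (not the C# code in question, and not any Solver SDK or MSF API): it minimizes the sum of squared differences with projected gradient descent, assuming a fixed step size of 1/(2‖A‖²_F), which is a safe bound on the reciprocal of the gradient's Lipschitz constant.

```python
# Non-negative least squares via projected gradient descent.
# Pure-Python cross-check for the 5x5 system in the question;
# not Solver SDK or Solver Foundation code.

A = [
    [1, 0, 0, 0, 0],
    [0.760652602, 1, 0, 0, 0],
    [0.373419404, 0.760537565, 1, 0, 0],
    [0.136996731, 0.373331934, 0.760422587, 1, 0],
    [0.040625222, 0.136953801, 0.373244464, 0.76030755, 1],
]
b = [2017159, 1609660, 837732.8125, 330977.3125, 87528.38281]
n = len(b)

def residual(x):
    """r = Ax - b."""
    return [sum(A[i][k] * x[k] for k in range(n)) - b[i] for i in range(n)]

def sum_sq(x):
    """Objective: sum of squared differences, as in the spreadsheet."""
    return sum(r * r for r in residual(x))

# Step size 1 / (2 * ||A||_F^2) <= 1 / Lipschitz(grad f).
L = 2.0 * sum(A[i][k] ** 2 for i in range(n) for k in range(n))
step = 1.0 / L

x = [0.0] * n
for _ in range(20000):
    r = residual(x)
    grad = [2.0 * sum(A[i][k] * r[i] for i in range(n)) for k in range(n)]
    # Gradient step, then project onto the non-negative orthant.
    x = [max(0.0, x[k] - step * grad[k]) for k in range(n)]

f = sum_sq(x)
print("f =", f)  # converges to roughly 2.5418e8 for this data
print("x =", x)
```

Because the problem is strictly convex (A is invertible), any correct NNLS routine, including Excel's GRG solver with the non-negative constraint, should reach the same minimum.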
Answer 1:
One alternative would be to formulate this as an LP problem:
minimize sum of the elements in x
subject to Ax >= b
This should be fairly simple to formulate using Solver Foundation, based on one of the LP samples.
UPDATE JULY 5
The approach above also looks overly complex, but maybe that is due to the Frontline Solver API. Using Microsoft Solver Foundation and minimizing the sum of squared differences, the following program:
private static void Main(string[] args)
{
    var solver = SolverContext.GetContext();
    var model = solver.CreateModel();
    var A = new[,]
    {
        { 1, 0, 0, 0, 0 },
        { 0.760652602, 1, 0, 0, 0 },
        { 0.373419404, 0.760537565, 1, 0, 0 },
        { 0.136996731, 0.373331934, 0.760422587, 1, 0 },
        { 0.040625222, 0.136953801, 0.373244464, 0.76030755, 1 }
    };
    var b = new[] { 2017159, 1609660, 837732.8125, 330977.3125, 87528.38281 };
    var n = A.GetLength(1);
    var x = new Decision[n];
    for (var i = 0; i < n; ++i)
        model.AddDecision(x[i] = new Decision(Domain.RealNonnegative, null));

    // START NLP SECTION
    var m = A.GetLength(0);
    Term goal = 0.0;
    for (var j = 0; j < m; ++j)
    {
        Term Ax = 0.0;
        for (var i = 0; i < n; ++i) Ax += A[j, i] * x[i];
        goal += Model.Power(Ax - b[j], 2.0);
    }
    model.AddGoal(null, GoalKind.Minimize, goal);
    // END NLP SECTION

    var solution = solver.Solve();
    Console.WriteLine("f = {0}", solution.Goals.First().ToDouble());
    for (var i = 0; i < n; ++i) Console.WriteLine("x[{0}] = {1}", i, x[i].GetDouble());
}
generates the following solution, which should be in line with the solution from the linked Excel sheet:
f = 254184688.179922
x[0] = 2017027.31820845
x[1] = 76226.6063397686
x[2] = 26007.3375581303
x[3] = 1.00650383558278E-07
x[4] = 4.18546775823669E-09
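As a quick sanity check (a plain Python sketch, not part of the Solver Foundation program), substituting the printed decision values back into the sum-of-squares objective reproduces the reported f:

```python
# Recompute f = sum_j (Ax - b)_j^2 from the printed decision values.
A = [
    [1, 0, 0, 0, 0],
    [0.760652602, 1, 0, 0, 0],
    [0.373419404, 0.760537565, 1, 0, 0],
    [0.136996731, 0.373331934, 0.760422587, 1, 0],
    [0.040625222, 0.136953801, 0.373244464, 0.76030755, 1],
]
b = [2017159, 1609660, 837732.8125, 330977.3125, 87528.38281]
# Decision values as printed by the Solver Foundation run above.
x = [2017027.31820845, 76226.6063397686, 26007.3375581303,
     1.00650383558278e-07, 4.18546775823669e-09]

f = sum(
    (sum(A[j][i] * x[i] for i in range(5)) - b[j]) ** 2
    for j in range(5)
)
print("f =", f)  # matches the reported 254184688.179922
```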
If I am not mistaken, unlike GRG, Solver Foundation cannot handle general non-linear constraints out of the box; I believe you will need additional plug-ins for those. For your problem, this is of course not an issue.
For completeness, to formulate the LP problem instead, exchange the code between START NLP SECTION and END NLP SECTION with the following code:
var m = A.GetLength(0);
var constraints = new Term[m];
for (var j = 0; j < m; ++j)
{
    Term Ax = 0.0;
    for (var i = 0; i < n; ++i) Ax += A[j, i] * x[i];
    model.AddConstraint(null, constraints[j] = Model.GreaterEqual(Ax, b[j]));
}
model.AddGoal(null, GoalKind.Minimize, Model.Sum(x));
which will yield the following output (note that the objective functions are different in the two cases, hence the large difference in f):
f = 2125502.27815564
x[0] = 2017159
x[1] = 75302.7580022821
x[2] = 27215.9247379241
x[3] = 5824.5954154355
x[4] = 0
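The LP solution can also be verified directly with a plain Python sanity check (again, not MSF code): the reported x is non-negative, it satisfies Ax >= b up to the rounding of the printed values (rows 0 through 3 are tight; row 4 has slack), and its component sum matches the reported objective.

```python
# Verify feasibility and the objective value of the reported LP solution.
A = [
    [1, 0, 0, 0, 0],
    [0.760652602, 1, 0, 0, 0],
    [0.373419404, 0.760537565, 1, 0, 0],
    [0.136996731, 0.373331934, 0.760422587, 1, 0],
    [0.040625222, 0.136953801, 0.373244464, 0.76030755, 1],
]
b = [2017159, 1609660, 837732.8125, 330977.3125, 87528.38281]
# Decision values as printed by the LP run above.
x = [2017159, 75302.7580022821, 27215.9247379241, 5824.5954154355, 0]

Ax = [sum(A[j][i] * x[i] for i in range(5)) for j in range(5)]
slack = [Ax[j] - b[j] for j in range(5)]  # all >= 0 up to rounding
print("sum(x) =", sum(x))  # matches the reported 2125502.27815564
print("slack =", slack)
```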
Source: https://stackoverflow.com/questions/11303094/how-can-i-emulate-microsoft-excels-solver-functionality-grg-nonlinear-in-c