Question
I need to maximize an objective function using the R package 'nloptr'. I tried the basic rule "Maximize f(x) <=> Minimize -f(x)", but it does not work, and I am not sure whether I am applying it incorrectly or whether there is some other way.
Here is a complete example. The current solution is just the initial vector, which has the minimum objective value, but I am supposed to get the solution that maximizes the objective function. Can someone please help me get it? Thanks!
library(nloptr)
X = log(rbind(c(1.350, 8.100),
              c(465.000, 423.000),
              c(36.330, 119.500),
              c(27.660, 115.000),
              c(1.040, 5.500),
              c(11700.000, 50.000),
              c(2547.000, 4603.000),
              c(187.100, 419.000),
              c(521.000, 655.000),
              c(10.000, 115.000),
              c(3.300, 25.600),
              c(529.000, 680.000),
              c(207.000, 406.000),
              c(62.000, 1320.000),
              c(6654.000, 5712.000),
              c(9400.000, 70.000),
              c(6.800, 179.000),
              c(35.000, 56.000),
              c(0.120, 1.000),
              c(0.023, 0.400),
              c(2.500, 12.100),
              c(55.500, 175.000),
              c(100.000, 157.000),
              c(52.160, 440.000),
              c(87000.000, 154.500),
              c(0.280, 1.900),
              c(0.122, 3.000),
              c(192.000, 180.000)))
n = nrow(X)
q = 0.5
x0 = cbind(8,4)
x01 = x0[1]
x02 = x0[2]
x1 = X[,1]
x2 = X[,2]
pInit = c(0.1614860, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000,
          0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000,
          0.0000000, 0.0000000, 0.0000000, 0.7124934, 0.0000000, 0.0000000,
          0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000,
          0.1260206, 0.0000000, 0.0000000, 0.0000000)
eval_f0 = function(p) {
  obj0 = mean((n * p) ^ q)
  grad0 = rbind(q * ((n * p) ^ (q - 1)) / ((mean((n * p) ^ q)) ^ 2))
  return(list("objective" = obj0, "gradient" = grad0))
}
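As a side note, grad0 above does not match obj0: the analytic gradient of f(p) = mean((n p)^q) is simply q (n p)^(q-1), without the division by mean((n p)^q)^2. Either way, with q = 0.5 the gradient blows up wherever p_i = 0, which matters for the starting point pInit. A quick numerical check (a Python/numpy sketch, not part of the original post; the names f and grad_f are illustrative):

```python
import numpy as np

n, q = 28, 0.5

def f(p):
    """Objective from the question: mean((n * p) ** q)."""
    return np.mean((n * p) ** q)

def grad_f(p):
    """Analytic gradient: d/dp_i mean((n p)^q) = q * (n p_i)^(q-1)."""
    return q * (n * p) ** (q - 1)

# Finite-difference check at an interior point (all p_i > 0)
p = np.full(n, 1.0 / n)
eps = 1e-6
fd = np.array([(f(p + eps * np.eye(n)[i]) - f(p - eps * np.eye(n)[i])) / (2 * eps)
               for i in range(n)])
assert np.allclose(fd, grad_f(p), atol=1e-5)

# On the boundary (some p_i == 0, as in pInit) the gradient is infinite,
# since (n * 0)^(q - 1) = 0^(-0.5) = inf for q = 0.5
p0 = np.zeros(n); p0[0] = 1.0
print(np.isfinite(grad_f(p0)).all())  # False
```

This is exactly the issue the answer below raises: any gradient-based solver struggles when started on the boundary of the feasible region.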
eval_g_eq0 = function(p) {
  sum0 = sum(x1 * p) - x01
  sum1 = sum(x2 * p) - x02
  sum2 = sum(p) - 1
  constr0 = rbind(sum0, sum1, sum2)
  grad0 = rbind(x1, x2, rep(1, n))
  return(list("constraints" = constr0, "jacobian" = grad0))
}
local_opts <- list("algorithm" = "NLOPT_LD_AUGLAG",
                   "xtol_rel" = 1.0e-7)
opts <- list("algorithm" = "NLOPT_LD_AUGLAG",
             "xtol_rel" = 1.0e-7,
             "maxeval" = 10000,
             "local_opts" = local_opts)
res1 = nloptr(x0 = c(pInit),
              eval_f = eval_f0,
              lb = rep(0, n),
              ub = rep(Inf, n),
              eval_g_eq = eval_g_eq0,
              opts = opts)
weight = res1$solution
fval0 = res1$objective
print(list(fval0, weight))
Answer 1:
Please note that the gradient (and Jacobian) at your starting point pInit is not finite, which makes this task difficult for any gradient-based solver. I will use a different starting point, a bit away from the boundary.
In any case, it seems easier to find the maximum with the augmented-Lagrangian solver in the alabama package. With your definitions above, up to x1 = X[,1]; x2 = X[,2], a possible solution looks like this:
f1 <- function(p) mean((n * p ) ^ q)
heq1 <- function(p)
  c(sum(x1 * p) - x01, sum(x2 * p) - x02, sum(p) - 1)
For simplicity, we let the solver calculate gradients and Jacobians numerically. To find the maximum, apply the solver to the negative of the objective function:
sol <- alabama::auglag(rep(0.1, 28), fn=function(p) -f1(p), heq=heq1)
cat("The maximum value is:", -sol$value, '\n')
## The maximum value is: 0.7085338
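The "Maximize f <=> Minimize -f" rule itself is sound; the trouble in the question is the starting point, not the rule. As an illustration (a Python/SciPy sketch, not part of the original answer), the same minimize-the-negative pattern on a toy simplex-constrained version of this objective recovers the known maximum, f = 1 at the uniform p, since mean((n p)^q) is concave for 0 < q < 1:

```python
import numpy as np
from scipy.optimize import minimize

n, q = 4, 0.5

def f(p):
    # Concave objective; under sum(p) = 1 it is maximized at uniform p,
    # where f = ((n * 1/n) ** q) = 1
    return np.mean((n * p) ** q)

# Maximize f by minimizing -f, subject to the simplex constraint sum(p) = 1
res = minimize(lambda p: -f(p),
               x0=np.full(n, 0.1),               # interior start, away from p_i = 0
               method="SLSQP",
               bounds=[(0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])

print(-res.fun)   # ~ 1.0, attained at p ~ (1/4, 1/4, 1/4, 1/4)
```

The same negation trick is what the alabama::auglag call above does with fn=function(p) -f1(p); the value reported by the solver is then negated back to get the maximum.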
The equality constraints are satisfied; see
heq1(sol$par)
## [1] -1.685957e-08 3.721533e-08 -2.935964e-08
and the solution found is
sol$par
## [1] 0.012186842 0.006640286 0.006706268 0.006418224 0.014501609 0.405618998
## [7] 0.003531462 0.005458189 0.005582029 0.005158098 0.008072278 0.005510394
## [13] 0.005653117 0.002935642 0.003861549 0.123009564 0.004021866 0.009866779
## [19] 0.024385229 0.027101557 0.011436010 0.006184886 0.007473135 0.004162962
## [25] 0.245429952 0.019978294 0.010919515 0.008195238
I would be interested to know whether this is a reasonable solution for you! I checked it from several starting points and it always came out the same.
Source: https://stackoverflow.com/questions/53099087/maximizing-nonlinear-constraints-problem-using-r-package-nloptr