Question
I have a pretty big model (around 5 million variables and constraints). The building time is a few minutes and the solving time is a few minutes too (with Gurobi), but it takes very long to write the model (about 2 hours).
That is the time it takes when I use model.write('model.lp', io_options={'symbolic_solver_labels': True}) to save the model to a file, and it is about the same when I use SolverFactory and solve the model directly from Pyomo.
Here is a small sample. I understand that this model is trivial for Gurobi, so I'm not comparing the solving time with the building time here, but I don't understand why writing is so slow. I thought the problem might come from the disk write speed, but the disk is never overloaded and in fact almost not used.
import pyomo.environ as pyo
import time
size = 500000
model = pyo.ConcreteModel()
model.set = pyo.RangeSet(0, size)
model.x = pyo.Var(model.set, within=pyo.Reals)
model.constrList = pyo.ConstraintList()
for i in range(size):
    model.constrList.add(expr=model.x[i] >= 1)
model.obj = pyo.Objective(expr=sum(model.x[i] for i in range(size)), sense=pyo.minimize)
opt = pyo.SolverFactory('gurobi')
_time = time.time()
res = opt.solve(model)
print(">>> total time () in {:.2f}s".format(time.time() - _time))
print(res)
The result is that the whole solve call takes 27 s, while the solving time reported by Gurobi is only 4 s.
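One way to see where the remaining time goes (a sketch, not part of the original question; tee and keepfiles are standard Pyomo solve options) is to time the LP-file write and the full solve call separately on the model built above:

import time
import pyomo.environ as pyo

opt = pyo.SolverFactory('gurobi')

# Time the LP-file write on its own; this is the step that dominates.
_t = time.time()
model.write('model.lp', io_options={'symbolic_solver_labels': True})
print("write: {:.2f}s".format(time.time() - _t))

# Time the full solve call; tee=True echoes Gurobi's log (including its own runtime),
# keepfiles=True keeps the temporary files so they can be inspected afterwards.
_t = time.time()
res = opt.solve(model, tee=True, keepfiles=True)
print("solve call: {:.2f}s".format(time.time() - _t))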
Answer 1:
From my trials with speeding up Pyomo model generation, you first need to benchmark which part of the process is slowing it down (which is really general advice for performance tuning), so I put your code into a function:
import pyomo.environ as pyo

def main():
    size = 500000
    model = pyo.ConcreteModel()
    model.set = pyo.RangeSet(0, size)
    model.x = pyo.Var(model.set, within=pyo.Reals)
    model.constrList = pyo.ConstraintList()
    for i in range(size):
        model.constrList.add(expr=model.x[i] >= 1)
    model.obj = pyo.Objective(expr=sum(model.x[i] for i in range(size)), sense=pyo.minimize)
    return model
so that I can run it through the line profiler in IPython:
In [1]: %load_ext line_profiler
In [2]: import test_pyo
In [3]: %lprun -f test_pyo.main test_pyo.main()
which shows that most of the time is spent in model.constrList.add(expr = model.x[i] >= 1).
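One alternative is to replace the ConstraintList loop with an indexed, rule-based constraint; a sketch of that variant (the rule name is mine, the rest mirrors the code above):

import pyomo.environ as pyo

def main_rule_based():
    size = 500000
    model = pyo.ConcreteModel()
    model.set = pyo.RangeSet(0, size)
    model.x = pyo.Var(model.set, within=pyo.Reals)

    # One indexed constraint built from a rule instead of ConstraintList.add in a loop.
    def lower_bound_rule(m, i):
        return m.x[i] >= 1

    model.constr = pyo.Constraint(range(size), rule=lower_bound_rule)
    model.obj = pyo.Objective(expr=sum(model.x[i] for i in range(size)), sense=pyo.minimize)
    return model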
I did not see much improvement by moving this into a rule-based constraint like the one sketched above, so I decided to try constructing the expressions by hand, like in the PyPSA code:
import pyomo.environ as pyo
from pyomo.core.expr.numeric_expr import LinearExpression
from pyomo.core.base.constraint import _GeneralConstraintData
from pyomo.core.base.numvalue import NumericConstant

def main():
    size = 500000
    model = pyo.ConcreteModel()
    model.set = pyo.RangeSet(0, size)
    model.x = pyo.Var(model.set, within=pyo.Reals)

    # Create an empty indexed Constraint and fill in its internal data directly,
    # bypassing the usual expression-building machinery.
    setattr(model, "constraint", pyo.Constraint(model.set, noruleinit=True))
    v = getattr(model, "constraint")
    for i in v._index:
        v._data[i] = _GeneralConstraintData(None, v)
        expr = LinearExpression()
        expr.linear_vars = [model.x[i]]
        expr.linear_coefs = [1]
        expr.constant = 0
        v._data[i]._body = expr
        v._data[i]._equality = False
        v._data[i]._lower = NumericConstant(1)
        v._data[i]._upper = None

    model.obj = pyo.Objective(expr=pyo.quicksum(model.x[i] for i in range(size)), sense=pyo.minimize)
    return model
which seems to yield about a 50% performance improvement. The line profiler shows that a lot of the time is now spent creating the set, the empty LinearExpression objects, and the objective. Fiddling with the objective might improve things a little more.
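A sketch of what that fiddling could look like, building the objective from the same LinearExpression machinery instead of summing half a million terms in Python (this would go inside main() above, reusing model and size; it is an idea, not something measured here):

# Build the objective expression in one shot instead of a Python sum over 500k terms.
obj_expr = LinearExpression()
obj_expr.linear_vars = [model.x[i] for i in range(size)]
obj_expr.linear_coefs = [1] * size
obj_expr.constant = 0
model.obj = pyo.Objective(expr=obj_expr, sense=pyo.minimize)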
Answer 2:
I think that your implicit question is "How can I make this faster?"
If write time is a problem, you might look into the direct Python interface to Gurobi, SolverFactory('gurobi', solver_io='python'). Setting the symbolic_solver_labels flag to True will almost always increase the write time of the model, because component name lookups can be expensive.
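A minimal sketch of using the direct interface (assuming gurobipy is installed in the same environment as Pyomo):

import pyomo.environ as pyo

# solver_io='python' talks to Gurobi through gurobipy directly,
# so no LP/MPS file has to be written to disk at all.
opt = pyo.SolverFactory('gurobi', solver_io='python')
res = opt.solve(model, tee=True)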
Source: https://stackoverflow.com/questions/51269351/pyomo-seems-very-slow-to-write-models