This problem appeared in a challenge, but since it is now closed it should be OK to ask about it.
The problem (not this question itself, this is just background information) can be visually described like this, borrowing their own image:
I chose to solve it optimally. That's probably (for the decision variant) an NP-complete problem (it's certainly in NP, and it smells like an exact cover, though I haven't proven that a general exact cover problem can be reduced to it), but that's fine, it only has to be fast in practice, not necessarily in the worst case. In the context of this question, I'm not interested in any approximation algorithms, unless they provide cuts.
There is an obvious ILP model: generate all possible squares (a square is possible if it covers only cells of the grid that are present), introduce a binary variable x_i for every square indicating whether we use it or not, then

minimize sum_i x_i
subject to:
1) x_i is an integer
2) 0 ≤ x_i ≤ 1
3) for every cell c: sum_{j : c ∈ square_j} x_j = 1
Constraint 3 says that every cell is covered exactly once. Constraints 1 and 2 make x_i binary. The minimum solution gives an optimum solution to the original problem.
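For concreteness, here is a minimal sketch of this model in Python with the PuLP library (the representation of the grid as a set of (row, col) pairs and all helper names are my own choices, not from the question):

import pulp

def min_square_cover(cells):
    cells = set(cells)
    # Enumerate all possible squares: (top_row, top_col, size) triples
    # whose covered cells are all present in the grid.
    squares = []
    for (r, c) in cells:
        size = 1
        while all((r + dr, c + dc) in cells
                  for dr in range(size) for dc in range(size)):
            squares.append((r, c, size))
            size += 1
    prob = pulp.LpProblem("square_cover", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(squares))]
    prob += pulp.lpSum(x)  # minimize the number of squares used
    for (r, c) in cells:
        covering = [x[i] for i, (sr, sc, s) in enumerate(squares)
                    if sr <= r < sr + s and sc <= c < sc + s]
        prob += pulp.lpSum(covering) == 1  # cover every cell exactly once
    prob.solve()
    return [sq for i, sq in enumerate(squares) if x[i].value() == 1]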
The linear relaxation of this (i.e. ignoring constraint 1) is half-decent, but it does things like this (this is a 6x6 grid with the top left corner missing):
Here 13 squares were chosen with weight 0.5 (giving an objective value of 6.5), of the following sizes (so you can locate them more easily):
- 1 of 4x4
- 3 of 3x3
- 6 of 2x2
- 3 of 1x1
An optimal solution for this instance has an objective value of 8, such as this one:
The linear relaxation is half-decent, but I'm not completely happy with it. The gap is sometimes over 10%, and that can lead to a very slow integer phase.
That's where the actual question comes in: are there extra constraints that I can add (lazily) as cuts to improve a fractional solution?
I've tried alternative formulations of the problem to find cuts. For example, instead of choosing squares, what if we choose "left upper corners" and "right lower corners", which are then matched up to form non-overlapping squares that cover all cells? Then on every "backslash-like" diagonal, there must be a matching number of left-upper and right-lower corners. But that doesn't help, because if we choose squares, that condition is always true anyway, even in fractional solutions.
I've also tried some reasoning about overlaps: for example, if two squares overlap, then clearly the sum of their variables must not exceed 1, and that can be strengthened by also adding all squares wholly contained in the overlapping region. But that constraint is weaker than the constraint that every cell be covered exactly once.
I've tried reasoning about the total area (as in, the total covered area must equal the number of cells), but that's already implied by the constraint that every cell be covered exactly once; stating it in terms of the total area only allows more freedom.
I've also tried to do something with square numbers (the area of every square is, well, a square number) and differences of square numbers, but that didn't lead to anything useful either.
Constraints on the objective function value
Whenever the objective function value is non-integral -- e.g., because you have an odd number of 0.5-weight squares in the fractional solution -- you can add a cut "directly on the objective function" to force it up to the next higher integer value. For example, in your fractional solution, 13 squares each have weight 0.5 for a total objective function value of 6.5, so you could add the constraint that the sum of all x_i >= 7.
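In terms of the earlier PuLP sketch (assuming the x_i were relaxed to continuous [0, 1] variables for the LP phase), such an objective cut could be added like this:

import math
import pulp

def add_objective_cut(prob, x, eps=1e-6):
    # prob and x as in the earlier sketch, after solving the relaxation.
    obj = pulp.value(prob.objective)             # e.g. 6.5 in the example
    if abs(obj - round(obj)) > eps:              # objective is non-integral
        prob += pulp.lpSum(x) >= math.ceil(obj)  # e.g. sum of all x_i >= 7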
More general cuts
This leads to a more general rule that will work whenever you have a fractional solution in which some subset C of cells are "exactly covered" by some subset S of squares having non-integral total weight w. By "exactly covered", I mean that the squares in S each have nonzero weight and together provide a total weight of 1 for every cell in C, and do not overlap any cells outside of C. You can find these subsets of cells easily by creating a graph with a vertex for each cell and an edge between two vertices whenever they are (partially) covered by the same square in the fractional solution: each connected component of this graph is a (minimal) such subset.
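Sketched in Python (my own representation: squares as (row, col, size) triples as in the question's model, and weight[i] the fractional value of x_i), the components can be found with a small union-find:

def covered_components(cells, squares, weight, eps=1e-9):
    # Two cells belong to the same component whenever some square with
    # nonzero fractional weight covers both of them.
    parent = {cell: cell for cell in cells}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, (sr, sc, s) in enumerate(squares):
        if weight.get(i, 0) > eps:
            covered = [(sr + dr, sc + dc) for dr in range(s) for dc in range(s)]
            for cell in covered[1:]:
                parent[find(cell)] = find(covered[0])
    components = {}
    for cell in cells:
        components.setdefault(find(cell), set()).add(cell)
    return list(components.values())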
Given a fractional solution with some exactly covered cell subset C and square subset S as above, let T be the set of all squares that overlap only cells in C (obviously T is a superset of S). We know that any optimal solution X to the LP subproblem consisting of just the subset C of cells (and the relevant subset T of candidate squares) must have identical total weight to S, since if it didn't this would contradict the optimality of the fractional solution to the original LP: you could just replace S with X and get a better solution.
Now, there are two sets of solutions to the original problem (either of which may be empty): those solutions in which no square covers both a cell in C and a cell outside C, and those solutions in which at least some square does. We want to forbid solutions in the first category that have total weight < RoundUp(w). Let U be the set of all squares that overlap at least one cell in C and at least one cell outside C. We can achieve this by adding the constraint
Sum_{square_i in T}(x_i) + RoundUp(w) * Sum_{square_j in U}(x_j) >= RoundUp(w)
The effect of multiplying the second term on the LHS by RoundUp(w) is that if even a single square that covers both a cell in C and some other cell is included into the solution, the constraint effectively "goes away". This is needed because S and C tell us nothing about such solutions to the original problem, and therefore we can't afford to rule them out. Note that the original solution containing the subset of squares S will be forbidden by this constraint, since every square in U must have had weight 0 in this solution.
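Continuing the same sketch, the cut could be assembled like this (C is one component from above and w the total weight of its squares; the helper names are mine):

import math
import pulp

def add_general_cut(prob, x, squares, C, w):
    def cells_of(sq):
        sr, sc, s = sq
        return {(sr + dr, sc + dc) for dr in range(s) for dc in range(s)}
    # T: squares overlapping only cells in C; U: squares that straddle
    # the border of C (at least one cell inside, at least one outside).
    T = [i for i, sq in enumerate(squares) if cells_of(sq) <= C]
    U = [i for i, sq in enumerate(squares)
         if cells_of(sq) & C and not cells_of(sq) <= C]
    rhs = math.ceil(w)
    # Sum_{square_i in T} x_i + RoundUp(w) * Sum_{square_j in U} x_j >= RoundUp(w)
    prob += (pulp.lpSum(x[i] for i in T)
             + rhs * pulp.lpSum(x[j] for j in U) >= rhs)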
No panacea
The second approach is more powerful than the first, since it may happen that, e.g., the graph contains two components, each of which has an odd number of squares, all of which have weight 0.5: this would mean that there are overall an even number of squares, meaning the overall objective function value is integral, preventing the possibility of adding a cut on the objective function. But even applying these cuts again and again doesn't guarantee that a feasible solution will eventually be found: as a concrete example, any time the grid itself is horizontally and/or vertically symmetrical but can be covered by an asymmetrical set of squares, then it can just as easily be covered by the horizontally and/or vertically flipped version of that set of squares -- and, more annoyingly, by any "affine combination" of these two square sets (i.e., by any combination with weights summing to 1). In general there could be many equally good feasible solutions, and the cuts I've described here give no way to stop the LP solver from returning a solution that contains a "superposition" of all k of them, with all squares assigned weight 1/k.
[EDIT 1/7/2015]
A bright side!
Although, as I mentioned above, there's no way of getting the LP solver to "isolate" one particular feasible cover from a fractional "superposition" of several symmetric optimal covers, there is good news: In the event that you do get such a superposition, it's actually very easy to recover a single optimal, feasible cover from it. All you need to do is greedily pick squares with nonzero weight, each time crossing out any squares that overlap the square you just picked. This is guaranteed to work if the solution is a superposition as I've described, and, just as importantly: if this procedure works on a fractional solution (that is, if repeating this greedy step eventually covers all the cells) then the solution you get must be optimal! (Suppose it wasn't: Let X be the optimal fractional solution returned by the LP solver, let F be the feasible solution you just extracted from it greedily, and let y be the smallest weight of any square in F. Notice that every square in F contributes at least y to the coverage value of every cell; so, since F is suboptimal by assumption, subtracting y from the weight of every square in F and scaling all other weights in X up by 1/(1-y) will give another (possibly again fractional) solution of lower weight, contradicting the optimality of X.)
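A sketch of that greedy extraction, in the same representation as the earlier snippets (returning None signals that the fractional solution was not such a superposition):

def greedy_extract(cells, squares, weight, eps=1e-9):
    def cells_of(sq):
        sr, sc, s = sq
        return {(sr + dr, sc + dc) for dr in range(s) for dc in range(s)}
    # Candidates: every square with nonzero weight in the fractional solution.
    remaining = [sq for i, sq in enumerate(squares) if weight.get(i, 0) > eps]
    picked, covered = [], set()
    while remaining:
        sq = remaining.pop()              # pick any nonzero-weight square
        picked.append(sq)
        covered |= cells_of(sq)
        # cross out every square that overlaps the square just picked
        remaining = [t for t in remaining if not (cells_of(t) & cells_of(sq))]
    return picked if covered == set(cells) else None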
It would be very useful to prove that any fractional solution either (i) has some component in the "covering graph" with non-integral total weight, or (ii) consists of such a superposition: this would mean that you could go on applying my "general" cuts until you get a superposition, which you could then solve greedily. But as it stands, there might be fractional solutions outside these two categories.
I used the branch and price method to good effect on a similar problem (ITA's Strawberry Fields; scroll down). Here is my code, which (lacking comments) is probably only good as a zero-knowledge proof that I know what I'm talking about. It was orders of magnitude faster than a commercial solver (which I won't name).
The key was the branching strategy. Rather than branch directly on the x_i variables, which is likely what your solver is doing, branch on a higher-level decision. The one that I used for Strawberry Fields was to decide whether two cells would be covered by the same square. If you target the pairs that the fractional solution is most on the fence about, then the structure of the solution seems to be set fairly quickly.
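The answer doesn't show the branching code, but here is my own sketch of how such a pair might be selected from a fractional solution (a pair's "together" weight closest to 0.5 being the one the LP is most on the fence about):

from itertools import combinations

def pick_branching_pair(cells, squares, weight, eps=1e-9):
    def covers(sq, cell):
        sr, sc, s = sq
        return sr <= cell[0] < sr + s and sc <= cell[1] < sc + s
    best, best_gap = None, 1.0
    for a, b in combinations(cells, 2):
        # total fractional weight of squares covering both a and b
        together = sum(w for i, w in weight.items() if w > eps
                       and covers(squares[i], a) and covers(squares[i], b))
        gap = abs(together - 0.5)
        if 0 < together < 1 and gap < best_gap:
            best, best_gap = (a, b), gap
    return best

# One branch keeps only squares covering both cells or neither ("same
# square"); the other drops all squares covering both ("different squares").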
Unfortunately, I can't offer you advice on how to program this into an existing integer program solver. For Strawberry Fields, I went with custom everything, mostly because I wanted to but in part because I was generating columns on the fly, using cumulative grid row and grid column sums to evaluate rectangles quickly.
I would use a search like A* to find a solution. It is important to note that A* is like greedy best-first search in that it can use a heuristic to guide itself. Assuming you have a suitable heuristic, it will find a near-optimal (~0.95) solution in reasonable time.
The example solution uses 18 blocks instead of the 19 shown in the example. In addition to the heuristic, you can use some precomputation to increase the algorithm's effectiveness. For example, I computed what I call a freedom gradient for each location, so that your initial map becomes a "stage":
You can have your own heuristic that can be equally good or even better; I used this one just because I found it trivial. The stage numbers have the following meaning: the higher the number, the more likely that cell is to be in a big box (there are more constraints you can conclude, but it's a start).
The stage values are the sum of four cellular-automaton rules, one for each corner direction. For example, the top-left rule is:
cell(x,y) := min(cell(x,y-1) ?? 0, cell(x-1,y) ?? 0, cell(x-1,y-1) ?? 0) + 1
The ?? operator is the null-coalescing operator: it returns the left-hand operand if it is not null; otherwise it returns the right-hand operand.
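Here is a sketch of that top-left pass in Python (my own rendering; the other three rules are the same scan with mirrored neighbour offsets, and the final stage value sums all four passes):

def freedom_gradient_topleft(cells):
    value = {}
    # Scan in row-major order so each cell's neighbours are done first;
    # value.get(..., 0) plays the role of "?? 0" for missing cells.
    for (x, y) in sorted(cells, key=lambda c: (c[1], c[0])):
        value[(x, y)] = min(value.get((x, y - 1), 0),
                            value.get((x - 1, y), 0),
                            value.get((x - 1, y - 1), 0)) + 1
    return value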
In addition, you can cut down the computation required by removing known solutions obtained from precomputation and at each stage. In this case, state 0:
The algorithm itself repeats:
- Order cells and take the highest
- Order automaton rule results and take the highest
- Find trivial cut-offs
If you want even faster results for larger grids, then one fairly straightforward way would be to extend the precomputation to compile templates, after which you can do template reduction.
For example, you could have a template (a top-left part):
--+++
+-+++
+++++
You can generate its hash and add it to a hashmap/dictionary with value 4 (meaning it takes a minimum of 4 squares to cover the +'s). Then, whenever you find a 5x3 range with the same hash, you know it is a trivial cut-off with cost 4.
The reason this is faster is that direct comparisons take a lot of time, whereas computing a hash is nearly constant-time.
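One way that table could look in Python (the 4-square cost of the sample template is from the text above; the rest is my sketch):

def window_key(cells, x0, y0, w, h):
    # Serialize a w-by-h window into a hashable string: '+' where a cell
    # is present, '-' where it is missing (same notation as the template).
    return "\n".join(
        "".join("+" if (x0 + dx, y0 + dy) in cells else "-" for dx in range(w))
        for dy in range(h))

template = ["--+++",
            "+-+++",
            "+++++"]
template_cells = {(x, y) for y, row in enumerate(template)
                  for x, ch in enumerate(row) if ch == "+"}

# Known minimum cover costs, keyed by the window pattern.
templates = {window_key(template_cells, 0, 0, 5, 3): 4}

# During the search, any 5x3 window of the current grid whose key appears
# in templates is a trivial cut-off with the stored cost.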
Alternatively, there is a way to hypothesize what kind of solution we are actually looking for. Using the Wolfram Mathematica function FindMinimum, it would look something along the lines of:
FindMinimum[{
a + b + c + d + e + f + g,
(a 1^2 + b 2^2 + c 3^2 + d 4^2 + e 5^2 + f 6^2 + g 7^2) == 119 &&
a >= 0 && b >= 0 && c >= 0 && d >= 0 && e >= 0 && f >= 0 && g >= 0 &&
a \[Element] Integers &&
b \[Element] Integers &&
c \[Element] Integers &&
d \[Element] Integers &&
e \[Element] Integers &&
f \[Element] Integers &&
g \[Element] Integers
}, {a,b,c,d,e,f,g}]
This is based on the assumption that we have a 12x12 grid with 119 cells; it would give us an optimal solution in terms of the counts of each square size used (a = number of 1x1 squares, b = 2x2, and so on up to g = 7x7).
Sadly, the Wolfram Alpha search engine does not interpret this; maybe it will in the future.
Dynamic programming: choose a cell (any cell will do) and compute all valid squares that cover it (that is, squares that cover only the chosen cell and other cells present in the grid). For each such square, recursively get the minimum cover value for the remaining area (the grid minus the square), then choose the minimum among those values (the result is that value + 1). To make things faster, choose a cell with the fewest valid covering squares at each level of the recursion (thus reducing the number of recursive calls per level); a sketch follows below.
Simple really.
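Here is a compact sketch of that recursion in Python, memoising on the remaining region (the representation choices are mine):

from functools import lru_cache

def min_cover(cells):
    def covers(cell, remaining):
        # All squares that cover `cell` using only cells in `remaining`.
        r, c = cell
        out, s = [], 1
        while True:
            fits = [frozenset((r0 + dr, c0 + dc)
                              for dr in range(s) for dc in range(s))
                    for r0 in range(r - s + 1, r + 1)
                    for c0 in range(c - s + 1, c + 1)]
            fits = [sq for sq in fits if sq <= remaining]
            if not fits:   # no size-s square fits, so no larger one does
                return out
            out.extend(fits)
            s += 1

    @lru_cache(maxsize=None)
    def solve(remaining):
        if not remaining:
            return 0
        # Choose the most constrained cell: fewest valid covering squares.
        cell = min(remaining, key=lambda c: len(covers(c, remaining)))
        return 1 + min(solve(remaining - sq) for sq in covers(cell, remaining))

    return solve(frozenset(cells))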
How about using a constraint that the total area of the squares equals the total area to be covered, in addition to a no-overlap constraint? This should be faster than checking the double-cover constraint.
Source: https://stackoverflow.com/questions/30818645/minimum-exact-cover-of-grid-with-squares-extra-cuts