Operations Research Series 46: Python Implementations of Linear Programming Solvers


1. BruteSolver

This class solves the standard-form LP $\min c^T x$ s.t. $Ax = b$, $x \ge 0$ by brute-force exhaustive search: try all possible basis matrices (i.e. all $m$-combinations of the column indices) and take the best basic feasible solution. Note that the number of candidate bases is $\binom{n}{m}$, which grows combinatorially; the 38760 iterations in Test #1 below are consistent with $\binom{20}{6} = 38760$.

Parameters:
c, A, b (np arrays): specify the LP in standard form

Returns:
-1 if the LP is infeasible or the optimum is unbounded (±∞), else
x (np array): an optimal solution to the LP

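All three solvers below call a shared helper `check` that the original post does not show. Here is a minimal sketch, assuming it merely coerces the inputs to float NumPy arrays and validates that their shapes are mutually consistent (the imports are shared by all later code blocks):

import itertools

import numpy as np
from numpy.linalg import matrix_rank

def check(c, A, b):
    # Hypothetical reconstruction: coerce inputs to float arrays so the
    # solvers can rely on NumPy semantics, and verify the shapes agree.
    c = np.asarray(c, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    assert A.shape == (b.shape[0], c.shape[0]), 'A must be m x n with b of length m, c of length n'
    return c, A, b
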
class BruteSolver:
    def solve(self, c, A, b):
        c, A, b = check(c, A, b)
        m = A.shape[0]  # number of constraints; A is assumed to have full row rank
        indices = list(range(A.shape[1]))
        opt_basis, opt_val, opt_xb = None, float('inf'), None
        iteration_number = 0
        for basic_indices in itertools.combinations(indices, m):
            iteration_number += 1
            B = A[:, list(basic_indices)]
            if matrix_rank(B) != m:
                continue  # these columns do not form a basis
            x_b = np.linalg.solve(B, b)
            if (x_b < 0.0).any():
                continue  # basic solution is infeasible
            obj = np.dot(c[list(basic_indices)], x_b)
            if obj < opt_val:
                opt_val, opt_basis, opt_xb = obj, basic_indices, x_b
        print('brute took {} iterations'.format(iteration_number))
        if opt_basis is None:
            return -1
        # scatter the basic variables back into a full-length solution vector
        x = np.zeros(shape=(A.shape[1], ))
        for i in range(x.shape[0]):
            if i in opt_basis:
                x[i] = opt_xb[opt_basis.index(i)]
        return x
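
As a quick sanity check, here is a small made-up standard-form instance (hypothetical data for illustration, not the Test #1 problem from the post):

if __name__ == '__main__':
    # min -x1 - 2*x2  s.t.  x1 + x2 + s1 = 4, x1 + s2 = 3, all variables >= 0
    c = [-1.0, -2.0, 0.0, 0.0]
    A = [[1.0, 1.0, 1.0, 0.0],
         [1.0, 0.0, 0.0, 1.0]]
    b = [4.0, 3.0]
    x = BruteSolver().solve(c, A, b)
    print('x =', x)  # expected: x = [0, 4, 0, 3] with objective -8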

The test result is as follows:

Test #1
brute took 38760 iterations
Optimal objective value = -26.0

2. Simplex solver (two-phase)

The two-phase method first solves an auxiliary problem (phase 1) that minimizes the sum of artificial variables; its optimal value is zero exactly when the original LP is feasible, and its optimal basis provides a starting basic feasible solution. Phase 2 then runs the simplex method on the original objective from that basis.

class SimplexSolver:
    def solve(self, c, A, b, verbose=0):
        c, A, b = check(c, A, b)
        # phase 1: construct an initial basic feasible solution
        if verbose:
            print("-----Simplex: phase 1-----")
        M, N = A.shape[:2]
        A_ = np.copy(A)
        b_ = np.copy(b)
        for i in range(M):
            # flip row signs so that b_ >= 0 and the artificial
            # variables form a feasible starting basis
            if b[i] < 0.0:
                b_[i] = -1.0 * b[i]
                A_[i, :] = -1 * A_[i, :]

        A_ = np.concatenate((A_, np.eye(M)), axis=1)  # append artificial variables
        c_ = np.zeros(shape=(N + M, ))
        for i in range(N, N + M):
            c_[i] = 1.0  # phase-1 objective: sum of artificial variables
        basic_indices, x, obj_val = simplex(c_, A_, b_, list(range(N, N + M)), verbose)
        if abs(obj_val) > 1e-9:  # tolerance for floating-point round-off
            if verbose:
                print('phase 1: the original problem is infeasible!')
            return -1
        # note: this assumes no artificial variable remains basic
        # (at level zero) after phase 1
        if verbose:
            print("-----Simplex: phase 2-----")
        return simplex(c, A, b, basic_indices, verbose)
    
def simplex(c, A, b, basic_indices, verbose=0):
    indices = list(range(A.shape[1]))
    B = A[:, list(basic_indices)]
    optimal, obj_val, opt_infinity, iteration_number = False, float('inf'), False, 0
    while not optimal:
        iteration_number += 1
        B_inv = np.linalg.inv(B)
        x_b = np.dot(B_inv, b)
        c_b = c[basic_indices]
        obj_val = np.dot(c_b, x_b)
        if verbose:
            print('starting iteration #{}, obj = {}'.format(iteration_number, obj_val))

        # compute the reduced cost in each non-basic j-th direction
        reduced_costs = {}
        for j in indices:
            if j not in basic_indices:
                A_j = A[:, j]
                reduced_costs[j] = c[j] - np.dot(c_b.T, np.dot(B_inv, A_j))

        # if all reduced costs are nonnegative, this solution is optimal
        if (np.array(list(reduced_costs.values())) >= 0.0).all():
            optimal = True
            break

        # not optimal: move to a better neighbouring BFS; the pivot policy
        # is to enter the first index with a negative reduced cost
        # (Bland-style), not a random choice
        chosen_j = None
        for j in reduced_costs.keys():
            if reduced_costs[j] < 0.0:
                chosen_j = j
                break
        d_b = -1.0 * np.dot(B_inv, A[:, chosen_j])

        # if the direction never decreases any basic variable,
        # the objective is unbounded below
        if (d_b >= 0).all():
            opt_infinity = True
            break

        # ratio test: calculate theta_star and the exiting index l
        l, theta_star = None, None
        for i, basic_index in enumerate(basic_indices):
            if d_b[i] < 0:
                ratio = -x_b[i] / d_b[i]
                if theta_star is None or ratio < theta_star:
                    l, theta_star = i, ratio

        # form the new basis by replacing basic_indices[l] with chosen_j
        basic_indices[l] = chosen_j
        basic_indices.sort()
        B = A[:, list(basic_indices)]

    if opt_infinity:
        if verbose:
            print('Optimal is infinity')
        return -1
    if not optimal:
        if verbose:
            print('Optimal not found')
        return -1

    # scatter the basic variables into a full-length solution vector
    x = np.zeros(shape=(A.shape[1], ))
    for i in range(x.shape[0]):
        if i in basic_indices:
            x[i] = x_b[basic_indices.index(i)]
    return basic_indices, x, obj_val
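
Using the same made-up instance as before (again hypothetical data, not the Test #1 problem):

if __name__ == '__main__':
    c = [-1.0, -2.0, 0.0, 0.0]
    A = [[1.0, 1.0, 1.0, 0.0],
         [1.0, 0.0, 0.0, 1.0]]
    b = [4.0, 3.0]
    result = SimplexSolver().solve(c, A, b, verbose=1)
    if result != -1:
        basic_indices, x, obj_val = result
        print('x = {}, obj = {}'.format(x, obj_val))  # expected: obj = -8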

The result is as follows:

Test #1
-----Simplex: phase 1-----
starting iteration #1, obj = 112.0
starting iteration #2, obj = 112.0
starting iteration #3, obj = 102.0
starting iteration #4, obj = 96.0
starting iteration #5, obj = 88.0
starting iteration #6, obj = 82.0
starting iteration #7, obj = 74.0
starting iteration #8, obj = 66.0
starting iteration #9, obj = 62.0
starting iteration #10, obj = 54.0
starting iteration #11, obj = 33.0
starting iteration #12, obj = 19.0
starting iteration #13, obj = 14.0
starting iteration #14, obj = 10.0
starting iteration #15, obj = 10.0
starting iteration #16, obj = 1.0
starting iteration #17, obj = 0.0
-----Simplex: phase 2-----
starting iteration #1, obj = -26.0
Optimal objective value = -26.0

3. Interior point solver

This class implements a primal-dual (infeasible) interior-point method to solve LPs.

The original problem is:

$$\min c^T x \quad \text{s.t. } Ax \ge b,\quad x \ge 0$$

(The barrier solver in 3.1 handles the inequality constraints $Ax \ge b$ directly; the path-following and infeasible interior-point methods in 3.2 and 3.3 work with the standard form $Ax = b$, $x \ge 0$, as in the previous sections.)


There are many different approaches to building an interior-point solver (the categories are not necessarily distinct); some of them are listed as follows:

3.1 Potential Reduction Methods / Barrier Method

These methods make use of a logarithmic potential function, e.g. of the form $f(x) = q \log(c^T x - Z) - \sum_j \log(x_j)$, where $Z$ is a lower bound on the optimal objective value. Karmarkar proved convergence and complexity results by showing that this function decreases by at least a constant at each step. We transform the problem as follows:

$$f(x) = t\,c^T x - \sum_{i=1}^{m} \log(A_i x - b_i) - \sum_{j=1}^{n} \log(x_j)$$

Using Newton's method to solve the problem, we have $x^{k+1} = x^k - H^{-1}\nabla f(x^k)$, where $H$ is the Hessian of $f$. There are two nested iterations here: 1) for every fixed $t$, iterate on $x$ until the Newton step is small; 2) iterate on $t$, starting each inner loop from the $x$ produced by the previous one.
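
Concretely, for the barrier over the constraints $Ax \ge b$ (the code below omits the $\log x_j$ term; the bounds $x \ge 0$ can be folded into $Ax \ge b$ as extra rows), the gradient and Hessian used in the Newton step are

$$\nabla f(x) = t\,c - A^T s, \qquad s_i = \frac{1}{A_i x - b_i}$$

$$H = \nabla^2 f(x) = A^T \operatorname{diag}(s_1^2, \dots, s_m^2)\, A$$

These correspond directly to the `slack`, `gradient`, and `H` variables in the `BarrierSolver` code below.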

In the barrier interior solver, we need an initial solution that lies strictly inside the feasible region; otherwise the logarithms are undefined and the Hessian cannot be formed.

class BarrierSolver:
    def solve(self, c, A, b, x, verbose=0, epsilon=0.0001, HIter=50):
        c, A, b = check(c, A, b)
        t, factor = 1.0, 1.2  # barrier parameter and its growth factor
        m, n = A.shape[:2]
        ub = 1000 * t  # cap on the barrier parameter

        k = 0
        while t < ub and k < 10:  # outer loop: increase t
            k += 1
            for count in range(HIter):  # inner loop: Newton iterations for fixed t
                slack = 1.0 / (A.dot(x) - b)                    # s_i = 1 / (A_i x - b_i)
                gradient = t * c - A.T.dot(slack)               # grad f(x)
                H = A.T.dot(np.diag(np.square(slack))).dot(A)   # Hessian of f(x)
                delta = np.linalg.solve(H, gradient)            # Newton step
                x = x - delta
                error = np.linalg.norm(delta)
                if error < epsilon:
                    break
            t = factor * t
            if verbose:
                print('iteration #{}; Obj = {:.5f}'.format(k, c.dot(x)))
            if error < epsilon:
                break
        return x, c.dot(x)
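
A minimal usage sketch with made-up data (the starting point must satisfy $Ax_0 > b$ strictly; also note that because the loop above stops after a few increases of $t$, the result is only a rough approximation of the optimum):

if __name__ == '__main__':
    # min x1 + x2  s.t.  x1 >= 1, x2 >= 1, written in inequality form Ax >= b
    c = [1.0, 1.0]
    A = [[1.0, 0.0],
         [0.0, 1.0]]
    b = [1.0, 1.0]
    x0 = np.array([3.0, 3.0])  # strictly feasible: A @ x0 > b componentwise
    x, obj = BarrierSolver().solve(c, A, b, x0, verbose=1)
    print('x = {}, obj = {:.4f}'.format(x, obj))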

3.2 Path-following algorithms

The dual problem (of the standard-form primal) is:

$$\max b^T l \quad \text{s.t. } A^T l + s = c,\quad s \ge 0$$


We define a central path $C$, a path of points $(x_\tau, \lambda_\tau, s_\tau)$ that leads to the set $\Omega$ of primal-dual solutions. Points in $\Omega$ satisfy the KKT conditions, whereas points on $C$ are defined by conditions that differ from the KKT conditions only by the presence of a positive parameter $\tau > 0$.

Following the KKT conditions, we translate the optimization problem into a root-finding problem. A point on the central path satisfies:

$$A^T l + s = c$$
$$Ax = b$$
$$x_i s_i = \tau \quad (i = 1, \dots, n)$$
$$x, s \ge 0$$

(the optimal solution satisfies the same system with $\tau = 0$).


We introduce a centering parameter $\sigma \in [0, 1]$ and a duality measure $\mu$ defined by

$$\mu = \frac{x^T s}{n}$$

Define, with $X = \operatorname{diag}(x)$, $S = \operatorname{diag}(s)$, and $e$ the all-ones vector:

$$F(x,l,s) = \begin{bmatrix} A^T l + s - c \\ Ax - b \\ XSe \end{bmatrix}$$

The problem becomes $F(x,l,s) = 0$ with $x, s \ge 0$. We use Newton's method to solve the equation, that is, $J\Delta = -F$ where $J$ is the Jacobian of $F$. The generic step equations are then

$$\begin{bmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S & 0 & X \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta l \\ \Delta s \end{bmatrix} = \begin{bmatrix} -(A^T l + s - c) \\ -(Ax - b) \\ -XSe + \sigma\mu e \end{bmatrix}$$

The step $(\Delta x, \Delta l, \Delta s)$ is a Newton step toward the point $(x_{\sigma\mu}, l_{\sigma\mu}, s_{\sigma\mu}) \in C$ at which the pairwise products $x_i s_i$ are all equal to $\sigma\mu$ (this is the central path). $\sigma$ controls how quickly the duality measure decreases, i.e. how aggressively we move toward the optimal point.

Path-following methods follow $C$ in the direction of decreasing $\tau$ toward the solution set $\Omega$. They do not necessarily stay exactly on $C$ or even particularly close to it. Rather, they stay within a loose but well-defined neighborhood of $C$ while steadily reducing the duality measure $\mu$ to zero. Each search direction is a Newton step toward a point on $C$ for which the duality target measure $\tau$ is equal to or smaller than the current duality measure $\mu$; the target value $\tau = \sigma\mu$ is used. There are three variants of path-following algorithms:

(a) Algorithm SPF: The short-step path-following algorithm chooses a constant value $\sigma_k = \sigma$ for the centering parameter and fixes the step length at $\alpha_k = 1$ for all iterations $k$.

(b) Algorithm PC: The predictor-corrector algorithm alternates between two types of steps: predictor steps, which improve the value of $\mu$ but also tend to worsen the centrality measure given by

$$\|XSe - \mu e\| / \mu$$

and corrector steps, which have no effect on the duality measure $\mu$ but improve centrality.

(c) Algorithm LPF: The long-step path-following algorithm makes more aggressive (smaller) choices of the centering parameter $\sigma$ than does Algorithm SPF. Instead of taking unit steps, Algorithm LPF performs a line search along the Newton direction.

3.3 Infeasible Interior Point Algorithms

Path-following algorithms and potential-reduction methods start from a strictly feasible point $(x^0, \lambda^0, s^0)$ in some neighborhood of the central path. Often it is not easy to find a starting point that satisfies these conditions. One way to avoid this difficulty is to embed the given linear program in a slightly larger problem for which a strictly feasible point is easy to identify. This is termed the homogeneous self-dual reformulation and is a particularly useful embedding tool. However, for the purposes of this project we do not focus on this tool beyond the theory behind it.

We take an initial point $(x_0, l_0, s_0)$ with $x_0, s_0 > 0$ componentwise (the code simply uses all-ones vectors). Note that this is not a feasible solution in general, but it tends toward feasibility as the iterations proceed; the duality gap may therefore initially appear negative, since this is the infeasible-interior-point algorithm.

Algorithm IPF: The infeasible interior-point algorithm does not require the initial point to be strictly feasible; it only requires that its $x$ and $s$ components be strictly positive. This algorithm and the details of its implementation are the focus of the remaining part of the report.

class InteriorPointSolver:
    def solve(self, c, A, b, verbose=0, epsilon=0.0001):
        c, A, b = check(c, A, b)
        m, n = A.shape[:2]
        # initial point: x, s > 0 but not necessarily feasible
        x, l, s, k = np.ones(shape=(n, )), np.ones(shape=(m, )), np.ones(shape=(n, )), 0

        while abs(np.dot(x, s)) > epsilon:  # stop once the complementarity gap is small
            k += 1
            primal_obj = np.dot(c, x)
            dual_obj = np.dot(b, l)
            if verbose:
                print('iteration #{}; primal_obj = {:.5f}, dual_obj = {:.5f}; '
                      'duality_gap = {:.5f}'.format(k, primal_obj, dual_obj, primal_obj - dual_obj))

            # choose sigma_k and calculate mu_k
            sigma_k = 0.4  # fixed centering parameter
            mu_k = np.dot(x, s) / n

            # assemble the Newton system A_ * delta = b_ (block rows ordered as
            # primal residual, dual residual, complementarity residual)
            A_ = np.zeros(shape=(m + n + n, n + m + n))
            A_[0:m, 0:n] = np.copy(A)
            A_[m:m + n, n:n + m] = np.copy(A.T)
            A_[m:m + n, n + m:n + m + n] = np.eye(n)
            A_[m + n:m + n + n, 0:n] = np.copy(np.diag(s))
            A_[m + n:m + n + n, n + m:n + m + n] = np.copy(np.diag(x))

            b_ = np.zeros(shape=(n + m + n, ))
            b_[0:m] = np.copy(b - np.dot(A, x))
            b_[m:m + n] = np.copy(c - np.dot(A.T, l) - s)
            b_[m + n:m + n + n] = np.copy(sigma_k * mu_k * np.ones(shape=(n, )) -
                                          np.dot(np.dot(np.diag(x), np.diag(s)), np.ones(shape=(n, ))))

            # solve for delta; notice we cannot solve F(..) = 0 directly since F(..) is not linear
            delta = np.linalg.solve(A_, b_)
            delta_x = delta[0:n]
            delta_l = delta[n:n + m]
            delta_s = delta[n + m:n + m + n]

            # find the step length alpha_k that keeps x and s strictly positive
            alpha_max = 1.0
            for i in range(n):
                if delta_x[i] < 0:
                    alpha_max = min(alpha_max, -x[i] / delta_x[i])
                if delta_s[i] < 0:
                    alpha_max = min(alpha_max, -s[i] / delta_s[i])
            eta_k = 0.99  # damping to stay strictly inside the positive orthant
            alpha_k = min(1.0, eta_k * alpha_max)

            # update variables
            x, l, s = x + alpha_k * delta_x, l + alpha_k * delta_l, s + alpha_k * delta_s

        # print the difference between Ax and b
        if verbose:
            print('|Ax - b| = {}'.format(np.linalg.norm(np.dot(A, x) - b)))
        return x, np.dot(c, x)
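
A minimal usage sketch on made-up standard-form data (not the Test #2 instance from the post):

if __name__ == '__main__':
    # min -x1 - 2*x2  s.t.  x1 + x2 + s1 = 4, x1 + s2 = 3, all variables >= 0
    c = [-1.0, -2.0, 0.0, 0.0]
    A = [[1.0, 1.0, 1.0, 0.0],
         [1.0, 0.0, 0.0, 1.0]]
    b = [4.0, 3.0]
    x, obj = InteriorPointSolver().solve(c, A, b, verbose=1)
    print('x = {}, obj = {:.6f}'.format(x, obj))  # should approach obj = -8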

The result is:

Test #2
iteration #1; primal_obj = -3.00000, dual_obj = 256.00000; duality_gap = -259.00000
iteration #2; primal_obj = -4.16234, dual_obj = 323.37861; duality_gap = -327.54095
iteration #3; primal_obj = -6.26425, dual_obj = 326.67147; duality_gap = -332.93572
iteration #4; primal_obj = -10.62562, dual_obj = 238.31304; duality_gap = -248.93866
iteration #5; primal_obj = -18.97953, dual_obj = 111.79206; duality_gap = -130.77158
iteration #6; primal_obj = -25.02193, dual_obj = 35.27621; duality_gap = -60.29814
iteration #7; primal_obj = -27.34371, dual_obj = -2.47230; duality_gap = -24.87141
iteration #8; primal_obj = -28.84231, dual_obj = -31.04427; duality_gap = 2.20196
iteration #9; primal_obj = -28.93610, dual_obj = -29.95410; duality_gap = 1.01800
iteration #10; primal_obj = -28.97357, dual_obj = -29.38812; duality_gap = 0.41455
iteration #11; primal_obj = -28.98919, dual_obj = -29.15751; duality_gap = 0.16832
iteration #12; primal_obj = -28.99560, dual_obj = -29.06394; duality_gap = 0.06834
iteration #13; primal_obj = -28.99821, dual_obj = -29.02596; duality_gap = 0.02775
iteration #14; primal_obj = -28.99927, dual_obj = -29.01054; duality_gap = 0.01126
iteration #15; primal_obj = -28.99970, dual_obj = -29.00428; duality_gap = 0.00457
iteration #16; primal_obj = -28.99988, dual_obj = -29.00174; duality_gap = 0.00186
iteration #17; primal_obj = -28.99995, dual_obj = -29.00071; duality_gap = 0.00075
iteration #18; primal_obj = -28.99998, dual_obj = -29.00029; duality_gap = 0.00031
iteration #19; primal_obj = -28.99999, dual_obj = -29.00012; duality_gap = 0.00012
|Ax - b| = 4.418631847554177e-15
Optimal objective value = -28.999996745085195