Formal Definition

Linear programming: We aim to find a rational vector \(\vec x\) that minimizes a given linear objective function in \(\vec x\) subject to linear constraints on \(\vec x\).

Given

  • An \(n\)-vector \(\vec c\in\mathbb Q^n\)
  • An \(m\)-vector \(\vec b\in\mathbb Q^m\)
  • An \(m\times n\) matrix \(A=(a_{i,j})\in\mathbb Q^{m\times n}\)

an optimal solution to the following linear programming problem \(LP\): $$ \min\sum_{j=1}^nc_jx_j $$ subject to $$ \sum_{j=1}^na_{i,j}x_j\geq b_i\quad(i\in[m]),\qquad x_j\geq 0\quad(j\in[n]) $$ is an \(n\)-vector \(\vec x\) that minimizes the linear objective function \(\sum_{j=1}^nc_jx_j\) subject to the constraints.

  • The vector \(\vec x\) is the decision variable.
  • Any \(\vec x\) that satisfies the constraints is feasible.
  • If there exists a feasible solution, then the problem is feasible.
  • A feasible solution that minimizes the value of the objective function is an optimal solution.
  • The value of an optimal solution is called the optimal value.
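As a concrete illustration (not part of the original notes), the definitions above can be checked mechanically. A minimal Python sketch using exact rationals, with hypothetical helper names `is_feasible` and `objective`:

```python
from fractions import Fraction as F

def is_feasible(A, b, x):
    """Check the canonical-form constraints: A x >= b componentwise and x >= 0."""
    if any(xj < 0 for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) >= bi
               for row, bi in zip(A, b))

def objective(c, x):
    """Value of the linear objective c . x."""
    return sum(cj * xj for cj, xj in zip(c, x))

# Toy instance: min x1 + 2 x2  s.t.  x1 + x2 >= 1,  x >= 0
A = [[F(1), F(1)]]
b = [F(1)]
c = [F(1), F(2)]
print(is_feasible(A, b, [F(1), F(0)]))   # True: (1, 0) satisfies both constraints
print(objective(c, [F(1), F(0)]))        # 1
```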

Canonical Form

The above formulation is in canonical form.

It can model any possible linear optimization problem.

  • For a maximization problem, minimize \(-1\) times the objective function instead.
  • An equality constraint is split into a \(\geq\) and a \(\leq\) constraint; the \(\leq\) constraint is then negated into a \(\geq\) one.
  • If a variable \(x_j\) may be negative, we introduce two new non-negative variables \(x_j^{+}\) and \(x_j^{-}\) and replace \(x_j\) by \(x_j^{+}-x_j^{-}\) in the objective function and constraints.
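The three transformations above can be sketched in code. The helper name `to_canonical` and its interface are illustrative assumptions:

```python
def to_canonical(c, A_eq, b_eq, maximize=False, free_vars=()):
    """Sketch: rewrite an LP with equality constraints (and possibly a
    maximization objective or free variables) in canonical form."""
    # 1) Maximization -> minimization: negate the objective.
    c = [-cj for cj in c] if maximize else list(c)
    # 2) Each equality  sum a_j x_j = b  becomes  >= b  and  <= b,
    #    where  <= b  is negated into  -sum a_j x_j >= -b.
    A, b = [], []
    for row, bi in zip(A_eq, b_eq):
        A.append(list(row));          b.append(bi)
        A.append([-a for a in row]);  b.append(-bi)
    # 3) A free variable x_j is replaced by x_j^+ - x_j^- (both >= 0):
    #    insert a negated copy of column j right after it.
    for j in sorted(free_vars, reverse=True):
        c.insert(j + 1, -c[j])
        for row in A:
            row.insert(j + 1, -row[j])
    return c, A, b

# max x1  s.t.  x1 = 3, x1 free  ->  min -x1^+ + x1^-  s.t. two >= constraints
print(to_canonical([1], [[1]], [3], maximize=True, free_vars=(0,)))
```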

Solving Linear Programs

In general if the problem admits an optimum, then there exists an optimal solution in one of the vertices of the polytope defined by the constraints.
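Since an optimum (when one exists) is attained at a vertex, tiny LPs can be solved by enumerating vertices directly. A two-variable sketch with the hypothetical helper `vertex_optimum`, which intersects pairs of constraint lines and keeps the best feasible intersection:

```python
from fractions import Fraction as F
from itertools import combinations

def vertex_optimum(c, A, b):
    """Enumerate candidate vertices of {x >= 0, A x >= b} in 2D and return
    (value, x) minimizing c . x (assumes the optimum exists and is bounded)."""
    # All constraints as pairs (g, h) meaning g . x >= h, including x1, x2 >= 0.
    rows = [(list(r), bi) for r, bi in zip(A, b)]
    rows += [([F(1), F(0)], F(0)), ([F(0), F(1)], F(0))]
    best = None
    for (g1, h1), (g2, h2) in combinations(rows, 2):
        det = g1[0] * g2[1] - g1[1] * g2[0]
        if det == 0:
            continue  # parallel constraint lines: no intersection point
        # Solve g1 . x = h1, g2 . x = h2 by Cramer's rule.
        x = [(h1 * g2[1] - g1[1] * h2) / det,
             (g1[0] * h2 - h1 * g2[0]) / det]
        if all(xj >= 0 for xj in x) and \
           all(sum(gi * xj for gi, xj in zip(g, x)) >= h for g, h in rows):
            val = sum(cj * xj for cj, xj in zip(c, x))
            if best is None or val < best[0]:
                best = (val, x)
    return best

# min x1 + x2  s.t.  x1 + 2 x2 >= 2,  2 x1 + x2 >= 2,  x >= 0
print(vertex_optimum([F(1), F(1)],
                     [[F(1), F(2)], [F(2), F(1)]],
                     [F(2), F(2)]))   # optimum 4/3 at (2/3, 2/3)
```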

Simplex Algorithm

Find a feasible solution at a vertex of the polytope and then walk along a path on the edges of the polytope to vertices with non-increasing values of the objective function.

In practice, the simplex algorithm is efficient. However, there exist families of linear programs (e.g., the Klee–Minty cubes) for which the simplex method takes an exponential number of steps with respect to the problem size.

Ellipsoid

The ellipsoid method solves linear programs in polynomial time but is not efficient in practice.

Interior-Point

Interior-point methods solve linear programs in polynomial time and are efficient in practice.

Integer Programming

Some problems require variables to take integer values. $$ \min\sum_{j=1}^nc_jx_j $$ subject to $$ \sum_{j=1}^na_{i,j}x_j\geq b_i\quad(i\in[m]),\qquad x_j\in\mathbb N\text{ or }x_j\in\{0,1\}\quad(j\in[n]) $$ Linear programs can be solved in polynomial time, but integer programming is \(NP\)-complete.
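For very small instances, a 0/1 integer program can be solved by brute force over all assignments (exponential in \(n\), so purely illustrative). The helper name `solve_01_ip` is an assumption:

```python
from itertools import product

def solve_01_ip(c, A, b):
    """Brute-force a 0/1 integer program: min c . x s.t. A x >= b, x in {0,1}^n."""
    n = len(c)
    best = None
    for x in product((0, 1), repeat=n):
        if all(sum(aij * xj for aij, xj in zip(row, x)) >= bi
               for row, bi in zip(A, b)):
            val = sum(cj * xj for cj, xj in zip(c, x))
            if best is None or val < best[0]:
                best = (val, x)
    return best

# min x1 + x2 + x3  s.t.  x1 + x2 >= 1,  x2 + x3 >= 1
# (a tiny vertex-cover instance on the path 1-2-3)
print(solve_01_ip([1, 1, 1], [[1, 1, 0], [0, 1, 1]], [1, 1]))  # (1, (0, 1, 0))
```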

Linear Relaxation of Integer Programs

The linear relaxation of an integer program is obtained by relaxing the integrality constraint: $$ x_j\in\{0,1\}\to x_j\in[0,1] $$ The optimal value of the linear relaxation is a lower bound on the optimal value of the integer program (for a minimization problem).
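A small illustration of the relaxation bound on a toy instance chosen for this sketch: the relaxation admits a fractional feasible point of value \(3/2\), while the integer optimum is \(2\):

```python
from fractions import Fraction as F
from itertools import product

# min x1 + x2  s.t.  2 x1 + 2 x2 >= 3, with x_j in {0,1}.
c, A, b = [F(1), F(1)], [[F(2), F(2)]], [F(3)]

feasible = lambda x: all(sum(aij * xj for aij, xj in zip(row, x)) >= bi
                         for row, bi in zip(A, b))
value = lambda x: sum(cj * xj for cj, xj in zip(c, x))

# Integer optimum by brute force over {0,1}^2: only (1, 1) is feasible.
ip_opt = min(value(x) for x in product((0, 1), repeat=2) if feasible(x))

# The relaxation allows x_j in [0,1]; (1, 1/2) is feasible with value 3/2
# (here this is the relaxation's optimum, since 2 x1 + 2 x2 = 3 is tight).
relaxed = [F(1), F(1, 2)]
assert feasible(relaxed)
print(value(relaxed), "<=", ip_opt)  # 3/2 <= 2
```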

Duality of LP

We can get lower bounds to the optimum by linear combinations of the constraints.

We want to find non-negative coefficients \(y_i\) for a linear combination of the constraints that gives the highest lower bound. For a problem with three constraints \(P_i(\vec x)\geq b_i\), this combination is $$ y_1\cdot P_1(\vec x)+y_2\cdot P_2(\vec x)+y_3\cdot P_3(\vec x), $$ and we require that the original objective function is \(\geq\) this linear combination. Rearranging the combination by variables, $$ y_1\cdot P_1(\vec x)+y_2\cdot P_2(\vec x)+y_3\cdot P_3(\vec x)=x_1\cdot Q_1(\vec y)+x_2\cdot Q_2(\vec y)+x_3\cdot Q_3(\vec y), $$ so the requirement holds (for \(\vec x\geq\vec 0\)) whenever \(Q_j(\vec y)\leq c_j\) for every \(j\); these are the constraints on \(\vec y\).

By plugging the lower bounds \(b_i\) of \(P_i(\vec x)\) into the linear combination, we obtain the lower bound \(y_1b_1+y_2b_2+y_3b_3\) on the optimal value.

The new LP in \(\vec y\), which maximizes this lower bound, is the dual of the original minimization problem. The original LP in \(\vec x\) is called the primal.

Any feasible solution to the dual gives an objective function value that is a lower bound to the optimal value of the primal.

Duality Theorems

Weak duality: If \(\vec x\) is a feasible solution to the primal, and \(\vec y\) a feasible solution to the dual, then

\[ \sum_{j=1}^{n}c_jx_j\geq\sum_{i=1}^{m}b_iy_i \]
\[ \sum_{j=1}^nc_jx_j\geq\sum_{j=1}^n\Big(\sum_{i=1}^ma_{i,j}y_i\Big)x_j=\sum_{i=1}^m\Big(\sum_{j=1}^na_{i,j}x_j\Big)y_i\geq\sum_{i=1}^mb_iy_i \]

The first inequality uses the dual constraints \(\sum_{i=1}^ma_{i,j}y_i\leq c_j\) together with \(x_j\geq 0\); the second uses the primal constraints together with \(y_i\geq 0\).

Strong duality: If the primal and dual are feasible, then for any optimal solution \(\vec x^*\) of the primal and any optimal solution \(\vec y^*\) of the dual,

\[ \sum_{j=1}^{n}c_jx_j^*=\sum_{i=1}^{m}b_iy_i^* \]
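Both theorems can be checked numerically on a small instance (the primal/dual pair below is an illustrative example, not from the notes):

```python
from fractions import Fraction as F

# Primal: min x1 + x2     s.t.  x1 + 2 x2 >= 2,  2 x1 + x2 >= 2,  x >= 0
# Dual:   max 2 y1 + 2 y2 s.t.  y1 + 2 y2 <= 1,  2 y1 + y2 <= 1,  y >= 0
b = [F(2), F(2)]
c = [F(1), F(1)]

x = [F(2), F(0)]          # a feasible (but suboptimal) primal solution
y = [F(1, 3), F(1, 3)]    # a feasible dual solution
primal_val = sum(cj * xj for cj, xj in zip(c, x))   # 2
dual_val = sum(bi * yi for bi, yi in zip(b, y))     # 4/3
assert primal_val >= dual_val                       # weak duality

x_star = [F(2, 3), F(2, 3)]                         # optimal primal solution
opt_val = sum(cj * xj for cj, xj in zip(c, x_star)) # 4/3
assert opt_val == dual_val   # strong duality: this y is in fact dual-optimal
```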

Complementary Slackness

Let \(\bar x\) and \(\bar y\) be feasible primal/dual solutions.

\(\bar x\) and \(\bar y\) fulfill the complementary slackness conditions if

  • \(\sum_{i=1}^ma_{i,j}\bar y_i=c_j\) for each \(j\) s.t. \(\bar x_j>0\).
  • \(\sum_{j=1}^na_{i,j}\bar x_j=b_i\) for each \(i\) s.t. \(\bar y_i>0\).

Corollary: \(\bar x\) and \(\bar y\) fulfill the complementary slackness conditions iff they are optimal solutions to their respective linear programs.

The proof is similar to that of weak duality: just replace \(\geq\) by \(=\).
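The slackness conditions are easy to verify numerically on a small illustrative primal/dual pair; the helper name `slack_ok` is an assumption:

```python
from fractions import Fraction as F

# Primal: min x1 + x2  s.t.  x1 + 2 x2 >= 2,  2 x1 + x2 >= 2,  x >= 0
A = [[F(1), F(2)], [F(2), F(1)]]
b = [F(2), F(2)]
c = [F(1), F(1)]
x = [F(2, 3), F(2, 3)]   # optimal primal solution
y = [F(1, 3), F(1, 3)]   # optimal dual solution

def slack_ok(A, b, c, x, y):
    """Check both complementary slackness conditions."""
    n, m = len(c), len(b)
    # Dual constraint tight for every j with x_j > 0.
    cols = all(sum(A[i][j] * y[i] for i in range(m)) == c[j]
               for j in range(n) if x[j] > 0)
    # Primal constraint tight for every i with y_i > 0.
    rows = all(sum(A[i][j] * x[j] for j in range(n)) == b[i]
               for i in range(m) if y[i] > 0)
    return cols and rows

print(slack_ok(A, b, c, x, y))                  # True: optimal pair
print(slack_ok(A, b, c, [F(2), F(0)], y))       # False: (2, 0) is suboptimal
```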