Stanford AA222 / CS361 Engineering Design Optimization: Linear Constrained Optimization
15 Jun 2024
Linear Constrained Optimization Problems
- Linear constrained optimization problems involve a linear objective and linear constraints.
- Linear constrained optimization problems are also known as linear programs (LPs).
- Linear programs can be solved efficiently to global optimality, even with millions of variables and constraints.
- The standard form of a linear program minimizes the objective c^T x subject to the constraints Ax ≤ b, with x ≥ 0.
- The feasible set of a linear constrained optimization problem is convex, which means that any line segment between two points in the feasible set lies entirely within the feasible set.
- If a linear program has a unique optimal solution, that solution is guaranteed to lie at a vertex of the feasible set.
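The vertex property can be checked directly on a tiny example. The sketch below (pure Python, with made-up numbers) enumerates every intersection of two constraint boundaries of a 2-D LP, keeps the feasible intersections, and takes the best one, which lies at a vertex:

```python
from itertools import combinations

# maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  2*x1 + x2 <= 6,  x >= 0
A = [[1, 1], [2, 1], [-1, 0], [0, -1]]   # rows a_i with a_i . x <= b_i
b = [4, 6, 0, 0]
c = [3, 2]

def solve2(a1, b1, a2, b2):
    """Solve the 2x2 system a1 . x = b1, a2 . x = b2 by Cramer's rule."""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None                       # parallel boundaries, no vertex
    return ((b1 * a2[1] - b2 * a1[1]) / det,
            (a1[0] * b2 - a2[0] * b1) / det)

# Every vertex is an intersection of two active constraint boundaries.
vertices = []
for i, j in combinations(range(len(A)), 2):
    x = solve2(A[i], b[i], A[j], b[j])
    if x and all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + 1e-9
                 for k in range(len(A))):
        vertices.append(x)

best = max(vertices, key=lambda x: c[0] * x[0] + c[1] * x[1])
print(best)  # (2.0, 2.0): the optimum is attained at a vertex
```

Brute-force vertex enumeration is exponential in general; the point here is only to visualize the claim, and the Simplex algorithm below visits vertices far more selectively.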
First-Order Necessary Conditions for Optimality
- The first-order necessary conditions for optimality of a linear program in equality form are:
- Feasibility: satisfy the constraints (Ax = b and x ≥ 0).
- Dual feasibility: the dual variables μ must be non-negative (μ ≥ 0) to enforce the correct direction of the penalty.
- Complementary slackness: the element-wise product of μ and x must be zero (μ_i x_i = 0 for all i).
- Stationarity: the objective gradient balances the constraint gradients, c = A^T λ + μ, which defines μ given the equality-constraint duals λ.
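These conditions can be checked numerically for a candidate point. The sketch below assumes the equality-form LP min c^T x subject to Ax = b, x ≥ 0; `check_first_order` is a hypothetical helper, not code from the course:

```python
import numpy as np

def check_first_order(A, b, c, x, lam, tol=1e-8):
    """Return True if (x, lam) satisfies the first-order necessary
    conditions: feasibility, dual feasibility, complementary slackness."""
    mu = c - A.T @ lam                      # stationarity: c = A^T lam + mu
    feasible = np.allclose(A @ x, b, atol=tol) and np.all(x >= -tol)
    dual_feasible = np.all(mu >= -tol)      # mu >= 0
    comp_slack = np.allclose(mu * x, 0.0, atol=tol)  # mu_i * x_i = 0
    return feasible and dual_feasible and comp_slack

# Made-up example: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
# The optimum is x = (1, 0) with dual lam = 1, giving mu = (0, 1).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(check_first_order(A, b, c, np.array([1.0, 0.0]), np.array([1.0])))  # True
```

Note that the feasible point x = (0, 1) fails the check: it satisfies Ax = b and x ≥ 0 but violates complementary slackness, since μ_2 x_2 = 1 ≠ 0.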
Simplex Algorithm
- The Simplex algorithm assumes that all variables are non-negative (x ≥ 0) and works with problems in equality form (Ax = b).
- The Simplex algorithm is guaranteed to solve any feasible, bounded linear program.
- Phase one of the Simplex algorithm finds an initial feasible partition (a starting vertex), while phase two moves between vertices to find the optimal vertex.
- The Simplex algorithm relies on the first-order necessary conditions to decide which indices to swap and to check whether all of the first-order necessary conditions are satisfied.
- The Simplex algorithm can efficiently find the global optimum of linear programs with very high-dimensional variables.
Phase 2: Finding the Optimal Vertex
- In phase two, an entering index and a leaving index are chosen and swapped between the basic and free variables while maintaining the equation Ax' = b.
- The choice of entering and leaving indices is based on the first-order necessary conditions, specifically the stationarity condition and the non-negativity condition.
- The objective function decreases when the reduced cost μ_q of the entering variable is negative.
- The goal is to find negative components of μ_V, the reduced costs of the free variables, to improve the objective.
- If all components of μ_V are non-negative, an optimal solution has been reached.
- A greedy heuristic chooses the entering index q that maximally reduces the objective.
- The minimum ratio test is used to find the leaving index p.
- Alternatively, two nested loops can search for the (p, q) pair with the maximum decrease in the objective.
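The pivot rules above can be sketched as follows, assuming the equality-form LP min c^T x subject to Ax = b, x ≥ 0, with B a list of basic column indices; `pivot_indices` is a hypothetical helper, not code from the course:

```python
import numpy as np

def pivot_indices(A, b, c, B):
    """One phase-2 pivot choice: returns (q, p), the entering index q
    (greedy: most negative reduced cost) and the leaving index p
    (minimum ratio test), or (None, None) if the vertex is optimal."""
    n = A.shape[1]
    V = [j for j in range(n) if j not in B]   # free (nonbasic) indices
    AB = A[:, B]
    lam = np.linalg.solve(AB.T, c[B])         # duals from the basic columns
    muV = c[V] - A[:, V].T @ lam              # reduced costs of free vars
    if np.all(muV >= -1e-12):
        return None, None                     # all conditions hold: optimal
    q = V[int(np.argmin(muV))]                # greedy entering index
    d = np.linalg.solve(AB, A[:, q])          # change in basic vars per unit x_q
    xB = np.linalg.solve(AB, b)
    ratios = [xB[i] / d[i] if d[i] > 1e-12 else np.inf
              for i in range(len(B))]
    p = B[int(np.argmin(ratios))]             # minimum ratio test: first
    return q, p                               # basic variable driven to zero

# Made-up example: min -x1 - 2*x2 with slack columns, starting from the
# all-slack basis B = [2, 3]; the most negative reduced cost is mu_1 = -2.
A = np.array([[1.0, 1.0, 1.0, 0.0], [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(pivot_indices(A, b, c, [2, 3]))  # (1, 2): x2 enters, first slack leaves
```

Choosing q greedily per step is cheap but not globally optimal; the nested-loop variant mentioned above instead tries every (p, q) pair and keeps the one with the largest objective decrease, at higher cost per iteration.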
Phase 1: Finding a Feasible Starting Vertex
- Phase 1 involves solving an auxiliary linear program whose goal is to minimize the added z variables.
- Z is a diagonal matrix with +1 on the diagonal where b_i ≥ 0 and -1 otherwise, so that x = 0, z = |b| is a feasible starting vertex.
- The auxiliary linear program is used to find an initial partition for the original problem.
- The z variables are kept in the final form of the problem because their indices might still be in the basis returned by the initial solution to the auxiliary linear program.
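One way to construct this auxiliary problem, min 1^T z subject to Ax + Zz = b with x, z ≥ 0, is sketched below; `auxiliary_lp` is a hypothetical helper and the numbers in the check are made up:

```python
import numpy as np

def auxiliary_lp(A, b):
    """Build the phase-1 auxiliary LP: minimize 1^T z subject to
    A x + Z z = b, x >= 0, z >= 0, where Z is diagonal with +1 where
    b_i >= 0 and -1 otherwise, so x = 0, z = |b| is a starting vertex."""
    m, n = A.shape
    Z = np.diag(np.where(b >= 0, 1.0, -1.0))
    A_aux = np.hstack([A, Z])                     # [A | Z]
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])  # objective 1^T z
    basis = list(range(n, n + m))                 # z indices start in the basis
    return A_aux, c_aux, basis

# One-row example with a negative b_i, so Z picks -1 on the diagonal.
A = np.array([[1.0, -1.0]])
b = np.array([-2.0])
A_aux, c_aux, basis = auxiliary_lp(A, b)
x0 = np.concatenate([np.zeros(2), np.abs(b)])     # x = 0, z = |b|
print(np.allclose(A_aux @ x0, b))                 # True: a feasible vertex
```

If the auxiliary optimum drives all z to zero, the returned basis gives a feasible starting vertex for phase 2 on the original problem; as noted above, the z columns are retained in case any of them remain in that basis.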
Certificates and Conclusion