Stanford EE364A Convex Optimization | Stephen Boyd | 2023 | Lecture 18
27 Mar 2024
Convex Optimization
- Convex problems can be solved efficiently and reliably; advantages include finding global (not merely local) solutions and the potential for embedding solvers in control systems.
- Many applications can be formulated as convex problems, but most real-world problems are not truly convex.
- The same convex problem structures recur across different fields, and such problems can be specified and solved efficiently using tools like CVXPY (see the short sketch after this list).
- Understanding the dual of a convex problem can provide insights and practical benefits.
- Advanced techniques like repeatedly forming and solving convex approximations can be used to solve complex problems.
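A minimal CVXPY sketch of specifying and solving a convex problem, and of reading off a dual variable; the data, sizes, and constraint bound here are made up for illustration.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data for a small constrained least-squares problem.
np.random.seed(0)
A = np.random.randn(20, 10)
b = np.random.randn(20)

x = cp.Variable(10)
budget = cp.norm(x, 1) <= 2.0          # an assumed constraint for illustration
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [budget])
prob.solve()

print("optimal value:", prob.value)
# The dual variable gives the sensitivity of the optimal value to
# loosening the constraint bound -- one practical payoff of duality.
print("dual variable:", budget.dual_value)
```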
Sparse Signal Reconstruction
- Sparse signal reconstruction involves finding a sparse representation of a signal from noisy measurements.
- L1-regularized reconstruction, known in statistics as the LASSO, is an effective heuristic for sparse signal reconstruction.
- Total variation denoising applies sparse reconstruction ideas to noisy signals and images.
- Total variation regularization penalizes the L1 norm of the differences between adjacent values, which reduces noise while preserving sharp edges (see the sketch after this list).
- Iteratively reweighted L1 regularization can improve on plain total variation regularization.
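As a concrete illustration of total variation denoising, here is a CVXPY sketch on a 1-D signal; the signal shape, noise level, and regularization weight are assumptions for the example.

```python
import cvxpy as cp
import numpy as np

# Hypothetical noisy piecewise-constant signal (assumed for illustration).
np.random.seed(1)
n = 200
x_true = np.zeros(n)
x_true[50:120] = 1.0
x_true[120:] = -0.5
y = x_true + 0.1 * np.random.randn(n)

lam = 1.0                      # regularization weight; tuned in practice
x = cp.Variable(n)
tv = cp.norm(cp.diff(x), 1)    # total variation: sum of |x[i+1] - x[i]|
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y) + lam * tv))
prob.solve()

# x.value is the denoised signal: noise is smoothed away while the
# sharp jumps in x_true are preserved.
```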
Cardinality Constraint
- Cardinality constraints (limits on the number of nonzero entries) arise in many settings, such as piecewise-constant or piecewise-linear fitting.
- The L1 norm heuristic is commonly used as a convex relaxation for cardinality constrained problems.
- Polishing is a technique where the L1 heuristic is used to guess the sparsity pattern of the solution, and the problem is then re-solved with the remaining entries fixed to zero (see the sketch after this list).
- Unlike a quadratic penalty, the L1 penalty keeps up the pressure to shrink coefficients all the way to zero, which explains its effectiveness as a sparsifying regularizer.
- There are other sparsifying penalties; for example, the reverse Huber ("berhu") penalty combines an L1 penalty near zero with a quadratic penalty for large values, the opposite arrangement of the Huber penalty.
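A sketch of the L1 heuristic followed by polishing, with made-up data, weight, and threshold; the pattern is: solve the convex relaxation, guess the support, then re-solve over that support.

```python
import cvxpy as cp
import numpy as np

# Hypothetical sparse regression instance (assumed for illustration).
np.random.seed(2)
A = np.random.randn(60, 100)
x_true = np.zeros(100)
x_true[:5] = np.random.randn(5)
b = A @ x_true + 0.01 * np.random.randn(60)

# Step 1: L1 heuristic -- trade off fit against the sparsifying L1 penalty.
lam = 0.5                               # assumed weight; swept in practice
x = cp.Variable(100)
cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm(x, 1))).solve()

# Step 2: guess the sparsity pattern from the L1 solution.
zero_idx = np.flatnonzero(np.abs(x.value) <= 1e-6)

# Step 3: polish -- re-solve the fit with the other entries fixed to zero.
x_pol = cp.Variable(100)
cp.Problem(cp.Minimize(cp.sum_squares(A @ x_pol - b)),
           [x_pol[zero_idx] == 0]).solve()
```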
Other Topics
- The L1 heuristic can find sparse solutions of systems of linear inequalities, though it does not always find the minimum-cardinality solution.
- Time-varying AR models can be fit using a sparsity-inducing regularizer on the differences between successive coefficient vectors, so the coefficients change only at a few times.
- The nuclear norm, the sum of the singular values and the dual of the spectral norm, can be used as a low-rank-inducing regularizer, the matrix analogue of the L1 norm (see the matrix-completion sketch at the end of this list).
- Sparse Gaussian graphical models can be estimated by maximizing the log-likelihood minus an L1 penalty on the entries of the precision matrix (see the sketch at the end of this list).
- Subgradient methods and stochastic subgradient methods are covered in later courses such as EE364B.
- Distributed convex optimization allows multiple parties to collaborate to solve a problem without sharing their data.
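To make the nuclear-norm bullet concrete, here is a matrix-completion sketch in CVXPY; the rank, sizes, and observation pattern are assumptions for the example.

```python
import cvxpy as cp
import numpy as np

# Hypothetical low-rank matrix completion instance (assumed for illustration).
np.random.seed(3)
m, n, r = 20, 20, 2
M = np.random.randn(m, r) @ np.random.randn(r, n)   # rank-2 ground truth
W = (np.random.rand(m, n) < 0.5).astype(float)      # mask of observed entries

# Minimize the nuclear norm (sum of singular values) subject to
# agreeing with M on the observed entries.
X = cp.Variable((m, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(W, X) == W * M])
prob.solve()
```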
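And a sketch of the sparse precision-matrix estimate: maximize the Gaussian log-likelihood minus an elementwise L1 penalty on the precision matrix. The data and the penalty weight are assumptions for the example.

```python
import cvxpy as cp
import numpy as np

# Hypothetical samples (assumed for illustration); in practice these would
# come from a model whose precision matrix is believed to be sparse.
np.random.seed(4)
n, N = 10, 200
samples = np.random.randn(N, n)
S = np.cov(samples, rowvar=False)            # empirical covariance

lam = 0.1                                    # assumed penalty weight
Theta = cp.Variable((n, n), symmetric=True)  # precision matrix estimate
loglik = cp.log_det(Theta) - cp.trace(S @ Theta)
prob = cp.Problem(cp.Maximize(loglik - lam * cp.sum(cp.abs(Theta))))
prob.solve()

# Entries of Theta.value near zero correspond to pairs of variables that
# are conditionally independent in the fitted Gaussian model.
```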