Convex Optimization II

Stanford University

This course is a continuation of Convex Optimization I, delving deeper into advanced techniques for tackling complex optimization problems. Topics covered include:

  • Subgradient methods, including subgradient calculus and convergence analysis
  • Cutting-plane and ellipsoid methods for optimization
  • Decentralized convex optimization through primal and dual decomposition
  • Alternating projections and exploiting problem structure in implementations
  • Convex relaxations for challenging problems and global optimization via branch and bound
  • Robust optimization techniques

Applications span various domains such as control systems, circuit design, signal processing, and communications, providing students with practical insights into real-world optimization challenges.

Course Lectures
  • This module introduces basic rules for subgradient calculus, essential for understanding optimization problems. It covers:

    • Course logistics and organization
    • Subgradients and the basic subgradient inequality
    • Subgradient calculus and its rules
    • Subgradients of pointwise suprema and expectations
    • Minimization techniques and sublevel sets

    Understanding these foundational concepts is critical for progressing to more advanced topics in convex optimization.
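
    As a concrete taste of the calculus rules above, here is a minimal Python sketch (not from the course materials) that evaluates a subgradient of a pointwise maximum of affine functions; the data is synthetic, and any row attaining the maximum yields a valid subgradient:

      # For f(x) = max_i (a_i^T x + b_i), any a_k attaining the max
      # is a subgradient of f at x.
      import numpy as np

      rng = np.random.default_rng(0)
      m, n = 5, 3
      A = rng.standard_normal((m, n))   # rows a_i^T
      b = rng.standard_normal(m)

      def subgradient(x):
          vals = A @ x + b
          k = np.argmax(vals)           # index of an active affine term
          return vals[k], A[k]          # f(x) and a subgradient g in the subdifferential

      x = np.zeros(n)
      fx, g = subgradient(x)
      print(fx, g)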

  • Recap: Subgradients
    Stephen Boyd

    This module recaps subgradients and their implications in optimization. It includes:

    • Optimality conditions for both unconstrained and constrained problems
    • Examples such as piecewise linear minimization
    • Directional derivatives and their relationship with subdifferentials
    • Descent directions using subgradients
    • Convergence results of the subgradient method

    By revisiting these concepts, students will solidify their understanding and prepare for more complex optimization strategies.
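
    To make the convergence results tangible, the following is a minimal sketch of the subgradient method for piecewise-linear minimization with a diminishing step size; since the method is not a descent method, the best value found so far is tracked. The problem data is synthetic:

      import numpy as np

      rng = np.random.default_rng(1)
      m, n = 20, 5
      A = rng.standard_normal((m, n))
      b = rng.standard_normal(m)

      x = np.zeros(n)
      f_best = np.inf
      for k in range(1, 3001):
          vals = A @ x + b
          i = np.argmax(vals)
          f, g = vals[i], A[i]          # function value and a subgradient
          f_best = min(f_best, f)
          x = x - (1.0 / k) * g         # diminishing step size alpha_k = 1/k
      print("best value found:", f_best)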

  • This module focuses on convergence proofs and stopping criteria in optimization methods. Key topics include:

    • Convergence proofs for various optimization methods
    • Stopping criteria development for iterative algorithms
    • Practical examples such as piecewise linear minimization
    • Finding a point in the intersection of convex sets via alternating projections
    • Speeding up subgradient methods with advanced techniques

    Understanding these components is vital for ensuring effective optimization processes and algorithm performance.
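
    A minimal sketch of alternating projections for finding a point in the intersection of two convex sets follows; the sets (a Euclidean ball and a halfspace) and the data are illustrative assumptions, not from the lecture:

      import numpy as np

      c = np.array([2.0, 0.0]); r = 1.5          # ball: ||x - c|| <= r
      a = np.array([1.0, 1.0]); beta = 1.0       # halfspace: a^T x <= beta

      def proj_ball(x):
          d = x - c
          nd = np.linalg.norm(d)
          return x if nd <= r else c + r * d / nd

      def proj_halfspace(x):
          viol = a @ x - beta
          return x if viol <= 0 else x - viol * a / (a @ a)

      x = np.array([5.0, 5.0])
      for _ in range(100):
          x = proj_halfspace(proj_ball(x))
      print(x, np.linalg.norm(x - c) <= r + 1e-6, a @ x <= beta + 1e-6)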

  • This module discusses the application of projected subgradient methods to dual problems. It includes:

    • Understanding the subgradient of the negative dual function
    • Examples of constrained optimization using subgradient methods
    • Stochastic subgradient methods and their convergence results
    • Applications of stochastic programming for real-world problems

    By exploring these concepts, students will appreciate the role of duality in optimization and its practical implications.
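
    As a rough illustration of the projected subgradient method on a dual, consider min ||x||^2 subject to Cx <= d. The Lagrangian minimizer is x(lam) = -C^T lam / 2, and C x(lam) - d is a subgradient of the dual function at lam; dual ascent projects lam onto the nonnegative orthant. Data here is synthetic:

      import numpy as np

      rng = np.random.default_rng(2)
      m, n = 4, 6
      C = rng.standard_normal((m, n))
      d = rng.standard_normal(m)

      lam = np.zeros(m)
      for k in range(1, 2001):
          x = -C.T @ lam / 2.0           # minimizes the Lagrangian over x
          g = C @ x - d                  # subgradient of the dual at lam
          lam = np.maximum(lam + (1.0 / k) * g, 0.0)   # ascend, project onto lam >= 0
      print("dual iterate:", lam, "primal iterate:", x)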

  • Stochastic Programming
    Stephen Boyd

    This module introduces stochastic programming, focusing on variations and their applications. Key topics include:

    • Expected value of convex functions and its implications
    • Online learning and adaptive signal processing techniques
    • Localization and cutting-plane methods
    • Specific cutting-plane methods for unconstrained minimization
    • Convergence of algorithms and practical applications

    Students will learn how stochastic programming can be leveraged to solve complex optimization problems in uncertain environments.
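
    A minimal sketch of the stochastic subgradient method appears below: it minimizes an expected loss E|a^T x - b| by sampling one (a, b) pair per step and stepping along a noisy subgradient. The data distribution is an illustrative assumption:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 5
      x_true = rng.standard_normal(n)

      x = np.zeros(n)
      for k in range(1, 20001):
          a = rng.standard_normal(n)                 # draw one sample
          b = a @ x_true + 0.1 * rng.standard_normal()
          g = np.sign(a @ x - b) * a                 # noisy subgradient of the expected loss
          x = x - (1.0 / k) * g                      # diminishing step size
      print("error:", np.linalg.norm(x - x_true))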

  • This addendum discusses advanced cutting-plane algorithms, including:

    • Hit-and-run algorithms and their applications
    • Maximum volume ellipsoid methods
    • Extensions of cutting-plane methods and their properties
    • Infeasible-start Newton methods
    • Stopping criteria and examples of piecewise linear minimization

    Students will gain insights into the advanced methodologies used in convex optimization to enhance problem-solving capabilities.
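
    For orientation, here is a minimal sketch of the basic cutting-plane idea these advanced variants build on: a Kelley-style method that accumulates first-order cuts and solves a small LP at each iteration (via SciPy). The objective and box are illustrative assumptions:

      import numpy as np
      from scipy.optimize import linprog

      n = 2
      x0 = np.array([0.7, -0.3])
      f  = lambda x: np.sum((x - x0) ** 2)      # f(x) = ||x - x0||^2
      gf = lambda x: 2.0 * (x - x0)             # its gradient (a subgradient)

      cuts = []                                  # list of (g_j, f(x_j) - g_j^T x_j)
      x = np.ones(n)                             # start at a box corner
      for it in range(30):
          g = gf(x)
          cuts.append((g, f(x) - g @ x))
          # LP variables (x, t): minimize t s.t. g_j^T x - t <= -c_j for each cut
          A = np.array([np.append(g_j, -1.0) for g_j, _ in cuts])
          b = np.array([-c_j for _, c_j in cuts])
          res = linprog(c=np.append(np.zeros(n), 1.0), A_ub=A, b_ub=b,
                        bounds=[(-1, 1)] * n + [(None, None)])
          x = res.x[:n]
      print("x:", x, "f(x):", f(x))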

  • This module presents an example of piecewise linear minimization, illustrating key concepts in convex optimization. It includes:

    • Application of the analytic center cutting-plane method (ACCPM) with constraint dropping
    • Motivation for using the ellipsoid method
    • Properties and examples of the ellipsoid algorithm
    • Understanding stopping criteria and updating the ellipsoid

    Through this example, students will appreciate the practical aspects of optimization techniques and their applications.
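
    The ellipsoid update mentioned above can be sketched in a few lines: each step cuts through the center with a subgradient and updates (c, P) describing E = {x : (x - c)^T P^{-1} (x - c) <= 1}. The objective below is an illustrative smooth convex function:

      import numpy as np

      n = 2
      x_star = np.array([0.3, -0.5])
      f  = lambda x: np.sum((x - x_star) ** 2)
      gf = lambda x: 2.0 * (x - x_star)

      c = np.zeros(n)
      P = 4.0 * np.eye(n)               # initial ball of radius 2 contains x_star
      f_best = np.inf
      for _ in range(100):
          g = gf(c)
          f_best = min(f_best, f(c))
          gt = g / np.sqrt(g @ P @ g)   # normalized subgradient
          c = c - (P @ gt) / (n + 1)
          P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, P @ gt))
      print("best value:", f_best)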

  • This module reviews the ellipsoid method, focusing on its improvements and applications in optimization. Key topics include:

    • Recap of convergence proofs and complexity analysis
    • Deep-cut ellipsoid methods and their implications
    • Handling inequality constrained problems
    • Summary of methods for nondifferentiable convex optimization
    • Decomposition methods and their applications

    Students will learn about significant advancements in the ellipsoid method and its role in solving complex optimization problems.

  • This module discusses primal and dual decomposition methods, emphasizing their structures and applications. Topics include:

    • Finding feasible iterates in decomposition
    • General decomposition structures and their interpretations
    • Examples involving primal and dual decomposition with constraints
    • Pictorial representations of decomposition processes

    Students will gain insights into the practical implementation of decomposition techniques in convex optimization.
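
    A minimal sketch of dual decomposition follows, for two subproblems coupled only by a shared resource constraint: min ||x1 - a1||^2 + ||x2 - a2||^2 subject to A1 x1 + A2 x2 <= b. Each Lagrangian subproblem splits and has a closed-form solution; the dual variable is updated by projected subgradient. Data is synthetic:

      import numpy as np

      rng = np.random.default_rng(4)
      m, n = 3, 4
      A1, A2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
      a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
      b = rng.standard_normal(m)

      lam = np.zeros(m)
      for k in range(1, 2001):
          x1 = a1 - A1.T @ lam / 2.0        # solve subproblem 1 independently
          x2 = a2 - A2.T @ lam / 2.0        # solve subproblem 2 independently
          g = A1 @ x1 + A2 @ x2 - b         # subgradient of the dual (master) problem
          lam = np.maximum(lam + (1.0 / k) * g, 0.0)
      print("constraint violation:", np.maximum(A1 @ x1 + A2 @ x2 - b, 0.0))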

  • This module dives into decomposition applications, particularly in rate control and network flow problems. It includes:

    • Rate control setups and problems
    • Utility functions and their role in dual decomposition
    • Generating feasible flows and convergence analysis
    • Network flow problems and their dual formulations
    • Examples such as minimum queueing delay and optimal flow solutions

    Students will learn how to apply decomposition methods effectively in practical scenarios, enhancing their problem-solving skills.
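
    As a sketch of the rate control setup, consider network utility maximization: maximize sum_j log f_j subject to R f <= c, where R is a link-route incidence matrix. Given link prices lam, each flow sets its rate from local price information only; prices are updated by projected subgradient. The small network, the per-flow rate cap f_max, and the step sizes are illustrative assumptions:

      import numpy as np

      R = np.array([[1, 1, 0],      # link 1 carries flows 1, 2
                    [1, 0, 1],      # link 2 carries flows 1, 3
                    [0, 1, 1]])     # link 3 carries flows 2, 3
      c = np.array([1.0, 2.0, 2.0]) # link capacities

      lam = np.ones(3)
      f_max = 2.0                   # explicit per-flow rate cap
      for k in range(1, 5001):
          # each flow solves max log f - price * f over 0 < f <= f_max
          f = np.minimum(1.0 / np.maximum(R.T @ lam, 1e-12), f_max)
          g = R @ f - c             # link capacity violations
          lam = np.maximum(lam + (0.1 / np.sqrt(k)) * g, 0.0)
      print("rates:", f, "link usage:", R @ f)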

  • This module covers sequential convex programming (SCP) and its methods for addressing nonconvex optimization problems. Key topics include:

    • Basic principles of SCP and its applications
    • Trust region and affine approximations
    • Examples of nonconvex quadratic programming
    • Trajectories in nonlinear optimal control
    • Typical progress and convergence results for SCP

    Students will understand how SCP can be utilized to effectively tackle nonconvex challenges in optimization.
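
    Below is a minimal sketch of SCP with an affine approximation and an l-infinity trust region: each step linearizes the nonconvex objective at the current point and solves the resulting LP (here trivially, coordinate by coordinate); a simple rule shrinks the trust region when the true objective does not improve. The indefinite quadratic objective is an illustrative example:

      import numpy as np

      rng = np.random.default_rng(5)
      n = 4
      Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # indefinite, so f is nonconvex
      f  = lambda x: x @ Q @ x
      gf = lambda x: 2.0 * Q @ x

      x, rho = np.ones(n), 0.5
      for _ in range(100):
          g = gf(x)
          # minimize g^T x over the box [-1,1]^n intersected with the trust region
          x_new = np.clip(x - rho * np.sign(g), -1.0, 1.0)
          if f(x_new) < f(x):        # accept step, keep trust region
              x = x_new
          else:                      # reject step, shrink trust region
              rho *= 0.5
      print("x:", x, "f(x):", f(x))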

  • This module recaps the difference-of-convex (DC) programming approach, focusing on its applications and methodologies. It includes:

    • Alternating convex optimization techniques
    • Nonnegative matrix factorization and its significance
    • Conjugate gradient methods and their applications
    • Properties of Krylov sequences in linear equations
    • Methods for solving symmetric positive definite linear systems

    Students will gain a deeper understanding of how these methods can be applied in practical optimization contexts.
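
    A minimal sketch of alternating convex optimization for nonnegative matrix factorization appears below: fixing W, each column of H solves a nonnegative least-squares problem (via SciPy), and then the roles swap. Each subproblem is convex even though the joint problem is not; the data is synthetic:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(6)
      m, n, r = 10, 8, 3
      X = rng.random((m, r)) @ rng.random((r, n))     # exactly rank-r nonnegative data

      W = rng.random((m, r))
      H = rng.random((r, n))
      for _ in range(50):
          # fix W, solve a nonnegative least-squares problem per column of X
          H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
          # fix H, solve per row of X
          W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
      print("residual:", np.linalg.norm(X - W @ H))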

  • This module further explores the conjugate gradient method, detailing its efficiency and applications. Key components include:

    • Recap of Krylov subspace and its properties
    • Convergence rates and efficiency of the CG algorithm
    • Preconditioned conjugate gradient methods
    • Applications in symmetric positive definite linear systems
    • Extensions and practical implementations of the method

    Students will learn how to harness the power of the conjugate gradient method in various optimization scenarios.
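
    For reference, here is a minimal implementation of the conjugate gradient method for Ax = b with A symmetric positive definite, following the standard recurrences; the system is randomly generated:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 50
      M = rng.standard_normal((n, n))
      A = M @ M.T + n * np.eye(n)        # symmetric positive definite
      b = rng.standard_normal(n)

      x = np.zeros(n)
      r = b - A @ x                      # residual
      p = r.copy()                       # search direction
      rs = r @ r
      for _ in range(n):
          Ap = A @ p
          alpha = rs / (p @ Ap)
          x += alpha * p
          r -= alpha * Ap
          rs_new = r @ r
          if np.sqrt(rs_new) < 1e-10:    # stopping criterion on residual norm
              break
          p = r + (rs_new / rs) * p      # new direction with beta = rs_new / rs
          rs = rs_new
      print("residual norm:", np.linalg.norm(b - A @ x))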

  • This module discusses truncated Newton methods and their applications in optimization. Topics covered include:

    • Convergence rates versus iterations in optimization
    • Truncated Newton interior-point methods
    • Applications in network rate control problems
    • L_1-norm methods for convex-cardinality problems
    • Examples of sparse modeling and regressor selection

    Students will understand how truncated Newton methods can enhance optimization techniques in practical problems.
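
    The truncated Newton idea can be sketched as follows: the Newton system H d = -g is solved only approximately, with a few CG iterations, followed by a backtracking line search. The regularized log-sum-exp objective is an illustrative smooth convex function, not from the lecture:

      import numpy as np

      rng = np.random.default_rng(8)
      m, n = 30, 10
      A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
      mu = 0.1

      def f(x):
          return np.log(np.sum(np.exp(A @ x + b))) + 0.5 * mu * (x @ x)

      def grad_hess(x):
          z = np.exp(A @ x + b); p = z / z.sum()
          g = A.T @ p + mu * x
          H = A.T @ (np.diag(p) - np.outer(p, p)) @ A + mu * np.eye(n)
          return g, H

      def cg(H, r, iters):               # truncated CG: solve H d = r approximately
          d, p_dir, rs = np.zeros(n), r.copy(), r @ r
          for _ in range(iters):
              Hp = H @ p_dir
              a = rs / (p_dir @ Hp)
              d += a * p_dir; r = r - a * Hp
              rs_new = r @ r
              p_dir = r + (rs_new / rs) * p_dir; rs = rs_new
          return d

      x = np.zeros(n)
      for _ in range(20):
          g, H = grad_hess(x)
          d = cg(H, -g, iters=5)         # truncated: only 5 CG steps
          t = 1.0
          while f(x + t * d) > f(x) + 0.3 * t * (g @ d):   # backtracking line search
              t *= 0.5
          x = x + t * d
      print("f(x):", f(x), "||grad||:", np.linalg.norm(grad_hess(x)[0]))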

  • This module recaps the minimum cardinality problem, illustrating its significance in convex optimization. It covers:

    • Interpretation as a convex relaxation problem
    • Weighted and asymmetric L_1 heuristics for optimization
    • Applications in sparse signal reconstruction and regressor selection
    • Time series analysis and detecting changes in models
    • Extensions to matrices and factor modeling

    By examining these concepts, students will appreciate the practical implications of cardinality problems in optimization.
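
    As a sketch of the weighted L_1 heuristic, the following finds a sparse x with Ax = b by iteratively reweighted L_1 minimization, with weights roughly 1/(|x_i| + eps). It uses CVXPY; the data and the number of reweighting rounds are illustrative:

      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(9)
      m, n, s = 20, 50, 5
      A = rng.standard_normal((m, n))
      x_true = np.zeros(n)
      x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
      b = A @ x_true

      w = np.ones(n)
      for _ in range(5):
          x = cp.Variable(n)
          cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)), [A @ x == b]).solve()
          w = 1.0 / (np.abs(x.value) + 1e-4)       # reweight toward sparsity
      print("nonzeros found:", np.sum(np.abs(x.value) > 1e-5))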

  • This module explores model predictive control (MPC) and its applications in optimization. Key topics include:

    • Convex optimal control for linear time-invariant systems
    • Greedy control strategies and their implementations
    • Dynamic programming solutions for optimization
    • MPC performance as a function of horizon length
    • Variations on MPC and their implications in real-world scenarios

    Students will learn how MPC can be effectively applied in various control systems, enhancing their understanding of optimization techniques.
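
    A minimal MPC sketch, using CVXPY: at each step, solve a finite-horizon convex optimal control problem with input limits, apply only the first input, then re-solve from the new state. The double-integrator dynamics, horizon, and costs are illustrative assumptions:

      import cvxpy as cp
      import numpy as np

      A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator
      B = np.array([[0.0], [1.0]])
      T = 10                                    # planning horizon
      x_now = np.array([5.0, 0.0])

      for step in range(15):
          x = cp.Variable((2, T + 1))
          u = cp.Variable((1, T))
          cost = cp.sum_squares(x) + cp.sum_squares(u)
          cons = [x[:, 0] == x_now, cp.abs(u) <= 1]
          for t in range(T):
              cons.append(x[:, t + 1] == A @ x[:, t] + B @ u[:, t])
          cp.Problem(cp.Minimize(cost), cons).solve()
          x_now = A @ x_now + B @ u.value[:, 0]     # apply only the first input
      print("state after MPC run:", x_now)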

  • This module covers stochastic model predictive control and its importance in optimization. Key components include:

    • Causal state-feedback control strategies
    • Stochastic finite horizon control methods
    • Dynamic programming solutions in stochastic contexts
    • Branch and bound methods for nonconvex optimization
    • Practical examples illustrating the application of stochastic MPC

    Students will gain insights into how stochastic methods can enhance the robustness and performance of predictive control systems.

  • This final module recaps branch and bound methods, emphasizing their significance in optimization. Key topics include:

    • Basic ideas behind branch and bound algorithms
    • Convergence analysis and bounding conditions
    • Applications in mixed Boolean-convex problems
    • Global lower and upper bounds in optimization
    • Practical examples and algorithm progress

    Students will understand how branch and bound methods can be effectively applied to solve complex optimization challenges.
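
    To ground the basic ideas, here is a minimal branch and bound sketch for a Boolean least-squares problem, min ||Ax - b||^2 over x in {0,1}^n: the lower bound at each node is the box relaxation (x_i in [0,1]) solved with CVXPY, rounding the relaxed solution gives an upper bound, and branching fixes the most fractional coordinate. Data is synthetic:

      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(10)
      m, n = 8, 6
      A = rng.standard_normal((m, n))
      b = A @ (rng.random(n) > 0.5) + 0.1 * rng.standard_normal(m)

      best = [np.inf, None]   # global upper bound and incumbent

      def bb(fixed):          # fixed: dict mapping coordinate -> 0/1 value
          x = cp.Variable(n)
          cons = [x >= 0, x <= 1] + [x[i] == v for i, v in fixed.items()]
          lb = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), cons).solve()
          if lb >= best[0]:                       # prune: bound exceeds incumbent
              return
          xr = np.clip(x.value, 0, 1)
          x_round = np.round(xr)                  # rounding gives an upper bound
          ub = np.sum((A @ x_round - b) ** 2)
          if ub < best[0]:
              best[0], best[1] = ub, x_round
          free = [i for i in range(n) if i not in fixed]
          if not free:
              return
          i = max(free, key=lambda j: min(xr[j], 1 - xr[j]))  # most fractional coordinate
          for v in (0.0, 1.0):                    # branch on x_i = 0 and x_i = 1
              bb({**fixed, i: v})

      bb({})
      print("best value:", best[0], "x:", best[1])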