Lecture: Convergence Proof, Stopping Criterion

This module focuses on convergence proofs and stopping criteria in optimization, providing a rigorous framework for evaluating the efficiency of optimization algorithms. Key topics include:

  • Proof of convergence for subgradient methods.
  • Establishing stopping criteria based on convergence results.
  • Techniques for locating points within the intersection of convex sets using alternating projections (sketched below).
  • Examples highlighting the application of subgradient methods in constrained environments, such as least L₁-norm problems.
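
As an illustration of the alternating projections item above, here is a minimal sketch for finding a point in the intersection of an affine set and the nonnegative orthant. The sets, data, and iteration count are illustrative assumptions, not the lecture's example.

```python
import numpy as np

# Alternating projections: find a point in C ∩ D, where
#   C = {x : A x = b}   (affine set)
#   D = {x : x >= 0}    (nonnegative orthant)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))
b = A @ rng.uniform(0.1, 1.0, 10)    # b built so the intersection is nonempty

AAt_inv = np.linalg.inv(A @ A.T)     # small system: factor once

def proj_affine(x):
    # Euclidean projection onto {x : A x = b}
    return x - A.T @ (AAt_inv @ (A @ x - b))

def proj_orthant(x):
    # Euclidean projection onto {x : x >= 0}
    return np.maximum(x, 0.0)

x = rng.standard_normal(10)
for k in range(500):
    x = proj_orthant(proj_affine(x))

print("||Ax - b|| =", np.linalg.norm(A @ x - b), " min(x) =", x.min())
```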

Course Lectures
  • This module introduces course logistics and organization, then covers the foundational aspects of subgradient calculus. Key concepts include:

    • Understanding subgradients and subdifferentials.
    • Application of basic inequalities and rules in subgradient calculus.
    • Exploring pointwise supremum and weak rules.
    • Minimization and composition related to subgradients.

    Students will also learn about quasigradients and their applications in optimization problems.
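
    As a small illustration of the pointwise-supremum rule above, the sketch below computes a subgradient of f(x) = maxᵢ (aᵢᵀx + bᵢ); all data are made-up values.

    ```python
    import numpy as np

    # For f(x) = max_i (a_i^T x + b_i), the pointwise-supremum rule says
    # a_j is a subgradient at x for any index j attaining the maximum.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3))      # rows are the a_i
    b = rng.standard_normal(5)

    def f_and_subgrad(x):
        vals = A @ x + b
        j = int(np.argmax(vals))         # any maximizing index works
        return vals[j], A[j]

    x = np.array([0.5, -1.0, 2.0])
    fx, g = f_and_subgrad(x)

    # Sanity check of the subgradient inequality f(y) >= f(x) + g^T (y - x):
    y = rng.standard_normal(3)
    assert np.max(A @ y + b) >= fx + g @ (y - x) - 1e-12
    print("f(x) =", fx, " subgradient:", g)
    ```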

  • Recap: Subgradients

    This module revisits the concept of subgradients, building on the foundational knowledge established previously. It covers:

    • Optimality conditions for both unconstrained and constrained problems.
    • Application of the subgradient method on various optimization problems, including piecewise linear minimization.
    • Directional derivatives and their relationship with the subdifferential.
    • Step size rules and convergence results for the subgradient method.

    Students will also examine examples illustrating the application of these methods.
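
    A hedged sketch of the subgradient method on piecewise linear minimization, using the diminishing step size rule αₖ = 1/k and tracking the best value seen (the method is not a descent method); problem data and iteration counts are arbitrary choices, not the lecture's.

    ```python
    import numpy as np

    # Subgradient method for piecewise linear minimization:
    #   minimize f(x) = max_i (a_i^T x + b_i)
    rng = np.random.default_rng(2)
    m, n = 20, 5
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    x = np.zeros(n)
    f_best = np.inf
    for k in range(1, 2001):
        vals = A @ x + b
        j = int(np.argmax(vals))        # active piece gives a subgradient A[j]
        f_best = min(f_best, vals[j])   # keep the best value seen so far
        x = x - (1.0 / k) * A[j]        # step along a negative subgradient

    print("best value found:", f_best)
    ```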

  • This module focuses on convergence proofs and stopping criteria in optimization, providing a rigorous framework for evaluating the efficiency of optimization algorithms. Key topics include:

    • Proof of convergence for subgradient methods.
    • Establishing stopping criteria based on convergence results.
    • Techniques for locating points within the intersection of convex sets using alternating projections.
    • Examples highlighting the application of subgradient methods in constrained environments, such as least L₁-norm problems (see the sketch below).
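
    A minimal sketch of the projected subgradient method for the least L₁-norm problem mentioned above: minimize ‖x‖₁ subject to Ax = b. The projection formula onto the affine set is standard; the data are illustrative.

    ```python
    import numpy as np

    # Projected subgradient method for: minimize ||x||_1  s.t.  A x = b.
    # Each step moves along a negative subgradient sign(x), then projects
    # back onto the affine feasible set.
    rng = np.random.default_rng(3)
    m, n = 5, 20
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    AAt_inv = np.linalg.inv(A @ A.T)
    def proj(x):
        # Euclidean projection onto {x : A x = b}
        return x - A.T @ (AAt_inv @ (A @ x - b))

    x = proj(np.zeros(n))               # least-norm feasible starting point
    best = np.linalg.norm(x, 1)
    for k in range(1, 5001):
        g = np.sign(x)                  # a subgradient of ||.||_1 at x
        x = proj(x - (1.0 / k) * g)
        best = min(best, np.linalg.norm(x, 1))

    print("best l1 norm:", best, " feasibility:", np.linalg.norm(A @ x - b))
    ```
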
  • This module introduces the projected subgradient method specifically tailored for dual problems. Students will learn about:

    • The subgradient of the negative dual function.
    • Examples illustrating the application of subgradient methods in constrained optimization.
    • The stochastic subgradient method, including handling noisy unbiased subgradients.
    • Convergence results and proofs related to stochastic programming.

    This module emphasizes practical applications of these methods in optimization problems.
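
    To illustrate noisy unbiased subgradients, here is a sketch of the stochastic subgradient method on a piecewise linear objective: each step uses g̃ = g + v with zero-mean noise v, so E[g̃] is a true subgradient. The noise scale and step sizes are assumptions made for demonstration.

    ```python
    import numpy as np

    # Stochastic subgradient method with noisy unbiased subgradients.
    rng = np.random.default_rng(4)
    m, n = 20, 5
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    def f(x):
        return np.max(A @ x + b)

    x = np.zeros(n)
    f_best = np.inf
    for k in range(1, 5001):
        j = int(np.argmax(A @ x + b))
        g_tilde = A[j] + 0.5 * rng.standard_normal(n)  # noisy unbiased subgradient
        x = x - (1.0 / k) * g_tilde                    # diminishing step sizes
        f_best = min(f_best, f(x))

    print("best value found:", f_best)
    ```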

  • Stochastic Programming

    This module focuses on stochastic programming, examining its variations and the expected value of convex functions. Key content includes:

    • Understanding expected values through examples like piecewise linear functions.
    • Application of on-line learning and adaptive signal processing techniques.
    • Cutting-plane methods, including the use of cutting-plane oracles.
    • Exploring feasibility problems and inequality-constrained scenarios.

    Students will also delve into specific cutting-plane methods and convergence of algorithms.
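
    As one concrete cutting-plane scheme built around an oracle, here is a sketch of Kelley's method (a classical instance, not necessarily the lecture's exact variant); the box bound R and the problem data are assumptions, and the master LP is solved with scipy.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Kelley's cutting-plane method for minimize f(x) = max_i (a_i^T x + b_i)
    # over a box. The oracle returns f(x) and a subgradient; the next query
    # point minimizes the piecewise linear lower model (an LP).
    rng = np.random.default_rng(5)
    m, n = 15, 3
    A = rng.standard_normal((m, n)); bb = rng.standard_normal(m)
    R = 5.0                                  # box: -R <= x_i <= R

    def oracle(x):                           # value and subgradient of f
        v = A @ x + bb
        j = int(np.argmax(v))
        return v[j], A[j]

    cuts = []                                # pairs (g, f(xk) - g^T xk)
    x = np.zeros(n)
    f_best = np.inf
    for it in range(30):
        fx, g = oracle(x)
        f_best = min(f_best, fx)
        cuts.append((g, fx - g @ x))
        # LP in z = (x, t): minimize t  s.t.  g^T x - t <= -(f(xk) - g^T xk)
        G = np.array([np.append(gi, -1.0) for gi, _ in cuts])
        h = np.array([-ci for _, ci in cuts])
        res = linprog(c=np.append(np.zeros(n), 1.0), A_ub=G, b_ub=h,
                      bounds=[(-R, R)] * n + [(None, None)])
        x, lower = res.x[:n], res.x[-1]

    print(f"upper bound {f_best:.4f}, lower bound {lower:.4f}")
    ```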

  • This addendum explores the hit-and-run center-of-gravity (CG) cutting-plane algorithm and its applications in optimization. Topics covered include:

    • Maximum volume ellipsoid and Chebyshev center methods.
    • Analytic center cutting-plane methods and their extensions.
    • Dropping constraints and constructing cutting-planes effectively.
    • Properties of the infeasible start Newton method.

    This module emphasizes constructing and utilizing cutting-planes in optimization problems.
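
    The Chebyshev center mentioned above reduces to a small LP: maximize r subject to aᵢᵀx + r‖aᵢ‖₂ ≤ bᵢ, so that the ball of radius r around x stays in the polyhedron. A minimal sketch with an illustrative polyhedron:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Chebyshev center of {x : a_i^T x <= b_i}: center of the largest
    # inscribed ball, one candidate query point in localization methods.
    A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
    b = np.array([1.0, 1.0, 1.0, 1.0, 1.5])

    norms = np.linalg.norm(A, axis=1)
    # variables z = (x, r); maximize r == minimize -r
    res = linprog(c=[0.0, 0.0, -1.0],
                  A_ub=np.column_stack([A, norms]), b_ub=b,
                  bounds=[(None, None), (None, None), (0, None)])
    x_cheb, r = res.x[:2], res.x[2]
    print("Chebyshev center:", x_cheb, " radius:", r)
    ```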

  • Focusing on piecewise linear minimization, this module provides practical examples to reinforce learning. Key concepts include:

    • Application of the Analytic Center Cutting-Plane Method (ACCPM) with constraint dropping.
    • Using the ellipsoid algorithm for minimizing convex functions.
    • Updating ellipsoids and understanding stopping criteria.
    • Interpreting results from the basic ellipsoid algorithm.

    Students will engage in hands-on examples that illustrate these concepts.
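
    A hedged sketch of the basic ellipsoid algorithm with the standard update formulas, including the stopping criterion based on √(gᵀPg), which upper-bounds f(x) − f*; the initial ball radius and problem data are assumptions.

    ```python
    import numpy as np

    # Basic ellipsoid algorithm for minimizing a convex function with
    # neutral subgradient cuts. E = {z : (z - x)^T P^{-1} (z - x) <= 1}.
    rng = np.random.default_rng(6)
    m, n = 20, 4
    A = rng.standard_normal((m, n)); b = rng.standard_normal(m)

    def oracle(x):                    # value and subgradient of the PWL f
        v = A @ x + b
        j = int(np.argmax(v))
        return v[j], A[j]

    x = np.zeros(n)
    P = 25.0 * np.eye(n)              # initial ball of radius 5 (assumed to hold x*)
    f_best, eps = np.inf, 1e-4
    for k in range(2000):
        fx, g = oracle(x)
        f_best = min(f_best, fx)
        gnorm = np.sqrt(g @ P @ g)
        if gnorm <= eps:              # then f(x) - f* <= sqrt(g^T P g) <= eps
            break
        gt = g / gnorm                # normalize so gt^T P gt = 1
        Pg = P @ gt
        x = x - Pg / (n + 1)          # standard minimum-volume updates
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))

    print(f"stopped after {k} iterations, best value {f_best:.6f}")
    ```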

  • This recap module consolidates knowledge of the ellipsoid method, emphasizing refinements to the method and an understanding of its convergence. Topics include:

    • Proofs of convergence and complexity analysis.
    • Deep cut techniques in the ellipsoid method.
    • Applications of primal and dual decomposition in handling nondifferentiable convex optimization problems.
    • Examples illustrating dual decomposition applications.

    Students will gain a comprehensive understanding of the methods for managing complex optimization scenarios.
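
    The heart of the convergence proof recapped here is the per-step volume reduction of the ellipsoid; in its usual textbook form (constants may differ slightly from the lecture's):

    ```latex
    \frac{\operatorname{vol}(E_{k+1})}{\operatorname{vol}(E_k)} \;\le\; e^{-1/(2n)}
    \qquad\Longrightarrow\qquad
    \operatorname{vol}(E_k) \;\le\; e^{-k/(2n)}\,\operatorname{vol}(E_0).
    ```

    So if the method starts in a ball of radius R and the target set contains a ball of radius r, the localizer's volume must drop below the target's within roughly 2n² log(R/r) iterations, which is where the complexity bound comes from.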

  • This module discusses decomposition methods, their structures, and applications in optimization. Key points include:

    • Understanding primal and dual decomposition with and without constraints.
    • General decomposition structures and their applications.
    • Finding feasible iterates and interpreting results.
    • Complex examples illustrating decomposition methods.

    Students will learn about various applications of decomposition in different optimization scenarios.
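
    A toy sketch of dual decomposition on a two-block problem with one coupling constraint; the quadratic objectives are an illustrative simplification chosen so each subproblem has a closed form.

    ```python
    import numpy as np

    # Dual decomposition for a problem that splits across two blocks:
    #   minimize (1/2)||x1 - a||^2 + (1/2)||x2 - c||^2  s.t.  x1 = x2.
    # Dualizing the coupling constraint with price y makes the subproblems
    # independent; y is updated by a subgradient step on the dual.
    a = np.array([1.0, 4.0]); c = np.array([3.0, 0.0])
    y = np.zeros(2); alpha = 0.5
    for k in range(100):
        x1 = a - y                  # argmin of (1/2)||x1 - a||^2 + y^T x1
        x2 = c + y                  # argmin of (1/2)||x2 - c||^2 - y^T x2
        y = y + alpha * (x1 - x2)   # dual (price) update

    print("x1 =", x1, " x2 =", x2, " consensus target:", (a + c) / 2)
    ```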

  • This module focuses on various decomposition applications, particularly in rate control and network flow problems. Students will learn about:

    • Rate control setups and problems, including Lagrangian formulations.
    • Dual decomposition algorithms and generating feasible flows.
    • Convergence of primal and dual objectives in network problems.
    • Examples illustrating network flow setups and optimal flow strategies.

    Students will engage with practical applications to solidify their understanding.
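
    To make the rate control setup concrete, here is a toy dual decomposition sketch for log-utility maximization over two links and three flows; the topology, capacities, and step size are all assumptions.

    ```python
    import numpy as np

    # Dual decomposition for a tiny rate control (network utility
    # maximization) problem:  maximize sum_s log(x_s)  s.t.  R x <= c.
    # Each source reacts to the sum of link prices on its route:
    # x_s = 1 / q_s; links adjust prices toward capacity.
    R = np.array([[1.0, 1.0, 0.0],      # link 1 carries flows 1, 2
                  [1.0, 0.0, 1.0]])     # link 2 carries flows 1, 3
    c = np.array([1.0, 2.0])            # link capacities

    lam = np.ones(2); alpha = 0.1
    for k in range(2000):
        q = R.T @ lam                   # per-flow route prices
        x = 1.0 / q                     # each source's optimal response
        lam = np.maximum(lam + alpha * (R @ x - c), 1e-6)  # projected price update

    print("rates:", x, " link usage:", R @ x, " capacities:", c)
    ```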

  • This module delves into sequential convex programming (SCP) as a method for nonconvex optimization problems. Key topics include:

    • Basic principles of SCP and its applications.
    • Trust region methods and affine approximations.
    • Particle methods and fitting functions to data.
    • Example applications, including nonlinear optimal control and discretization.

    Students will gain insights into the progress and convergence of SCP methods.
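
    A one-dimensional toy sketch of the SCP ideas above: an affine approximation is minimized over a trust region, whose radius adapts based on the ratio of actual to predicted decrease; the test function and thresholds are illustrative.

    ```python
    # SCP sketch: affine model + trust region on a 1-D nonconvex function.
    # The convex subproblem (a linear model over an interval) has a
    # closed-form minimizer at one endpoint of the trust region.
    def f(x):  return x**4 - 3 * x**2 + x
    def df(x): return 4 * x**3 - 6 * x + 1

    x, rho = 2.0, 1.0
    for k in range(50):
        g = df(x)
        step = -rho if g > 0 else rho          # minimizer of the affine model
        predicted = -g * step                  # model decrease (>= 0)
        actual = f(x) - f(x + step)
        if predicted > 0 and actual >= 0.1 * predicted:
            x, rho = x + step, min(1.5 * rho, 1.0)   # accept, maybe expand
        else:
            rho *= 0.5                               # reject, shrink region
        if rho < 1e-8:
            break

    print(f"x = {x:.6f}, f(x) = {f(x):.6f}")
    ```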

  • This recap module revisits difference of convex (DC) programming, discussing its relevance and applications. Key points include:

    • Alternating convex optimization techniques.
    • Application in nonnegative matrix factorization.
    • Overview of conjugate gradient methods and their applications in linear equations.
    • Properties of Krylov sequences and their spectral analysis.

    Students will understand the significance of these concepts in optimization.
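
    As an example of alternating optimization applied to nonnegative matrix factorization, here is a sketch using the classic multiplicative updates (one standard choice, not necessarily the lecture's); the data and rank are arbitrary.

    ```python
    import numpy as np

    # Alternating (two-block) optimization for NMF: approximate V >= 0 by
    # W H with W, H >= 0, improving one factor while the other is fixed.
    rng = np.random.default_rng(7)
    V = rng.uniform(0, 1, (30, 20))
    r = 5
    W = rng.uniform(0.1, 1, (30, r))
    H = rng.uniform(0.1, 1, (r, 20))

    eps = 1e-10                                # guards against division by zero
    for k in range(500):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed

    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```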

  • This module emphasizes the conjugate gradient method, covering its properties and applications. Key content includes:

    • Overview of Krylov subspace and its properties.
    • Convergence rates and residual convergence analysis.
    • Efficient matrix-vector multiplication techniques.
    • Shifting and preconditioned conjugate gradient algorithms.

    Students will gain a comprehensive understanding of these methods and their applications in optimization.
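
    A textbook-style sketch of the conjugate gradient method for Ax = b with A symmetric positive definite; the test matrix is an assumption chosen to be well conditioned.

    ```python
    import numpy as np

    # Conjugate gradient: one matrix-vector product per iteration; the
    # iterates minimize the A-norm error over the growing Krylov subspace
    # span{b, Ab, A^2 b, ...}.
    def cg(A, b, tol=1e-10, maxiter=None):
        x = np.zeros(b.size)
        r = b.copy()            # residual b - A x
        p = r.copy()            # search direction
        rs = r @ r
        for k in range(maxiter or b.size):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) <= tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x, k + 1

    rng = np.random.default_rng(8)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)        # well-conditioned SPD test matrix
    b = rng.standard_normal(50)
    x, iters = cg(A, b)
    print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
    ```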

  • This module covers truncated Newton methods, focusing on their applications and convergence properties. Key topics include:

    • Understanding convergence versus iterations and cumulative CG steps.
    • Truncated Newton interior-point methods and their applications.
    • Evaluating dual rate control problems and primal-dual search directions.
    • Solving convex-cardinality problems and sparse modeling techniques.

    Students will explore various applications, including portfolio investment and sparse signal reconstruction.
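
    A toy sketch of a truncated Newton method: each step runs only a few CG iterations on the Newton system, using Hessian-vector products so the Hessian is never formed; the test function f(x) = Σᵢ logcosh(aᵢᵀx) + (μ/2)‖x‖² and all tolerances are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.standard_normal((40, 10)); mu = 0.1

    def f(x):
        z = A @ x
        return np.sum(np.logaddexp(z, -z)) + 0.5 * mu * (x @ x)

    def grad(x):
        return A.T @ np.tanh(A @ x) + mu * x

    def hess_vec(x, v):                       # H(x) v without forming H
        s = 1.0 - np.tanh(A @ x) ** 2
        return A.T @ (s * (A @ v)) + mu * v

    def truncated_cg(x, g, iters=8):          # few CG steps on H d = -g
        d = np.zeros_like(g); r = -g.copy(); p = r.copy(); rs = r @ r
        for _ in range(iters):
            Hp = hess_vec(x, p)
            a = rs / (p @ Hp)
            d += a * p; r -= a * Hp
            rs_new = r @ r
            if rs_new < 1e-16:
                break
            p = r + (rs_new / rs) * p; rs = rs_new
        return d

    x = rng.standard_normal(10)
    for k in range(20):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        d = truncated_cg(x, g)
        t = 1.0                                # backtracking line search
        while f(x + t * d) > f(x) + 0.01 * t * (g @ d):
            t *= 0.5
        x = x + t * d

    print("final gradient norm:", np.linalg.norm(grad(x)))
    ```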

  • This recap module focuses on the minimum cardinality problem, discussing the interpretation of its L₁ heuristic as a convex relaxation. Key points include:

    • Understanding weighted and asymmetric L₁ heuristics.
    • Applications in regressor selection and sparse signal reconstruction.
    • Exploring total variation reconstruction techniques.
    • Strategies for detecting changes in time series models.

    Students will gain insights into L₁-norm methods and their applications in various optimization contexts.
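
    To illustrate the L₁ heuristic for sparse signal reconstruction, here is a sketch that solves minimize (1/2)‖Ax − b‖² + γ‖x‖₁ by proximal gradient (ISTA), one standard algorithm (not necessarily the lecture's solver); γ, the problem sizes, and the sparsity level are assumptions.

    ```python
    import numpy as np

    # ISTA: gradient step on the smooth term, then soft thresholding,
    # which drives small coefficients exactly to zero.
    rng = np.random.default_rng(10)
    m, n, s = 40, 100, 5
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true

    gamma = 0.1
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(n)
    for k in range(2000):
        z = x - (1.0 / L) * (A.T @ (A @ x - b))                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - gamma / L, 0.0)  # soft threshold

    print("nonzeros recovered:", int(np.sum(np.abs(x) > 1e-3)),
          " error:", np.linalg.norm(x - x_true))
    ```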

  • This module explores model predictive control (MPC) strategies and their applications in linear time-invariant systems. Key topics include:

    • Understanding greedy control and dynamic programming solutions.
    • Linear quadratic regulator considerations and finite horizon approximations.
    • Analyzing MPC performance and variations.
    • Application of MPC in optimal trajectories and supply chain management.

    Students will learn about the nuances of MPC and its optimization strategies in various contexts.
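
    A minimal receding-horizon sketch for an unconstrained linear time-invariant system: at each step a finite-horizon LQR problem is solved by backward Riccati recursion and only the first input is applied. Without constraints this collapses to a fixed linear feedback, so the sketch only shows the loop structure; the dynamics and weights are illustrative.

    ```python
    import numpy as np

    Amat = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
    Bmat = np.array([[0.0], [0.1]])
    Q = np.eye(2); Rmat = 0.1 * np.eye(1)
    T = 20                                       # planning horizon

    def first_lqr_gain(A, B, Q, R, T):
        P = Q.copy()
        for _ in range(T):                       # backward Riccati recursion
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K                                 # gain for the first step

    x = np.array([5.0, 0.0])
    for t in range(50):
        K = first_lqr_gain(Amat, Bmat, Q, Rmat, T)  # re-plan at each state
        u = -K @ x                                  # apply only the first input
        x = Amat @ x + Bmat @ u

    print("final state:", x)
    ```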

  • This module focuses on stochastic model predictive control, exploring its applications in optimization. Key content includes:

    • Causal state-feedback control strategies.
    • Stochastic finite horizon control solutions via dynamic programming.
    • Understanding independent process noise in control systems.
    • Branch and bound methods for solving nonconvex optimization problems.

    Students will engage with sample trajectories and cost histograms to analyze system performance.
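
    A toy sketch of the sample-trajectory and cost-histogram idea: Monte Carlo simulation of a linear system driven by independent process noise under an assumed (stabilizing) fixed feedback gain K.

    ```python
    import numpy as np

    # Simulate x+ = A x + B u + w with u = -K x and collect the total
    # stage costs that a cost histogram would summarize.
    Amat = np.array([[1.0, 0.1], [0.0, 1.0]])
    Bmat = np.array([[0.0], [0.1]])
    K = np.array([[5.0, 7.0]])                  # assumed stabilizing gain

    rng = np.random.default_rng(11)
    costs = []
    for trial in range(1000):
        x = np.array([1.0, 0.0]); total = 0.0
        for t in range(100):
            u = -K @ x
            total += x @ x + 0.1 * float(u @ u)     # quadratic stage cost
            w = 0.05 * rng.standard_normal(2)       # independent process noise
            x = Amat @ x + Bmat @ u + w

        costs.append(total)

    costs = np.array(costs)
    print(f"mean cost {costs.mean():.2f}, 95th percentile {np.percentile(costs, 95):.2f}")
    ```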

  • This recap module emphasizes branch and bound methods in nonconvex minimization, exploring their fundamental concepts and applications. Key points include:

    • Understanding lower and upper bound functions in optimization.
    • Analyzing the branch and bound algorithm structure.
    • Pruning techniques and convergence analysis in optimization.
    • Exploring mixed Boolean-convex problems and their solution methods.

    Students will gain insights into global lower and upper bounds through various examples.
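
    A one-dimensional toy sketch of branch and bound: Lipschitz-based lower bounds on subintervals, interval splitting, and pruning against the best upper bound found so far; the test function and Lipschitz constant are assumptions.

    ```python
    import numpy as np

    # Global minimization of a nonconvex 1-D function over an interval,
    # using the lower bound f(x) >= f(mid) - L * width / 2 on each piece.
    def f(x):
        return np.sin(3 * x) + 0.5 * np.cos(7 * x) + 0.1 * x**2

    L = 8.0                              # (assumed) bound on |f'|
    active = [(-4.0, 4.0)]
    upper = np.inf                       # best value found (global upper bound)

    for it in range(60):
        # branch: split every active interval and recompute bounds
        new = []
        for lo, hi in active:
            for a, b in ((lo, (lo + hi) / 2), ((lo + hi) / 2, hi)):
                mid = (a + b) / 2
                upper = min(upper, f(mid))
                lower = f(mid) - L * (b - a) / 2
                new.append((lower, a, b))
        # prune: keep only intervals that could still contain the minimum
        active = [(a, b) for lower, a, b in new if lower <= upper]
        global_lower = min(lower for lower, _, _ in new)
        if upper - global_lower < 1e-6:
            break

    print(f"bounds on global min: [{global_lower:.6f}, {upper:.6f}] after {it + 1} rounds")
    ```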