Course Lectures
  • This module opens the course, covering logistics and the fundamental concepts of subgradient calculus, including:

    • Course logistics and organization
    • Subgradients and their properties
    • Basic rules for subgradient calculus
    • Pointwise supremum and expectations
    • Subgradients in the context of minimization and composition

    Students will develop a foundational understanding necessary for subsequent modules.
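
    As a tiny, self-contained illustration of the pointwise-maximum rule (a NumPy sketch with made-up problem data), any row attaining the maximum of f(x) = max_i (a_i^T x + b_i) supplies a valid subgradient:

      import numpy as np

      def pwl_subgradient(A, b, x):
          # f(x) = max_i (a_i^T x + b_i); any row attaining the max
          # yields a subgradient at x, by the pointwise-supremum rule
          vals = A @ x + b
          j = np.argmax(vals)
          return vals[j], A[j]

      A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])
      b = np.array([0.0, 1.0, 0.5])
      fx, g = pwl_subgradient(A, b, np.zeros(2))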

  • Recap: Subgradients
    Stephen Boyd

    This recap module focuses on reinforcing understanding of subgradients, optimality conditions, and descent directions. Key topics include:

    • Unconstrained and constrained optimality conditions
    • Example of piecewise linear minimization
    • Subgradient methods and step size rules
    • Convergence results and assumptions

    Students will engage with practical applications and examples to solidify their grasp of subgradient methods.
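
    For concreteness, here is a minimal NumPy sketch of the subgradient method with a diminishing step size rule, applied to piecewise-linear minimization; the problem data are randomly generated for illustration. The method is not a descent method, so the best value found so far is tracked separately:

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((20, 5))      # f(x) = max_i (a_i^T x + b_i)
      b = rng.standard_normal(20)

      x, f_best = np.zeros(5), np.inf
      for k in range(1, 501):
          vals = A @ x + b
          f_best = min(f_best, vals.max())  # not a descent method: track best
          g = A[np.argmax(vals)]            # a subgradient at x
          x = x - (1.0 / k) * g             # diminishing step size rule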

  • This module presents a detailed convergence proof for subgradient methods and outlines the stopping criteria necessary for effective optimization. It covers:

    • Methods for finding intersection points of convex sets
    • Alternating projection techniques
    • Examples such as positive semidefinite matrix completion
    • Strategies for speeding up subgradient methods
    • Application of projected subgradient methods for constrained optimization

    The insights gained here will be essential for tackling more complex optimization challenges introduced later in the course.
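
    As a reference point for the alternating projection idea, the following sketch (with hypothetical problem data) finds a point in the intersection of an affine set {x : Cx = d} and the nonnegative orthant by alternating the two projections; convergence assumes the intersection is nonempty:

      import numpy as np

      rng = np.random.default_rng(1)
      C = rng.standard_normal((3, 8))
      d = rng.standard_normal(3)
      CCt_inv = np.linalg.inv(C @ C.T)

      def proj_affine(x):            # projection onto {x : Cx = d}
          return x + C.T @ (CCt_inv @ (d - C @ x))

      def proj_orthant(x):           # projection onto {x : x >= 0}
          return np.maximum(x, 0.0)

      x = rng.standard_normal(8)
      for _ in range(200):           # alternate the two projections
          x = proj_orthant(proj_affine(x))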

  • This module emphasizes the application of subgradient methods to dual problems. It includes:

    • Subgradient of negative dual functions
    • Examples, including strictly convex quadratic functions
    • Convergence characteristics of subgradient methods
    • Stochastic subgradient methods and their assumptions
    • Convergence results and applications in stochastic programming

    Students will learn how stochastic methods can enhance optimization processes and handle uncertainties in models.
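
    A minimal sketch of a stochastic subgradient method, assuming the standard setting in which one randomly sampled term of a finite sum gives an unbiased noisy subgradient (here for least absolute deviations, with synthetic data):

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.standard_normal((500, 10))   # f(x) = (1/m) sum_i |a_i^T x - b_i|
      b = rng.standard_normal(500)

      x = np.zeros(10)
      for k in range(1, 5001):
          i = rng.integers(500)                  # sample one term uniformly
          g = np.sign(A[i] @ x - b[i]) * A[i]    # unbiased noisy subgradient
          x = x - (1.0 / k) * g                  # diminishing step sizes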

  • Stochastic Programming
    Stephen Boyd

    This module covers the fundamentals of stochastic programming, focusing on expectations of convex functions. Key areas include:

    • Variations of stochastic programming applications
    • On-line learning and adaptive signal processing techniques
    • Localization algorithms and cutting-plane methods
    • Examples such as bisection on R and specific cutting-plane methods
    • Convergence of various cutting-plane strategies

    Students will gain insights into cutting-edge optimization techniques applicable in uncertain environments.
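
    Bisection on R is the simplest localization method: the sign of a subgradient at the midpoint of the current interval shows which half cannot contain a minimizer. A small sketch, with a made-up convex test function:

      import numpy as np

      def bisection_minimize(fprime, lo, hi, tol=1e-8):
          # each subgradient evaluation cuts the localization interval in half
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if fprime(mid) > 0:
                  hi = mid           # minimizer lies to the left
              else:
                  lo = mid           # minimizer lies to the right
          return 0.5 * (lo + hi)

      # f(x) = x^2 - sin(x) is convex (f'' = 2 + sin x > 0)
      xstar = bisection_minimize(lambda x: 2 * x - np.cos(x), -2.0, 2.0)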

  • This addendum provides a deep dive into various cutting-plane algorithms and methodologies, focusing on:

    • Hit-and-run and maximum volume ellipsoid methods
    • Analytic center cutting-plane methods and their extensions
    • Properties of infeasible start Newton methods
    • Constructing cutting-planes and computing analytic centers

    Students will explore advanced techniques for optimization, enhancing their understanding of constraint handling in convex problems.
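
    Computing an analytic center is itself a smooth convex problem. The sketch below (a feasible-start variant, unlike the infeasible-start method discussed in the addendum; problem data are illustrative) runs a damped Newton method on the log barrier of a polyhedron {x : Ax <= b}:

      import numpy as np

      def analytic_center(A, b, x, iters=50):
          # damped Newton on phi(x) = -sum_i log(b_i - a_i^T x),
          # started from a strictly feasible x
          def phi(z):
              s = b - A @ z
              return -np.sum(np.log(s)) if s.min() > 0 else np.inf
          for _ in range(iters):
              s = b - A @ x
              grad = A.T @ (1.0 / s)
              hess = (A.T / s**2) @ A
              dx = -np.linalg.solve(hess, grad)
              t = 1.0                # backtracking keeps iterates feasible
              while phi(x + t * dx) > phi(x) + 0.25 * t * (grad @ dx):
                  t *= 0.5
              x = x + t * dx
          return x

      A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
      b = np.array([1.0, 2.0, 3.0, 1.0])
      xc = analytic_center(A, b, np.zeros(2))  # center of the box [-2,1]x[-1,3]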

  • This module continues piecewise-linear minimization via ACCPM and introduces the ellipsoid method, focusing on:

    • ACCPM with constraint dropping
    • Motivation and application of the ellipsoid method
    • Updating ellipsoids for minimizing convex functions
    • Stopping criteria and properties of the ellipsoid method

    Students will apply these concepts through practical examples, solidifying their grasp of convex optimization techniques.
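
    The following sketch applies the basic ellipsoid method to the piecewise-linear minimization example; the initial ellipsoid is assumed large enough to contain a minimizer, and the problem data are synthetic:

      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.standard_normal((30, 4))   # minimize f(x) = max_i (a_i^T x + b_i)
      b = rng.standard_normal(30)

      n = 4
      x = np.zeros(n)                # ellipsoid {z : (z-x)^T inv(P) (z-x) <= 1}
      P = 25.0 * np.eye(n)           # assumed to contain a minimizer
      f_best = np.inf
      for _ in range(500):
          vals = A @ x + b
          f_best = min(f_best, vals.max())
          g = A[np.argmax(vals)]               # subgradient at x: minimizers
          gt = g / np.sqrt(g @ P @ g)          # satisfy g^T (z - x) <= 0
          Pg = P @ gt
          x = x - Pg / (n + 1)                 # standard ellipsoid update
          P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))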

  • This recap module revisits the ellipsoid method, covering key improvements and the convergence proof. Highlights include:

    • Deep cut ellipsoid method and its effectiveness
    • Handling inequality constrained problems
    • Methods for addressing nondifferentiable convex optimization challenges
    • Decomposition methods and their applications

    Students will review methods for separating complex optimization problems while gaining insights into practical implementations.
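
    For reference, one step of the deep-cut variant can be sketched as below; the update coefficients follow the standard deep-cut formulas (alpha = 0 recovers the neutral cut), and the function signature here is illustrative:

      import numpy as np

      def deep_cut_update(x, P, g, h):
          # one deep-cut step: the cut g^T (z - x) + h <= 0 with h >= 0
          # removes more than half of {z : (z-x)^T inv(P) (z-x) <= 1}
          n = x.size
          gnorm = np.sqrt(g @ P @ g)
          alpha = h / gnorm
          if alpha >= 1.0:           # the cut excludes the whole ellipsoid
              return None
          gt = g / gnorm
          tau = (1 + n * alpha) / (n + 1)
          sigma = 2 * (1 + n * alpha) / ((n + 1) * (1 + alpha))
          delta = n**2 * (1 - alpha**2) / (n**2 - 1.0)
          Pg = P @ gt
          return x - tau * Pg, delta * (P - sigma * np.outer(Pg, Pg))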

  • This module delves into primal and dual decomposition methods, focusing on:

    • Finding feasible iterates and understanding the decomposition structure
    • Algorithms for primal and dual decomposition with constraints
    • Examples illustrating the complexities of decomposition
    • Pictorial representations for better understanding

    Students will appreciate the intricacies of decomposition in optimization, enhancing their skill set for complex problem-solving.
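
    The essence of dual decomposition fits in a few lines. In this toy sketch (made-up data), two quadratic subproblems coupled only by x1 + x2 = t are solved independently for a given price nu, which is then adjusted by dual (sub)gradient ascent on the residual:

      # minimize (x1 - c1)^2 + (x2 - c2)^2  subject to  x1 + x2 = t
      c1, c2, t = 3.0, -1.0, 1.0
      nu = 0.0
      for _ in range(100):
          x1 = c1 - nu / 2.0          # each subproblem solved independently:
          x2 = c2 - nu / 2.0          # argmin_x (x - c)^2 + nu * x
          nu += 0.5 * (x1 + x2 - t)   # price update from the coupling residual
      # at convergence x1 + x2 = t and nu is an optimal dual variable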

  • This module emphasizes practical applications of decomposition, particularly in:

    • Rate control setups and problems
    • Network flow problems and their duals
    • Electrical network analogies
    • Examples such as minimum queueing delay and optimal flow

    Students will engage with real-world cases demonstrating the effectiveness of decomposition in complex optimization scenarios.
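
    A minimal sketch of dual decomposition for rate control (network utility maximization), on a hypothetical two-link, three-route network: each source chooses its rate from the total congestion price along its route, and each link adjusts its price from its own load:

      import numpy as np

      R = np.array([[1.0, 1.0, 0.0],   # link-route incidence matrix
                    [0.0, 1.0, 1.0]])
      c = np.array([1.0, 2.0])         # link capacities
      lam = np.ones(2)                 # per-link congestion prices
      for _ in range(2000):
          price = R.T @ lam            # total price along each route
          x = 1.0 / price              # source j maximizes log x_j - price_j x_j
          lam = np.maximum(1e-6, lam + 0.01 * (R @ x - c))  # projected price
                                       # ascent, floored so rates stay finite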

  • This module introduces sequential convex programming (SCP) as a method for tackling nonconvex optimization problems. Key topics include:

    • Basic ideas of SCP and trust region methods
    • Affine and convex approximations via Taylor expansions
    • Examples illustrating the application of SCP in optimal control
    • Convergence of residuals and trajectory planning

    Students will learn how SCP can be a powerful tool for solving complex optimization problems that arise in practice.
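
    The sketch below shows the basic SCP loop in its simplest form, assuming an affine (first-order Taylor) approximation minimized over a box trust region, with the region grown on accepted steps and shrunk on rejected ones; the nonconvex test function is arbitrary:

      import numpy as np

      def f(z):                        # nonconvex test objective (Rosenbrock)
          x, y = z
          return (1 - x)**2 + 100 * (y - x**2)**2

      def grad(z):
          x, y = z
          return np.array([-2*(1 - x) - 400*x*(y - x**2), 200*(y - x**2)])

      z, rho = np.array([-1.0, 1.0]), 0.5   # iterate, trust-region radius
      for _ in range(2000):
          g = grad(z)
          step = -rho * np.sign(g)     # minimizes the affine model over the
                                       # box trust region ||dz||_inf <= rho
          if f(z + step) < f(z):       # true objective improved: accept, grow
              z, rho = z + step, min(1.1 * rho, 1.0)
          else:                        # model was too optimistic: shrink
              rho *= 0.5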

  • This recap module reviews difference-of-convex (DC) programming, emphasizing:

    • Alternating convex optimization techniques
    • Applications in nonnegative matrix factorization
    • Discussion on nonconvex methods and their implications
    • Conjugate gradient method and its advantages

    Students will solidify their understanding of these advanced concepts and how they interlink with previously covered material.
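
    As a concrete instance of alternating convex optimization, the classic multiplicative updates for nonnegative matrix factorization alternately improve H with W fixed and W with H fixed, each a convex subproblem; the data here are random:

      import numpy as np

      rng = np.random.default_rng(4)
      V = rng.random((20, 30))         # nonnegative data matrix to factor
      k = 5
      W, H = rng.random((20, k)), rng.random((k, 30))

      eps = 1e-9                       # guards against division by zero
      for _ in range(500):
          # Lee-Seung multiplicative updates for ||V - W H||_F^2
          H *= (W.T @ V) / (W.T @ W @ H + eps)
          W *= (V @ H.T) / (W @ H @ H.T + eps)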

  • Recap: Conjugate Gradient Method
    Stephen Boyd

    This module deepens understanding of the conjugate gradient method, covering its application to solving systems of linear equations. Important aspects include:

    • Overview of Krylov subspaces and their significance
    • Convergence rates and properties of Krylov sequences
    • Truncated Newton methods and their applications
    • Preconditioning techniques for faster convergence

    Students will explore practical examples to see how these methods can be effectively utilized in optimization problems.
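
    For reference, a compact implementation of the conjugate gradient method; after k steps the iterate minimizes the A-norm error over x0 plus the k-th Krylov subspace span{r0, A r0, ..., A^(k-1) r0}. The test matrix is a synthetic well-conditioned SPD matrix:

      import numpy as np

      def conjugate_gradient(A, b, x0, tol=1e-10):
          x = x0.copy()
          r = b - A @ x                # residual
          p = r.copy()                 # search direction
          rr = r @ r
          for _ in range(len(b)):
              Ap = A @ p
              alpha = rr / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rr_new = r @ r
              if np.sqrt(rr_new) < tol:
                  break
              p = r + (rr_new / rr) * p    # A-conjugate to prior directions
              rr = rr_new
          return x

      rng = np.random.default_rng(5)
      M = rng.standard_normal((50, 50))
      A = M @ M.T + 50 * np.eye(50)    # symmetric positive definite
      b = rng.standard_normal(50)
      x = conjugate_gradient(A, b, np.zeros(50))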

  • This module centers on truncated Newton methods, detailing their application in convex optimization. It covers:

    • Convergence analysis versus iterations
    • Primal-dual search direction strategies
    • Applications in network rate control
    • L1-norm methods for convex-cardinality problems

    Students will learn about the efficiency of these methods and their real-world applications, enhancing their optimization skill set.
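
    A truncated Newton step simply stops CG early when solving the Newton system, trading step quality for cheap iterations. A minimal sketch on an arbitrary smooth strictly convex function (quadratic plus quartic), with a simple backtracking line search:

      import numpy as np

      def cg(A, b, iters):             # a few CG iterations suffice for an
          x = np.zeros_like(b)         # approximate Newton step
          r, rr = b.copy(), b @ b
          p = r.copy()
          for _ in range(iters):
              Ap = A @ p
              alpha = rr / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rr_new = r @ r
              p = r + (rr_new / rr) * p
              rr = rr_new
          return x

      rng = np.random.default_rng(6)
      Q = rng.standard_normal((40, 40))
      Q = Q @ Q.T + np.eye(40)
      c = rng.standard_normal(40)

      def f(x):                        # f(x) = 0.5 x^T Q x - c^T x + sum x_i^4
          return 0.5 * x @ Q @ x - c @ x + np.sum(x**4)

      x = np.zeros(40)
      for _ in range(30):
          g = Q @ x - c + 4 * x**3
          H = Q + np.diag(12 * x**2)
          dx = cg(H, -g, iters=10)     # truncated: 10 CG steps, not n
          t = 1.0
          while f(x + t * dx) > f(x):  # backtrack to guarantee decrease
              t *= 0.5
          x += t * dx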

  • This recap module focuses on the minimum cardinality problem and the interpretation of the l1-norm heuristic as a convex relaxation. Key areas include:

    • Weighted and asymmetric L1 heuristics
    • Applications in sparse signal reconstruction
    • Time series analysis and detecting changes
    • Extensions to matrices and factor modeling

    Students will connect theory with applications, enhancing their understanding of convex relaxation in practical scenarios.
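
    Here is a minimal sketch of the l1 heuristic in action: a lasso problem solved by iterative soft-thresholding (proximal gradient), recovering a sparse vector from an underdetermined linear system; all data are synthetic and the penalty weight lam is arbitrary:

      import numpy as np

      rng = np.random.default_rng(7)
      A = rng.standard_normal((40, 100))       # underdetermined system
      x_true = np.zeros(100)
      x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
      b = A @ x_true

      # minimize 0.5 ||A x - b||^2 + lam ||x||_1  (convex surrogate for
      # minimum cardinality subject to A x = b)
      lam = 0.01
      L = np.linalg.norm(A, 2)**2              # Lipschitz constant of gradient
      x = np.zeros(100)
      for _ in range(2000):
          z = x - (A.T @ (A @ x - b)) / L      # gradient step, smooth part
          x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold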

  • This module discusses Model Predictive Control (MPC) and its applications in optimal control. Key topics include:

    • Linear time-invariant systems and greedy control strategies
    • Dynamic programming solutions and finite horizon approximations
    • MPC performance metrics and constraints
    • Exploring variations of optimal control problems

    Students will learn how MPC can be applied effectively in real-world scenarios, enhancing their optimization toolkit.
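
    The receding-horizon idea is easy to state in code. This sketch (unconstrained case only, with an illustrative double-integrator plant) solves the finite-horizon problem by a backward Riccati recursion at every step and applies just the first input; constrained MPC replaces the recursion with a QP solve:

      import numpy as np

      A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator
      B = np.array([[0.0], [1.0]])
      Q, R, T = np.eye(2), 10.0 * np.eye(1), 20

      def first_step_gain():
          P = Q.copy()
          for _ in range(T):                    # backward Riccati recursion
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
              P = Q + A.T @ P @ (A - B @ K)
          return K                              # gain for the first input

      x = np.array([5.0, 0.0])
      for _ in range(50):
          u = -first_step_gain() @ x            # plan, apply first input,
          x = A @ x + B @ u                     # then re-plan next step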

  • This module introduces Stochastic Model Predictive Control, focusing on:

    • Causal state-feedback control and its implications
    • Dynamic programming solutions for stochastic finite horizon control
    • Branch and bound methods for nonconvex optimization problems
    • Practical examples illustrating these methods

    Students will understand how to handle uncertainties within control applications, preparing them for real-world challenges in optimization.
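
    A certainty-equivalent sketch of the closed loop: the planner ignores the disturbance, but because it re-plans from the measured state at every step, the resulting policy is causal state feedback. The plant and the fixed planning gain K below are illustrative stand-ins for the MPC first-step gain:

      import numpy as np

      rng = np.random.default_rng(8)
      A = np.array([[1.0, 1.0], [0.0, 1.0]])
      B = np.array([[0.0], [1.0]])
      K = np.array([[0.3, 1.0]])           # a stabilizing planning gain
      x = np.array([5.0, 0.0])
      for _ in range(100):
          u = -K @ x                       # plan from the *measured* state
          w = 0.1 * rng.standard_normal(2) # disturbance, unseen by planner
          x = A @ x + B @ u + w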

  • This recap module highlights key aspects of Branch and Bound methods, addressing their application in unconstrained nonconvex minimization. Key components include:

    • Understanding lower and upper bound functions
    • Branch and bound algorithm and its convergence analysis
    • Mixed Boolean-convex problems and solution techniques
    • Practical examples illustrating algorithm progress

    Students will consolidate their understanding of these methods, gaining insights into their practical applications in optimization scenarios.
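
    A minimal branch and bound sketch for global minimization on an interval, assuming a known Lipschitz constant so each subinterval gets the lower bound f(mid) - lip * width / 2; subintervals whose lower bound cannot beat the best value found are pruned:

      import heapq
      import numpy as np

      def branch_and_bound(f, lo, hi, lip, tol=1e-4):
          mid = 0.5 * (lo + hi)
          ub = f(mid)                              # best (upper bound) so far
          heap = [(ub - lip * (hi - lo) / 2, lo, hi)]
          while heap:
              lb, a, b = heapq.heappop(heap)       # most promising interval
              if lb > ub - tol:                    # cannot improve: done
                  break
              for l, r in ((a, 0.5 * (a + b)), (0.5 * (a + b), b)):  # branch
                  m = 0.5 * (l + r)
                  ub = min(ub, f(m))               # refine the upper bound
                  heapq.heappush(heap, (f(m) - lip * (r - l) / 2, l, r))
          return ub

      val = branch_and_bound(lambda x: np.sin(3 * x) + 0.5 * abs(x - 1),
                             -3.0, 3.0, lip=5.0)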