Solvers

Iterative LQR (iLQR)

TrajectoryOptimization.iLQRSolverOptions (Type)
mutable struct iLQRSolverOptions{T} <: TrajectoryOptimization.AbstractSolverOptions{T}

Solver options for the iterative LQR (iLQR) solver.

  • verbose

    Print summary at each iteration. Default: false

  • live_plotting

    Live plotting. Default: :off

  • cost_tolerance

    dJ < ϵ, cost convergence criteria for unconstrained solve or to enter outerloop for constrained solve. Default: 0.0001

  • gradient_type

    gradient type: :todorov, :feedforward. Default: :todorov

  • gradient_norm_tolerance

    gradient_norm < ϵ, gradient norm convergence criteria. Default: 1.0e-5

  • iterations

    iLQR iterations. Default: 300

  • dJ_counter_limit

    limits the total number of forward-pass failures (each of which triggers regularization) before the solver exits. Default: 10

  • square_root

    use square root method backward pass for numerical conditioning. Default: false

  • line_search_lower_bound

    forward pass approximate line search lower bound, 0 < line_search_lower_bound < line_search_upper_bound. Default: 1.0e-8

  • line_search_upper_bound

    forward pass approximate line search upper bound, 0 < line_search_lower_bound < line_search_upper_bound < ∞. Default: 10.0

  • iterations_linesearch

    maximum number of backtracking steps during forward pass line search. Default: 20

  • bp_reg_initial

    initial regularization. Default: 0.0

  • bp_reg_increase_factor

    regularization scaling factor. Default: 1.6

  • bp_reg_max

    maximum regularization value. Default: 1.0e8

  • bp_reg_min

    minimum regularization value. Default: 1.0e-8

  • bp_reg_type

    type of regularization. control: (Quu + ρI), state: (S + ρI); see "Synthesis and Stabilization of Complex Behaviors through Online Trajectory Optimization". Default: :control

  • bp_reg_fp

    additive regularization when forward pass reaches max iterations. Default: 10.0

  • bp_sqrt_inv_type

    type of matrix inversion for bp sqrt step. Default: :pseudo

  • bp_reg_sqrt_initial

    initial regularization for square root method. Default: 1.0e-6

  • bp_reg_sqrt_increase_factor

    regularization scaling factor for square root method. Default: 10.0

  • bp_reg

    Default: false

  • max_cost_value

    maximum cost value; if exceeded, the solve will error. Default: 1.0e8

  • max_state_value

    maximum state value, evaluated during rollout; if exceeded, the solve will error. Default: 1.0e8

  • max_control_value

    maximum control value, evaluated during rollout; if exceeded, the solve will error. Default: 1.0e8

  • static_bp

    Default: true

  • log_level

    Default: InnerLoop

source
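
As a quick reference, here is a minimal sketch (not taken from the package documentation) of setting a few of these fields through the keyword constructor, assuming the Float64 type parameter:

    using TrajectoryOptimization

    # Sketch: override a few of the fields listed above; everything else keeps
    # the defaults shown in this section.
    opts_ilqr = iLQRSolverOptions{Float64}(
        verbose = true,            # print a summary at each iteration
        iterations = 500,          # allow more iLQR iterations than the default 300
        cost_tolerance = 1.0e-6,   # tighter cost convergence criterion dJ < ϵ
        square_root = false)       # standard (non-square-root) backward pass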

Augmented Lagrangian Solver

TrajectoryOptimization.AugmentedLagrangianSolver (Type)
struct AugmentedLagrangianSolver{T} <: TrajectoryOptimization.AbstractSolver{T}

Augmented Lagrangian (AL) is a standard tool for constrained optimization. For a trajectory optimization problem of the form:

\[\begin{aligned} \min_{x_{0:N},u_{0:N-1}} \quad & \ell_f(x_N) + \sum_{k=0}^{N-1} \ell_k(x_k, u_k, dt) \\ \textrm{s.t.} \quad & x_{k+1} = f(x_k, u_k), \\ & g_k(x_k,u_k) \leq 0, \\ & h_k(x_k,u_k) = 0. \end{aligned}\]

AL methods form the following augmented Lagrangian function:

\[\begin{aligned} \ell_f(x_N) + &λ_N^T c_N(x_N) + c_N(x_N)^T I_{\mu_N} c_N(x_N) \\ & + \sum_{k=0}^{N-1} \ell_k(x_k,u_k,dt) + λ_k^T c_k(x_k,u_k) + c_k(x_k,u_k)^T I_{\mu_k} c_k(x_k,u_k) \end{aligned}\]

This function is then minimized with respect to the primal variables using any unconstrained minimization solver (e.g. iLQR). After a local minimum is found, the AL method updates the Lagrange multipliers λ and the penalty terms μ and repeats the unconstrained minimization. AL methods have superlinear convergence as long as the penalty term μ is updated each iteration.

source
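
As a standalone illustration (not the package's internal API), the outer-loop update described above can be sketched as follows, where λ, μ, and c are vectors of multipliers, penalty weights, and constraint values at one knot point; the helper name and bounds are hypothetical:

    # Hypothetical helper: after the inner unconstrained solve, update the
    # multipliers with the current constraint values and scale the penalties by ϕ > 1.
    # For inequality constraints the multiplier update would additionally be
    # projected onto λ ≥ 0; only the equality-constraint case is shown here.
    function al_outer_update!(λ, μ, c; ϕ = 10.0, λ_max = Inf, μ_max = 1.0e8)
        @. λ = clamp(λ + μ * c, -λ_max, λ_max)   # dual (multiplier) update
        @. μ = min(ϕ * μ, μ_max)                 # monotone penalty increase
        return λ, μ
    end
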
TrajectoryOptimization.AugmentedLagrangianSolverOptions (Type)
mutable struct AugmentedLagrangianSolverOptions{T} <: TrajectoryOptimization.AbstractSolverOptions{T}

Solver options for the augmented Lagrangian solver.

  • verbose

    Print summary at each iteration. Default: false

  • opts_uncon

    unconstrained solver options. Default: UnconstrainedSolverOptions{Float64}()

  • cost_tolerance

    dJ < ϵ, cost convergence criteria for unconstrained solve or to enter outerloop for constrained solve. Default: 0.0001

  • cost_tolerance_intermediate

    dJ < ϵ_int, intermediate cost convergence criteria to enter outerloop of constrained solve. Default: 0.001

  • gradient_norm_tolerance

    gradient_norm < ϵ, gradient norm convergence criteria. Default: 1.0e-5

  • gradient_norm_tolerance_intermediate

    gradient_norm < ϵ_int, intermediate gradient norm convergence criteria. Default: 1.0e-5

  • constraint_tolerance

    max(constraint) < ϵ, constraint convergence criteria. Default: 0.001

  • constraint_tolerance_intermediate

    max(constraint) < ϵ_int, intermediate constraint convergence criteria. Default: 0.001

  • iterations

    maximum outerloop updates. Default: 30

  • dual_max

    global maximum Lagrange multiplier. If NaN, use value from constraint. Default: NaN

  • penalty_max

    global maximum penalty term. If NaN, use value from constraint. Default: NaN

  • penalty_initial

    global initial penalty term. If NaN, use value from constraint. Default: NaN

  • penalty_scaling

    global penalty update multiplier; penalty_scaling > 1. If NaN, use value from constraint. Default: NaN

  • penalty_scaling_no

    penalty update multiplier when μ should not be updated, typically 1.0 (or 1.0 + ϵ). Default: 1.0

  • constraint_decrease_ratio

    ratio of current constraint to previous constraint violation; 0 < constraint_decrease_ratio < 1. Default: 0.25

  • outer_loop_update_type

    type of outer loop update (default, feedback). Default: :default

  • active_constraint_tolerance

    numerical tolerance for constraint violation. Default: 0.0

  • kickout_max_penalty

    terminal solve when maximum penalty is reached. Default: false

  • reset_duals

    Default: true

  • reset_penalties

    Default: true

  • log_level

    Default: OuterLoop

source
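
A hedged sketch of nesting the inner-solver options inside the augmented Lagrangian options via the opts_uncon field; this assumes opts_uncon accepts the iLQR options directly, which may differ between package versions (some expect an UnconstrainedSolverOptions wrapper):

    opts_ilqr = iLQRSolverOptions{Float64}(cost_tolerance = 1.0e-5)

    opts_al = AugmentedLagrangianSolverOptions{Float64}(
        opts_uncon = opts_ilqr,          # options for the inner unconstrained solve
        constraint_tolerance = 1.0e-4,   # max(constraint) < ϵ at convergence
        penalty_scaling = 10.0,          # scale μ by 10 at each outer-loop update
        iterations = 30)                 # maximum number of outer-loop updates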

ALTRO

TrajectoryOptimization.ALTROSolver (Type)
struct ALTROSolver{T, S} <: ConstrainedSolver{T}

Augmented Lagrangian Trajectory Optimizer (ALTRO) is a solver developed by the Robotic Exploration Lab at Stanford University. The solver is special-cased to solve Markov Decision Processes by leveraging the internal problem structure.

ALTRO consists of two "phases":

  1. AL-iLQR: iLQR is used with an Augmented Lagrangian framework to solve the problem quickly to rough constraint satisfaction
  2. Projected Newton: A collocation-flavored active-set solver projects the solution from AL-iLQR onto the feasible subspace to achieve machine-precision constraint satisfaction.
source
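
A hedged usage sketch, assuming a TrajectoryOptimization.Problem named prob and an ALTROSolverOptions instance named opts_altro have already been constructed; the exact constructor signature may vary between package versions:

    solver = ALTROSolver(prob, opts_altro)   # build the solver from a problem and options
    solve!(solver)                           # phase 1: AL-iLQR, phase 2: projected Newton
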
TrajectoryOptimization.ALTROSolverOptions (Type)
mutable struct ALTROSolverOptions{T} <: TrajectoryOptimization.AbstractSolverOptions{T}

Solver options for the ALTRO solver.

  • verbose

    Default: false

  • opts_al

    Augmented Lagrangian solver options. Default: AugmentedLagrangianSolverOptions{Float64}()

  • constraint_tolerance

    constraint tolerance. Default: 1.0e-5

  • infeasible

    Use the infeasible model (augment the controls to make the system fully actuated). Default: false

  • dynamically_feasible_projection

    project infeasible results to feasible space using TVLQR. Default: true

  • resolve_feasible_problem

    re-solve the feasible problem after the infeasible solve. Default: true

  • penalty_initial_infeasible

    initial penalty term for infeasible controls. Default: 1.0

  • penalty_scaling_infeasible

    penalty update rate for infeasible controls. Default: 10.0

  • projected_newton

    finish with a projected Newton solve. Default: true

  • opts_pn

    options for the projected Newton solver. Default: ProjectedNewtonSolverOptions{Float64}()

  • projected_newton_tolerance

    constraint satisfaction tolerance that triggers the projected Newton solver. If set to a non-positive number, it will kick out when the maximum penalty is reached. Default: 0.001

source
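
A hedged sketch of the ALTRO options above, enabling an infeasible start and the final projected Newton polishing step (opts_al is the augmented Lagrangian options from the previous section):

    opts_altro = ALTROSolverOptions{Float64}(
        opts_al = opts_al,                     # nested augmented Lagrangian options
        infeasible = true,                     # augment the controls to allow an infeasible start
        penalty_initial_infeasible = 1.0,      # initial penalty on the added controls
        projected_newton = true,               # finish with a projected Newton solve
        projected_newton_tolerance = 1.0e-3)   # violation at which the PN phase is triggered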

Direct Collocation (DIRCOL)

TrajectoryOptimization.DIRCOLSolver (Type)
struct DIRCOLSolver{Q<:QuadratureRule, L, T, N, M, NM} <: DirectSolver{T}

Direct Collocation Solver. Uses a commercial NLP solver to solve the trajectory optimization problem, interfacing with it through MathOptInterface.

source
TrajectoryOptimization.DIRCOLSolverOptions (Type)
mutable struct DIRCOLSolverOptions{T} <: TrajectoryOptimization.DirectSolverOptions{T}

Solver options for the Direct Collocation solver. Most options are passed to the NLP solver through the opts dictionary.

  • nlp

    NLP solver to use. See MathOptInterface for available NLP solvers. Default: Ipopt.Optimizer()

  • opts

    Options dictionary for the NLP solver. Default: Dict{Symbol, Any}()

  • verbose

    Print output to the console. Default: true

  • constraint_tolerance

    Feasibility tolerance. Default: -1.0

source
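
A hedged sketch of configuring DIRCOL; the keys in the opts dictionary (max_iter, tol) are Ipopt options, shown only to illustrate passing options through to the NLP solver:

    using Ipopt

    opts_dircol = DIRCOLSolverOptions{Float64}(
        nlp = Ipopt.Optimizer(),    # any MathOptInterface-compatible NLP solver
        opts = Dict{Symbol,Any}(    # forwarded to the NLP solver
            :max_iter => 1000,
            :tol => 1.0e-6),
        verbose = true)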

Projected Newton

TrajectoryOptimization.ProjectedNewtonSolver (Type)
struct ProjectedNewtonSolver{T, N, M, NM} <: DirectSolver{T}

Projected Newton solver: a direct method developed by the REx Lab at Stanford University. It achieves machine-level constraint satisfaction by projecting onto the feasible subspace, and can also take a full Newton step by solving the KKT system. This solver should be used exclusively on solutions that are already close to optimal; it is intended as a "solution polishing" method for augmented Lagrangian methods.

source
TrajectoryOptimization.ProjectedNewtonSolverOptions (Type)
mutable struct ProjectedNewtonSolverOptions{T} <: TrajectoryOptimization.DirectSolverOptions{T}

Solver options for the Projected Newton solver.

  • verbose

    Default: true

  • n_steps

    Default: 2

  • solve_type

    Default: :feasible

  • active_set_tolerance

    Default: 0.001

  • constraint_tolerance

    Default: 1.0e-6

  • ρ

    Default: 0.01

  • r_threshold

    Default: 1.1

source
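
A hedged sketch of tightening the projected Newton options above so that the polishing step enforces constraints more strictly than the preceding augmented Lagrangian phase:

    opts_pn = ProjectedNewtonSolverOptions{Float64}(
        constraint_tolerance = 1.0e-8,   # target violation after polishing
        n_steps = 3,                     # number of projection/Newton steps
        verbose = false)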