Converting to an NLP

Trajectory optimization problems are really just nonlinear programs (NLPs). A handful of high-quality NLP solvers exist, such as Ipopt, Snopt, and KNITRO. TrajectoryOptimization provides an interface that makes a trajectory optimization problem amenable to solution with a general-purpose NLP solver. In the NLP formulation, the states and controls at every knot point are concatenated into a single large vector of decision variables, and the cost Hessian and constraint Jacobians are represented as large, sparse matrices.

Important Types

Below is the documentation for the types used to represent a trajectory optimization problem as an NLP:

TrajectoryOptimization.NLPDataType

Holds all the required data structures for evaluating a trajectory optimization problem as an NLP. It represents the cost gradient, Hessian, constraints, and constraint Jacobians as large, sparse arrays, as applicable.

Constructors

NLPData(G, g, zL, zU, D, d, λ)
NLPData(G, g, zL, zU, D, d, λ, v, r, c)
NLPData(NN, P, [nD])  # suggested constructor

where G and g are the cost function Hessian and gradient, of size (NN,NN) and (NN,) respectively; zL and zU are the lower and upper bounds on the NN primal variables; D and d are the constraint Jacobian and constraint violation, of size (P,NN) and (P,); and v, r, c are the values, rows, and columns of the non-zero elements of the constraint Jacobian, all of length nD.

source
TrajectoryOptimization.NLPConstraintSetType
NLPConstraintSet{T}

Constraint set that updates views to the NLP constraint vector and Jacobian.

The views can be reset to new arrays using reset_views!(::NLPConstraintSet, ::NLPData)

source
TrajectoryOptimization.TrajDataType
TrajData{n,m,T}

Describes the partitioning of the vector of primal variables, where xinds[k] and uinds[k] give the states and controls at time step k, respectively. t is the vector of times and dt are the time step lengths for each time step.
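To make the partitioning concrete, here is a hypothetical sketch of how a stacked primal vector can be split into per-knot-point state and control indices. The layout below (states and controls at all N knot points, interleaved) is an assumption for illustration only; the actual TrajData layout may differ, e.g. by omitting the control at the final knot point.

```julia
# Assumed layout: [x1; u1; x2; u2; ...] with n states and m controls
# at each of N knot points (illustrative only, not the package code).
n, m, N = 3, 2, 4
xinds = [(k - 1) * (n + m) .+ (1:n) for k = 1:N]
uinds = [(k - 1) * (n + m) + n .+ (1:m) for k = 1:N]

Z = collect(1.0:N * (n + m))   # a stand-in primal vector
x2 = Z[xinds[2]]               # states at knot point 2
u2 = Z[uinds[2]]               # controls at knot point 2
```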

source
TrajectoryOptimization.NLPTrajType
NLPTraj{n,m,T} <: AbstractTrajectory{n,m,T}

A trajectory of states and controls, where the underlying data storage is a large vector.

Supports indexing and iteration, where the elements are StaticKnotPoints.

source

The TrajOptNLP type

The most important type is the TrajOptNLP, which is a single struct that has all the required methods to evaluate the trajectory optimization problem as an NLP.

TrajectoryOptimization.TrajOptNLPType
TrajOptNLP{n,m,T}

Represents a trajectory optimization problem as a generic nonlinear program (NLP). Convenient for use with direct methods that manipulate the decision variables across all time steps as a single vector (i.e. a "batch" formulation).

Constructor

TrajOptNLP(prob::Problem; remove_bounds, jac_type)

If remove_bounds = true, any constraints that can be expressed as simple upper and lower bounds on the primal variables (the states and controls) are removed from the ConstraintList and treated separately.

Options for jac_type

  • :sparse: Use a SparseMatrixCSC to represent the constraint Jacobian.
  • :vector: Use (v,r,c) tuples to represent the constraint Jacobian, where

D[r[i],c[i]] = v[i] if D is the constraint Jacobian.
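The (v,r,c) form is the standard coordinate (triplet) representation of a sparse matrix. A minimal sketch using only Julia's standard-library SparseArrays (independent of the TrajectoryOptimization API):

```julia
using SparseArrays

# Triplet form of a small 2×4 "constraint Jacobian":
# D[r[i], c[i]] = v[i] for each nonzero entry i.
r = [1, 1, 2, 2]           # row indices
c = [1, 2, 3, 4]           # column indices
v = [1.0, 2.0, 3.0, 4.0]   # nonzero values
P, NN = 2, 4               # number of constraints, number of primals

D = sparse(r, c, v, P, NN)
D[1, 2]   # → 2.0
```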

source

Interface

Use the following methods on a TrajOptNLP nlp. Unless otherwise noted, Z is a single vector of NN decision variables (where NN is the total number of states and controls across all knot points).

TrajectoryOptimization.grad_f!Function
grad_f!(nlp::TrajOptNLP, Z, g)

Evaluate the gradient of the cost function for the vector of decision variables Z, storing the result in the vector g.

source
TrajectoryOptimization.hess_f!Function
hess_f!(nlp::TrajOptNLP, Z, G)

Evaluate the Hessian of the cost function for the vector of decision variables Z, storing the result in G, a sparse matrix.

source
TrajectoryOptimization.hess_f_structureFunction
hess_f_structure(nlp::TrajOptNLP)

Returns a sparse matrix G of the same size as the cost Hessian, corresponding to the sparsity pattern of the cost Hessian. Additionally, G[i,j] is either zero or a unique index from 1 to nnz(G).

source
TrajectoryOptimization.jacobian_structureFunction
jacobian_structure(nlp::TrajOptNLP)

Returns a sparse matrix D of the same size as the constraint Jacobian, corresponding to the sparsity pattern of the constraint Jacobian. Additionally, D[i,j] is either zero or a unique index from 1 to nnz(D).
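The "unique index" convention can be illustrated with plain SparseArrays (a conceptual sketch, not the package's implementation): each nonzero location of the Jacobian holds its index into the list of nonzeros, and every other entry is zero.

```julia
using SparseArrays

# Structure matrix for a Jacobian with 3 nonzeros at
# (1,1), (2,1), and (2,3): D[r[i], c[i]] = i.
r = [1, 2, 2]
c = [1, 1, 3]
D = sparse(r, c, collect(1:3), 2, 3)
```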

source
TrajectoryOptimization.hess_L!Function
hess_L!(nlp::TrajOptNLP, Z, λ, G)

Calculate the Hessian of the Lagrangian, storing the result in G, given the current primal variables Z and dual variables λ.
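For reference, the Lagrangian is L(Z, λ) = f(Z) + λᵀc(Z), so its Hessian is ∇²f(Z) + Σᵢ λᵢ ∇²cᵢ(Z). A toy numeric sketch in plain Julia (independent of the package API):

```julia
using LinearAlgebra

# Toy problem: quadratic cost f(Z) = 0.5 Z'Q Z and one quadratic
# constraint c1(Z) = 0.5 Z'H1 Z, so the Lagrangian Hessian is Q + λ1*H1.
Q  = Diagonal([2.0, 4.0])   # cost Hessian
H1 = [1.0 0.0; 0.0 0.0]     # constraint Hessian
λ  = [0.5]

G = Q + λ[1] * H1           # Hessian of the Lagrangian
```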

source

The following methods are useful for getting information that is typically required by an NLP solver:

TrajectoryOptimization.primal_bounds!Function
primal_bounds!(zL, zU, con::AbstractConstraint)

Set the lower zL and upper zU bounds on the primal variables imposed by the constraint con. Return whether or not the vectors zL or zU could be modified by con (i.e. if the constraint con is a bound constraint).

source
primal_bounds!(zL, zU, cons::ConstraintList; remove=true)

Get the lower and upper bounds on the primal variables imposed by the constraints in cons. zL and zU are vectors of length NN, the total number of primal variables in the problem. Returns the modified lower bound zL and upper bound zU.

If any of the bound constraints are redundant, the strictest bound is returned.
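The "strictest bound" rule can be sketched with elementwise max/min (a conceptual illustration, not the package's implementation):

```julia
# Two redundant bound constraints on the same primal variables:
zL1, zU1 = [-1.0, -2.0], [1.0, 2.0]
zL2, zU2 = [-0.5, -3.0], [2.0, 1.5]

# Keep the strictest: the largest lower bound and smallest upper bound.
zL = max.(zL1, zL2)   # → [-0.5, -2.0]
zU = min.(zU1, zU2)   # → [1.0, 1.5]
```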

If remove = true, these constraints will be removed from cons.

source
primal_bounds!(nlp::TrajOptNLP, zL, zU)

Get the lower and upper bounds on the primal variables.

source
TrajectoryOptimization.constraint_typeFunction
constraint_type(nlp::TrajOptNLP)

Build a vector IE of length num_constraints(nlp), where IE[i] gives the type of constraint i.

Legend:

  • 0 -> Inequality
  • 1 -> Equality
source

MathOptInterface

The TrajOptNLP can be used to set up a MathOptInterface.AbstractOptimizer to solve the trajectory optimization problem. For example, if we want to use Ipopt and have already set up our TrajOptNLP, we can solve it using build_MOI!(nlp, optimizer):

using Ipopt
using MathOptInterface
nlp = TrajOptNLP(...)  # assume this is already set up
optimizer = Ipopt.Optimizer()
TrajectoryOptimization.build_MOI!(nlp, optimizer)
MathOptInterface.optimize!(optimizer)