Converting to an NLP
Trajectory optimization problems are really just nonlinear programs (NLPs). A handful of high-quality NLP solvers exist, such as Ipopt, Snopt, and KNITRO. TrajectoryOptimization provides an interface that makes these problems amenable to solution with a general-purpose NLP solver. In the NLP formulation, the states and controls at every knot point are concatenated into a single large vector of decision variables, and the cost Hessian and constraint Jacobians are represented as large, sparse matrices.
Important Types
Below is the documentation for the types used to represent a trajectory optimization problem as an NLP:
TrajectoryOptimization.NLPData — Type

Holds all the required data structures for evaluating a trajectory optimization problem as an NLP. It represents the cost gradient, Hessian, constraints, and constraint Jacobians as large, sparse arrays, as applicable.
Constructors
NLPData(G, g, zL, zU, D, d, λ)
NLPData(G, g, zL, zU, D, d, λ, v, r, c)
NLPData(NN, P, [nD]) # suggested constructor

where G and g are the cost function Hessian and gradient, of size (NN,NN) and (NN,); zL and zU are the lower and upper bounds on the NN primal variables; D and d are the constraint Jacobian and violation, of size (P,NN) and (P,); and v, r, c are the values, rows, and columns of the non-zero elements of the constraint Jacobian, all of length nD.
TrajectoryOptimization.NLPConstraintSet — Type

NLPConstraintSet{T}

Constraint set that updates views to the NLP constraint vector and Jacobian.
The views can be reset to new arrays using reset_views!(::NLPConstraintSet, ::NLPData)
TrajectoryOptimization.QuadraticViewCost — Type

QuadraticViewCost{n,m,T}

A quadratic cost that is a view into a large sparse matrix.
TrajectoryOptimization.ViewKnotPoint — Type

ViewKnotPoint{T,n,m}

An AbstractKnotPoint whose data is a view into the vector containing all primal variables in the trajectory optimization problem.
TrajectoryOptimization.TrajData — Type

TrajData{n,m,T}

Describes the partitioning of the vector of primal variables, where xinds[k] and uinds[k] give the states and controls at time step k, respectively. t is the vector of times and dt are the time step lengths for each time step.
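As a sketch of this partitioning (with made-up dimensions, assuming the common layout Z = [x1; u1; x2; u2; ...; xN] with no control at the final knot point — the exact layout stored in TrajData may differ), the index ranges could be computed as:

```julia
# Hypothetical index layout for the primal vector Z = [x1; u1; ...; xN].
# n states, m controls, N knot points are made-up values for illustration.
n, m, N = 4, 2, 11
xinds = [(k - 1) * (n + m) .+ (1:n) for k = 1:N]
uinds = [(k - 1) * (n + m) + n .+ (1:m) for k = 1:N-1]
NN = N * n + (N - 1) * m   # total number of primal variables
```

With these values, xinds[1] == 1:4, uinds[1] == 5:6, and the last state occupies indices 61:64, so the ranges tile the full primal vector of length NN = 64.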
TrajectoryOptimization.NLPTraj — Type

NLPTraj{n,m,T} <: AbstractTrajectory{n,m,T}

A trajectory of states and controls, where the underlying data storage is a large vector.
Supports indexing and iteration, where the elements are StaticKnotPoints.
The TrajOptNLP type
The most important type is the TrajOptNLP, a single struct that supports all the methods required to evaluate the trajectory optimization problem as an NLP.
TrajectoryOptimization.TrajOptNLP — Type

TrajOptNLP{n,m,T}

Represents a trajectory optimization problem as a generic nonlinear program (NLP). Convenient for use with direct methods that manipulate the decision variables across all time steps as a single vector (i.e. a "batch" formulation).
Constructor
TrajOptNLP(prob::Problem; remove_bounds, jac_type)

If remove_bounds = true, any constraints that can be expressed as simple upper and lower bounds on the primal variables (the states and controls) are removed from the ConstraintList and treated separately.
Options for jac_type
- :sparse: Use a SparseMatrixCSC to represent the constraint Jacobian.
- :vector: Use (v,r,c) tuples to represent the constraint Jacobian, where D[r[i],c[i]] = v[i] if D is the constraint Jacobian.
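As an illustration of the :vector representation (with made-up values — TrajOptNLP produces these triplets internally), the (v,r,c) triplets can be assembled back into a sparse matrix using the standard SparseArrays triplet constructor:

```julia
using SparseArrays

# Made-up (v, r, c) triplets for a small P x NN constraint Jacobian.
v = [1.0, -2.0, 3.5]   # nonzero values
r = [1, 1, 2]          # row index of each nonzero
c = [1, 3, 2]          # column index of each nonzero
P, NN = 2, 3           # constraint and primal dimensions (illustrative)
D = sparse(r, c, v, P, NN)   # now D[r[i], c[i]] == v[i] for each i
```

The triplet form is what solvers like Ipopt consume directly, which is why it is offered as an alternative to the assembled SparseMatrixCSC.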
Interface
Use the following methods on a TrajOptNLP nlp. Unless otherwise noted, Z is a single vector of NN decision variables (where NN is the total number of states and controls across all knot points).
TrajectoryOptimization.eval_f — Function

eval_f(nlp::TrajOptNLP, Z)

Evaluate the cost function at Z.
TrajectoryOptimization.grad_f! — Function

grad_f!(nlp::TrajOptNLP, Z, g)

Evaluate the gradient of the cost function for the vector of decision variables Z, storing the result in the vector g.
TrajectoryOptimization.hess_f! — Function

hess_f!(nlp::TrajOptNLP, Z, G)

Evaluate the Hessian of the cost function for the vector of decision variables Z, storing the result in G, a sparse matrix.
TrajectoryOptimization.hess_f_structure — Function

hess_f_structure(nlp::TrajOptNLP)

Returns a sparse matrix D of the same size as the Hessian of the cost function, corresponding to the sparsity pattern of the cost Hessian. Additionally, D[i,j] is either zero or a unique index from 1 to nnz(D).
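The "unique index" pattern can be illustrated with a small made-up sparsity structure (not tied to any particular cost — purely a sketch of what such a structure matrix looks like):

```julia
using SparseArrays

# Made-up 3x3 sparsity pattern: a dense 2x2 block plus a scalar block.
rows = [1, 2, 1, 2, 3]
cols = [1, 1, 2, 2, 3]
D = sparse(rows, cols, ones(Int, 5), 3, 3)
# Give every structural nonzero a unique index from 1 to nnz(D),
# in the order the entries are stored (column-major for CSC).
nonzeros(D) .= 1:nnz(D)
```

Here Matrix(D) is [1 3 0; 2 4 0; 0 0 5]: each stored entry holds its own index, and the zeros are structural. A solver can use these indices to scatter Hessian values into its own storage.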
TrajectoryOptimization.eval_c! — Function

eval_c!(nlp::TrajOptNLP, Z, c)

Evaluate the constraints at Z, storing the result in c.
TrajectoryOptimization.jac_c! — Function

jac_c!(nlp::TrajOptNLP, Z, C)

Evaluate the constraint Jacobian at Z, storing the result in C.
TrajectoryOptimization.jacobian_structure — Function

jacobian_structure(nlp::TrajOptNLP)

Returns a sparse matrix D of the same size as the constraint Jacobian, corresponding to the sparsity pattern of the constraint Jacobian. Additionally, D[i,j] is either zero or a unique index from 1 to nnz(D).
TrajectoryOptimization.hess_L! — Function

hess_L!(nlp::TrajOptNLP, Z, λ, G)

Calculate the Hessian of the Lagrangian G, with the vector of current primal variables Z and dual variables λ.
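For intuition (a hand-worked sketch, not the library's implementation): for a Lagrangian L(Z, λ) = f(Z) + λ'c(Z), the Hessian is ∇²f(Z) plus the λ-weighted sum of the constraint Hessians. With the made-up cost f(Z) = ½Z'Z and single constraint c(Z) = [Z[1]² - 1]:

```julia
# Tiny hand-worked Lagrangian Hessian, independent of TrajOptNLP.
Z = [2.0, 1.0]                 # current primal variables
λ = [3.0]                      # current dual variables
hess_f  = [1.0 0.0; 0.0 1.0]   # ∇²f for f(Z) = 0.5 * Z'Z (identity)
hess_c1 = [2.0 0.0; 0.0 0.0]   # ∇² of c₁(Z) = Z[1]^2 - 1
G = hess_f + λ[1] * hess_c1    # Hessian of the Lagrangian
```

This yields G = [7 0; 0 1]. Note that some direct methods drop the constraint-curvature term entirely (a Gauss-Newton approximation), keeping only the cost Hessian.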
The following methods are useful for getting important information that is typically required by an NLP solver:
TrajectoryOptimization.primal_bounds! — Function

primal_bounds!(zL, zU, con::AbstractConstraint)

Set the lower zL and upper zU bounds on the primal variables imposed by the constraint con. Return whether or not the vectors zL or zU could be modified by con (i.e. if the constraint con is a bound constraint).
primal_bounds!(zL, zU, cons::ConstraintList; remove=true)

Get the lower and upper bounds on the primal variables imposed by the constraints in cons, where zL and zU are vectors of length NN, the total number of primal variables in the problem. Returns the modified lower bound zL and upper bound zU.
If any of the bound constraints are redundant, the strictest bound is returned.
If remove = true, these constraints will be removed from cons.
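The "strictest bound wins" behavior can be sketched with plain max/min updates (made-up values; primal_bounds! handles this merging internally):

```julia
# Merge two overlapping bound constraints on a 4-variable primal vector.
zL = fill(-Inf, 4)   # start with no lower bounds
zU = fill(Inf, 4)    # start with no upper bounds
# First bound constraint: variables 1:2 in [-1, 1].
zL[1:2] .= max.(zL[1:2], -1.0)
zU[1:2] .= min.(zU[1:2],  1.0)
# Second, partially redundant bound on variable 2: [-0.5, 2.0].
zL[2] = max(zL[2], -0.5)   # tighter lower bound wins
zU[2] = min(zU[2],  2.0)   # existing upper bound 1.0 is stricter, so it stays
```

The result is zL = [-1, -0.5, -Inf, -Inf] and zU = [1, 1, Inf, Inf]: for each variable, the tightest bound from any constraint survives.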
primal_bounds!(nlp::TrajOptNLP, zL, zU)

Get the lower and upper bounds on the primal variables.
TrajectoryOptimization.constraint_type — Function

constraint_type(nlp::TrajOptNLP)

Build a vector IE of length num_constraints(nlp), where IE[i] gives the type of constraint i.
Legend:
- 0 -> Inequality
- 1 -> Equality
MathOptInterface
The TrajOptNLP can be used to set up a MathOptInterface.AbstractOptimizer to solve the trajectory optimization problem. For example, if we want to use Ipopt and have already set up our TrajOptNLP, we can solve it using build_MOI!(nlp, optimizer):
```julia
using Ipopt
using MathOptInterface
nlp = TrajOptNLP(...) # assume this is already set up
optimizer = Ipopt.Optimizer()
TrajectoryOptimization.build_MOI!(nlp, optimizer)
MathOptInterface.optimize!(optimizer)
```