Cost Functions and Objectives

This page details the functions related to building and evaluating cost functions and objectives.

Cost Functions

TrajectoryOptimization.DiagonalCost - Type
DiagonalCost{n,m,T}

Cost function of the form

\[\frac{1}{2} x^T Q x + \frac{1}{2} u^T R u + q^T x + r^T u + c\]

where $Q$ and $R$ are positive semi-definite and positive definite diagonal matrices, respectively, and $x$ is n-dimensional and $u$ is m-dimensional.

Constructors

DiagonalCost(Qd, Rd, q, r, c; kwargs...)
DiagonalCost(Q, R, q, r, c; kwargs...)
DiagonalCost(Qd, Rd; [q, r, c, kwargs...])
DiagonalCost(Q, R; [q, r, c, kwargs...])

where Qd and Rd are the diagonal vectors, and Q and R are matrices.

Any optional or omitted values will be set to zero(s). The keyword arguments are

  • terminal - A Bool specifying whether the cost function is a terminal cost.
  • checks - A Bool specifying whether Q and R should be checked for the required definiteness.
source
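As a concrete illustration of the formula above, the diagonal cost can be evaluated with nothing but the standard library. This is a hand-rolled sketch; the function name diagonal_cost is hypothetical and not part of the TrajectoryOptimization API:

```julia
using LinearAlgebra

# Hypothetical helper mirroring the DiagonalCost formula above:
# ½ xᵀQx + ½ uᵀRu + qᵀx + rᵀu + c, with Q and R stored as diagonal vectors.
function diagonal_cost(Qd, Rd, q, r, c, x, u)
    return 0.5 * dot(x, Qd .* x) + 0.5 * dot(u, Rd .* u) +
           dot(q, x) + dot(r, u) + c
end

Qd = [1.0, 2.0]            # diagonal of Q (n = 2), positive semi-definite
Rd = [0.5]                 # diagonal of R (m = 1), positive definite
q, r, c = zeros(2), zeros(1), 0.0
x, u = [1.0, 1.0], [2.0]

J = diagonal_cost(Qd, Rd, q, r, c, x, u)   # 1.5 + 1.0 = 2.5
```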
TrajectoryOptimization.QuadraticCost - Type
QuadraticCost{n,m,T,TQ,TR}

Cost function of the form

\[\frac{1}{2} x^T Q x + \frac{1}{2} u^T R u + u^T H x + q^T x + r^T u + c\]

where $R$ must be positive definite, and $Q$ and $Q_f$ must be positive semidefinite.

The type parameters TQ and TR specify the type of $Q$ and $R$.

Constructor

QuadraticCost(Q, R, H, q, r, c; kwargs...)
QuadraticCost(Q, R; H, q, r, c, kwargs...)

Any optional or omitted values will be set to zero(s). The keyword arguments are

  • terminal - A Bool specifying whether the cost function is a terminal cost.
  • checks - A Bool specifying whether Q and R should be checked for the required definiteness.
source
TrajectoryOptimization.LQRCost - Function
LQRCost(Q, R, xf, [uf; kwargs...])

Convenience constructor for a QuadraticCostFunction of the form:

\[\frac{1}{2} (x-x_f)^T Q (x-x_f) + \frac{1}{2} (u-u_f)^T R (u-u_f)\]

If $Q$ and $R$ are diagonal, the output will be a DiagonalCost, otherwise it will be a QuadraticCost.

source
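Such a constructor works by expanding the tracking form into the standard quadratic form, which determines the linear term and constant. A stdlib-only sketch of that algebra for the state part (assuming uf = 0; variable names are illustrative):

```julia
using LinearAlgebra

Q  = Diagonal([1.0, 2.0])
xf = [1.0, -1.0]

# ½(x - xf)ᵀQ(x - xf) = ½xᵀQx + qᵀx + c  with:
q = -Q * xf
c = 0.5 * dot(xf, Q * xf)

# Check the identity at an arbitrary state:
x = [3.0, 0.5]
tracking = 0.5 * dot(x - xf, Q * (x - xf))
expanded = 0.5 * dot(x, Q * x) + dot(q, x) + c
# tracking ≈ expanded
```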
TrajectoryOptimization.invert! - Function
invert!(Ginv, cost::QuadraticCostFunction)

Invert the hessian of the cost function, storing the result in Ginv. Performs the inversion efficiently, depending on the structure of the Hessian (diagonal or block diagonal).

source
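For a purely diagonal Hessian, the efficient path is elementwise inversion; a minimal sketch of that special case (not the package's implementation):

```julia
using LinearAlgebra

G = Diagonal([2.0, 4.0, 5.0])    # diagonal cost Hessian, e.g. [Q 0; 0 R]
Ginv = Diagonal(inv.(diag(G)))   # elementwise inverse, O(n) instead of O(n³)
```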

Adding Cost Functions

Right now, TrajectoryOptimization supports addition of QuadraticCosts, but extensions to general cost function addition should be straightforward, as long as the cost functions all have the same state and control dimensions.

Adding quadratic cost functions:

using LinearAlgebra, StaticArrays  # for Diagonal and @SVector

n,m = 5,6
Q1 = Diagonal(@SVector [1.0, 1.0, 1.0, 1.0, 0.0])
R1 = Diagonal(@SVector [1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
Q2 = Diagonal(@SVector [1.0, 1.0, 1.0, 1.0, 2.0])
R2 = Diagonal(@SVector [0.0, 1.0, 1.0, 1.0, 1.0, 1.0])
cost1 = QuadraticCost(Q1, R1)
cost2 = QuadraticCost(Q2, R2)
cost3 = cost1 + cost2
# cost3 is equivalent to QuadraticCost(Q1+Q2, R1+R2)
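The additivity can also be checked numerically: evaluating the summed cost at any (x, u) matches the sum of the individual costs. A stdlib-only sketch of that identity (quadcost is a hand-rolled evaluator, not the package API):

```julia
using LinearAlgebra

quadcost(Q, R, x, u) = 0.5 * dot(x, Q * x) + 0.5 * dot(u, R * u)

Q1 = Diagonal([1.0, 2.0]); R1 = Diagonal([1.0])
Q2 = Diagonal([3.0, 1.0]); R2 = Diagonal([2.0])
x, u = [1.0, -1.0], [0.5]

sum_of_costs = quadcost(Q1, R1, x, u) + quadcost(Q2, R2, x, u)
cost_of_sum  = quadcost(Q1 + Q2, R1 + R2, x, u)
# sum_of_costs ≈ cost_of_sum
```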

Objectives

TrajectoryOptimization.Objective - Type
struct Objective{C} <: TrajectoryOptimization.AbstractObjective

Objective: stores stage cost(s) and terminal cost functions

Constructors:

Objective(cost, N)
Objective(cost, cost_term, N)
Objective(costs::Vector{<:CostFunction}, cost_term)
Objective(costs::Vector{<:CostFunction})
source
TrajectoryOptimization.LQRObjective - Function
LQRObjective(Q, R, Qf, xf, N)

Create an objective of the form $(x_N - x_f)^T Q_f (x_N - x_f) + \sum_{k=0}^{N-1} (x_k-x_f)^T Q (x_k-x_f) + u_k^T R u_k$

Where eltype(obj) <: DiagonalCost if Q, R, and Qf are Union{Diagonal{<:Any,<:StaticVector}, <:StaticVector}.

source
TrajectoryOptimization.dgrad - Function
dgrad(E::QuadraticExpansion, dZ::Traj)

Calculate the derivative of the cost in the direction of dZ, where E is the current quadratic expansion of the cost.

source
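For a quadratic expansion, the directional derivative reduces to inner products of the stored gradients with the step, summed over knot points; a sketch with illustrative (hypothetical) variable names:

```julia
using LinearAlgebra

# Stored gradients of the quadratic expansion at each knot point
qs = [[1.0, 0.0], [0.0, 2.0]]   # ∂cost/∂x at knot points 1 and 2
rs = [[0.5], [1.0]]             # ∂cost/∂u at knot points 1 and 2

# Step direction dZ = (dx, du) at each knot point
dxs = [[1.0, 1.0], [2.0, 0.0]]
dus = [[2.0], [0.0]]

# Directional derivative: Σₖ qₖᵀ dxₖ + rₖᵀ duₖ
dJ = sum(dot(qs[k], dxs[k]) + dot(rs[k], dus[k]) for k in eachindex(qs))
```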

Evaluating the Cost

TrajectoryOptimization.cost - Function
cost(obj::Objective, Z::Traj)
cost(obj::Objective, dyn_con::DynamicsConstraint{Q}, Z::Traj)

Evaluate the cost for a trajectory. If a dynamics constraint is given, use the appropriate integration rule, if defined.

source
cost(::Problem)

Compute the cost for the current trajectory

source
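Conceptually, the trajectory cost is the sum of the stage costs at knot points 1 to N-1 plus the terminal cost at knot point N; a hand-rolled scalar sketch of that loop (the cost functions and values here are hypothetical):

```julia
stage(x, u) = x^2 + u^2        # example stage cost ℓ(x, u)
terminal(x) = 10x^2            # example terminal cost ℓ_N(x)

xs = [1.0, 0.5, 0.25]          # states at knot points 1..N (N = 3)
us = [0.2, 0.1]                # controls at knot points 1..N-1

J = sum(stage(xs[k], us[k]) for k in 1:length(us)) + terminal(xs[end])
```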
TrajectoryOptimization.stage_cost - Function
stage_cost(costfun::CostFunction, x, u)
stage_cost(costfun::CostFunction, x)

Calculate the scalar cost using costfun, given state x and control u. If only the state is provided, it is assumed to be a terminal cost.

source
stage_cost(cost::CostFunction, z::AbstractKnotPoint)

Evaluate the cost at a knot point, automatically handling the terminal knot point and multiplying by dt as necessary.

source
TrajectoryOptimization.gradient! - Function
gradient!(E::QuadraticCostFunction, costfun::CostFunction, z::AbstractKnotPoint, [cache])

Evaluate the gradient of the cost function costfun at state x and control u, storing the result in E.q and E.r. Returns true if the gradient is constant, and false otherwise.

If is_terminal(z) is true, it will only calculate the gradient with respect to the terminal state.

The optional cache argument provides a way to pass in extra memory to facilitate computation of the cost expansion. It is a vector of length 4 with the following entries: [grad, hess, grad_term, hess_term], where grad and hess are the caches for gradients and Hessians respectively, and the []_term entries are the caches for the terminal cost function.

source
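For the quadratic cost forms documented above, the gradient has the closed form ∂J/∂x = Qx + Hᵀu + q and ∂J/∂u = Ru + Hx + r. A stdlib sketch of that formula (a hand-rolled computation, not the in-place package API):

```julia
using LinearAlgebra

Q = Diagonal([1.0, 2.0]); R = Diagonal([0.5])
H = [0.0 0.0]                  # m × n cross term (zero here)
q = [1.0, 0.0]; r = [0.0]

x = [2.0, 1.0]; u = [4.0]

grad_x = Q * x + H' * u + q    # ∂J/∂x, what gradient! stores in E.q
grad_u = R * u + H * x + r     # ∂J/∂u, what gradient! stores in E.r
```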
TrajectoryOptimization.hessian! - Function
hessian!(E, costfun::CostFunction, z::AbstractKnotPoint, [cache])

Evaluate the Hessian of the cost function costfun at knot point z, storing the result in E.Q, E.R, and E.H. Returns true if the Hessian is constant, and false otherwise.

If is_terminal(z) is true, it will only calculate the Hessian with respect to the terminal state.

The optional cache argument provides a way to pass in extra memory to facilitate computation of the cost expansion. It is a vector of length 4 with the following entries: [grad, hess, grad_term, hess_term], where grad and hess are the caches for gradients and Hessians respectively, and the []_term entries are the caches for the terminal cost function.

source
TrajectoryOptimization.cost_gradient! - Function
cost_gradient!(E::Objective, obj::Objective, Z, init)

Evaluate the cost gradient along the entire trajectory Z, storing the result in E.

If init == true, all gradients will be evaluated, even if they are constant.

source
TrajectoryOptimization.cost_hessian! - Function
cost_hessian!(E::Objective, obj::Objective, Z, init)

Evaluate the cost Hessian along the entire trajectory Z, storing the result in E.

If init == true, all Hessians will be evaluated, even if they are constant. If false, only the non-constant Hessians will be evaluated.

source
TrajectoryOptimization.cost_expansion! - Function
cost_expansion!(E::Objective, obj::Objective, Z, [init, rezero])

Evaluate the 2nd order Taylor expansion of the objective obj along the trajectory Z, storing the result in E.

If init == false, the expansions will only be evaluated if they are not constant.

If rezero == true, all expansions will be zeroed out before the expansion is computed.

source