JOURNAL OF GUIDANCE, CONTROL, AND DYNAMICS Vol. 21, No. 2, March–April 1998
Survey of Numerical Methods for Trajectory Optimization
John T. Betts
Boeing Information and Support Services, Seattle, Washington 98124-2207

I. Introduction
It is not surprising that the development of numerical methods for trajectory optimization has closely paralleled the exploration of space and the development of the digital computer. Space exploration provided the impetus by presenting scientists and engineers with challenging technical problems. The digital computer provided the tool for solving these new problems. The goal of this paper is to review the state of the art in the field loosely referred to as trajectory optimization. Presenting a survey of a field as diverse as trajectory optimization is a daunting task. Perhaps the most difficult issue is restricting the scope of the survey to permit a meaningful discussion within a limited amount of space. To achieve this goal, I made a conscious decision to focus on the two types of methods most widely used today, namely, direct and indirect. I begin the discussion with a brief review of the underlying mathematics in both direct and indirect methods. I then discuss the complications that occur when path and boundary constraints are imposed on the problem description. Finally, I describe unresolved issues that are the subject of ongoing research. A number of recurrent themes appear throughout the paper. First, the aforementioned direct vs indirect distinction is introduced as a means of categorizing an approach. Unfortunately, not every technique falls neatly into one category or another. I will attempt to describe the benefits and deficiencies of both approaches and then suggest that the techniques may ultimately merge. Second, I shall attempt to discriminate between method vs implementation. A numerical method is usually described by mathematical equations and/or algorithmic logic. Computational results are achieved by implementing the algorithm as software, e.g., Fortran code.
A second level of implementation may involve translating a (preflight) scientific software implementation into an (onboard) hardware implementation. In general, method and implementation are not the same, and I shall try to emphasize that fact. Third, I shall focus the discussion on algorithms instead of physical models. The definition of a trajectory problem necessarily entails a definition of the dynamic environment, such as gravitational, propulsion, and aerodynamic forces. Thus it is common to use the same algorithm, with different physical models, to solve different problems. Conversely, different algorithms may be applied to the same physical models (with entirely different results). Finally, I shall attempt to focus on general rather than special-purpose methods. A great deal of research has been directed toward solving specific problems. Carefully specialized techniques can either be very effective or very ad hoc. Unfortunately, what works well for a launch vehicle guidance problem may be totally inappropriate for a low-thrust orbit transfer.
II. Trajectory Optimization Problem

Let us begin the discussion by defining the problem in a fairly general way. A trajectory optimization or optimal control problem can be formulated as a collection of N phases. In general, the independent variable t for phase k is defined in the region t₀^(k) ≤ t ≤ t_f^(k). For many applications the independent variable t is time, and the phases are sequential, that is, t₀^(k+1) = t_f^(k); however, neither of those assumptions is required. Within phase k the dynamics of the system are described by a set of dynamic variables

z = [ y^(k)(t); u^(k)(t) ]    (1)
made up of the n_y^(k) state variables and the n_u^(k) control variables, respectively. In addition, the dynamics may incorporate the n_p^(k) parameters p^(k), which are not dependent on t. For clarity I drop the phase-dependent notation from the remaining discussion in this section. However, it is important to remember that many complex problem descriptions require different dynamics and/or constraints, and a phase-dependent formulation accommodates this requirement. Typically the dynamics of the system are defined by a set of ordinary differential equations written in explicit form, which are referred to as the state or system equations,

ẏ = f[y(t), u(t), p, t]    (2)
where y is the n_y-dimensional state vector. Initial conditions at time t₀ are defined by

ψ_0l ≤ ψ[y(t₀), u(t₀), p, t₀] ≤ ψ_0u    (3)
where ψ[y(t₀), u(t₀), p, t₀] ≡ ψ₀, and terminal conditions at the final time t_f are defined by

ψ_fl ≤ ψ[y(t_f), u(t_f), p, t_f] ≤ ψ_fu    (4)
where ψ[y(t_f), u(t_f), p, t_f] ≡ ψ_f. In addition, the solution must satisfy algebraic path constraints of the form

g_l ≤ g[y(t), u(t), p, t] ≤ g_u    (5)
where g is a vector of size n_g, as well as simple bounds on the state variables

y_l ≤ y(t) ≤ y_u    (6)
John T. Betts received a B.A. degree from Grinnell College, Grinnell, Iowa, in 1965 with a major in physics and a minor in mathematics. In 1967 he received an M.S. in astronautics from Purdue University, West Lafayette, Indiana, with a major in orbit mechanics, and in 1970 he received a Ph.D. from Purdue, specializing in optimal control theory. He joined The Aerospace Corporation in 1970 as a Member of the Technical Staff and from 1977–1987 was manager of the Optimization Techniques Section of the Performance Analysis Department. He joined The Boeing Company, serving as manager of the Operations Research Group of Boeing Computer Services from 1987–1989. In his current position as Senior Principal Scientist in the Applied Research and Technology Division, he provides technical support to all areas of The Boeing Company. Dr. Betts is a Member of AIAA and the Society for Industrial and Applied Mathematics with active research in nonlinear programming and optimal control theory. E-mail: John.T.Betts@boeing.com.
Received March 29, 1997; revision received Oct. 7, 1997; accepted for publication Oct. 20, 1997. Copyright © 1997 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
and control variables

u_l ≤ u(t) ≤ u_u    (7)
Newton's method addresses a system of n nonlinear algebraic equations a(x) = 0 with root x*. Beginning with an estimate x, we can construct a new estimate x̄ according to

x̄ = x + αp    (11)

where the search direction p is computed by solving the linear system

A(x)p = −a(x)    (12)

The n × n Jacobian matrix A is defined by

A = [∂a_i/∂x_j],  i, j = 1, …, n    (13)
Note that an equality constraint can be imposed if the upper and lower bounds are equal, e.g., (g_l)_k = (g_u)_k for some k. Finally, it may be convenient to evaluate expressions of the form

∫_{t₀}^{t_f} q[y(t), u(t), p, t] dt    (8)
which involve the quadrature functions q. Collectively we refer to those functions evaluated during the phase, namely,

F(t) = [ f[y(t), u(t), p, t]; g[y(t), u(t), p, t]; q[y(t), u(t), p, t] ]    (9)
as the vector of continuous functions. Similarly, functions evaluated at a specific point, such as the boundary conditions ψ[y(t₀), u(t₀), t₀] and ψ[y(t_f), u(t_f), t_f], are referred to as point functions. The basic optimal control problem is to determine the n_u^(k)-dimensional control vectors u^(k)(t) and parameters p^(k) to minimize the performance index
J = φ[y(t₀^(1)), t₀^(1), y(t_f^(1)), p^(1), t_f^(1); …; y(t₀^(N)), t₀^(N), y(t_f^(N)), p^(N), t_f^(N)]    (10)
Notice that the objective function may depend on quantities computed in each of the N phases. This formulation raises a number of points that deserve further explanation. The concept of a phase, also referred to as an arc by some authors, partitions the time domain. In this formalism the differential equations cannot change within a phase but may change from one phase to another. An obvious reason to introduce a phase is to accommodate changes in the dynamics, for example, when simulating a multistage rocket. The boundary of a phase is often called an event or junction point. A boundary condition that uniquely defines the end of a phase is sometimes called an event criterion (and a well-posed problem can have only one criterion at each event). Normally, the simulation of a complicated trajectory may link phases together by forcing the states to be continuous, e.g., y[t_f^(1)] = y[t₀^(2)]. However, for multipath or branch trajectories this may not be the case.1 The differential equations (2) have been written as an explicit system of first-order equations, i.e., with the first derivative appearing explicitly on the left-hand side, which is the standard convention for aerospace applications. Although this simplifies the presentation, it may be neither necessary nor desirable; e.g., Newton's law F = ma is not stated as an explicit first-order system! The objective function (10) has been written in terms of quantities evaluated at the ends of the phases, and this is referred to as the Mayer form.2 If the objective function involves only an integral (8), it is referred to as a problem of Lagrange, and when both terms are present, it is called a problem of Bolza. It is well known that the Mayer form can be obtained from either the Lagrange or Bolza form by introducing an additional state variable. However, again this may have undesirable numerical consequences.
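The Lagrange-to-Mayer conversion just described can be sketched in a few lines: append one extra state whose derivative is the integrand, so the integral becomes a terminal value. The helper name `mayer_augment` and the Python setting are illustrative, not part of the original formulation.

```python
import numpy as np

def mayer_augment(f, q):
    """Append one state whose derivative is the integrand q, so that the
    Lagrange term  integral(q dt)  becomes the Mayer term  y_extra(t_f)."""
    def f_aug(z, u, t):
        y = z[:-1]                       # original states
        return np.append(f(y, u, t),     # original dynamics
                         q(y, u, t))     # d/dt of the running integral
    return f_aug

# toy check: ydot = -y with integrand q = u^2
f_aug = mayer_augment(lambda y, u, t: -y, lambda y, u, t: u**2)
```

With y_extra(t₀) = 0, evaluating y_extra at t_f recovers the integral term; this is exactly the extra-state device the text warns may have numerical side effects.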
When the scalar step length α is equal to 1, the iteration scheme is equivalent to replacing the nonlinear equation by a linear approximation constructed about the point x. We expect the method to converge provided the initial guess is close to the root x*. Of course, this simple scheme is not without pitfalls. First, to compute the search direction, the matrix A must be nonsingular (invertible), and for arbitrary nonlinear functions a(x) this may not be true. Second, when the initial guess is not close to the root, the iteration may diverge. One common way to stabilize the iteration is to reduce the length of the step by choosing α such that

‖a(x̄)‖ ≤ ‖a(x)‖    (14)
The procedure for adjusting the step length is called a line search, and the function used to measure progress (in this case ‖a‖) is called a merit function. In spite of the need for caution, Newton's method enjoys broad applicability, possibly because, when it works, the iterates exhibit quadratic convergence. Loosely speaking, this property means that the number of significant figures in x (as an estimate for x*) doubles from one iteration to the next.
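A minimal sketch of this damped Newton iteration, with a step-halving line search on the merit function ‖a‖; the function names and the two-equation test problem are illustrative:

```python
import numpy as np

def newton(a, A, x, tol=1e-10, max_iter=50):
    """Damped Newton iteration for a(x) = 0, Eqs. (11)-(14): full steps
    plus a step-halving line search on the merit function ||a(x)||."""
    for _ in range(max_iter):
        r = a(x)
        if np.linalg.norm(r) < tol:
            break
        p = np.linalg.solve(A(x), -r)        # search direction, Eq. (12)
        alpha = 1.0
        while np.linalg.norm(a(x + alpha*p)) > np.linalg.norm(r):
            alpha /= 2.0                     # shorten the step, Eq. (14)
            if alpha < 1e-8:
                break
        x = x + alpha*p                      # new estimate, Eq. (11)
    return x

# illustrative problem: circle x1^2 + x2^2 = 4 intersected with the line x1 = x2
a = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
A = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])
root = newton(a, A, np.array([1.0, 0.5]))
```

Near the root the full step α = 1 is always accepted and the quadratic convergence described above takes over.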
Let us now consider an unconstrained optimization problem. Suppose that we must choose the n variables x to minimize the scalar objective function F(x). Necessary conditions for x* to be a stationary point are

g(x*) = ∇_x F(x*) = (∂F/∂x₁, …, ∂F/∂x_n)ᵀ = 0    (15)

Now if Newton's method is used to find a point where the gradient (15) is zero, we must compute the search direction using

H(x)p = −g(x)    (16)
III. Nonlinear Programming
Essentially all numerical methods for solving the trajectory optimization problem incorporate some type of iteration with a finite set of unknowns. In fact, progress in optimal control solution methods closely parallels the progress made in the underlying nonlinear programming (NLP) methods. Space limitations preclude an in-depth presentation of constrained optimization methods. However, it is important to review some of the fundamental concepts. For more complete information the reader is encouraged to refer to the books by Fletcher,3 Gill et al.,4 and Dennis and Schnabel.5 The fundamental approach to most iterative schemes was suggested over 300 years ago by Newton. Suppose we are trying to solve a system of n nonlinear algebraic equations a(x) = 0 for the root x*.
where the Hessian matrix H is the symmetric matrix of second derivatives of the objective function. Just as before, there are pitfalls in using this method to construct an estimate of the solution. First, we note that the condition g = 0 is necessary but not sufficient. Thus a point with zero gradient can be either a maximum or a minimum. At a minimum point the Hessian matrix is positive definite, but this may not be true when H is evaluated at some point far from the solution. In fact, it is quite possible that the direction p computed by solving Eq. (16) will point uphill rather than downhill. Second, there is some ambiguity in the choice of a merit function if a line search is used to stabilize the method. Certainly we would hope to reduce the objective function, that is, F(x̄) ≤ F(x). However, this may not produce a decrease in the gradient

‖g(x̄)‖ ≤ ‖g(x)‖    (17)

In fact, what have been described are two issues that distinguish a direct and an indirect method for finding a minimum. For an indirect method, a logical choice for the merit function is ‖g(x)‖. In contrast, for a direct method, one probably would insist that the objective function is reduced at each iteration, and to achieve this it may be necessary to modify the calculation of the search direction to ensure that it is downhill. A consequence of this is that the region of convergence for an indirect method may be considerably smaller than the region of convergence for a direct method. Stated differently, an indirect method may require a better initial guess than required by a direct method. Second, to solve the equations g = 0, it is necessary to compute the expressions g(x)! Typically this implies that analytic expressions for the gradient are necessary when using an indirect method. In contrast, finite difference approximations to the gradient are often used in a direct method.

C. Equality Constraints

Suppose that we must choose the n variables x to minimize the scalar objective function F(x) and satisfy the m equality constraints

c(x) = 0    (18)

where m ≤ n. We introduce the Lagrangian

L(x, λ) = F(x) − λᵀc(x)    (19)

which is a scalar function of the n variables x and the m Lagrange multipliers λ. Necessary conditions for the point (x*, λ*) to be a constrained optimum require finding a stationary point of the Lagrangian that is defined by

∇_x L(x, λ) = g(x) − Gᵀ(x)λ = 0    (20)

and

∇_λ L(x, λ) = −c(x) = 0    (21)

where G = ∂c/∂x is the m × n Jacobian matrix of the constraints. By analogy with the development in the preceding sections, we can use Newton's method to find the (n + m) variables (x, λ) that satisfy the conditions (20) and (21). Proceeding formally to construct the linear system equivalent to Eq. (12), one obtains

[ H_L  Gᵀ ] [  p  ]     [ g ]
[ G    0  ] [ −λ̄ ] = − [ c ]    (22)

This system requires the Hessian of the Lagrangian

H_L = ∇²_x F − Σ_{i=1}^{m} λ_i ∇²_x c_i    (23)

The linear system (22) is referred to as the Kuhn–Tucker (KT) or Karush–Kuhn–Tucker system. It is important to observe that an equivalent way of defining the search direction p is to minimize the quadratic

½ pᵀH_L p + gᵀp    (24)

subject to the linear constraints

Gp = −c    (25)

This is referred to as a quadratic programming (QP) subproblem. Just as in the unconstrained and root-solving applications discussed earlier, when Newton's method is applied to equality-constrained problems, it may be necessary to modify either the magnitude of the step via a line search or the direction itself using a trust-region approach. However, regardless of the type of stabilization invoked at points far from the solution, near the answer all methods try to mimic the behavior of Newton's method.

D. Inequality Constraints

An important generalization of the preceding problem occurs when inequality constraints are imposed. Suppose that we must choose the n variables x to minimize the scalar objective function F(x) and satisfy the m inequality constraints

c(x) ≥ 0    (26)

In contrast to the equality constrained case, now m may be greater than n. However, at the optimal point x*, the constraints will fall into one of two classes. Constraints that are strictly satisfied, i.e., c_i(x*) > 0, are called inactive. The remaining active constraints are on their bounds, i.e., c_i(x*) = 0. If the active set of constraints is known, then one can simply ignore the remaining constraints and treat the problem using methods for an equality constrained problem. However, algorithms to efficiently determine the active set of constraints are nontrivial because they require repeated solution of the KT system (22) as constraints are added and deleted. In spite of the complications, methods for nonlinear programming based on the solution of a series of quadratic programming subproblems are widely used. A popular implementation of the so-called successive or sequential quadratic programming (SQP) approach is found in the software NPSOL.6,7

In summary, the NLP problem requires finding the n vector x to minimize F(x) subject to the m constraints

c_L ≤ c(x) ≤ c_U    (27)

and bounds

x_L ≤ x ≤ x_U    (28)

In this formulation, equality constraints can be imposed by setting c_L = c_U.
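A KT system of the form (22) for one equality-constrained QP subproblem can be solved directly with a dense linear solver. The two-variable numbers below are illustrative, not taken from the text:

```python
import numpy as np

# QP subproblem: minimize (1/2) p'Hp + g'p  subject to  Gp = -c,
# solved via the Karush-Kuhn-Tucker system (illustrative data)
H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian
g = np.array([1.0, 1.0])                 # objective gradient
G = np.array([[1.0, 1.0]])               # constraint Jacobian (m = 1, n = 2)
c = np.array([1.0])                      # constraint residual

n, m = G.shape[1], G.shape[0]
KT = np.block([[H, G.T], [G, np.zeros((m, m))]])   # KT matrix
sol = np.linalg.solve(KT, np.concatenate([-g, -c]))
p, lam = sol[:n], -sol[n:]               # search direction and multiplier
```

An SQP method repeats exactly this solve at every iteration, with H, g, G, and c re-evaluated at the current point; active-set logic decides which inequalities enter the Jacobian.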
Progress in the development of NLP algorithms has been closely tied to the advent of the digital computer. Because NLP software is a primary piece of the trajectory optimization tool kit, it has been a pacing item in the development of sophisticated trajectory optimization software. In the early 1960s most implementations were based on a simple Newton method (11) and (12), with optimization done parametrically, i.e., by hand. The size of a typical application was n = m ≈ 10. In the 1970s quasi-Newton approximations3,5 became prevalent. One popular approach for dealing with constraints was to apply an unconstrained minimization algorithm to a modified form of the objective, e.g., minimize J(x, ρ) = F(x) + ½ρ c(x)ᵀc(x), where ρ is large. Although those techniques have generally been superseded for general optimization, curiously enough they are fundamental to the definition of the merit functions used to stabilize state-of-the-art algorithms. A second popular approach for constrained problems, referred to as the reduced gradient approach, identifies a basic set of variables that are used to eliminate the active constraints, permitting choice of the nonbasic variables using an unconstrained technique. Careful implementations of this method8–10 can be quite effective, especially when the constraints are nearly linear and the number of inequalities is small. Most applications in the 1970s and early 1980s were of moderate size, i.e., n = m < 100. Current applications have incorporated advances in numerical linear algebra that exploit matrix sparsity, thereby permitting applications with n, m ≈ 100,000 (Refs. 11–20).
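The quadratic-penalty idea mentioned above can be illustrated on a toy problem: minimize x₁² + x₂² subject to x₁ + x₂ = 1, whose solution is (½, ½). Because both F and c are quadratic/linear here, each penalized minimizer is found by one linear solve (all names illustrative):

```python
import numpy as np

# quadratic penalty: minimize J(x, rho) = F(x) + (rho/2) c(x)'c(x)
# toy problem: F = x1^2 + x2^2,  c = x1 + x2 - 1
e = np.ones(2)
for rho in [1.0, 10.0, 100.0, 1e4]:
    # gradient of J vanishes when (2I + rho e e') x = rho e
    x = np.linalg.solve(2*np.eye(2) + rho*np.outer(e, e), rho*e)
```

The iterates approach (½, ½) only as ρ → ∞, and the linear systems become increasingly ill conditioned; this is why pure penalty methods were superseded, even though the penalty term survives inside modern merit functions.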
IV. Optimal Control
A. Dynamic Constraints
The optimal control problem may be interpreted as an extension of the nonlinear programming problem to an infinite number of variables. For fundamental background in the associated calculus of variations the reader should refer to Ref. 21. First let us consider a simple problem with a single phase and no path constraints. Specifically, suppose we must choose the control functions u(t) to minimize

J = φ[y(t_f), t_f]    (30)

subject to the state equations

ẏ = f[y(t), u(t)]    (31)

and the boundary conditions

ψ[y(t_f), u(t_f), t_f] = 0    (32)

where the initial conditions y(t₀) = y₀ are given at the fixed initial time t₀, and the final time t_f is free. Note that this is a very simplified version of the problem (2–10), and we have purposely chosen a problem with only equality constraints. However, in contrast to the previous discussion, we now have a continuous equality constraint (31) as well as a discrete equality (32). In a manner analogous to the definition of the Lagrangian function (19), we form an augmented performance index

Ĵ = [φ + νᵀψ]_{t_f} + ∫_{t₀}^{t_f} λᵀ(t){f[y(t), u(t)] − ẏ} dt    (33)

Notice that, in addition to the Lagrange multipliers ν for the discrete constraints, we have multipliers λ(t) referred to as adjoint or costate variables for the continuous (differential equation) constraints. In the finite dimensional case, the necessary conditions for a constrained optimum (20) and (21) were obtained by setting the first derivatives of the Lagrangian to zero. The analogous operation is to set the first variation δĴ = 0. It is convenient to define the Hamiltonian

H = λᵀ(t) f[y(t), u(t)]    (34)

and the auxiliary function

Φ = φ + νᵀψ    (35)

The necessary conditions, referred to as the Euler–Lagrange equations, that result from setting the first variation to zero, in addition to Eqs. (31) and (32), are

λ̇ = −H_yᵀ    (36)

called the adjoint equations,

0 = H_uᵀ    (37)

called the control equations, and

λ(t_f) = Φ_yᵀ|_{t=t_f}    (38)

0 = (Φ_t + H)|_{t=t_f}    (39)

0 = λ(t₀)    (40)

called the transversality conditions. The partial derivatives H_y, H_u, and Φ_y are considered row vectors, i.e., H_y ≡ (∂H/∂y₁, …, ∂H/∂y_n), in these expressions. The control equations (37) are an application of the Pontryagin maximum principle.22 A more general expression is

u = arg min_{u∈U} H    (41)

where U defines the domain of feasible controls. Note that Eq. (41) is really a minimum principle to be consistent with the algebraic sign conventions used elsewhere. The maximum principle states that the control variable must be chosen to optimize the Hamiltonian (at every instant in time), subject to limitations on the control imposed by state and control path constraints. In essence, the maximum principle is a constrained optimization problem in the variables u(t) at all values of t. The complete set of necessary conditions consists of a differential-algebraic (DAE) system (31), (36), and (37) with boundary conditions at both t₀ and t_f in Eqs. (38), (39), and (32). This is often referred to as a two-point boundary value problem. A more extensive presentation of this material can be found in Ref. 23.

B. Algebraic Equality Constraints

Generalizing the problem in the preceding section, let us assume that we impose algebraic path constraints of the form

0 = g[y(t), u(t), t]    (42)

in addition to the other conditions (31) and (32). Using notation similar to the preceding section, let us define the matrix

g_u = [∂g_i/∂u_j],  i = 1, …, n_g,  j = 1, …, n_u    (43)

Two possibilities exist. If the matrix g_u is full rank, then the system of differential and algebraic equations (31) and (42) is referred to as a DAE of index 1, and Eq. (42) is termed a control variable equality constraint. For this case the Hamiltonian (34) is replaced by

H = λᵀf + μᵀg    (44)

which will result in modification to both the adjoint equations (36) and the control equations (37). The second possibility is that the matrix g_u is rank deficient. In this case we can differentiate Eq. (42) with respect to t, yielding

0 = g_y ẏ + g_u u̇ + g_t    (45)

  = g_y f[y, u] + g_u u̇ + g_t    (46)

  ≡ g′[y(t), u(t), t]    (47)

where the second step follows by substituting Eq. (31). The result is a new path constraint function g′ that is mathematically equivalent provided that the original constraint is imposed at some point on the path, e.g., 0 = g[y(t₀), u(t₀), t₀]. For this new path function, again, the matrix g′_u may be full rank or rank deficient. If the matrix is full rank, the original DAE system is said to have index 2, and this is referred to as a state variable constraint of order 1. In the rank deficient case we may redefine the Hamiltonian using g′ in place of g. Of course, if the matrix g′_u is rank deficient, the process must be repeated. This is referred to as index reduction in the DAE literature.24,25 Note that index reduction may be difficult to perform, and imposition of a high-index path constraint may be prone to numerical error.

C. Singular Arcs

In the preceding section we addressed the DAE system

ẏ = f[y, u, t]    (48)

0 = g[y, u, t]    (49)

which can appear when path constraints are imposed on the optimal control problem. However, even in the absence of path constraints the necessary conditions (31), (36), and (37) lead to the DAE system

ẏ = f[y, u, t]    (50)

λ̇ = −H_yᵀ    (51)

0 = H_uᵀ    (52)

Viewed as a system of DAEs, one expects the optimality condition 0 = H_uᵀ to define the control variable provided the matrix H_uu is nonsingular. On the other hand, if H_uu is a singular matrix, the control u is not uniquely defined by the optimality condition. This situation is referred to as a singular arc, and the analysis of this problem involves techniques quite similar to those discussed earlier for path constraints. Furthermore, singular arc problems are not just mathematical curiosities, because H_uu is singular whenever f[y, u, t] is a linear function of u. The famous sounding rocket problem proposed by Goddard in 1919 (Ref. 26) contains a singular arc. Recent interest in periodic optimal flight27,28 and the analysis of wind shear during landing29 all involve formulations with singular arcs.
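The index-reduction procedure described above, differentiating a path constraint along the dynamics until the control appears with full rank, can be mechanized symbolically. The double-integrator example below is illustrative (here two differentiations are needed, so the constraint is of order 2):

```python
import sympy as sp

t = sp.symbols('t')
y1, y2, u = (sp.Function(n)(t) for n in ('y1', 'y2', 'u'))

# dynamics ydot = f(y, u):  y1' = y2,  y2' = u  (double integrator)
f = {y1.diff(t): y2, y2.diff(t): u}

# state-variable path constraint g = y1 - sin(t); g_u = 0, so rank deficient
g = y1 - sp.sin(t)

g1 = g.diff(t).subs(f)    # g'  = y2 - cos(t): still no u, differentiate again
g2 = g1.diff(t).subs(f)   # g'' = u + sin(t):  now dg''/du is nonzero, stop
```

Each substitution of the dynamics plays the role of Eq. (46); as the text cautions, carrying this out for realistic aerospace models is far harder than for this toy case.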
D. Algebraic Inequality Constraints
The necessary conditions for this problem follow directly from the definitions (20) and (21):

∂L/∂λ_k = y_k − y_{k−1} − h f(y_{k−1}, u_{k−1}) = 0    (61)

∂L/∂y_k = (λ_{k+1} − λ_k)ᵀ + h λ_{k+1}ᵀ (∂f/∂y_k) = 0    (62)

∂L/∂u_k = h λ_{k+1}ᵀ (∂f/∂u_k) = 0    (63)

∂L/∂y_M = −λ_Mᵀ + ∂φ/∂y_M = 0    (64)
The preceding sections have addressed the treatment of equality path constraints. Let us now consider inequality path constraints of the form

0 ≤ g[y(t), u(t), t]    (53)
Unlike an equality constraint that must be satisfied for all t₀ ≤ t ≤ t_f, inequality constraints may be either active (0 = g) or inactive (0 < g) at each instant in time. In essence the time domain is partitioned into constrained and unconstrained subarcs. During the unconstrained arcs the necessary conditions are given by Eqs. (31), (36), and (37), whereas the conditions with the modified Hamiltonian (44) are applicable in the constrained arcs. Thus the imposition of inequality constraints presents three major complications. First, the number of constrained subarcs present in the optimal solution is not known a priori. Second, the location of the junction points, when the transition from constrained to unconstrained (and vice versa) occurs, is unknown. Finally, at the junction points it is possible that both the control variables u and the adjoint variables λ are discontinuous. Additional jump conditions, which are essentially boundary conditions imposed at the junction points, must be satisfied. Thus what was a two-point boundary value problem may become a multipoint boundary value problem when inequalities are imposed. For a more complete discussion of this subject, the reader is referred to the tutorial by Pesch30 and the textbook by Bryson and Ho.23
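To make the boundary value problem concrete, consider an unconstrained toy problem: minimize ½∫₀¹ u² dt with ẏ = u, y(0) = 0, y(1) = 1. The Euler–Lagrange conditions give λ̇ = 0 and u = −λ, leaving a single unknown initial adjoint; the sketch below (all names and the problem itself illustrative) finds it by shooting with a secant iteration on the terminal defect:

```python
# indirect shooting on a toy problem: minimize (1/2) integral(u^2 dt)
# subject to ydot = u, y(0) = 0, y(1) = 1.  Euler-Lagrange gives
# lamdot = 0 and u = -lam, so the only unknown is the initial adjoint.
def shoot(lam0, n=100):
    h, y, lam = 1.0/n, 0.0, lam0
    for _ in range(n):
        u = -lam                 # control equation 0 = H_u
        y += h*u                 # state equation, forward Euler step
        # adjoint equation lamdot = 0: lam stays constant
    return y - 1.0               # boundary-condition defect at t_f

# secant iteration on the defect to locate lam0
a, b = 0.0, 1.0
for _ in range(30):
    fa, fb = shoot(a), shoot(b)
    if abs(fb) < 1e-12:
        break
    a, b = b, b - fb*(b - a)/(fb - fa)
```

The converged adjoint is λ₀ = −1, i.e., u(t) = 1. With constrained subarcs, one would instead shoot over several segments and add the junction conditions to the defect vector, which is the multipoint structure described above.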
E. Nonlinear Programming vs Optimal Control
Now let us consider the limiting form of this problem as M → ∞ and h → 0. Clearly, in the limit Eq. (61) becomes the state equation (31), Eq. (62) becomes the adjoint equation (36), Eq. (63) becomes the control equation (37), and Eq. (64) becomes the transversality condition (38). Essentially, what has been demonstrated is that the NLP necessary conditions, i.e., Kuhn–Tucker, approach the optimal control necessary conditions as the number of variables grows. The NLP Lagrange multipliers can be interpreted as discrete approximations to the optimal control adjoint variables. Although this discussion is of theoretical importance, it also suggests a number of ideas that are the basis of modern numerical methods. In particular, if the analysis is extended to inequality constrained problems, it is apparent that the task of identifying the NLP active set is equivalent to defining constrained subarcs and junction points in the optimal control setting. Early results on this transcription process can be found in Refs. 31–33, and more recently interest has focused on using alternate methods of discretization.34–37
To conclude the discussion, let us reemphasize the relationship between optimal control and nonlinear programming problems with a simple example. Suppose we must choose the control functions u(t) to minimize

J = φ[y(t_f), t_f]    (54)

subject to the state equations

ẏ = f[y(t), u(t)]    (55)
The numerical solution of the initial value problem (IVP) for ordinary differential equations (ODEs) is fundamental to most trajectory optimization methods. The problem can be stated as follows: Compute the value of y(t_f) for some value of t₀ < t_f that satisfies

ẏ = f[y(t), t]    (65)
V. Initial Value Problems
where y(t₀) = y₀ are given at the fixed initial and final times t₀ and t_f. Let us define the NLP variables

x = (u₀, y₁, u₁, y₂, u₂, …, y_M, u_M)    (56)
as the values of the state and control evaluated at t₀, t₁, …, t_M, where t_k = t_{k−1} + h with h = t_f/M. Now

ẏ ≈ (y_k − y_{k−1})/h    (57)
with the known initial value y(t₀) = y₀. Notice that, unlike the state equations (2), the right-hand sides of these equations do not explicitly involve either the controls u(t) or the parameters p. This distinction is extremely important in the context of trajectory optimization because it requires that the control be completely determined by specifying the state; i.e., it implies that we can write u(t) = ũ[y(t), p, t]. Numerical methods for solving the ODE IVP are relatively mature in comparison to the other fields in trajectory optimization. Most schemes can be classified as one-step or multistep methods. A popular family of one-step methods is the Runge–Kutta schemes,
y_{i+1} = y_i + h_i Σ_{j=1}^{k} β_j f_{ij}    (66)

where
Let us substitute this approximation into Eq. (55), thereby defining the NLP constraints

c_k(x) = y_k − y_{k−1} − h f(y_{k−1}, u_{k−1}) = 0    (58)

for k = 1, …, M, and the NLP objective function

F(x) = φ(y_M)    (59)

The problem defined by Eqs. (56), (58), and (59) is a nonlinear program. From Eq. (19) the Lagrangian is

L(x, λ) = F(x) − λᵀc(x)
f_{ij} = f[ y_i + h_i Σ_{l=1}^{k} α_{jl} f_{il},  t_i + h_i ρ_j ]    (67)
for 1 ≤ j ≤ k, and k is referred to as the stage. In these expressions {ρ_j, β_j, α_{jl}} are known constants with 0 ≤ ρ₁ ≤ ρ₂ ≤ ⋯ ≤ 1. The schemes are called explicit if α_{jl} = 0 for l ≥ j and implicit otherwise. A convenient way to define the coefficients is to use the so-called Butcher diagram

ρ₁ | α₁₁ ⋯ α₁ₖ
 ⋮ |  ⋮  ⋱  ⋮
ρₖ | αₖ₁ ⋯ αₖₖ
   |  β₁ ⋯  βₖ
= φ(y_M) − Σ_{k=1}^{M} λ_kᵀ[y_k − y_{k−1} − h f(y_{k−1}, u_{k−1})]    (60)
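A literal sketch of this Euler transcription, Eqs. (56)–(59); the packing helper and the toy dynamics are illustrative, not part of the original text:

```python
import numpy as np

# Euler transcription: pack x = (u0, y1, u1, ..., yM, uM) and build the
# NLP objective and defect constraints.  All helper names are illustrative.
def transcribe(f, y0, tf, M, phi):
    h = tf / M
    def unpack(x):
        u = x[0::2]                                  # u_0, ..., u_M
        y = np.concatenate([[y0], x[1::2]])          # y_0, ..., y_M
        return y, u
    def objective(x):                                # F(x) = phi(y_M)
        return phi(unpack(x)[0][-1])
    def defects(x):    # c_k(x) = y_k - y_{k-1} - h f(y_{k-1}, u_{k-1})
        y, u = unpack(x)
        return np.array([y[k] - y[k-1] - h*f(y[k-1], u[k-1])
                         for k in range(1, M + 1)])
    return objective, defects

# toy problem: ydot = u, y(0) = 0, phi = (y_M - 1)^2 on M = 4 intervals
objective, defects = transcribe(lambda y, u: u, 0.0, 1.0, 4,
                                lambda yM: (yM - 1.0)**2)
```

Handing `objective` and `defects` (plus bounds) to any NLP solver yields a direct method, and the solver's Lagrange multipliers approximate the adjoint values λ(t_k), exactly the correspondence demonstrated above.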
Four common examples of k-stage Runge–Kutta schemes are the following.

Euler, explicit, k = 1:

0 | 0
  | 1

Classical Runge–Kutta, explicit, k = 4:

 0  |  0    0    0    0
1/2 | 1/2   0    0    0
1/2 |  0   1/2   0    0
 1  |  0    0    1    0
    | 1/6  1/3  1/3  1/6

Trapezoidal, implicit, k = 2:

0 |  0    0
1 | 1/2  1/2
  | 1/2  1/2

Hermite–Simpson, implicit, k = 3:

 0  |  0     0     0
1/2 | 5/24  1/3  −1/24
 1  | 1/6   2/3   1/6
    | 1/6   2/3   1/6
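These tableaus translate directly into code. The sketch below performs one explicit step driven by the classical Runge–Kutta tableau, Eqs. (66) and (67), and one implicit trapezoidal step in which simple corrector iterations drive the defect to zero (scalar state assumed; all names illustrative):

```python
import numpy as np

# one explicit Runge-Kutta step driven by a Butcher tableau,
# here the classical fourth-order scheme
rho   = np.array([0.0, 0.5, 0.5, 1.0])
alpha = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1.0, 0]])
beta  = np.array([1/6, 1/3, 1/3, 1/6])

def rk_step(f, y, t, h):
    k = np.zeros(len(beta))
    for j in range(len(beta)):       # explicit: stage j uses stages l < j only
        k[j] = f(y + h*np.dot(alpha[j, :j], k[:j]), t + h*rho[j])
    return y + h*np.dot(beta, k)

# one implicit trapezoidal step: an Euler predictor followed by
# fixed-point corrector iterations that drive the defect to zero
def trap_step(f, y, t, h, iters=10):
    ynew = y + h*f(y, t)                          # predictor
    for _ in range(iters):                        # corrector iterations
        ynew = y + 0.5*h*(f(ynew, t + h) + f(y, t))
    return ynew
```

For stiff equations the fixed-point corrector above converges slowly or not at all, which is exactly why Newton-based corrector iterations are preferred in that case.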
An obvious appeal of an explicit scheme is that the computation of each integration step can be performed without iteration; that is, given the value y_i at the time t_i, the value y_{i+1} at the new time t_{i+1} follows directly from available values of the right-hand-side functions f. In contrast, for an implicit method, the unknown value y_{i+1} appears nonlinearly; e.g., the trapezoidal method requires

0 = y_{i+1} − y_i − (h_i/2)[f(y_{i+1}, t_{i+1}) + f(y_i, t_i)] ≡ ζ_i   (68)

Consequently, to compute y_{i+1}, given the values t_{i+1}, y_i, t_i, and f(y_i, t_i), requires solving the nonlinear expression (68) to drive the defect ζ_i to zero. The iterations required to solve this equation are called corrector iterations. An initial guess to begin the iteration is usually provided by the so-called predictor step. There is considerable latitude in the choice of predictor and corrector schemes. For some well-behaved differential equations, a single predictor and corrector step are adequate. In contrast, it may be necessary to perform multiple corrector iterations, e.g., using Newton's method, especially when the differential equations are stiff. To illustrate this, suppose that instead of Eq. (65) the system dynamics are described by

ẏ = f[y(t), u(t), t],    ε u̇ = g[y(t), u(t), t]   (69)

where ε is a small parameter. Within a very small region 0 ≤ t ≤ t̂, the solution displays a rapidly changing behavior, and thereafter the second equation can effectively be replaced by its limiting form

0 = g[y(t), u(t), t]   (70)

The singular perturbation problem (69) is in fact a stiff system of ODEs and in the limit approaches a DAE system. Techniques designed specifically for solving singular perturbation formulations have been suggested for guidance applications.38,39

The second class of integration schemes are termed multistep schemes and have the general form

y_{i+k} = Σ_{j=0}^{k−1} α_j y_{i+j} + h Σ_{j=0}^{k} β_j f_{i+j}   (71)

where α_j and β_j are known constants. If β_k = 0, the method is explicit; otherwise it is implicit. The Adams schemes are members of the multistep class that are based on approximating the functions f(t) by interpolating polynomials. The Adams–Bashforth method is an explicit multistep method,40 whereas the Adams–Moulton method is implicit.41 Multistep methods must address three issues that we have not discussed for single-step methods. First, as written, the method requires information at (k − 1) previous points. Clearly, this implies some method must be used to start the process, and one common technique is to take one or more steps with a one-step method, e.g., Euler. Second, as written, the multistep formula assumes the stepsize h is a fixed value. When the stepsize is allowed to vary, careful implementation is necessary to ensure that the calculation of the coefficients is both efficient and well conditioned. Finally, similar remarks apply when the number of steps k, i.e., the order, of the method is changed.

Regardless of whether a one-step or a multistep method is utilized, a successful implementation must address the accuracy of the solution. How well does the discrete solution y_i for i = 0, 1, …, M produced by the integration scheme agree with the real answer y(t)? All well-implemented schemes have some mechanism for adjusting the integration stepsize and/or order to control the integration error. The reader is urged to consult Refs. 42–46 for additional information. A great deal of discussion has been given to the distinction between explicit and implicit methods. Indeed, it is often tempting to use an explicit method simply because it is more easily implemented (and understood). However, the trajectory optimization problem is a boundary value problem (BVP), not an initial value problem, and to quote Ascher et al. on page 69 of Ref. 24, "for a boundary value problem … any scheme becomes effectively implicit. Thus, the distinction between explicit and implicit initial value schemes becomes less important in the BVP context."

Methods for solving initial value problems when dealing with a system of differential-algebraic equations have appeared more recently. For a semiexplicit DAE system such as Eqs. (48) and (49), it is tempting to try to eliminate the algebraic (control) variables to utilize a more standard method for solving ODEs. Proceeding formally to solve Eq. (49), one can write

u(t) = g⁻¹[y, t]   (72)

When this value is substituted into Eq. (48), one obtains the nonlinear differential equation

ẏ = f{y, g⁻¹[y, t], t}   (73)

which is amenable to solution using any of the ODE techniques described earlier. Another elimination technique, referred to as differential inclusion,47 attempts to form an expression of the form

u(t) = P[ẏ, y, t]   (74)
by solving a subset of the differential equations (48). Because the number of state and control variables is not necessarily equal, it is imperative to partition the differential equations in some stable manner to perform this elimination. Unfortunately, it is seldom possible to analytically construct a feedback control of the form of Eq. (72) or Eq. (74). When analytic elimination is impossible, the only recourse is to introduce a nonlinear iterative technique, e.g., Newton's method, that must be executed at every integration step. This approach not only is very time consuming but also can conflict with logic used to control integration error in the dynamic variables y. If an implicit method is used for solving the ODEs, this elimination iteration must be performed within each corrector iteration; in other words, it becomes an iteration within an iteration. In essence, methods that attempt to eliminate the control to avoid the DAE problem are cumbersome, numerically unstable, and problem specific. The first general technique for solving DAEs was proposed by Gear48 and utilizes a backward differentiation formula (BDF) in a linear multistep method. In contrast to the elimination methods in the preceding paragraph, the algebraic variables u(t) are treated the same as the differential variables y(t). The method was originally
proposed for the semiexplicit index 1 system described by Eqs. (48) and (49) and soon extended to the fully implicit form

F[ż, z, t] = 0   (75)
where z = (y, u). The basic idea of the BDF approach is to replace the derivative ż by the derivative of the polynomial that interpolates the solution computed over the preceding k steps. The simplest example is the implicit Euler method, which replaces Eq. (75) with

F[(z_i − z_{i−1})/h_i, z_i, t_i] = 0   (76)
The resulting nonlinear system in the unknowns z_i is usually solved by some form of Newton's method at each time step t_i. The widely used production code DASSL developed by Petzold49,50 essentially uses a variable-stepsize, variable-order implementation of the BDF formulas. The method is appropriate for index 1 DAEs with consistent initial conditions. Current research into the solution of DAEs with higher index (≥2) has renewed interest in one-step methods, specifically the implicit Runge–Kutta schemes described for ODEs. A discussion of methods for solving DAEs is found in Ref. 25.
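As a concrete illustration, the corrector iteration for one implicit Euler (k = 1 BDF) step of a semiexplicit index-1 DAE such as Eqs. (48) and (49) can be sketched as follows; the particular dynamics f and g are invented toy functions, not taken from the text:

```python
# One implicit Euler (k = 1 BDF) step for the semiexplicit index-1 DAE
#   y' = f(y, u, t),  0 = g(y, u, t),
# in the spirit of Eqs. (75)-(76). The algebraic variable u is treated the
# same as the differential variable y, and the corrector is Newton's method
# with a finite difference Jacobian. f and g are illustrative toy functions.
import math

def f(y, u, t):            # differential right-hand side (toy)
    return -y + u

def g(y, u, t):            # algebraic (control) constraint (toy): u = cos(t)
    return u - math.cos(t)

def implicit_euler_step(y_prev, u_guess, t, h, tol=1e-12):
    """Solve F(z) = [y - y_prev - h*f(y,u,t), g(y,u,t)] = 0 for z = (y, u)."""
    y, u = y_prev, u_guess
    for _ in range(25):
        r1 = y - y_prev - h * f(y, u, t)
        r2 = g(y, u, t)
        if abs(r1) + abs(r2) < tol:
            break
        d = 1e-7                              # finite difference 2x2 Jacobian of F
        j11 = ((y + d) - y_prev - h * f(y + d, u, t) - r1) / d
        j12 = (y - y_prev - h * f(y, u + d, t) - r1) / d
        j21 = (g(y + d, u, t) - r2) / d
        j22 = (g(y, u + d, t) - r2) / d
        det = j11 * j22 - j12 * j21
        y -= ( j22 * r1 - j12 * r2) / det     # Newton update: z <- z - J^{-1} F
        u -= (-j21 * r1 + j11 * r2) / det
    return y, u

# March from t = 0 to t = 1; the algebraic variable tracks u(t) = cos(t).
y, u, h = 1.0, 1.0, 0.01
for i in range(100):
    y, u = implicit_euler_step(y, u, (i + 1) * h, h)
```

Note that each time step requires a complete Newton iteration; this is exactly the "iteration within an iteration" expense that the elimination methods above try, unsuccessfully, to avoid.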
In practice the numerical solution of a trajectory optimization problem inevitably involves tabular data. Typically, propulsion, aerodynamic, weight, and mass properties for a vehicle are specified using tables. For example, the thrust of a motor may be specified by a finite set of table values {T(M_k, h_k), M_k, h_k} for k = 1, …, N, in lieu of defining the functional form in terms of Mach number and altitude. In some cases this approach is necessary simply because there is not enough information to permit an analytic representation of the function based on the laws of physics. Often tabular data are obtained as the result of experimental tests. Finally, there may be historical precedence for specifying data in this format as a convenient way for communication between disciplines. Regardless of the reason for specifying a nonlinear function as a collection of tabular values, the numerical implementation of a trajectory optimization must deal with this format. The necessary conditions described in Sec. III assume continuity and differentiability for the objective and constraint functions. Similar restrictions are implied when stating the necessary conditions for the optimal control problem in Sec. IV. The numerical integration techniques given in the previous subsection make similar assumptions about continuity and differentiability for the right-hand sides of the differential-algebraic equations. Successful application of those techniques requires that tabular data be represented using a smooth differentiable function. Unfortunately, by far the single most widely used approach is linear interpolation. This is also by far the single most catastrophic impediment to an efficient solution of the trajectory optimization problem! A piecewise linear representation is not differentiable at the table points and thus is fundamentally inconsistent with the theory described in the preceding sections.
Recognition of this difficulty is certainly not new, and it has been discussed by other authors51,52 in the simulation of space launch vehicles. Nevertheless, inappropriate data modeling techniques persist, presumably for historical reasons, in many real applications. There are many alternatives for representing tabular data using a smooth functional form. For some applications an appropriate model is suggested by the physics, e.g., a quadratic drag polar. In lieu of a form derived from physical considerations, functional approximation based solely on the mathematical requirements can be incorporated. Function approximation using B-splines53 can effectively produce the required continuity. Methods for constructing smooth approximations utilizing nonlinear programming techniques have also been developed.54–57 Nonlinear rational function or neural network approximations can also produce sufficient smoothness, although it is not clear that these are preferable to B-splines.
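The derivative jump at a table point is easy to exhibit numerically. In the sketch below the "thrust vs Mach" table values are invented; the one-sided finite difference slopes of a linear table lookup disagree at an interior knot, which is precisely the nondifferentiability that corrupts gradient-based optimization:

```python
# Demonstration of why piecewise linear table lookup clashes with
# gradient-based optimization: the slopes on either side of a table point
# disagree, so the derivative does not exist there. Toy table values.
mach   = [0.0, 0.5, 1.0, 1.5, 2.0]
thrust = [10.0, 11.0, 14.0, 13.0, 12.0]

def linear_lookup(x):
    """Piecewise linear interpolation of the (mach, thrust) table."""
    for k in range(len(mach) - 1):
        if mach[k] <= x <= mach[k + 1]:
            w = (x - mach[k]) / (mach[k + 1] - mach[k])
            return (1 - w) * thrust[k] + w * thrust[k + 1]
    raise ValueError("outside table")

# One-sided difference quotients straddling the table point M = 1.0.
d = 1e-6
left  = (linear_lookup(1.0) - linear_lookup(1.0 - d)) / d   # slope below M = 1
right = (linear_lookup(1.0 + d) - linear_lookup(1.0)) / d   # slope above M = 1
jump  = left - right    # nonzero: derivative is discontinuous at the knot
```

A smooth representation, e.g., a cubic B-spline fit through the same data, makes `left` and `right` agree to truncation error and restores consistency with the necessary conditions.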
Compendium of Methods

The basic elements involved in the specification of a numerical method for solving the trajectory optimization problem have been described in the preceding sections. There is a broad spectrum of possible ways to put the pieces together to form a complete algorithm; however, all techniques have one attribute in common. Because all of the algorithms involve application of Newton's method, a convenient way to organize the discussion is to describe the function evaluation procedure for each method. Specifically, we will describe the function generator for each algorithm. The inputs to the function generator are the variables. The outputs of the function generator are the objective and constraints. The basic concept is illustrated in Fig. 1.

A. Direct Shooting 1. Algorithm

The variables for a direct shooting application are chosen as a subset of the initial conditions, the final conditions, and the parameters. Thus for each phase let us define

X^(k) = {y(t_0), p, t_0, y(t_f), t_f}   (77)

The total set of NLP variables is then

x = {X^(1), X^(2), …, X^(N)}   (78)

Notice that any time-varying quantities must be represented using the finite set of parameters x, and consequently this implies that the control/time history must be defined by a finite set of parameters. For example, one might have an explicit representation such as

u = p_1 + p_2 t   (79)

or an implicit relationship such as

0 = p_1 u(t) + sin[p_2 u(t)]   (80)

When the control is defined explicitly as in Eq. (79), propagation of the trajectory from the beginning to the end of the phase can be accomplished using an ODE initial value method as described. On the other hand, if the control is defined implicitly, the phase propagation will require the use of a DAE initial value method as described in Sec. V.A. Notice also that problems with path inequality constraints (5) must be treated as a sequence of constrained and unconstrained arcs. Thus phases must be introduced to account for these individual arcs, in addition to phases that are necessary to model known problem discontinuities such as jettison of a stage. The NLP constraints and objective function are quantities that are evaluated at the boundaries of one or more of the phases. Thus we have

c(x) = [ψ^(1)[y(t_0), p, t_0]; ψ^(1)[y(t_f), p, t_f]; … ; ψ^(N)[y(t_0), p, t_0]; ψ^(N)[y(t_f), p, t_f]]
In summary, the function generator for the direct shooting method is of the following form:
Input: x
Do for (each phase) k = 1, N
  Initialize phase k: y^(k)(t_0), p^(k), t_0^(k)
  Constraint evaluation: compute ψ^(k)[y(t_0), p, t_0]
  Initial value problem: given t_f, compute y^(k)(t_f), i.e., solve Eq. (65) or Eqs. (48) and (49)
  Constraint evaluation: compute ψ^(k)[y(t_f), p, t_f]
End do
Terminate trajectory
Compute objective F(x), c(x)
Output: F(x), c(x)
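A minimal sketch of such a function generator follows, with a toy single-phase problem and the explicit control parameterization of Eq. (79); the dynamics, objective, and boundary condition are assumptions chosen purely for illustration:

```python
# A minimal direct shooting "function generator": input is the NLP variable
# vector x, output is the objective F(x) and constraint vector c(x).
# Toy single phase: dynamics y' = u(t), control u = p1 + p2*t (Eq. (79)),
# boundary condition psi: y(tf) = 1, and a control-effort objective.

def rk4(f, y, t0, tf, n=100):
    """Fixed-step RK4 initial value solver for a scalar ODE y' = f(y, t)."""
    h = (tf - t0) / n
    for i in range(n):
        t = t0 + i * h
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def function_generator(x):
    y0, p1, p2 = x                      # NLP variables: initial state + control parameters
    u = lambda t: p1 + p2 * t           # explicit control parameterization, Eq. (79)
    ydot = lambda y, t: u(t)            # toy phase dynamics
    yf = rk4(ydot, y0, 0.0, 1.0)        # initial value problem across the phase
    F = p1 ** 2 + p2 ** 2               # toy objective (control "effort")
    c = [yf - 1.0]                      # boundary constraint psi: y(tf) - 1 = 0
    return F, c
```

An NLP algorithm would then repeatedly call `function_generator`, perturbing x to build finite difference gradients, exactly as described in the discussion of Eqs. (86) and (87) below.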
The direct shooting method is one of the most widely used methods and is especially effective for launch vehicle and orbit transfer applications. The program to optimize simulated trajectories (POST)58 developed by Martin Marietta for simulating the trajectories of launch vehicles such as the Titan is a widely distributed implementation of the direct shooting method. Originally developed to support military space applications, it is similar in functionality to the generalized trajectory simulation (GTS) program59 developed at The Aerospace Corporation. Most major aerospace firms either use POST or have an equivalent capability for launch vehicle optimization and mission analysis. Early versions of POST utilized a reduced gradient optimization algorithm similar to the methods in Refs. 9 and 10, and more recent releases have incorporated an SQP method.7 GTS utilizes a modified form of the reduced gradient algorithm,8 which incorporates quasi-Newton updates for constraint elimination and Hessian approximation. Programs such as POST and GTS have extensive libraries of application-specific models. In particular, the libraries permit definition of the vehicle dynamics {the right-hand-side functions f[y(t), u(t), p, t]} in many coordinate systems, e.g., Earth-centered inertial, intrinsic, orbital, etc. It is also common to have 10–20 different models for computing the gravitational, propulsive, and aerodynamic forces. In most cases the user can also specify the type of numerical integration and interpolation to be used, as well as the trajectory input and output formats.

Direct shooting applications have been most successful in launch and orbit transfer problems primarily because this class of problem lends itself to parameterization with a relatively small number of NLP variables. For example, an orbit transfer problem with impulsive burns60,61 can be posed with four variables per burn, namely, the time of ignition and the velocity increment (t_i, ΔV_i). Typically the mission orbit can be defined using three to five nonlinear constraints enforced at the end of the trajectory. Thus a typical two-burn orbit transfer can be posed as an NLP with eight variables and four or five constraints. When the vehicle thrust-to-weight ratio is high, there is little motivation to consider a more elaborate mathematical model of the thrust variation for two reasons. First, the performance benefit that can be achieved with a thrust variation, i.e., by introducing time-varying control u(t), is negligible. Second, most real vehicles do not have the ability to implement a variable direction thrust even if it was computed. In fact, many spacecraft incorporate spin stabilization, which implies a constant inertial attitude during the burns. Stated simply, the application neither permits nor warrants a mathematical model of higher fidelity, and direct shooting is very effective. A similar situation exists when designing optimal launch vehicle trajectories. During the early portion of an ascent trajectory, it is common to define the turning by a finite set of pitch rates. This approach is used for most expendable launch vehicles, e.g., Titan, Delta, and Atlas/Centaur, and is often a part of the onboard mission data load. Consequently, the steering during the early portion of a launch vehicle trajectory is defined by a relatively small number of parameters, and the resulting optimization problem is readily formulated using the direct shooting method. Steering during the second stage of the Space Shuttle ascent trajectory is defined by a linear tangent steering law. In this case the control can be defined by six parameters p_1 and p_2:

u(t) = p_1 + p_2 t   (82)

which then determine the inertial yaw and pitch angles according to

θ_I = arctan(u_2/u_1)   (83)

φ_I = arcsin(u_3/‖u‖)   (84)

This form of the control law is an exact solution of the optimal control problem when gravity is constant23,62 and is implemented in the Space Shuttle's flight avionics. Again the optimal steering is approximated by a finite set of parameters, and the resulting trajectory optimization problem is amenable to direct shooting.
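Evaluating the steering angles from the six linear tangent parameters of Eqs. (82–84) can be sketched as follows; the numerical values of p_1 and p_2 are arbitrary examples, not flight data:

```python
# Linear tangent steering, Eqs. (82)-(84): six parameters (two 3-vectors
# p1, p2) define the steering vector u(t) = p1 + p2*t, from which the
# inertial yaw and pitch angles follow. The parameter values are arbitrary.
import math

def steering_angles(p1, p2, t):
    u = [p1[k] + p2[k] * t for k in range(3)]      # u(t) = p1 + p2*t, Eq. (82)
    yaw = math.atan2(u[1], u[0])                   # theta_I = arctan(u2/u1), Eq. (83)
    norm = math.sqrt(u[0]**2 + u[1]**2 + u[2]**2)
    pitch = math.asin(u[2] / norm)                 # phi_I = arcsin(u3/|u|), Eq. (84)
    return yaw, pitch

# Example: at t = 0 the steering vector [1, 0, 1] gives zero yaw, 45 deg pitch.
yaw0, pitch0 = steering_angles([1.0, 0.0, 1.0], [0.0, 0.0, -1.0], 0.0)
```

In a direct shooting setup, the six components of p_1 and p_2 would simply join the NLP variable vector x.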
Most successful direct shooting applications have one salient feature in common, namely, the ability to describe the problem in terms of a relatively small number of optimization variables. If the dynamic behavior of the control functions u(t) cannot be represented using a limited number of NLP variables, the success of a direct shooting method can be degraded significantly. For example, it is tempting to approximate the controls using an expansion such as

u(t) = Σ_k p_k B_k(t)   (85)

where M ≫ 1, and B_k(t) are a set of basis functions, e.g., B-splines. This approach impacts the direct shooting method in two ways. Both are related to the calculation of gradient information for the NLP iteration. The first issue is related to the sensitivity of the variables. Changes early in the trajectory (near t_0) propagate to the end of the trajectory. The net effect is that the constraints can behave very nonlinearly with respect to the variables, thereby making the optimization problem difficult to solve. This is one of the major reasons for the multiple shooting techniques, which will be described later. The second issue is the computational cost of evaluating the gradient information. The most common approach to computing gradients is via finite difference approximations. A forward difference approximation to column j of the Jacobian matrix G in Eq. (20) is

G_{:j} = (1/δ_j)[c(x + Δ_j) − c(x)]   (86)
where the vector Δ_j = δ_j e_j, and e_j is a unit vector in direction j. A central difference approximation is

G_{:j} = (1/2δ_j)[c(x + Δ_j) − c(x − Δ_j)]   (87)
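Equations (86) and (87) can be sketched directly; here the constraint function c(x) is an inexpensive algebraic stand-in for what would, in direct shooting, be a full trajectory integration per evaluation:

```python
# Forward and central difference approximations to column j of the Jacobian
# matrix G, Eqs. (86)-(87). The toy constraint vector c(x) stands in for the
# expensive trajectory "function generator" of a real application.

def c(x):
    """Toy constraint vector; analytic Jacobian column 0 is [2*x0, x1]."""
    return [x[0] ** 2 + x[1], x[0] * x[1]]

def jac_column(c, x, j, delta, central=False):
    """Finite difference estimate of column j of G = dc/dx."""
    xp = list(x)
    xp[j] += delta
    if not central:                      # forward difference, Eq. (86): O(delta) error
        c0, cp = c(x), c(xp)
        return [(cp[i] - c0[i]) / delta for i in range(len(c0))]
    xm = list(x)                         # central difference, Eq. (87): O(delta^2) error
    xm[j] -= delta
    cp, cm = c(xp), c(xm)
    return [(cp[i] - cm[i]) / (2 * delta) for i in range(len(cp))]

x0 = [2.0, 3.0]
col_fwd = jac_column(c, x0, 0, 1e-6)                  # approx [4, 3]
col_ctr = jac_column(c, x0, 0, 1e-4, central=True)    # approx [4, 3], more accurate
```

Note the cost asymmetry the text describes: the forward column reuses c(x) across all n columns (n + 1 trajectories total), whereas central differences need 2n trajectories.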
To calculate gradient information this way, it is necessary to integrate the trajectory for each perturbation. Consequently, at least n trajectories are required to evaluate a finite difference gradient, and this information may be required for each NLP iteration. The cost of computing the finite difference gradients is reduced somewhat in the GTS59 program by using a partial trajectory mechanism. This approach recognizes that it is not necessary to integrate the trajectory from t_0 to t_f if the optimization variable is introduced later, say at t_s > t_0. Instead the gradient information can be computed by integrating from the return point t_r to t_f, where t_s ≥ t_r > t_0, because the portion of the trajectory from t_0 to t_r will not be altered by the perturbation. A less common alternative to finite difference gradients is to integrate the so-called variational equations. In this technique, an additional differential equation is introduced for each NLP variable, and this augmented system of differential equations must be solved along with the state equations. Unfortunately, the variational equations must be derived for each application and consequently are used far less in general purpose trajectory software. Another issue that must be addressed is the accuracy of the gradient information. Forward difference estimates are of order δ, whereas central difference estimates are O(δ²). Of course the more
accurate central difference estimates are twice as expensive as forward difference gradients. Typically, numerical implementations use forward difference estimates until nearly converged and then switch to the more accurate derivatives for convergence. Although techniques for selecting the finite difference perturbation size might seem to be critical to accurate gradient evaluation, a number of effective methods are available to deal with this matter.4 A more crucial matter is the interaction between the gradient computations and the underlying numerical interpolation and integration algorithms. We have already discussed how linear interpolation of tabular data can introduce gradient errors. However, it should be emphasized that sophisticated predictor-corrector variable-step/variable-order numerical integration algorithms also introduce noise into the gradients. Although those techniques enhance the efficiency of the integration, they degrade the efficiency of the optimization. In fact, a simple fixed-step/fixed-order integrator may yield better overall efficiency in the trajectory optimization because the gradient information is more accurate. Two integration methods that are suitable for use inside the trajectory function generator are described in Ref. 63 and Refs. 64 and 65. Another issue arises in the context of a trajectory optimization application when the final time t_f is defined implicitly, not explicitly, by a boundary or event condition. In this case we are not asking to integrate from t_0 to t_f but rather from t_0 until ψ[y(t_f), t_f] = 0. Most numerical integration schemes interpolate the solution to locate the final point. On the other hand, if the final point is found by iteration, e.g., using a root-finding method, the net effect is to introduce noise into the external Jacobian evaluations. A better alternative is to simply add an extra variable and constraint to the overall NLP problem and avoid the use of an internal iteration.
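A sketch of this device on a toy problem: instead of root-finding for the event inside the integrator, t_f joins the NLP variables and the event ψ[y(t_f), t_f] = 0 joins the constraints. Here a scalar Newton iteration stands in for the full NLP, and the dynamics and event are invented for illustration:

```python
# Treating an implicitly defined final time as an extra NLP variable with an
# event constraint, rather than as an internal root-finding iteration.
# Toy problem: y' = -y, y(0) = 1, event y(tf) = 0.5  =>  tf = ln 2.
import math

def propagate(tf, n=200):
    """Fixed-step RK4 from t = 0 to t = tf (keeps gradients smooth)."""
    y, h = 1.0, tf / n
    f = lambda y: -y
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def event_constraint(tf):
    """psi = y(tf) - 0.5, driven to zero by the outer (NLP) iteration."""
    return propagate(tf) - 0.5

# A few Newton steps on the augmented variable tf stand in for the NLP.
tf, d = 0.6, 1e-7
for _ in range(10):
    r = event_constraint(tf)
    tf -= r / ((event_constraint(tf + d) - r) / d)
```

Because the event is resolved by the same outer Newton iteration that handles every other constraint, no noisy internal iteration contaminates the Jacobian.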
In fact, inaccuracies in the gradient can be introduced by 1) internal iterations, e.g., solving Kepler's equation, event detection; 2) interpolation of tabular data; and 3) discontinuous functions, e.g., "ABS," "MAX," "IF" tests; and a carefully implemented algorithm must avoid these difficulties.66
B. Indirect Shooting 1. Algorithm
Let us begin with a description of indirect shooting for the simplest type of optimal control problem with no path constraints and a single phase. The variables are chosen as a subset of the boundary values for the optimal control necessary conditions. For this case the NLP variables are

x = {λ(t_0), t_f}   (88)

and the NLP constraints are

c(x) = [ψ[y(t_f), p, t_f]; λ(t_f) − Φ_y^T; (Φ_t + H)]   (89)

A major difference between direct and indirect shooting occurs in the definition of the control functions u(t). For indirect shooting the control is defined at each point in time by the maximum principle (41) or (37). Thus in some sense the values λ(t_0) become the parameters that define the optimal control function instead of p. When the maximum principle is simple enough to permit an explicit definition of the control, propagation of the trajectory from the beginning to the end of the phase can be accomplished using an ODE initial value method. On the other hand, if the control is defined implicitly, the phase propagation will require the use of a DAE initial value method as described in Sec. V.A. Notice also that problems with path inequality constraints (5) must be treated as a sequence of constrained and unconstrained arcs. Thus phases must be introduced to account for these individual arcs just as with direct shooting. When additional phases are introduced, in general it will be necessary to augment the set of variables to include the unknown adjoint and multipliers at each of the phase boundaries. Furthermore, additional constraints are included to reflect the additional necessary conditions. In summary, the function generator for the indirect shooting method is of the following form:

Indirect Shooting
Input: x
Do for (each phase) k = 1, N
  Initialize phase k: y^(k)(t_0), λ^(k)(t_0), p^(k), t_0^(k)
  Initial value problem: given t_f, compute y^(k)(t_f), λ^(k)(t_f), i.e., solve Eqs. (31), (36), and (37)
  Constraint evaluation: evaluate Eq. (89)
End do
Terminate trajectory
Output: c(x)

Although the indirect shooting method would seem to be quite straightforward, it suffers from a number of difficulties that will be described in the next section. Primarily because of the computational limitations, successful applications of indirect shooting have focused on special cases. Because the method is very sensitive to the initial guess, it is most successful when the underlying dynamics are rather benign. The method has been utilized for launch vehicle trajectory design in the program DUKSUP67 and for low-thrust orbit analysis.68

The sensitivity of the indirect shooting method has been recognized for some time. Computational experience with the technique in the late 1960s is summarized by Bryson and Ho on page 214 of Ref. 23:

The main difficulty with these methods is getting started; i.e., finding a first estimate of the unspecified conditions at one end that produces a solution reasonably close to the specified conditions at the other end. The reason for this peculiar difficulty is the extremal solutions are often very sensitive to small changes in the unspecified boundary conditions. … Since the system equations and the Euler–Lagrange equations are coupled together, it is not unusual for the numerical integration, with poorly guessed initial conditions, to produce "wild" trajectories in the state space. These trajectories may be so wild that values of x(t) and/or λ(t) exceed the numerical range of the computer!

A number of techniques have been proposed for dealing with this sensitivity. One rather obvious approach is to begin the iteration process with a good initial guess. Referred to as imbedding, continuation, or homotopy methods, the basic idea is to solve a sequence of problems and use the solution of one problem as the initial guess for a slightly modified problem. Thus, suppose it is necessary to solve a(x) = 0 and that we can imbed this problem into a family of related problems

ā(x, γ) = 0   (90)
where the parameter 0 ≤ γ ≤ 1. Assume the problem ā(x, 0) = 0 is easy to solve, and when γ = 1, the real solution is obtained, i.e., ā(x, 1) = a(x) = 0. Typically, it is desirable to choose the parameter γ such that the solution x(γ) varies smoothly along the homotopy path. For example, it may be easy to solve an orbit transfer using two-body dynamics. A more accurate solution involving oblate Earth perturbations could then be obtained by turning on the gravitational perturbations with an imbedding technique. Clearly, the homotopy method can be used with any trajectory optimization algorithm, but it has been especially useful for indirect methods. Another technique that has been used to reduce the solution sensitivity is referred to as the sweep method. Essentially, the idea is to integrate the state equations forward, i.e., from t_0 to t_f, and then integrate the adjoint equations backward, i.e., from t_f to t_0. The goal is to exploit the fact that the state equations may be integrated stably in the forward direction and the adjoint equations may be stable in the reverse direction. This approach requires that the state, control, and adjoint time histories be saved using some type of interpolation method. Numerical processing is further complicated by interaction between the interpolation scheme and the integration error control, especially when dealing with discontinuities that occur at phase boundaries.
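A minimal continuation sketch in the spirit of Eq. (90); the particular family ā(x, γ), blending an easy linear problem into a harder nonlinear one, is invented for illustration:

```python
# Imbedding/continuation in the spirit of Eq. (90): solve abar(x, gamma) = 0
# as gamma steps from 0 to 1, warm-starting each solve from the previous
# solution. The homotopy family below is an invented toy.
import math

def abar(x, gamma):
    # gamma = 0: easy linear problem x - 2 = 0;
    # gamma = 1: the "real" nonlinear problem exp(x) - 10 = 0.
    return (1 - gamma) * (x - 2.0) + gamma * (math.exp(x) - 10.0)

def newton(g, x, tol=1e-12):
    """Scalar Newton iteration with a finite difference derivative."""
    for _ in range(50):
        r = g(x)
        if abs(r) < tol:
            return x
        d = 1e-7
        x -= r / ((g(x + d) - r) / d)
    return x

x = 2.0                                      # exact solution of the easy problem
for gamma in [0.1 * k for k in range(1, 11)]:
    x = newton(lambda z: abar(z, gamma), x)  # warm start along the homotopy path
```

Each intermediate solution is a good initial guess for the next, which is precisely why the device tames the extreme guess sensitivity of indirect shooting.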
Perhaps the biggest issue that must be addressed when using an indirect method is the derivation of the necessary conditions themselves. For realistic trajectory simulations the differential equations (2), path constraints (5), and boundary conditions (4) may all be complicated mathematical expressions. To impose the optimality conditions (36–39) it is necessary to analytically differentiate the expressions for f, g, and ψ. For a production software tool such as POST or GTS, which permits problem formulation using alternate coordinate systems and propulsion, gravitational, and aerodynamic models, this can be a daunting task! As a consequence, the optimality conditions are usually not derived for all possible combinations and models. The impact is that current implementations of indirect methods suffer from a lack of flexibility. Let us emphasize that this is a limitation in current but not necessarily future implementations. In particular, a number of authors have explored the utility of automatic differentiation tools to eliminate this impediment. For example, the ADIFOR software,69 the OCCAL software,70 and the approach taken by Mehlhorn and Sachs71 represent promising attempts to automate this process.
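The flavor of forward-mode automatic differentiation can be conveyed with a tiny dual-number class; this toy is not the interface of ADIFOR or OCCAL, merely the underlying idea of propagating exact derivatives through arbitrary right-hand-side expressions without hand derivation:

```python
# A tiny forward-mode automatic differentiation sketch: each Dual carries a
# value and a derivative, and arithmetic propagates both exactly. This is a
# from-scratch toy, not the interface of any named AD tool.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):                       # product rule
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def sin(self):                              # chain rule through sin
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def rhs(y):
    """A right-hand-side-like expression: f(y) = y*sin(y) + 3y."""
    return y * y.sin() + 3 * y

# Seed dy/dy = 1; the derivative of rhs at y = 2 emerges automatically.
df = rhs(Dual(2.0, 1.0)).dot      # = sin(2) + 2*cos(2) + 3
```

Applied to the functions f, g, and ψ of a trajectory model, this mechanism delivers the partial derivatives the optimality conditions require without any analytic differentiation by the user.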
C. Multiple Shooting 1. Algorithm
Both the direct and indirect shooting methods suffer from a common difficulty. The essential shortcoming of these methods is that small changes introduced early in the trajectory can propagate into very nonlinear changes at the end of the trajectory. Although this effect can be catastrophic for an indirect method, it also represents a substantial limitation on the utility of a direct formulation. The basic notion of multiple shooting (in contrast to simple shooting) was originally introduced72,73 for solving two-point boundary value problems, and we begin the discussion for this case. In its simplest form the problem can be stated as follows: Compute the unknown initial values v(t_0) = v_0 such that the boundary condition

0 = ψ[v(t_f), t_f]   (91)

holds for some value of t_0 < t_f, where the dynamics satisfy

v̇ = f[v(t), t]   (92)
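To make the discussion that follows concrete, here is a toy multiple shooting solution of such a boundary value problem; the scalar dynamics v̇ = v on [0, 1] with boundary condition ψ = v(1) − e, the segment count, and all numerical choices are invented for illustration:

```python
# Toy multiple shooting for the BVP v' = v on [0, 1], psi = v(1) - e = 0.
# The segment initial values v_0..v_{M-1} are the unknowns; Newton drives
# the continuity defects and the boundary condition to zero.
import math

M, h = 4, 0.25                            # segments and segment length

def integrate_segment(v, n=50):
    """RK4 across one segment for f(v) = v (so k1 = v, etc.)."""
    dt = h / n
    for _ in range(n):
        k1 = v
        k2 = v + 0.5 * dt * k1
        k3 = v + 0.5 * dt * k2
        k4 = v + dt * k3
        v += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

def defects(x):
    """Continuity defects between segments plus the boundary condition."""
    vbar = [integrate_segment(x[j]) for j in range(M)]
    return [vbar[j] - x[j + 1] for j in range(M - 1)] + [vbar[-1] - math.e]

def gauss_solve(A, b):
    """Dense solve with partial pivoting (a real code exploits the banding)."""
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            A[r] = [A[r][k] - m * A[i][k] for k in range(n)]
            b[r] -= m * b[i]
    for i in reversed(range(n)):
        b[i] = (b[i] - sum(A[i][k] * b[k] for k in range(i + 1, n))) / A[i][i]
    return b

x = [1.0] * M                             # crude initial guess for v_0..v_{M-1}
for _ in range(8):                        # Newton iteration on the defect system
    r = defects(x)
    d = 1e-7
    J = [[0.0] * M for _ in range(M)]
    for j in range(M):                    # finite difference Jacobian, column j
        xp = list(x)
        xp[j] += d
        rp = defects(xp)
        for i in range(M):
            J[i][j] = (rp[i] - r[i]) / d
    step = gauss_solve([row[:] for row in J], list(r))
    x = [x[j] - step[j] for j in range(M)]
```

Note that perturbing x[j] only affects two defect rows, so the Jacobian is block banded; this is exactly the sparsity structure discussed below.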
The fundamental idea of multiple shooting is to break the trajectory into shorter pieces or segments. Thus we break the time domain into smaller intervals of the form

t_0 < t_1 < ⋯ < t_M = t_f   (93)

Let us denote v_j for j = 0, …, (M − 1), as the initial value for the dynamic variable at the beginning of each segment. For segment j we can integrate the differential equations (92) from t_j to the end of the segment at t_{j+1}. Denote the result of this integration by v̄_{j+1}. Collecting the results for all segments, let us define a set of NLP variables

x = {v_0, v_1, …, v_{M−1}}   (94)

Now we also must ensure that the segments join at the boundaries; consequently, we impose the constraints

c(x) = [v̄_1 − v_1; v̄_2 − v_2; … ; v̄_{M−1} − v_{M−1}; ψ[v̄_M, t_f]] = 0   (95)

One obvious result of the multiple shooting approach is an increase in the size of the problem that the Newton iteration must solve, since additional variables and constraints are introduced for each shooting segment. In particular, the number of NLP variables and constraints for a multiple shooting application is n = n_v M, where n_v is the number of dynamic variables v, and M is the number of segments. Fortunately, the Jacobian matrix A that appears in the calculation of the Newton search direction (12) is sparse. In particular, only M n_v² elements in A are nonzero. This sparsity is a direct consequence of the multiple shooting formulation because variables early in the trajectory do not change constraints later in the trajectory. In fact, Jacobian sparsity is the mathematical consequence of uncoupling between the multiple shooting segments. For the simple case described, the Jacobian matrix is banded with n_v × n_v blocks along the diagonal, and very efficient methods for solving the linear system (12) can be utilized.

Note that the multiple shooting segments are introduced strictly for numerical reasons. The original optimal control problem may also have phases as described previously. Thus, in general, each phase will be subdivided into multiple shooting segments as illustrated in Fig. 1. Furthermore, within each phase the set of differential-algebraic equations and corresponding boundary conditions may be different, depending on whether the arc is constrained or unconstrained, etc. The multiple shooting concept can be incorporated into either a direct or indirect method. The distinction between the two occurs in the definition of the dynamic variables v, the dynamic system (92), and the boundary conditions (91). For a direct multiple shooting method, we can identify the dynamic variables v with the state and control (y, u). By analogy the dynamics are given by the original state equations (2) and path constraints (5). In lieu of the simple boundary conditions (91), we directly impose Eqs. (3) and (4). For an indirect multiple shooting algorithm the dynamic variables v must include the state, control, and adjoint variables (y, u, λ). The dynamics are given by the original state equations (2) and the appropriate necessary conditions (36) and (37). In this instance the boundary conditions (91) are replaced with the transversality conditions (38) and (39) along with (3) and (4). It also may be necessary to augment the set of NLP iteration variables x and constraints c(x) to account for the additional conditions that occur when entering and leaving path inequalities.

As a final distinction, for an indirect method the number of NLP variables x and constraints c(x) are equal, i.e., m = n, because the optimality conditions uniquely define the values of the NLP variables. For a direct method, n and m may differ, and the objective function F(x) must be used to define the optimal values of the NLP variables. The function generator for the multiple shooting method is of the following form:

Multiple Shooting
Input: x
Do for (each phase) k = 1, N
  Initialize phase k:
  Do for (each segment) j = 0, M − 1
    Initialize segment j + 1: v_j, t_j
    Initial value problem: given t_{j+1}, compute v̄_{j+1}, i.e., solve DAE system
    Constraint evaluation: save v̄_{j+1} − v_{j+1} in Eq. (95)
  End do
  Save ψ[v̄_M, t_f] in Eq. (95)
End do
Terminate trajectory
Output: c(x) [and F(x)]
2. Examples

Perhaps the single most important benefit derived from a multiple shooting formulation (either direct or indirect) is enhanced robustness. The BNDSCO implementation74 of indirect multiple shooting is widely used in Germany to solve very difficult applications. An optimal interplanetary orbit transfer involving planetary perturbations has been computed by Callies.75 Berkmann and Pesch29 have utilized the approach for the study of landing in the presence of windshear, Kreim et al.76 for Shuttle re-entry, and Pesch30 for a number of other aerospace applications. System identification problems have been addressed by Bock and Plitt.77 An interesting benefit of the multiple shooting algorithm is the ability to exploit a parallel processor. The method is sometimes called parallel shooting, because the simulation of each segment and/or phase can be implemented on an individual processor. This technique was explored for a direct multiple shooting method78 and remains an intriguing prospect for multiple shooting methods in general.

The multiple shooting technique greatly enhances the robustness of either direct or indirect methods. However, the number of NLP iteration variables and constraints increases markedly over simple shooting implementations. Consequently, it is imperative to exploit matrix sparsity to efficiently solve the NLP Newton equations. For indirect shooting, the matrix A has a simple block banded structure, and efficient linear algebra methods are rather straightforward. For direct shooting, sparsity appears both in the Jacobian G and the Hessian H, and the relevant sparse linear system is the KT system (22). This system can be solved efficiently using the multifrontal method for symmetric indefinite matrices.13 In general, all of the other issues associated with simple direct and indirect shooting still apply. Perhaps the most perplexing difficulty with shooting methods is the need to define constrained and unconstrained subarcs a priori when solving problems with path inequalities.
D. Indirect Transcription

1. Algorithm
Historically, transcription or collocation methods were developed for solving two-point boundary value problems such as those encountered when solving the necessary conditions with an indirect formulation. Let us again consider the simple boundary value problem (91) and (92) and, as with multiple shooting, subdivide the interval as in Eq. (93). A fundamental part of the multiple shooting method was to solve the ODEs using an initial value method. Let us consider taking a single step with an explicit method such as Euler's. Following the multiple shooting methodology, we then must impose constraints of the form

    v̄_{j+1} = v_j + h_j f_j    (96)

    0 = ζ_j = v_{j+1} − v̄_{j+1} = v_{j+1} − (v_j + h_j f_j)    (97)

where t_{j+1} = h_j + t_j for all of the segments j = 0, …, (M − 1). Of course, there is no reason to restrict the approximation to an Euler method. In fact, if we incorporate an implicit scheme such as the trapezoidal method (68), satisfaction of the defect constraint (97) is exactly the same as the corrector iteration step described for all implicit integrators. The only difference is that all of the corrector iterations are done at once in the boundary value context, whereas the corrector iterations are done one at a time (step by step) when the process is part of an initial value integration method. One of the most popular and effective choices for the defect constraint in collocation methods is the Hermite–Simpson method79:

    0 = ζ_j = v_{j+1} − v_j − (h_{j+1}/6)[f_{j+1} + 4 f̄_{j+1} + f_j]    (98)

which is the Simpson defect with Hermite interpolant

    v̄_{j+1} = (1/2)[v_j + v_{j+1}] + (h_{j+1}/8)[f_j − f_{j+1}]    (99)

for the variables at the interval midpoint. Schemes of this type are referred to as collocation methods80 because the solution is a piecewise continuous polynomial that collocates, i.e., satisfies, the ODEs at the so-called collocation points in the subinterval t_j ≤ t ≤ t_{j+1}. For obvious reasons, the points t_j are also called grid points, mesh points, or nodes. The function generator for the indirect transcription method is of the following form:

Indirect Transcription
    Input: x
    Do for (each phase) k = 1, N
        Initialize phase k
        Do for (each grid point) j = 0, M − 1
            Constraint evaluation: save discretization defect, e.g., Eq. (98) or (68)
        End do
        Save ψ[v_M, t_f] in Eq. (95)
    End do
    Terminate trajectory
    Output: c(x)

2. Examples
Collocation methods have been used for solving boundary value problems occurring in many fields for nearly 40 years, and the reader is urged to consult Refs. 24, 80, and 81. Dickmanns81 implemented the Hermite–Simpson method in the CHAP3 software and reported successful solution of a number of challenging applications, including Shuttle re-entry problems with convective heating and maximum cross-range capability. The COLSYS package developed by Ascher et al.82 has also been widely used for boundary value problems.

3. Issues
Collocation methods can be extremely effective for solving multipoint boundary value problems such as those encountered when optimizing a trajectory. As with all indirect methods, however, the techniques cannot be applied without computing the adjoint equations. Furthermore, when path inequality constraints are present, it is imperative to predetermine the sequence of constrained and unconstrained subarcs to formulate the correct BVP.

E. Direct Transcription

1. Algorithm
There are two major reasons that direct transcription methods83 are actively being investigated. First, like all direct methods, they can be applied without explicitly deriving the necessary conditions, i.e., adjoint, transversality, maximum principle. This feature makes the method appealing for complicated applications and promises versatility and robustness. Second, in contrast to all other techniques described, direct transcription methods do not require an a priori specification of the arc sequence for problems with path inequalities. Just as before, we break the time domain into smaller intervals of the form

    t_0 < t_1 < ··· < t_M = t_f    (100)

The NLP variables then become the values of the state and control at the grid points, namely,

    x = {y_0, u_0, y_1, u_1, …, y_M, u_M}    (101)

The set of NLP variables may be augmented to include the parameters p, the times t_0 and t_f, and for some discretizations the values of the state and control at collocation points between the grid points. The key notion of the collocation methods is to replace the original set of ODEs (2) with a set of defect constraints ζ_j = 0, which are imposed on each interval in the discretization. Thus, when combined with the original path constraints, the complete set of NLP constraints is

    c(x) = [ψ_0, g[y_0, u_0, p, t_0], ζ_0, g[y_1, u_1, p, t_1], ζ_1, …, ζ_{M−1}, g[y_M, u_M, p, t_M], ψ_f]

The resulting formulation is a transcription of the original trajectory optimization problem defined by Eqs. (2–7) into an NLP problem as given by Eqs. (27–29). Collecting results yields a function generator for the direct transcription method of the following form:

Direct Transcription
    Input: x
    Do for (each phase) k = 1, N
        Initialize phase k: save ψ_0
        Do for (each grid point) j = 0, M − 1
            Constraint evaluation: save g[y_j, u_j, p, t_j] and the defect ζ_j
        End do
        Terminate phase: save g[y_M, u_M, p, t_M] and ψ_f
    End do
    Terminate trajectory
    Output: c(x) and F(x)
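The defect evaluation performed at each grid interval, Eqs. (98) and (99), takes only a few lines. This Python sketch is illustrative; the function name and calling convention are assumptions, not the interface of any cited code:

```python
def hermite_simpson_defect(f, t0, t1, v0, v1):
    """Simpson defect, Eq. (98), using the Hermite interpolant, Eq. (99),
    to supply the state at the interval midpoint."""
    h = t1 - t0
    f0 = f(t0, v0)
    f1 = f(t1, v1)
    vbar = 0.5 * (v0 + v1) + (h / 8.0) * (f0 - f1)       # Eq. (99): midpoint state
    fbar = f(t0 + 0.5 * h, vbar)                          # dynamics at the midpoint
    return v1 - v0 - (h / 6.0) * (f0 + 4.0 * fbar + f1)   # Eq. (98): defect
```

For v(t) = t³ with f(t, v) = 3t² the defect vanishes to machine precision, since Simpson quadrature is exact for the quadratic integrand and the cubic Hermite interpolant reproduces the midpoint state exactly; for general dynamics the defect shrinks as O(h⁵) per interval.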
The nonlinear programming problem that results from this formulation is large. It is clear that the number of NLP variables n ≈ (n_y + n_u)MN, with a similar number of NLP constraints. Thus a typical trajectory with 7 states, 2 controls, 100 grid points per phase, and 5 phases produces an NLP with n = 4500. Fortunately, the pertinent matrices for this NLP problem, namely, the Jacobian G and the Hessian H, are also sparse. So for an application of this type it would not be unusual for these matrices to have fewer than 1% of the elements be nonzero. Consequently, exploiting sparsity to reduce both storage and computation time is a critical aspect of a successful implementation when using a direct transcription method.
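For a toy problem the entire direct transcription can be written out explicitly and solved through its KT system. The sketch below is only an illustration: the problem (minimize the integral of u² subject to ẏ = u, y(0) = 0, y(1) = 1), the trapezoidal discretization, and the dense linear algebra are all choices made for the example; a production code would exploit the sparsity of the KKT matrix rather than forming it densely.

```python
import numpy as np

def transcribe_and_solve(M=10):
    """Direct transcription of: min integral of u^2, s.t. y' = u, y(0)=0, y(1)=1.
    Variables x = [y_0..y_M, u_0..u_M]; trapezoidal defects; the resulting
    equality-constrained QP is solved via its KKT system."""
    h = 1.0 / M
    n = 2 * (M + 1)                       # y_0..y_M followed by u_0..u_M
    # Hessian of the trapezoidal objective sum_j (h/2)(u_j^2 + u_{j+1}^2)
    w = np.full(M + 1, 2.0 * h)
    w[0] = w[-1] = h                      # endpoint quadrature weights
    H = np.zeros((n, n))
    H[M+1:, M+1:] = np.diag(w)
    # Linear constraints A x = b: M trapezoidal defects + 2 boundary conditions
    A = np.zeros((M + 2, n))
    b = np.zeros(M + 2)
    for j in range(M):                    # defect: y_{j+1} - y_j - (h/2)(u_j + u_{j+1}) = 0
        A[j, j + 1] = 1.0
        A[j, j] = -1.0
        A[j, M + 1 + j] = -h / 2.0
        A[j, M + 2 + j] = -h / 2.0
    A[M, 0] = 1.0;     b[M] = 0.0         # y(0) = 0
    A[M + 1, M] = 1.0; b[M + 1] = 1.0     # y(1) = 1
    # KKT system [[H, A^T], [A, 0]] [x; lam] = [0; b]
    K = np.block([[H, A.T], [A, np.zeros((M + 2, M + 2))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(K, rhs)
    y, u = sol[:M + 1], sol[M + 1:n]
    return y, u
```

The computed grid values reproduce the analytic optimum u(t) ≡ 1, y(t) = t, which can be verified directly from the discrete stationarity conditions.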
The optimal trajectories by implicit simulation (OTIS) program originally proposed by Hargraves and Paris84 implements the basic collocation method, in addition to a more standard direct shooting approach. The original implementation has been widely distributed to NASA, the U.S. Air Force, and academic and commercial institutions throughout the United States. Early versions of the tool utilized the Hermite–Simpson defect79 and the NPSOL nonlinear programming software.7 More recent versions of the OTIS software have incorporated the sparse optimal control software (SOCS).85-89 The OTIS/SOCS library incorporates a number of features not available in earlier versions. In particular, a sparse nonlinear programming algorithm11,12,14,15 permits the solution of problems with n ≈ m ≈ 100,000 on an engineering workstation. The sparse NLP implements a sparse SQP method based on a Schur-complement algorithm suggested by Gill et al.6,16 Jacobian and Hessian matrices are computed efficiently using sparse finite differencing as proposed by Coleman and Moré18 and Curtis et al.19 Automatic refinement of the mesh to achieve specified accuracy in the discretization is available.90 Like other widely used production tools for trajectory design, OTIS provides an extensive library of models to permit ascent, re-entry, and orbital simulations. The SOCS software has also been used independent of the OTIS software for low-thrust orbit transfers,91 low-thrust interplanetary transfers,92 commercial aircraft mission analysis,93 and applications in chemical process control and robotics. The direct transcription method has also been implemented by Enright and Conway.34,94 The advanced launch trajectory optimization software (ALTOS) program,95,96 developed in Germany for the European Space Agency, incorporates many of the same features.
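The column-grouping idea behind the sparse differencing of Curtis et al.19 can be illustrated in a few lines: columns of the Jacobian whose nonzero rows do not overlap can share a single perturbation. The following Python sketch uses a simple greedy grouping; the test function and sparsity pattern are invented for illustration and are not from any of the cited codes.

```python
import numpy as np

def group_columns(pattern):
    """Greedily group structurally orthogonal columns (no shared nonzero row)."""
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            # column j may join group g only if it shares no row with g
            if not (pattern[:, g].any(axis=1) & pattern[:, j]).any():
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def sparse_jacobian(F, x, pattern, eps=1e-7):
    """Forward-difference Jacobian using one perturbation per column group."""
    m, n = pattern.shape
    J = np.zeros((m, n))
    F0 = F(x)
    for g in group_columns(pattern):
        d = np.zeros(n)
        d[g] = eps                       # perturb every column in the group at once
        dF = (F(x + d) - F0) / eps
        for j in g:                      # each nonzero row is owned by one column of g
            J[pattern[:, j], j] = dF[pattern[:, j]]
    return J
```

For a banded right-hand side with four variables, two perturbations suffice instead of four; this is the γ ≪ n_y + n_u behavior discussed below.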
A sparse reduced gradient method has been investigated for collocation problems by Brenan and Hallman,97 and Steinbach has investigated an interior point SQP approach.98
When the direct transcription method is implemented using a sparse nonlinear programming algorithm, the overall approach does resolve many of the difficulties encountered in trajectory optimization. Some limitations can be attributed to the underlying sparse NLP algorithms. For example, one of the principal attractions of a direct method is that adjoint equations do not need to be computed. On the other hand, the underlying NLP must in fact use derivative information that is in some sense equivalent. The sparse NLP used in SOCS computes Jacobian and Hessian information by sparse finite differencing. This technique will be efficient as long as the number of perturbations is reasonably small. The number of perturbations is determined by the DAE right-hand-side matrices f_y, f_u, g_y, and g_u. If these matrices are dense, then the number of perturbations γ needed by the sparse differencing technique is approximately equal to the number of state and control variables, i.e., γ ≈ n_y + n_u. Hessian information can be computed using γ²/2 perturbations. Thus as long as γ is small, this technique is reasonable. Conversely, the sparse differencing approach is too expensive when the size of the DAE system becomes large. On the other hand, when the right-hand-side matrices f_y, f_u, g_y, and g_u are sparse, then it may be that γ ≪ n_y + n_u. Stated simply, sparse finite differencing must either exploit right-hand-side sparsity or is limited by the number of state and control variables. Reduced gradient algorithms97 and reduced SQP methods20 are limited in a different way. Algorithms of this type construct the reduced Hessian, which is a dense n_d × n_d matrix, where the number of degrees of freedom is approximately equal to the number of controls times the number of grid points, i.e., n_d ≈ n_u M. Because the reduced Hessian is dense and linear algebra operations are O(n_d³), this implies an upper limit with n_u M ≈ 500. Consequently, NLP methods that form the reduced Hessian are limited by the number of control variables and/or the number of mesh points.
The second principal attraction involves the treatment of path inequality constraints. Defining an a priori distribution of constrained subarcs is not necessary because the underlying NLP effectively determines the subarcs using an active set strategy. However, this approach is really only an approximation to the true solution. First, the number of grid points effectively defines the resolution of the constrained subarcs. Second, when entering or leaving a path constraint, the control time history may require jump discontinuities at the junction points. If the control approximation does not permit discontinuities, then the result will be suboptimal.
One can of course introduce a phase boundary at the junction point that will improve the control approximation. However, this technique has not been automated. A third, more fundamental difficulty occurs when the underlying DAE has an index greater than 1, as with singular arcs and/or state variable inequalities. In this case, either the NLP subproblem is singular and/or attempts to refine the mesh for improved accuracy will fail. Essentially these difficulties can be attributed to the fact that none of the discretization schemes described are appropriate for high-index DAEs. One approach is to impose additional nonlinear path conditions along the singular arc.99 Unfortunately, this approach requires analytic elimination of the adjoint variables and a priori knowledge of the constrained subarcs. A more promising approach is to incorporate a four-point collocation scheme during the singular arc as proposed by Logsdon and Biegler.100
The vast majority of successful trajectory optimization applications incorporate some variant of the methods described earlier. For the sake of completeness, we include a brief discussion of two other approaches that have been considered but that are generally not computationally competitive.
A. Dynamic Programming
In the late 1950s Bellman101 introduced a generalization of the classical Hamilton–Jacobi theory. The key notion of these so-called extremal field methods is described by a system of first-order nonlinear partial differential equations known as the Hamilton–Jacobi–Bellman equation. Essentially, these PDEs describe the optimal control functions u*[x, t] as well as the optimal objective for all possible initial conditions [x(t_0), t_0]. Hamilton–Jacobi–Bellman theory has played a major role in the development of necessary and sufficient conditions and has provided a unified theoretical basis for the field of optimal control. Despite its theoretical importance, the utility of dynamic programming as the basis for a viable numerical method is summarized by Bryson and Ho on page 136 of Ref. 23 as follows:
The great drawback of dynamic programming is, as Bellman himself calls it, the "curse of dimensionality." Even recording the solution to a moderately complicated problem involves an enormous amount of storage. If we want only one optimal path from a known initial point, it is wasteful and tedious to find a whole field of extremals. …
B. Genetic Algorithms
All of the trajectory optimization methods described earlier have well-defined termination criteria. As a consequence, it is possible to decide whether a candidate solution, e.g., x̂, is in fact an answer by evaluating the necessary conditions, e.g., Eqs. (20) and (21) or (36–40). The ability to define convergence is a fundamental property of calculus-based methods. In contrast, when the variables are discrete, calculus-based methods do not apply. In general, for problems with discrete variables the only way to decide if a candidate solution x̂ is in fact an answer is by comparison with all other possible candidates. Unfortunately, this is a combinatorial problem that is computationally prohibitive for all but the smallest applications. To avoid direct comparison of all possible solutions, it is necessary to introduce randomness at some point in the optimization process and abandon a definitive convergence criterion. The basic notion of genetic algorithms, simulated annealing, tabu search, and evolutionary or Monte Carlo methods is to randomly select values for the unknown problem variables. After a finite number of random samples the best value is considered the answer. For some applications, notably those with discrete variables, algorithms of this type are the only practical alternative. However, trajectory optimization problems do not fall in this class! Trajectory applications are not characterized by discrete variables, and there simply is no reason to use a method that incurs the penalty associated with this assumption. Nevertheless, methods of this type have attracted the interest of many analysts, presumably because they are incredibly simple to apply without a detailed understanding of the system being optimized. Unfortunately, because they do not exploit gradient information, they are not computationally competitive with the methods in Sec. VI.
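The random sampling idea is easy to state, which is much of its appeal. A minimal Python sketch follows; the cost function, bounds, and sample budget in the usage below are invented for illustration:

```python
import numpy as np

def random_search(f, lower, upper, samples=2000, seed=1):
    """Pure random search: sample uniformly within bounds, keep the best value.
    There is no convergence test; the sample budget itself ends the search."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(samples):
        x = rng.uniform(lower, upper)    # randomly selected candidate variables
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx       # best value seen so far is "the answer"
    return best_x, best_f
```

Note the absence of any termination criterion: the budget alone stops the iteration, which is precisely the loss of a definitive convergence test described above, and thousands of function evaluations yield only modest accuracy compared with a handful of Newton iterations.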
There are many techniques for numerically solving trajectory optimization problems, and it is sometimes helpful to classify the techniques as either indirect or direct. Indirect methods are characterized by explicitly solving the optimality conditions stated in terms of the adjoint differential equations, the maximum principle, and associated boundary (transversality) conditions. Using the calculus of variations, the optimal control necessary conditions can be derived by setting the first variation of the Hamiltonian function to zero. The indirect approach usually requires the solution of a nonlinear multipoint boundary value problem. By analogy, an indirect method for optimizing a function of n variables would require analytically computing the gradient and then locating a set of variables using a root-finding algorithm such that the gradient is zero. There are three primary drawbacks to this approach in practice. First, it is necessary to derive analytic expressions for the necessary conditions, and for complicated nonlinear dynamics this can become quite daunting. Second, the region of convergence for a root-finding algorithm may be surprisingly small, especially when it is necessary to guess values for the adjoint variables that may not have an obvious physical interpretation. Third, for problems with path inequalities it is necessary to guess the sequence of constrained and unconstrained subarcs before iteration can begin. In contrast, a direct method does not require an analytic expression for the necessary conditions and typically does not require initial guesses for the adjoint variables. Instead, the dynamic (state and control) variables are adjusted to directly optimize the objective function. All direct methods introduce some parametric representation for the control variables. For simple shooting, the control functions are defined by a relatively small number of NLP variables. For multiple shooting and transcription methods, the number of NLP variables used to describe the controls increases, ultimately including values at each mesh point in the interval.
Throughout the paper I have emphasized the similarity between methods. All of the methods utilize a Newton-based iteration to adjust a finite set of variables. The methods can be distinguished by identifying the set of iteration variables and constraints. The optimal control necessary conditions can be interpreted as limiting forms of the NLP Kuhn–Tucker necessary conditions. At the present time perhaps the most widely used methods are direct shooting, indirect multiple shooting, and direct transcription. Each method has advantages and disadvantages, and I have attempted to highlight them. Future research and development will undoubtedly focus on removing deficiencies in these techniques. Progress in the analysis of high-index differential-algebraic equations, automatic differentiation, and sparse nonlinear programming will certainly lead to refinements in existing software and methods. In fact, one may expect many of the best features of seemingly disparate techniques to merge, forming still more powerful methods.
References
W. P., “Mission Timeline for a Shuttle-IUS Flight Using a Nonstandard Shuttle Park Orbit,” The Aerospace Corp., Technical Operating Rept. TOR-0083-(3493-14), El Segundo, CA, Oct. 1982. 2 Citron, S. J., Elements of Optimal Control, Holt, Rinehart and Winston, New York, 1969. 3 Fletcher, R., Practical Methods of Optimization, Vol. 2, Constrained Optimization, Wiley, New York, 1985. 4 Gill, P. E., Murray, W., and Wright, M. H., Practical Optimization, Academic, London, 1981. 5 Dennis, J. E., Jr., and Schnabel, R. B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice–Hall, Englewood Cliffs, NJ, 1983. 6 Gill, P. E., Murray, W., Saunders, M. A., and Wright, M. H., “Some Theoretical Properties of an Augmented Lagrangian Merit Function,” Dept. of Operations Research, Stanford Univ., TR SOL 86-6, Stanford, CA, April 1986. 7 Gill, P. E., Murray, W., Saunders, M. A., and Wright, M. H., “User’s Guide for NPSOL (Version 4.0): A Fortran Package for Nonlinear Programming,” Dept. of Operations Research, Stanford Univ., TR SOL 86-2, Stanford, CA, Jan. 1986. 8 Betts, J. T., and Hallman, W. P., “NLP2 Optimization Algorithm Documentation,” The Aerospace Corp., Technical Operating Rept. TOR-0089(4464-06)-1, Reissue A, El Segundo, CA, 1997. 9 Rosen, J. B., and Kreuser, J., “A Gradient Projection Algorithm for Nonlinear Constraints,” Numerical Methods for Non-Linear Optimization, edited by F. A. Lootsma, Academic, London, 1972, pp. 297–300. 10 Lasdon, L. S., and Waren, A. D., “Generalized Reduced Gradient Software for Linearly and Nonlinearly Constrained Optimization,” Design and Implementation of Optimization Software, edited by H. J. Greenberg, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1978, pp. 335–362. 11 Betts, J. T., and Frank, P. D., “A Sparse Nonlinear Optimization Algorithm,” Journal of Optimization Theory and Applications, Vol. 82, 1994, pp. 519–541. 12 Betts, J. T., Carter, M. J., and Huffman, W. P., “Software for Nonlinear Optimization,” Boeing Information and Support Services, Mathematics and Engineering Analysis Library Rept. MEA-LR-83 R1, The Boeing Co., Seattle, WA, June 1997. 13 Ashcraft, C. C., Grimes, R. G., and Lewis, J. G., “Accurate Symmetric Indefinite Linear Equation Solvers,” SIAM Journal of Matrix Analysis (submitted for publication). 14 Betts, J. T., Eldersveld, S. K., and Huffman, W. P., “A Performance Comparison of Nonlinear Programming Algorithms for Large Sparse Problems,” Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA, Washington, DC, 1993 (AIAA Paper 93-3751). 15 Betts, J. T., and Huffman, W. P., “Sparse Nonlinear Programming Test Problems (Release 1.0),” Boeing Computer Services, BCS Technology TR BCSTECH-93-047, The Boeing Co., Seattle, WA, 1993. 16 Gill, P. E., Murray, W., Saunders, M. A., and Wright, M. H., “A Schur-Complement Method for Sparse Quadratic Programming,” Dept. of Operations Research, Stanford Univ., TR SOL 87-12, Stanford, CA, Oct. 1987. 17 Betts, J. T., “Sparse Jacobian Updates in the Collocation Method for Optimal Control Problems,” Journal of Guidance, Control, and Dynamics, Vol. 13, 1990, pp. 409–415. 18 Coleman, T. F., and Moré, J. J., “Estimation of Sparse Jacobian Matrices and Graph Coloring Problems,” SIAM Journal of Numerical Analysis, Vol. 20, 1983, pp. 187–209. 19 Curtis, A. R., Powell, M., and Reid, J., “On the Estimation of Sparse Jacobian Matrices,” Journal of the Institute of Mathematics and Applications, Vol. 13, 1974, pp. 117–120. 20 Gill, P. E., Murray, W., and Saunders, M. A., “SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization,” Dept. of Operations Research, Stanford Univ., TR SOL 96-0, Stanford, CA, 1996. 21 Bliss, G. A., Lectures on the Calculus of Variations, Univ. of Chicago Press, Chicago, IL, 1946. 22 Pontryagin, L. S., The Mathematical Theory of Optimal Processes, Wiley-Interscience, New York, 1962. 23 Bryson, A. E., Jr., and Ho, Y. C., Applied Optimal Control, Wiley, New York, 1975.
24 Ascher, U. M., Mattheij, R. M. M., and Russell, R. D., Numerical Solution of Boundary Value Problems in Ordinary Differential Equations, Prentice–Hall, Englewood Cliffs, NJ, 1988. 25 Brenan, K. E., Campbell, S. L., and Petzold, L. R., Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, North-Holland, New York, 1989. 26 Tsien, H. S., and Evans, R. C., “Optimum Thrust Programming for a Sounding Rocket,” Journal of the American Rocket Society, Vol. 21, 1951, pp. 99–107. 27 Speyer, J. L., “Periodic Optimal Flight,” Journal of Guidance, Control, and Dynamics, Vol. 19, No. 4, 1996, pp. 745–755. 28 Sachs, G., and Lesch, K., “Periodic Maximum Range Cruise with Singular Control,” Journal of Guidance, Control, and Dynamics, Vol. 16, No. 4, 1993, pp. 790–793. 29 Berkmann, P., and Pesch, H. J., “Abort Landing in Windshear: Optimal Control Problem with Third-Order State Constraint and Varied Switching Structure,” Journal of Optimization Theory and Applications, Vol. 85, 1995, pp. 21–57. 30 Pesch, H. J., “A Practical Guide to the Solution of Real-Life Optimal Control Problems,” Control and Cybernetics, Vol. 23, 1994, pp. 7–60. 31 Canon, M. D., Cullum, C. D., and Polak, E., Theory of Optimal Control and Mathematical Programming, McGraw–Hill, New York, 1970. 32 Polak, E., Computational Methods in Optimization, Academic, New York, 1971. 33 Tabak, D., and Kuo, B. C., Optimal Control by Mathematical Programming, Prentice–Hall, Englewood Cliffs, NJ, 1971. 34 Enright, P. J., and Conway, B. A., “Discrete Approximations to Optimal Trajectories Using Direct Transcription and Nonlinear Programming,” Journal of Guidance, Control, and Dynamics, Vol. 15, 1992, pp. 994–1002. 35 Herman, A. L., and Conway, B. A., “Direct Optimization Using Collocation Based on High-Order Gauss–Lobatto Quadrature Rules,” Journal of Guidance, Control, and Dynamics, Vol. 19, No. 3, 1996, pp. 592–599.
36 Von Stryk, O., and Bulirsch, R., “Direct and Indirect Methods for Trajectory Optimization,” Annals of Operations Research, Vol. 37, 1992, pp. 357–373. 37 Von Stryk, O., “Numerical Solution of Optimal Control Problems by Direct Collocation,” Optimal Control, edited by R. Bulirsch, A. Miele, J. Stoer, and K. H. Wells, Vol. 111, International Series of Numerical Mathematics, Birkhäuser Verlag, Basel, Switzerland, 1993, pp. 129–143. 38 Calise, A. J., “Singular Perturbation Techniques for On-Line Optimal Flight-Path Control,” Journal of Guidance and Control, Vol. 4, 1981, pp. 398–405. 39 Calise, A. J., and Moerder, D. D., “Singular Perturbation Techniques for Real Time Aircraft Trajectory Optimization and Control,” NASA CR-3597, Aug. 1982. 40 Bashforth, F., and Adams, J. C., Theories of Capillary Action, Cambridge Univ. Press, New York, 1883. 41 Moulton, F. R., New Methods in Exterior Ballistics, Univ. of Chicago, Chicago, IL, 1926. 42 Dahlquist, G., and Björck, A., Numerical Methods, Prentice–Hall, Englewood Cliffs, NJ, 1974. 43 Stoer, J., and Bulirsch, R., Introduction to Numerical Analysis, Springer, New York, 1980. 44 Hindmarsh, A. C., “ODEPACK, A Systematized Collection of ODE Solvers,” Scientific Computing, edited by R. S. Stepleman, North-Holland, Amsterdam, 1983, pp. 55–64. 45 Shampine, L. F., and Gordon, M. K., Computer Solution of Ordinary Differential Equations: The Initial Value Problem, W. H. Freeman and Co., San Francisco, 1975. 46 Gear, C. W., Numerical Initial Value Problems in Ordinary Differential Equations, Prentice–Hall, Englewood Cliffs, NJ, 1971. 47 Seywald, H., “Trajectory Optimization Based on Differential Inclusion,” Journal of Guidance, Control, and Dynamics, Vol. 17, No. 3, 1994, pp. 480–487. 48 Gear, C. W., “The Simultaneous Numerical Solution of Differential-Algebraic Equations,” IEEE Transactions on Circuit Theory, Vol. CT-18, 1971, pp. 89–95. 49 Petzold, L.
R., “Differential/Algebraic Equations Are Not ODEs,” SIAM Journal on Scientific and Statistical Computing, Vol. 3, 1982, pp. 367–384. 50 Petzold, L. R., “A Description of DASSL, A Differential/Algebraic System Solver,” Scientific Computing, edited by R. S. Stepleman, North-Holland, Amsterdam, 1983, pp. 65–68. 51 Hallman, W. P., “Smooth Curve Fits for the Shuttle Solid Rocket Booster Data,” The Aerospace Corp., Interoffice Correspondence IOC A85-5752.505, El Segundo, CA, March 1985. 52 Luke, R. A., “Computational Efficiency Considerations for High-Fidelity Launch Vehicle Trajectory Optimization,” Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA, Washington, DC, 1989 (AIAA Paper 89-3446). 53 De Boor, C., A Practical Guide to Splines, Springer–Verlag, New York, 1978.
54 Ferguson, D. R., and Mastro, R. A., “Modeling and Analysis of Aerodynamic Data II. Practical Experience,” Proceedings of the AIAA/AHS/ASEE Aircraft Design, Systems and Operations Conference, AIAA, Washington, DC, 1989 (AIAA Paper 89-2076). 55 Ferguson, D. R., and Mastro, R. A., “Modeling and Analysis of Aerodynamic Data II,” AIAA Paper 89-0476, June 1989. 56 Ferguson, D. R., “Construction of Curves and Surfaces Using Numerical Optimization Techniques,” CAD, Vol. 18, 1986, pp. 15–21. 57 Betts, J. T., “The Application of Sparse Least Squares in Aerospace Design Problems,” Optimal Design and Control, Proceedings of the Workshop on Optimal Design and Control, edited by J. Borggaard, J. Burkardt, M. Gunzburger, and J. Peterson, Vol. 19, Progress in Systems and Control Theory, Birkhäuser, Basel, Switzerland, 1994, pp. 81–96. 58 Brauer, G. L., Cornick, D. E., and Stevenson, R., “Capabilities and Applications of the Program to Optimize Simulated Trajectories (POST),” NASA CR-2770, Feb. 1977. 59 Meder, D. S., and Searcy, J. L., “Generalized Trajectory Simulation (GTS), Volumes I–V,” The Aerospace Corp., TR SAMSO-TR-75-255, El Segundo, CA, Jan. 1975. 60 Betts, J. T., “Optimal Three-Burn Orbit Transfer,” AIAA Journal, Vol. 15, 1977, pp. 861–864. 61 Brusch, R. G., “Constrained Impulsive Trajectory Optimization for Orbit-to-Orbit Transfer,” Journal of Guidance and Control, Vol. 2, 1979, pp. 204–212. 62 Bauer, T. P., Betts, J. T., Hallman, W. P., Huffman, W. P., and Zondervan, K. P., “Solving the Optimal Control Problem Using a Nonlinear Programming Technique Part 2: Optimal Shuttle Ascent Trajectories,” Proceedings of the AIAA/AAS Astrodynamics Conference, AIAA, Washington, DC, 1984 (AIAA Paper 84-2038). 63 Brenan, K. E., “Engineering Methods Report: The Design and Development of a Consistent Integrator/Interpolator for Optimization Problems,” The Aerospace Corp., ATM 88(9975)-52, El Segundo, CA, 1988. 64 Gear, C.
W., and Van Vu, T., “Smooth Numerical Solutions of Ordinary Differential Equations,” Proceedings of the Workshop on Numerical Treatment of Inverse Problems for Differential and Integral Equations, Heidelberg, Germany, 1982. 65 Van Vu, T., “Numerical Methods for Smooth Solutions of Ordinary Differential Equations,” Dept. of Computer Science, Univ. of Illinois, TR UIUCDCS-R-83-1130, Urbana, IL, May 1983. 66 Betts, J. T., “Frontiers in Engineering Optimization,” Journal of Mechanisms, Transmissions, and Automation in Design, Vol. 105, 1983, pp. 151–154. 67 Balkanyi, L. R., and Spurlock, O. F., “DUKSUP—A High Thrust Trajectory Optimization Code,” Proceedings of the AIAA/AHS/ASEE Aerospace Design Conference, AIAA, Washington, DC, 1993 (AIAA Paper 93-1127). 68 Edelbaum, T., Sackett, L., and Malchow, H., “Optimal Low Thrust Geocentric Transfer,” Proceedings of the AIAA 10th Electric Propulsion Conference, AIAA, New York, 1973 (AIAA Paper 73-1074). 69 Bischof, C., Carle, A., Corliss, G., Griewank, A., and Hovland, P., “ADIFOR: Generating Derivative Codes from Fortran Programs,” Scientific Programming, Vol. 1, 1992, pp. 11–29. 70 Schöpf, R., and Deuflhard, P., “OCCAL: A Mixed Symbolic-Numeric Optimal Control CALculator,” Control Applications of Optimization, edited by R. Bulirsch and Dieter Kraft, Birkhäuser Verlag, Basel, Switzerland, 1993. 71 Mehlhorn, R., and Sachs, G., “A New Tool for Efficient Optimization by Automatic Differentiation and Program Transparency,” Optimization Methods and Software, Vol. 4, 1994, pp. 225–242. 72 Bulirsch, R., “Die Mehrzielmethode zur numerischen Lösung von nichtlinearen Randwertproblemen und Aufgaben der optimalen Steuerung,” Rept. of the Carl-Cranz Gesellschaft, Carl-Cranz Gesellschaft, Oberpfaffenhofen, Germany, 1971. 73 Keller, H. B., Numerical Methods for Two-Point Boundary Value Problems, Blaisdell, London, 1968. 74 Oberle, H.
J., and Grimm, W., “BNDSCO—A Program for the Numerical Solution of Optimal Control Problems, User Guide,” Institut für Dynamik der Flugsysteme, Deutsche Forschungsanstalt für Luft und Raumfahrt (DLR), Internal Rept. DLR IB/515-89/22, Oberpfaffenhofen, Germany, 1989. 75 Callies, R., “Optimal Design of a Mission to Neptune,” Optimal Control, edited by R. Bulirsch, A. Miele, J. Stoer, and K. H. Wells, Vol. 111, International Series of Numerical Mathematics, Birkhäuser Verlag, Basel, Switzerland, 1993, pp. 341–349. 76 Kreim, H., Kugelmann, B., Pesch, H. J., and Breitner, M. H., “Minimizing the Maximum Heating of a Re-Entering Space Shuttle: An Optimal Control Problem with Multiple Control Constraints,” Optimal Control Applications and Methods, Vol. 17, 1996, pp. 45–69. 77 Bock, H. G., and Plitt, K. J., “A Multiple Shooting Algorithm for Direct Solution of Optimal Control Problems,” Proceedings of the 9th IFAC World Congress, Pergamon, 1984, pp. 242–247. 78 Betts, J. T., and Huffman, W. P., “Trajectory Optimization on a Parallel Processor,” Journal of Guidance, Control, and Dynamics, Vol. 14, 1991, pp. 431–439.
79 Dickmanns, E. D., and Well, K. H., “Approximate Solution of Optimal Control Problems Using Third-Order Hermite Polynomial Functions,” Proceedings of the 6th Technical Conference on Optimization Techniques, Vol. IFIP-TC7, Springer–Verlag, New York, 1975. 80 Russell, R. D., and Shampine, L. F., “A Collocation Method for Boundary Value Problems,” Numerische Mathematik, Vol. 19, 1972, pp. 13–36. 81 Dickmanns, E. D., “Efficient Convergence and Mesh Refinement Strategies for Solving General Ordinary Two-Point Boundary Value Problems by Collocated Hermite Approximation,” 2nd IFAC Workshop on Optimisation, Oberpfaffenhofen, Germany, Sept. 1980. 82 Ascher, U. M., Christiansen, J., and Russell, R. D., “COLSYS—A Collocation Code for Boundary-Value Problems,” Codes for Boundary Value Problems in Ordinary Differential Equations, edited by B. Childs, M. Scott, J. W. Daniel, E. Denman, and P. Nelson, Vol. 76, Lecture Notes in Computer Science, Springer–Verlag, Berlin, 1979. 83 Hull, D. G., “Conversion of Optimal Control Problems into Parameter Optimization Problems,” Journal of Guidance, Control, and Dynamics, Vol. 20, No. 1, 1997, pp. 57–60. 84 Hargraves, C. R., and Paris, S. W., “Direct Trajectory Optimization Using Nonlinear Programming and Collocation,” Journal of Guidance, Control, and Dynamics, Vol. 10, 1987, p. 338. 85 Betts, J. T., and Huffman, W. P., “Application of Sparse Nonlinear Programming to Trajectory Optimization,” Journal of Guidance, Control, and Dynamics, Vol. 15, 1992, pp. 198–206. 86 Betts, J. T., and Huffman, W. P., “Path-Constrained Trajectory Optimization Using Sparse Sequential Quadratic Programming,” Journal of Guidance, Control, and Dynamics, Vol. 16, No. 1, 1993, pp. 59–68. 87 Betts, J. T., “Issues in the Direct Transcription of Optimal Control Problems to Sparse Nonlinear Programs,” Computational Optimal Control, edited by R. Bulirsch and D. Kraft, Vol.
115, International Series of Numerical Mathematics, Birkhäuser Verlag, Basel, Switzerland, 1994, pp. 3–18.
88 Betts, J. T., “Trajectory Optimization Using Sparse Sequential Quadratic Programming,” Optimal Control, edited by R. Bulirsch, A. Miele, J. Stoer, and K. H. Wells, Vol. 111, International Series of Numerical Mathematics, Birkhäuser Verlag, Basel, Switzerland, 1993, pp. 115–128.
89 Betts, J. T., and Huffman, W. P., “Sparse Optimal Control Software SOCS,” Boeing Information and Support Services, Mathematics and Engineering Analysis Technical Document MEA-LR-085, The Boeing Co., Seattle, WA, July 1997.
90 Betts, J. T., and Huffman, W. P., “Mesh Refinement in Direct Transcription Methods for Optimal Control,” Optimal Control Applications and Methods (to be published).
91 Betts, J. T., “Using Sparse Nonlinear Programming to Compute Low Thrust Orbit Transfers,” Journal of the Astronautical Sciences, Vol. 41, 1993, pp. 349–371.
92 Betts, J. T., “Optimal Interplanetary Orbit Transfers by Direct Transcription,” Journal of the Astronautical Sciences, Vol. 42, 1994, pp. 247–268.
93 Betts, J. T., and Cramer, E. J., “Application of Direct Transcription to Commercial Aircraft Trajectory Optimization,” Journal of Guidance, Control, and Dynamics, Vol. 18, No. 1, 1995, pp. 151–159.
94 Enright, P. J., and Conway, B. A., “Optimal Finite-Thrust Spacecraft Trajectories Using Collocation and Nonlinear Programming,” Journal of Guidance, Control, and Dynamics, Vol. 14, 1991, pp. 981–985.
95 Roenneke, A. J., Jänsch, C., and Markl, A., “Advanced Launch and Reentry Trajectory Optimization Software Documentation (ALTOS),” European Space Science and Technology Center, TR, Software User Manual, Draft 1, Noordwijk, The Netherlands, May 1995.
96 Kraft, D., “On Converting Optimal Control Problems into Nonlinear Programming Problems,” Computational Mathematical Programming, Vol. F15, NATO ASI Series, Springer–Verlag, 1985.
97 Brenan, K. E., and Hallman, W. P., “A Generalized Reduced Gradient Algorithm for Large-Scale Trajectory Optimization Problems,” Optimal Design and Control, Proceedings of the Workshop on Optimal Design and Control, edited by J. Borggaard, J. Burkardt, M. Gunzburger, and J. Peterson, Vol. 19, Progress in Systems and Control Theory, Birkhäuser Verlag, Basel, Switzerland, 1994, pp. 117–132.
98 Steinbach, M. C., “A Structured Interior Point SQP Method for Nonlinear Optimal Control Problems,” Computational Optimal Control, edited by R. Bulirsch and D. Kraft, Vol. 115, International Series of Numerical Mathematics, Birkhäuser Verlag, Basel, Switzerland, 1994, pp. 213–222.
99 Downey, J. R., and Conway, B. A., “The Solution of Singular Optimal Control Problems Using Direct Collocation and Nonlinear Programming,” AAS/AIAA Astrodynamics Specialist Conf., AAS Paper 91-443, Durango, CO, Aug. 1991.
100 Logsdon, J. S., and Biegler, L. T., “Accurate Solution of Differential-Algebraic Optimization Problems,” Industrial Engineering Chemical Research, Vol. 28, 1989, pp. 1628–1639.
101 Bellman, R., Dynamic Programming, Princeton Univ. Press, Princeton, NJ, 1957.