Optimal Control of Mechanical Systems
Vadim Azhmyakov

In the present work, we consider a class of nonlinear optimal control problems that can be called “optimal control problems in mechanics.” We deal with control systems whose dynamics can be described by a system of Euler-Lagrange or Hamilton equations. Using the variational structure of the solution of the corresponding boundary-value problems, we reduce the initial optimal control problem to an auxiliary problem of multiobjective programming. This technique makes it possible to apply consistent numerical approximations of a multiobjective optimization problem to the initial optimal control problem. For solving the auxiliary problem, we propose an implementable numerical algorithm.


Introduction
The control of mechanical systems has become a modern application focus of nonlinear control theory [1][2][3][4]. In this paper, we study a class of controlled mechanical systems governed by the second-order Euler-Lagrange equations or Hamilton equations. It is well known that a large class of mechanical and physical systems admits, at least partially, a representation by these equations, which lie at the heart of the theoretical framework of physics. Important examples of controlled mechanical systems are mechanical and electromechanical plants such as diverse mechanisms, transport systems, robots, and so on [4].
In practice, controlled mechanical systems are strongly nonlinear dynamical systems of high order. Moreover, the majority of applied optimal control problems are constrained problems. Most real-world mechanical problems are too complex to allow an analytical solution. Thus, computational algorithms are inevitable in solving these problems. There is a number of results scattered in the literature on numerical methods for optimal control problems that are very often closely related, although apparently independent. One can find a fairly complete review in [5][6][7][8][9].
Computational methods based on the Bellman optimality principle were among the first proposed for optimal control problems [10, 11]. Application of the necessary conditions of optimal control theory, specifically of the Pontryagin maximum principle, yields a boundary-value problem with ordinary differential equations. Clearly, the necessary optimality conditions and the corresponding boundary-value problems play an important role in optimal control computations (see, e.g., [12, 13]). An optimal control problem with state constraints can also be solved using modern nonlinear programming algorithms. For example, the implementation of the interior point method is presented in [14]. The application of the trust-region method to optimal control is discussed in [15].
Gradient-type algorithms [7] can also be applied to optimal control problems with constraints if the problem is discretized a priori and the discretization for states coincides with that for controls. There are many variants of gradient algorithms, depending on whether the problem is a priori discretized in time and on the optimization solver used. A gradient-based method evaluates gradients of the objective functional [5, 6]. The calculation of second-order derivatives of the objective functional can be avoided by applying a sequential-quadratic-programming-(SQP-)type optimization algorithm in which these derivatives are approximated by quasi-Newton formulas. The application of SQP-type methods to optimal control is comprehensively discussed in [16, 17].
The aim of our investigations is to use the variational structure of the solution to the two-point boundary-value problem for the controllable Euler-Lagrange or Hamilton equations and to propose a new computational algorithm for optimal control problems in mechanics. We consider an optimal control problem in mechanics in the general nonlinear formulation and reduce the initial optimal control problem to an auxiliary multiobjective optimization problem with constraints. This optimization problem provides a basis for solving the original optimal control problem.
The outline of the paper is the following. Section 2 contains an overview and some basic facts about controllable mechanical systems. In Section 3, we consider the constrained optimal control problem in mechanics. In Section 4, we study the variational properties of the initial problem. Section 5 deals with an implementable numerical scheme for optimal control problems in mechanics. Section 6 summarizes the paper.

Preliminary results and overview
The basic inspiration for modeling systems in analytical mechanics is the following variational problem:

minimize ∫₀¹ L(t, q(t), q̇(t)) dt, q(0) = c0, q(1) = c1, (2.1)

where L is the Lagrangian function of the (noncontrolled) mechanical system and q(•) is a continuously differentiable function, q(t) ∈ Rn. We consider a mechanical system with n degrees of freedom, locally represented by n generalized configuration coordinates q(t) = (q1(t),...,qn(t)). The components q̇λ(t), λ = 1,...,n, of q̇(t) are the so-called generalized velocities. We assume that the function L(t,•,•) is twice continuously differentiable. It is also assumed that the function L(t, q,•) is strongly convex.
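A minimal numerical sketch of the variational problem (2.1): for the illustrative Lagrangian L(q, q̇) = q̇²/2 − q²/2 (a harmonic oscillator, not a system fixed by this paper) with the assumed boundary values q(0) = 0, q(1) = 1, minimizing a discretized action over the interior grid values reproduces the exact extremal q(t) = sin t / sin 1:

```python
import numpy as np
from scipy.optimize import minimize

# Discretized action for the illustrative Lagrangian L = qdot^2/2 - q^2/2
# with boundary conditions q(0) = 0, q(1) = 1 (assumed example data).
N = 50
h = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
q0, q1 = 0.0, 1.0

def action(q_inner):
    q = np.concatenate(([q0], q_inner, [q1]))
    qdot = np.diff(q) / h                 # forward-difference velocities
    qmid = 0.5 * (q[:-1] + q[1:])         # midpoint values of q
    return h * np.sum(0.5 * qdot**2 - 0.5 * qmid**2)

# Minimize the discrete action over the interior node values.
res = minimize(action, np.linspace(q0, q1, N + 1)[1:-1], method="BFGS")
q_num = np.concatenate(([q0], res.x, [q1]))

# Exact extremal of (2.1) for this Lagrangian: qddot + q = 0.
q_exact = np.sin(t) / np.sin(1.0)
err = np.max(np.abs(q_num - q_exact))
```

Because L(t, q,•) is strongly convex and the interval [0,1] lies before the first conjugate point, the extremal here is a genuine minimizer of the action.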
The necessary conditions for the variational problem (2.1) describe, for an appropriate choice of the Lagrangian function L, the equations of motion of many mechanical systems that are free from external influence. These necessary conditions are the second-order Euler-Lagrange equations [18,19]

d/dt (∂L/∂q̇λ)(t, q(t), q̇(t)) − (∂L/∂qλ)(t, q(t), q̇(t)) = 0, λ = 1,...,n. (2.2)

The principle of Hamilton (see, e.g., [18,19]) gives a variational description of the solution of the two-point boundary-value problem for the Euler-Lagrange equations (2.2). For a controlled mechanical system with n degrees of freedom and a Lagrangian L(t, q, q̇, u), we introduce the equations of motion

d/dt (∂L/∂q̇λ)(t, q(t), q̇(t), u(t)) − (∂L/∂qλ)(t, q(t), q̇(t), u(t)) = 0, λ = 1,...,n, q(0) = c0, q(1) = c1, (2.3)

where u(•) ∈ ᐁ is a control function from the set of admissible controls ᐁ. Let

ᐁ := {u(•) ∈ L²m([0,1]) : b1,ν ≤ uν(t) ≤ b2,ν a.e. on [0,1], ν = 1,...,m},

where b1,ν, b2,ν, ν = 1,...,m, are constants and L²m([0,1]) is the usual Lebesgue space of all square-integrable functions from [0,1] into Rm. The introduced set ᐁ is a standard example of an admissible control set (see, e.g., [20]). In specific cases, we consider the set of admissible controls ᐁ ∩ C¹m(0,1). We examine the given controlled mechanical system in the absence of external forces; the Lagrangian function L depends directly on the control function u(•). We assume that the function L(t,•,•,u) is twice continuously differentiable and that L(t, q, q̇,•) is continuously differentiable. For a fixed admissible control u(•) ∈ ᐁ, we obtain the usual (noncontrolled) mechanical system with L(t, q, q̇) ≡ L(t, q, q̇, u(t)) and the corresponding Euler-Lagrange equation (2.2). It is assumed that the function L(t, q,•,u) is strongly convex, that is,

⟨(∂²L/∂q̇²)(t, q, q̇, u)ξ, ξ⟩ ≥ α⟨ξ, ξ⟩ for all ξ ∈ Rn and some α > 0,

for any (t, q, q̇, u). This convexity condition is a direct consequence of the representation

T = (1/2)⟨M(t, u)q̇, q̇⟩

for the kinetic energy of a mechanical system, where M(t, u) is a positive definite matrix. Under the above-mentioned assumptions on the Lagrangian function L, the two-point boundary-value problem (2.3) has a solution for every u(•) ∈ ᐁ [21]. We assume that (2.3) has a unique solution for every u(•) ∈ ᐁ. Given an admissible control function u(•) ∈ ᐁ, the solution to the boundary-value problem (2.3) is denoted by q_u(•). We will call (2.3) an Euler-Lagrange control system. Note that (2.3) is a system of implicit second-order differential equations.
Example 2.1. We consider a linear mass-spring system [4] attached to a moving frame.
The control u(•) ∈ ᐁ ∩ C¹1(0,1) is the velocity of the frame. By ω we denote the mass of the system. The kinetic energy (1/2)ω(q̇ + u)² depends directly on u(•), and so does the Lagrangian function

L(q, q̇, u) = (1/2)ω(q̇ + u)² − (1/2)κq²,

yielding the equation of motion (2.3)

ω(q̈(t) + u̇(t)) + κq(t) = 0, q(0) = c0, q(1) = c1.

By κ we denote here the elasticity coefficient of the system. Some important controlled mechanical systems have a Lagrangian function of the form (see, e.g., [4])

L(t, q, q̇, u) = L0(t, q, q̇) + ⟨q, u⟩. (2.9)

In this special case, we have

d/dt (∂L0/∂q̇)(t, q, q̇) − (∂L0/∂q)(t, q, q̇) = u,

and the control function u(•) can be interpreted as an external force.
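The equation of motion of Example 2.1 can be treated directly as a two-point boundary-value problem. The following sketch uses illustrative data not fixed by the paper (ω = κ = 1, the hypothetical control u(t) = t, and zero boundary values) and checks the SciPy solution against the closed-form one:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Mass-spring equation omega*(qddot + udot) + kappa*q = 0 as a first-order
# system; omega = kappa = 1 and u(t) = t are illustrative choices.
omega, kappa = 1.0, 1.0
udot = lambda t: np.ones_like(t)          # u(t) = t  =>  du/dt = 1

def rhs(t, y):
    q, qdot = y
    return np.vstack([qdot, -udot(t) - (kappa / omega) * q])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])       # q(0) = 0, q(1) = 0

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)), tol=1e-6)

# Closed-form solution of qddot = -1 - q with these boundary values.
B = (1.0 - np.cos(1.0)) / np.sin(1.0)
q_exact = np.cos(t) + B * np.sin(t) - 1.0
err = np.max(np.abs(sol.sol(t)[0] - q_exact))
```

The same scheme applies to any fixed admissible control; only `udot` and the boundary data change.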
Let us now pass on to the Hamiltonian formulation. For the Euler-Lagrange control system (2.3), we introduce the generalized momenta

pλ := (∂L/∂q̇λ)(t, q, q̇, u), λ = 1,...,n,

and define the Hamiltonian function H(t, q, p, u) as a Legendre transform of L(t, q, q̇, u), that is,

H(t, q, p, u) := Σλ pλ q̇λ − L(t, q, q̇, u). (2.12)

In the case of hyperregular Lagrangians L(t, q, q̇, u) (see, e.g., [18]), the Legendre transform ᏸ is a diffeomorphism. Using the introduced Hamiltonian H(t, q, p, u), we can rewrite the equations of motion (2.3):

q̇λ = ∂H/∂pλ, ṗλ = −∂H/∂qλ, λ = 1,...,n, q(0) = c0, q(1) = c1. (2.13)

Under the above-mentioned assumptions, the boundary-value problem (2.13) has a solution for every u(•) ∈ ᐁ. We will call (2.13) a Hamilton control system. A main advantage of (2.13) in comparison with (2.3) is that (2.13) immediately constitutes a control system in standard state-space form [20] with state variables (q, p) (in physics usually called the phase variables). Consider the system of Example 2.1 with

H(q, p, u) = p²/(2ω) − pu + (1/2)κq².

The Hamilton equations in this case are given as

q̇ = p/ω − u, ṗ = −κq.

Note that if L(t, q, q̇, u) is given as in (2.9), then we have

H(t, q, p, u) = H0(t, q, p) − ⟨q, u⟩,

where H0(t, q, p) is the Legendre transform of L0(t, q, q̇).
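The Hamilton control system of Example 2.1 is a standard first-order state-space system and can be integrated directly. A sketch with the same illustrative data as before (ω = κ = 1, hypothetical control u(t) = t, assumed initial values q(0) = 0, p(0) = 1), compared against the closed-form solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hamilton equations for the mass-spring system with omega = kappa = 1 and
# the illustrative control u(t) = t:
#   qdot = dH/dp = p/omega - u,   pdot = -dH/dq = -kappa*q.
def hamilton_rhs(t, y):
    q, p = y
    return [p - t, -q]

sol = solve_ivp(hamilton_rhs, (0.0, 1.0), [0.0, 1.0],
                rtol=1e-9, atol=1e-9, dense_output=True)
t = np.linspace(0.0, 1.0, 11)
q_num, p_num = sol.sol(t)

# Closed-form solution for this data: q = cos t + sin t - 1,
# p = omega*(qdot + u) = cos t - sin t + t.
err_q = np.max(np.abs(q_num - (np.cos(t) + np.sin(t) - 1.0)))
err_p = np.max(np.abs(p_num - (np.cos(t) - np.sin(t) + t)))
```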

Optimal control problems in mechanics
Let us consider the following optimal control problem with constraints:

minimize J(q(•), u(•)) := ∫₀¹ f0(t, q(t), u(t)) dt
subject to the Euler-Lagrange control system (2.3),
hj(u(•)) ≤ 0, j ∈ I, gk(u(•)) ≤ 0, k ∈ K, u(•) ∈ ᐁ, (3.1)

where f0 : [0,1] × Rn × Rm → R is a continuous function. By I and K we denote finite sets of index values.
In the ensuing analysis, we assume that the function f0(t,•,•) and the functionals hj(•), j ∈ I, gk(•), k ∈ K, are proper convex. We also assume that the boundary-value problem (2.3) has a unique solution and that (3.1) has an optimal solution. The class of optimal control problems of the type (3.1) is broadly representative [20, 8]. Let (q_opt(•), u_opt(•)) be an optimal solution of (3.1). Note that we formulate the initial optimal control problem for the Euler-Lagrange control system. Clearly, it is also possible to use the Hamiltonian formulation. Note that a variety of constraints may be represented in the form of inequalities, including initial conditions, boundary conditions, and interior point conditions of the general form. For example, if the initial optimal control problem contains target constraints on q(1), then hj(u(•)) := hj(q_u(1)) for all j ∈ I.
We mainly focus our attention on the application of a direct numerical method to the constrained optimal control problem (3.1). A great amount of work is devoted to numerical methods for optimal control problems (see [5], [7][8][9] and references therein). One can find a fairly complete review of the main results in [6, 8].
It is common knowledge that an optimal control problem involving ordinary differential equations can be formulated in various ways as an optimization problem in a suitable function space [5, 8, 20, 22]. For example, with the aid of the functional J, the original problem (3.1) can be expressed as an infinite-dimensional optimization problem over the set of admissible control functions u(•) ∈ ᐁ:

minimize J(q_u(•), u(•)) subject to hj(u(•)) ≤ 0, j ∈ I, gk(u(•)) ≤ 0, k ∈ K. (3.4)

The minimization problem (3.4) can be solved by some numerical algorithms (e.g., by applying a first-order method [7, 23]). For example, the implementation of the method of feasible directions is presented in [8].
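To make the reduction to an optimization problem over controls concrete, here is a small direct-method sketch on a simpler first-order stand-in model, not a problem from this paper: minimize ∫₀¹(x² + u²) dt subject to ẋ = u, x(0) = 1, over piecewise-constant controls. The analytic optimal value of this model problem is tanh(1):

```python
import numpy as np
from scipy.optimize import minimize

# Direct method on an illustrative model problem:
#   minimize J(u) = int_0^1 (x^2 + u^2) dt,  xdot = u,  x(0) = 1,
# with a piecewise-constant control discretization.
N = 100
h = 1.0 / N

def J(u):
    x = np.empty(N + 1)
    x[0] = 1.0
    for i in range(N):                    # forward Euler state propagation
        x[i + 1] = x[i] + h * u[i]
    # trapezoidal rule for the state term, rectangle rule for the control term
    return h * np.sum(0.5 * (x[:-1]**2 + x[1:]**2) + u**2)

res = minimize(J, np.zeros(N), method="L-BFGS-B")
# Analytic optimum of the continuous problem: J* = tanh(1) ~ 0.7616.
```

In the paper's setting, the state propagation step would be replaced by the solution of the Euler-Lagrange boundary-value problem (2.3) for the current control iterate.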
Under the above-mentioned conditions ω ≥ 4κ/π² and the given control bounds, we claim that u_opt(t) ≡ 1/2 is an optimal solution of the given optimal control problem. Note that this result is consistent with the Bauer maximum principle of convex programming (see, e.g., [20]). For u_opt(•) we obtain the corresponding optimal trajectory q_opt(•) ∈ C¹1(0,1).

The variational approach
An effective numerical procedure, as a rule, uses the specific character of the concrete problem. Our aim is to consider the variational description of the optimal control problem (3.1). The following theorem is an immediate consequence of the classical Hamilton principle from analytical mechanics.
Theorem 4.1. Let the Lagrangian L(t, q, q̇, u) be a strongly convex function of the variables q̇i, i = 1,...,n. Assume that the boundary-value problem (2.3) has a unique solution for every u(•) ∈ ᐁ. Then q_u(•) is the solution of the boundary-value problem (2.3) if and only if

q_u(•) = argmin { ∫₀¹ L(t, q(t), q̇(t), u(t)) dt : q(•) ∈ C¹n(0,1), q(0) = c0, q(1) = c1 }.
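Theorem 4.1 can be checked numerically on the mass-spring data used earlier (illustrative choices ω = κ = 1, u(t) = t, zero boundary values, none fixed by the paper): the known solution of the boundary-value problem yields a smaller discretized action than perturbed trajectories with the same endpoints.

```python
import numpy as np

# Discretized action S(q) = int_0^1 [ (qdot + u)^2/2 - q^2/2 ] dt for the
# mass-spring Lagrangian with omega = kappa = 1 and u(t) = t (illustrative).
N = 1000
h = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)

def action(q):
    qdot = np.diff(q) / h
    tmid = 0.5 * (t[:-1] + t[1:])
    qmid = 0.5 * (q[:-1] + q[1:])
    return h * np.sum(0.5 * (qdot + tmid)**2 - 0.5 * qmid**2)

# Solution of the boundary-value problem qddot = -1 - q, q(0) = q(1) = 0.
B = (1.0 - np.cos(1.0)) / np.sin(1.0)
q_bvp = np.cos(t) + B * np.sin(t) - 1.0

# Perturbations vanishing at both endpoints keep the boundary data intact.
phi = np.sin(np.pi * t)
S0 = action(q_bvp)
S_plus = action(q_bvp + 0.1 * phi)
S_minus = action(q_bvp - 0.1 * phi)
```

Both perturbed actions exceed S0, in agreement with the Hamilton principle for a strongly convex Lagrangian.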
For a fixed admissible control function u(•), we introduce two functionals associated with problem (3.1). Let Θ denote the set of all functions q(•) satisfying the constraints of problem (3.1). We assume that Θ ≠ ∅.
Theorem 4.2. Let the Lagrangian L(t, q, q̇, u) be a strongly convex function of the variables q̇i, i = 1,...,n. Assume that the boundary-value problem (2.3) has a unique solution for every u(•) ∈ ᐁ. Then q_u(•) is a solution of the boundary-value problem (2.3) that satisfies the constraints of problem (3.1) if and only if condition (4.6) holds.

Proof. Using the Hamilton principle, we obtain the stated minimization property of q_u(•) over the set Θ. Conversely, if condition (4.6) holds, then q_u(•) is a solution of the boundary-value problem (2.3). This completes the proof.

Numerical aspects
A direct implementation of the necessary conditions (4.14) is often impractical. Using a discretization of (4.8), we can obtain a finite-dimensional approximating problem. Note that discrete approximation techniques have been recognized as a powerful tool for solving optimal control problems [6, 8]. Let N be a sufficiently large positive integer and let Ᏻ_N := {t0 = 0, t1,...,tN = 1} be a (possibly nonequidistant) partition of [0,1] with

ξ_N := max_{0≤i≤N−1} (t_{i+1} − t_i), lim_{N→∞} ξ_N = 0.

Define the associated grid functions and consider the finite-dimensional optimization problem (5.3), where b1 and b2 are the constant vectors of the control bounds (5.4). In effect, we deal with the spaces L^{2,N}_n(Ᏻ_N) and L^{2,N}_m(Ᏻ_N) of piecewise constant trajectories q_N(•) and piecewise constant control functions u_N(•). Note that the space L^{2,N}_n(Ᏻ_N) is in one-to-one correspondence with the Euclidean space R^{nN}.
The discrete optimization problem (5.3) approximates the infinite-dimensional optimization problem (4.8). We assume that the set of all Pareto optimal solutions of the discrete problem (5.3) is nonempty. If P(•) is a convex functional, then the discrete multiobjective optimization problem (5.3) is also a convex problem. Here, L^{2,N}_n(Ᏻ_N) and L^{2,N}_m(Ᏻ_N) are the finite-dimensional spaces of the corresponding piecewise constant functions. For (5.3) we can also introduce the Lagrange function Λ_N (see [24]) (5.6), where σ := (σ1, σ2)^T and σ1, σ2 ∈ R^m. We now consider the corresponding necessary (Kuhn-Tucker) conditions for (q*_N(•), u*_N(•)) to be a Pareto optimal solution of (5.3). In this case, we have the Kuhn-Tucker system (5.7), where ∇_{qN(•)}, ∇_{uN(•)} stand for partial derivatives and μ*, r*, s*, and σ* are the (Pareto) optimal Lagrange multipliers [24]. By e ∈ R^m we denote a unit vector. If P(•) is a convex functional, then the necessary condition (5.7) is also sufficient for (q*_N(•), u*_N(•)) to be a Pareto optimal solution of (5.3). An optimal solution (q^opt_N(•), u^opt_N(•)) of the finite-dimensional problem (5.3) belongs to the set of all Pareto optimal solutions of (5.3). Thus (q^opt_N(•), u^opt_N(•)) satisfies the conditions (5.7). In a similar manner, one can derive the Kuhn-Tucker conditions for a finite-dimensional optimization problem over the set of variables (q_i, u_i), i = 0,...,N.
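Pareto optimal points of a convex multiobjective problem such as (5.3) can be computed by scalarization; the Kuhn-Tucker conditions of the scalarized program then correspond to (5.7). A toy sketch (the two quadratic objectives and the box constraints are purely illustrative, not the paper's data) using SciPy's SLSQP, which solves the Kuhn-Tucker system internally:

```python
import numpy as np
from scipy.optimize import minimize

# Two convex objectives on a box; a weighted sum yields one Pareto point.
a = np.array([0.2, 0.8])
b = np.array([0.9, 0.1])
w = 0.5                                   # scalarization weight

def scalarized(z):
    f1 = np.sum((z - a)**2)               # first objective
    f2 = np.sum((z - b)**2)               # second objective
    return w * f1 + (1.0 - w) * f2

res = minimize(scalarized, np.zeros(2), method="SLSQP",
               bounds=[(0.0, 1.0), (0.0, 1.0)])

# For interior solutions the minimizer is the convex combination w*a + (1-w)*b.
z_expected = w * a + (1.0 - w) * b
```

Sweeping the weight w over (0, 1) traces out a family of Pareto optimal points of the two-objective problem.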
The necessary optimality conditions (5.7) reduce the finite-dimensional multiobjective optimization problem to the problem of finding a zero of a system of nonlinear functions. Such a problem can be solved by gradient-based or Newton-like methods [7, 23]. From the viewpoint of numerical mathematics, we solve the optimal control problem in mechanics approximately. We now propose a (conceptual) computational algorithm based on the finite-dimensional approximations (5.3) and on the corresponding Kuhn-Tucker system (5.7).
For solving the Kuhn-Tucker system, one can use, for example, a variant of a Newton-type method. Note that a similar approach can also be applied to the auxiliary problem (4.9). We are able to formulate the following convergence result (see [26] for the corresponding proof).
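As a minimal sketch of such a Newton-type step on a Kuhn-Tucker system (for a hypothetical equality-constrained quadratic program, not the paper's system (5.7)), one can drive the stationarity and feasibility residuals to zero with a root finder:

```python
import numpy as np
from scipy.optimize import root

# KKT system of the illustrative program:
#   minimize x1^2 + x2^2  subject to  x1 + x2 = 1.
# Stationarity: 2*x + lam * grad(g) = 0;  feasibility: g(x) = 0.
def kkt_residual(z):
    x, lam = z[:2], z[2]
    return np.array([2.0 * x[0] + lam,
                     2.0 * x[1] + lam,
                     x[0] + x[1] - 1.0])

sol = root(kkt_residual, np.zeros(3), method="hybr")
# The unique KKT point of this program is x = (1/2, 1/2), lam = -1.
```

For the full system (5.7), the complementarity conditions on the inequality multipliers would be handled in addition, e.g., via an active-set strategy.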
where μ1, μ2, r2 ∈ R, r1 ∈ R^N, s ∈ R^{N−1}, σ1, σ2 ∈ R^{N+1}. For some positive constants and vectors r1, r2, s, σ1, σ2, we apply the Newton-Raphson method (see, e.g., [23]) to the corresponding Kuhn-Tucker system for the problem under consideration. The above implementation of the conceptual Algorithm 5.1 was carried out using standard MATLAB packages. The computed optimal control u_N(•) and the computed optimal trajectory q_N(•) have the properties

max_{0≤i≤N} |u_N(t_i) − u_opt(t_i)| ≤ 10^{−3}, max_{0≤i≤N} |q_N(t_i) − q_opt(t_i)| ≤ 2·10^{−3}. (5.14)

The computed optimal objective value is −1.0442. Note that the optimal objective value in this example is

J(q_opt(•), u_opt(•)) ≈ −1.0463. (5.15)

The implementation of the presented Algorithm 5.1 requires a first approximation u^(0)(•) to an optimal control u_opt(•). The efficiency of this algorithm essentially depends on u^(0)(•).

Concluding remarks
In this paper, we propose a new computational method for some classes of constrained optimal control problems in mechanics. Using the variational approach to nonlinear mechanical systems, one formulates an auxiliary problem of multiobjective optimization. This problem and the corresponding techniques of multiobjective optimization theory are applied to the numerical solution of the optimal control problem under consideration.