Solution to an Optimal Control Problem via Canonical Dual Method


Introduction
Consider the following optimal control problem (primal problem (P) for short):

min J(u) = ∫_0^T [F(x(t)) + P(u(t))] dt, (1)

subject to

x′(t) = A(t)x(t) + B(t)u(t), x(0) = x_0, t ∈ [0, T], (2)

where F(·) is continuous on R^n and P(·) is twice continuously differentiable on R^m. An admissible control, taking values in the unit ball D := {u ∈ R^m | ‖u‖ ≤ 1}, is integrable or piecewise continuous on [0, T]. In (2) we assume that A(t) and B(t) are continuous matrix functions in C([0, T], R^{n×n}) and C([0, T], R^{n×m}), respectively. This problem often arises as a main objective in general optimal control theory [1]. By classical control theory [2], we have the following Hamiltonian:

H(t, x, u, λ) = λ*(A(t)x + B(t)u) + F(x) + P(u). (3)

The state and costate systems are

x′(t) = A(t)x(t) + B(t)u(t), x(0) = x_0, (4)
λ′(t) = −A*(t)λ(t) − ∇F(x(t)), λ(T) = 0. (5)

In general, it is difficult to obtain an analytic form of the optimal feedback control for problem (1)-(2). It is well known that, in the unconstrained case, if P(u) is a positive definite quadratic form and F(x) is a positive semidefinite quadratic form, then a perfect optimal feedback control is obtained from the solution of a Riccati matrix differential equation. The primary goal of this paper is to present an analytic solution to the optimal control problem (P).
We know from the Pontryagin principle [1] that if the control u is an optimal solution to problem (P), with x(·) and λ(·) denoting the state and costate corresponding to u(·), respectively, then u is an extremal control; that is,

H(t, x(t), u(t), λ(t)) = min_{‖v‖≤1} H(t, x(t), v, λ(t)), a.e. t ∈ [0, T]. (7)

By means of the Pontryagin principle and dynamic programming theory, many numerical algorithms have been suggested to approximate the solution of problem (P) (see [3-5]); this is due to the nonlinear integrand in the cost functional. The problem is even harder when P(u) is nonconvex on the unit ball D in R^m. Even when P(u) is nonconvex on D, the optimal control of problem (P) may exist. Let us see the following simple example for n = m = 1, in which it is easy to see that u(t) ≡ −1, t ∈ [0, T], is an optimal control.

In this paper, we consider P(u) to be nonconvex. If the optimal control of problem (P) exists, we solve problem (1) to find the optimal control as an expression of the costate. We see that, with respect to u, the minimization in (7) is equivalent to the following global nonconvex optimization over a sphere:

min { P(u) + λ*(t)B(t)u | ‖u‖ ≤ 1 }. (9)

When P(u) is a nonconvex quadratic function, problem (9) can be solved completely by the canonical dual transformation [6-8]. In [9], global concave optimization over a sphere is solved by use of a differential system with the canonical dual function. Because the Pontryagin principle is only a necessary condition for a control to be optimal, solving the optimization (9) alone is not sufficient for obtaining an optimal control. In this paper, combining the method given in [6, 9] with the Pontryagin principle, we solve problem (1)-(2), which has a nonconvex integrand in the control variable of the cost functional, and present the optimal control expressed by the costate via canonical dual variables.
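The pointwise problem (9) in one dimension can be probed by brute force. The integrand P(u) = −u² and the sample values of λ(t)B(t) below are illustrative assumptions of this sketch, not the source example's data; the point is that for a concave P the minimizer over [−1, 1] always lands on the boundary of the ball, matching the bang-type control u ≡ −1 above.

```python
# Brute-force check of the pointwise minimization (9) for n = m = 1.
# P(u) = -u**2 (concave, nonconvex on [-1, 1]) and the values of
# lambda(t)*B(t) are illustrative assumptions, not data from the paper.
import numpy as np

def pointwise_min(lam_b, P=lambda u: -u**2):
    """Minimize P(u) + lam_b * u over the interval [-1, 1] on a fine grid."""
    us = np.linspace(-1.0, 1.0, 200001)
    vals = P(us) + lam_b * us
    return float(us[np.argmin(vals)])

# For a concave integrand the minimizer sits on the boundary |u| = 1,
# whichever endpoint makes the total cost smaller.
for lam_b in (-0.5, 0.0, 0.7):
    u = pointwise_min(lam_b)
    print(lam_b, u, abs(u) == 1.0)
```

Whatever the sign of the costate term, the grid minimizer is an endpoint of the interval, which is why the Pontryagin minimization alone cannot distinguish boundary candidates without the dual analysis of Section 2.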

Global Optimization over a Sphere
In this section we present a differential flow for the global optimization; it is used in the next section to find the optimal control expressed by the costate. Here we use the method from our earlier paper [9].
In what follows we consider the function P(x) to be twice continuously differentiable and nonconvex on the unit ball in R^m. Define the set

G := {ρ ≥ 0 | ∇²P(x) + ρI > 0 for all x ∈ D}. (10)

Since P(x) is nonconvex, G is an open interval (ρ̄, +∞) for a nonnegative real number ρ̄ depending on P(x). Let ρ* ∈ G and x* ∈ {x | ‖x‖ ≤ 1} satisfy the following KKT equation:

∇P(x*) + ρ*x* = 0. (11)

We focus on the flow x(ρ) defined near ρ* by the following backward differential equation:

dx(ρ)/dρ = −[∇²P(x(ρ)) + ρI]⁻¹ x(ρ), x(ρ*) = x*. (12)

The flow x(ρ) can be extended to wherever ρ ∈ G ∩ (0, ρ*] [10]. The dual function [6] with respect to a given flow x(ρ) is defined as

P^d(ρ) := P(x(ρ)) + (ρ/2)(‖x(ρ)‖² − 1). (13)

Since ∇P(x(ρ)) + ρx(ρ) = 0 along the flow, we have

dP^d(ρ)/dρ = (1/2)(‖x(ρ)‖² − 1). (14)

Moreover, differentiating ‖x(ρ)‖² along the flow (12) gives

d‖x(ρ)‖²/dρ = −2x*(ρ)[∇²P(x(ρ)) + ρI]⁻¹ x(ρ) ≤ 0. (15)

It means that dP^d(ρ)/dρ decreases when ρ increases in G ∩ (0, ρ*]. Therefore, ‖x(ρ)‖ ≤ 1 as long as ρ ≥ ρ̂, where ρ̂ ∈ G ∩ (0, ρ*] denotes a parameter at which the flow reaches the boundary, ‖x(ρ̂)‖ = 1.
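As a sanity check on the backward flow (12), one can integrate it numerically for a quadratic P(x) = (1/2)x*Gx − f*x; for this P the flow should reproduce the closed form x(ρ) = (G + ρI)⁻¹f, since differentiating that formula in ρ gives exactly (12). The matrix G and vector f below are illustrative assumptions, not data from the paper.

```python
# Numerical integration of the backward flow dx/drho = -[Hess P + rho I]^{-1} x
# for a nonconvex quadratic P(x) = 0.5 x^T G x - f^T x (illustrative G, f).
import numpy as np
from scipy.integrate import solve_ivp

G = np.array([[-2.0, 0.0], [0.0, 1.0]])   # indefinite: P is nonconvex
f = np.array([1.5, 0.5])

def flow_rhs(rho, x):
    # Hessian of P is G; the flow (12) reads dx/drho = -(G + rho I)^{-1} x
    return -np.linalg.solve(G + rho * np.eye(2), x)

rho_star = 10.0
x_star = np.linalg.solve(G + rho_star * np.eye(2), f)   # KKT point (11)

sol = solve_ivp(flow_rhs, (rho_star, 3.0), x_star,
                dense_output=True, rtol=1e-10, atol=1e-12)

# Along the flow the KKT equation stays satisfied: x(rho) = (G + rho I)^{-1} f
rho = 5.0
x_flow = sol.sol(rho)
x_exact = np.linalg.solve(G + rho * np.eye(2), f)
print(np.allclose(x_flow, x_exact, atol=1e-6))
```

Integrating from ρ* = 10 down to ρ = 3 stays inside G = (2, +∞) for this particular G, so the matrix inverse in (12) exists along the whole flow.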
Theorem 1. Suppose that the flow x(ρ), defined by (11)-(13), passes through a boundary point of the ball at some ρ̂ ∈ G ∩ (0, ρ*], that is, ‖x(ρ̂)‖ = 1, and write x̂ := x(ρ̂). Then x̂ is a global minimizer of P(x) over the ball D. Further, one has

P(x̂) = min_{‖x‖≤1} P(x) = P^d(ρ̂).

Proof. Since ρ̂ ∈ G, we have ρ̂ > 0 and ∇²P(x) + ρ̂I > 0 on D, so the function x ↦ P(x) + (ρ̂/2)(‖x‖² − 1) is convex on D, and x(ρ̂) is its stationary point by (11). For each x ∈ D we have (ρ̂/2)(‖x‖² − 1) ≤ 0, and hence

P(x) ≥ P(x) + (ρ̂/2)(‖x‖² − 1) ≥ P(x(ρ̂)) + (ρ̂/2)(‖x(ρ̂)‖² − 1), (17)

while the boundary condition ‖x(ρ̂)‖ = 1 gives

P(x(ρ̂)) + (ρ̂/2)(‖x(ρ̂)‖² − 1) = P^d(ρ̂) = P(x(ρ̂)). (18)

By (17), (18), we have P(x) ≥ P(x̂) for every x ∈ D. This concludes the proof of Theorem 1.
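Theorem 1 can be spot-checked numerically: locate ρ̂ with ‖(G + ρ̂I)⁻¹f‖ = 1 by bisection and compare the boundary point against a dense random sample of the ball. The quadratic data G, f are again illustrative assumptions of this sketch, not the paper's.

```python
# Spot check of Theorem 1 on a 2-D nonconvex quadratic (illustrative data).
import numpy as np

G = np.array([[-2.0, 0.0], [0.0, 1.0]])   # a1 = -2 < 0
f = np.array([0.4, 0.1])

def P(x):
    return 0.5 * x @ G @ x - f @ x

def norm_x(rho):
    return np.linalg.norm(np.linalg.solve(G + rho * np.eye(2), f))

# Bisection for rho_hat on (-a1, +inf) = (2, +inf): ||x(rho)|| decreases in rho.
lo, hi = 2.0 + 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_x(mid) > 1.0 else (lo, mid)
rho_hat = 0.5 * (lo + hi)
x_hat = np.linalg.solve(G + rho_hat * np.eye(2), f)   # boundary point of the flow

# x_hat should beat every point of a dense random sample of the unit ball.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]
vals = 0.5 * np.einsum('ij,jk,ik->i', pts, G, pts) - pts @ f
print(abs(np.linalg.norm(x_hat) - 1.0) < 1e-6, P(x_hat) <= vals.min())
```

The sampled minimum never undercuts P(x̂), consistent with the global optimality asserted by Theorem 1.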
To illustrate the canonical dual method, let us present several examples as follows.
Example 2. Let us consider a one-dimensional concave minimization problem over the interval {x | x² ≤ 1}. By choosing ρ* = 10, we solve the KKT equation (11) in {x² < 1} to get a solution x* = −0.1251. Next we solve the boundary value problem (12) of the ordinary differential equation backward from ρ* = 10 and look for a parameter ρ̂ at which the flow reaches the boundary; we get ρ̂ = 10/3, which satisfies ‖x(10/3)‖ = 1. Let x(10/3) be denoted by x̂. To find the value of x̂, we compute the solution of the corresponding algebraic equation.

Example 3. We now consider a nonconvex minimization problem over the same interval. By choosing ρ* = √72, we solve the KKT equation (11) in {x² < 1} to get a solution x* = −2/(4 + 3√2). Next we solve the boundary value problem (12) of the ordinary differential equation backward from ρ* = √72 and look for a parameter ρ̂ at which the flow reaches the boundary; we get ρ̂ = 3, which satisfies ‖x(3)‖ = 1. Let x(3) be denoted by x̂. To find the value of x̂, we compute the solution of the corresponding algebraic equation.

Example 4. Given a symmetric matrix G ∈ R^{m×m} and a nonzero vector f ∈ R^m, let P(x) = (1/2)x*Gx − f*x be a nonconvex quadratic function, and consider the global optimization of P(x) over the unit ball D. By solving the boundary value problem (12) of the ordinary differential equation, we get the unique solution

x(ρ) = (G + ρI)⁻¹ f. (41)

Since G is symmetric, there exists an orthogonal matrix R such that RGR* = Λ := (a_i δ_ij) (a diagonal matrix) and, correspondingly, Rf = g := (g_i) (a vector). By (41), we have

‖x(ρ)‖² = Σ_i g_i²/(a_i + ρ)²,

which is strictly decreasing in ρ on (−a_1, +∞); hence there exists a unique ρ̂ ∈ (−a_1, ρ*) such that ‖x(ρ̂)‖ = 1. By Theorem 1, we see that x(ρ̂) = (G + ρ̂I)⁻¹ f is a global minimizer of the problem.
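The spectral computation in Example 4 can be sketched in code; the data G and f below are illustrative assumptions, not the paper's. In the eigenbasis, ‖x(ρ)‖² = Σ_i g_i²/(a_i + ρ)² is strictly decreasing on (−a_1, +∞), so a scalar bisection locates the unique ρ̂ with ‖x(ρ̂)‖ = 1.

```python
# Spectral form of Example 4 (illustrative G, f): find rho_hat with
# ||x(rho_hat)|| = 1 using ||x(rho)||^2 = sum_i g_i^2 / (a_i + rho)^2.
import numpy as np

G = np.array([[1.0, 2.0], [2.0, -1.0]])   # symmetric, indefinite (a1 = -sqrt(5))
f = np.array([0.5, 0.5])

a, R = np.linalg.eigh(G)    # eigenvalues ascending; columns of R are eigenvectors
g = R.T @ f                 # coordinates of f in the eigenbasis

def norm_sq(rho):
    return float(np.sum(g**2 / (a + rho)**2))

# norm_sq blows up as rho -> -a1 and decays to 0 as rho -> +inf: bisect.
lo, hi = -a[0] + 1e-12, -a[0] + 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_sq(mid) > 1.0 else (lo, mid)
rho_hat = 0.5 * (lo + hi)

x_hat = np.linalg.solve(G + rho_hat * np.eye(2), f)   # the closed form (41)
print(round(float(np.linalg.norm(x_hat)), 6))
```

The bisection works on a scalar equation regardless of m, which is what makes the canonical dual reduction attractive compared to searching the m-dimensional ball directly.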

Find an Analytic Solution to the Optimal Control Problem
In this section, we consider A(t), B(t) in problem (1)-(2) to be constant matrices A, B, and take F(x) = c*x and P(u) = (1/2)u*Gu − f*u, where c ∈ R^n, f ∈ R^m, and G ∈ R^{m×m} is a symmetric matrix. Suppose that G has p ≤ m distinct eigenvalues a_1 < a_2 < ⋯ < a_p and a_1 < 0. Moreover, we need a basic assumption on the problem data, and we consider the optimal control problem (46)-(47) with these F and P. To solve this problem, we define the function φ(t, x) = ψ*(t)x, where ψ(t) is the solution to the following Cauchy boundary value problem of the ordinary differential equation:

ψ′(t) = −A*ψ(t) − c, t ∈ [0, T], (48)
ψ(T) = 0. (49)

By comparing (48)-(49) with (6) in terms of this special problem (46)-(47), and noting that ψ(T) = 0 and x(0) = x_0, we obtain (51); thus min J(u) = −φ(0, x(0)). Consequently, we deduce that, for almost every t in [0, T], the optimal control minimizes the Hamiltonian pointwise. By the relation (50) between ψ(t) and the costate, for given t ∈ [0, T] we need to solve the following nonconvex optimization:

min { (1/2)u*Gu − f*u + ψ*(t)Bu | ‖u‖ ≤ 1 },

with the dual variable ρ_t > −a_1 satisfying

‖[G + ρ_t I]⁻¹ (f − B*ψ(t))‖ = 1.

We define the function ρ(λ) with respect to λ by this equation and obtain an analytic solution to the optimal control problem via a costate expression:

u(t) = [G + ρ(ψ(t))I]⁻¹ (f − B*ψ(t)). (58)

On the other hand, by the solution of the Cauchy boundary value problem (48)-(49) of the ordinary differential equation, we have the closed form ψ(t) = ∫_t^T e^{A*(s−t)} c ds.

Example 5. Consider a scalar (n = m = 1) instance of (46)-(47). To find an analytic solution of the optimal control problem, we solve the equation defining ρ(λ) for these data. By (58), we obtain an analytic solution of the optimal control problem, which can be expressed as

u(t) = (1 − e^{1−t})/(e^{1−t} − 1) = −1, (t ≠ 1). (64)
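A minimal numerical sketch of the construction above, under assumed scalar data (a, b, c, g, f below are illustrative, not the paper's Example 5): ψ solves the backward Cauchy problem (48)-(49), and the control at each t minimizes P(u) + ψ(t)bu over [−1, 1], which for a concave quadratic P is attained at an endpoint.

```python
# Scalar sketch of the costate construction and pointwise minimization.
# a, b, c, g, f are illustrative assumptions (dynamics x' = a x + b u,
# running cost c x + 0.5 g u^2 - f u with g < 0, horizon T = 1).
import numpy as np

a, b, c = -1.0, 1.0, 1.0
g, f = -1.0, 0.0
T = 1.0

def psi(t):
    # Solution of psi' = -a psi - c with psi(T) = 0 (backward Cauchy problem)
    return (c / a) * (np.exp(a * (T - t)) - 1.0)

def u_opt(t):
    # Minimize P(u) + psi(t) b u over |u| <= 1; for a concave quadratic P
    # the minimum is attained at an endpoint of [-1, 1].
    candidates = np.array([-1.0, 1.0])
    vals = 0.5 * g * candidates**2 - f * candidates + psi(t) * b * candidates
    return float(candidates[np.argmin(vals)])

ts = np.linspace(0.0, T, 5)
print([u_opt(t) for t in ts])   # constant -1, mirroring Example 5
```

With these assumed data ψ(t) ≥ 0 on [0, T], so the linear costate term always favors the left endpoint and the control comes out constant at −1, the same bang-type answer as in Example 5.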

Concluding Remarks
In this paper, a new approach to optimal control problems has been investigated using the canonical dual method. Some nonlinear and nonconvex problems can be solved by global optimization, and the differential flow defined by the KKT equation (see (11)) can produce an analytic solution of the optimal control problem. Meanwhile, by means of the canonical dual function, an optimality condition is proved (see Theorem 1). The global optimization problem is solved by a backward differential equation with an equality condition (see (12), (18)). More research needs to be done on the development of an applicable canonical dual theory.