A GLOBAL METHOD FOR SOME CLASS OF OPTIMIZATION AND CONTROL PROBLEMS

The problem of maximizing a nonsmooth convex function over an arbitrary set is considered. Based on the optimality condition obtained by Strekalovsky in 1987, an algorithm for solving the problem is proposed. We show that the algorithm can also be applied to nonconvex optimal control problems. The method is illustrated by computational experiments performed on a few nonconvex optimal control problems.


Introduction.
In the paper [2], we developed an algorithm for maximizing a differentiable convex function on a so-called simple set. Continuing this work, we give such a method for maximizing a nondifferentiable convex function, and also for solving a nonconvex optimal control problem. This paper is organized as follows. In Section 2, we consider the global optimality condition [13] for the problem of maximizing a convex function. In Sections 3 and 4, we construct a method based on the global optimality condition for solving the problem and show convergence of the algorithm. In Section 5, we apply the proposed algorithm to the nonconvex optimal control problem with a terminal functional. In Section 6, we present computational results obtained with our algorithm on a few optimal control test problems.

Global optimality condition.
We consider the problem

f(x) → max, x ∈ D, (2.1)

where f : R^n → R is a convex function and D is an arbitrary subset of R^n. This problem belongs to the class of global optimization problems: any local maximizer found by well-known local search methods may differ from a global maximizer (a solution) of the problem. There are many numerical methods [2,4,5,6,9,10,12] devoted to the solution of problem (2.1). The global optimality conditions for problem (2.1) were first given by Strekalovsky [13]. For future purposes, let us state this result in the finite-dimensional space R^n.
Theorem 2.1 [13]. If a point z ∈ D is a global solution of problem (2.1), then the following condition holds:

∀y : f(y) = f(z), ∀y* ∈ ∂f(y) : ⟨y*, x − y⟩ ≤ 0 ∀x ∈ D. (2.2)

If, in addition to (2.2), the condition

f(z) > inf{f(x) : x ∈ R^n} (2.3)

is satisfied, then condition (2.2) becomes sufficient. (Here and in the following, ⟨·, ·⟩ denotes the scalar product of two vectors.)

Proof. Necessity. Assume that z is a solution of problem (2.1). Let the points y, y*, and x be such that

f(y) = f(z), y* ∈ ∂f(y), x ∈ D. (2.4)

Then, by the convexity of f, we have

⟨y*, x − y⟩ ≤ f(x) − f(y) = f(x) − f(z) ≤ 0. (2.5)

Sufficiency. In order to derive a contradiction, suppose that z is not a solution of problem (2.1), i.e.,

∃u ∈ D : f(u) > f(z). (2.6)

Now we introduce the closed and convex set

L(f, z) = {x ∈ R^n : f(x) ≤ f(z)}. (2.7)

Note that int L(f, z) ≠ ∅ (by (2.3)) and u ∉ L(f, z). Then there exists the projection y of the point u onto L(f, z), i.e.,

‖y − u‖ = min{‖x − u‖ : x ∈ L(f, z)}. (2.8)

It is obvious that

f(y) = f(z), y ≠ u. (2.9)

Taking into account (2.8), we conclude that the point y is characterized as a solution of the following quadratic programming problem:

g(x) = (1/2)‖x − u‖² → min, f(x) ≤ f(z). (2.10)

Then the optimality condition for this problem at the point y is as follows:

∃λ₀ ≥ 0, λ ≥ 0, (λ₀, λ) ≠ (0, 0), y* ∈ ∂f(y) : λ₀ g′(y) + λy* = λ₀(y − u) + λy* = 0. (2.11)

Now we show that λ₀ > 0 and λ > 0. If λ₀ = 0, then (2.11) implies that λ > 0 and 0 ∈ ∂f(y) with f(y) = f(z), so that f(z) = inf{f(x) : x ∈ R^n}, which contradicts (2.3). The case λ = 0 is also impossible, because then g′(y) = y − u = 0, which contradicts (2.9). Thus we can put λ₀ = 1 and write (2.11) as follows:

y* = (1/λ)(u − y). (2.12)

Now, using this condition, we easily get

⟨y*, u − y⟩ = (1/λ)‖u − y‖² > 0, u ∈ D, f(y) = f(z),

which contradicts (2.2). This contradiction implies that the assumption that z is not a solution of problem (2.1) must be false. This completes the proof.
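Condition (2.2) can be checked numerically on a small example. The sketch below, with our own illustrative choices f(x) = x², D = [−1, 2] (discretized), verifies that (2.2) holds at the global maximizer z = 2 and fails at the merely local maximizer z = −1; since f is differentiable here, the subdifferential reduces to the single gradient.

```python
import numpy as np

def f(x):  return x * x
def df(x): return 2 * x          # f is differentiable, so the subdifferential is {f'(y)}

D = np.linspace(-1.0, 2.0, 301)  # discretization of the feasible set D = [-1, 2]

def holds_2_2(z):
    """Check condition (2.2) at z: for every y on the level set f(y) = f(z)
    (here {z, -z}) and every x in D, <f'(y), x - y> <= 0."""
    return all(df(y) * (x - y) <= 1e-12 for y in (z, -z) for x in D)

# z = 2 is the global maximizer of x^2 on [-1, 2]: (2.2) holds.
# z = -1 is only a local maximizer: (2.2) fails (take y = 1, x = 2).
```

Note how the condition involves the whole level set {y : f(y) = f(z)}, not just the point z itself; this is what distinguishes it from a local optimality test.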

Construction of a method.
In order to use the global optimality condition (2.2) efficiently for solving problem (2.1) numerically, we need to make some further assumptions about the problem. These assumptions are: (a) The objective function f : R^n → R is strongly convex.
(b) The feasible set D is simple. Let us now recall the definition of a so-called simple set.
Definition 3.1 [2]. A set D is called simple if the problem of maximizing a linear function on D is solvable by a "simple method." We say that a method is simple if it is the simplex method or a method that gives an analytical solution to the problem of maximizing a linear function.
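A box is a typical simple set: the maximizer of a linear function over it is available in closed form. The helper below, an illustration of ours rather than part of the paper, implements this analytical solution.

```python
import numpy as np

def argmax_linear_on_box(c, lo, hi):
    """Analytic maximizer of the linear function <c, x> over the box
    {x : lo <= x <= hi}: take the upper bound where c_i > 0 and the
    lower bound otherwise."""
    c, lo, hi = (np.asarray(v, dtype=float) for v in (c, lo, hi))
    return np.where(c > 0, hi, lo)

x_star = argmax_linear_on_box([3.0, -1.0, 0.5], lo=[-1, -1, -1], hi=[2, 2, 2])
# x_star = [2., -1., 2.]
```

Any polytope handled by the simplex method would qualify equally well; the box merely makes the "analytical solution" case concrete.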
Throughout the rest of this paper we consider problem (2.1) under assumptions (a) and (b). Let f* denote the global maximum of problem (2.1), i.e., f* = max_{x∈D} f(x). We now define the auxiliary function π(y) by

π(y) = max_{y*∈∂f(y)} max_{x∈D} ⟨y*, x − y⟩, ∀y ∈ R^n. (3.1)

It is well known that ∂f(y) is convex and compact [16]. Let us introduce the function Θ(x) defined by

Θ(x) = max{π(y) : f(y) = f(x)}. (3.2)

Using Θ(x), we can reformulate Theorem 2.1 in the following way.

Theorem 3.2. A point x⁰ ∈ D with f(x⁰) > inf{f(x) : x ∈ R^n} is a global solution of problem (2.1) if and only if Θ(x⁰) ≤ 0.

Proof. The proof is an obvious consequence of the inequality

Θ(x⁰) ≥ π(y) ≥ ⟨y*, x − y⟩, (3.3)

which holds for all x, y, y* such that x ∈ D, f(y) = f(x⁰), y* ∈ ∂f(y). (3.4) We also note that 0 ∉ ∂f(y); therefore, by Theorem 2.1, we see that x⁰ is a solution of problem (2.1), and the proof is complete.

Theorem 3.2 is used to verify the optimality condition (2.2).
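For differentiable f, the inner maximum in (3.1) over the subdifferential collapses to the single gradient f′(y), so π(y) = max_{x∈D} ⟨f′(y), x − y⟩ and the inner linear problem is solved analytically on a simple set. A minimal sketch, with our own illustrative choices of f(x) = ‖x‖² and a box D:

```python
import numpy as np

def pi(y, grad_f, lo, hi):
    """pi(y) = max_{x in D} <f'(y), x - y> for a box D (differentiable case)."""
    g = grad_f(y)
    x_star = np.where(g > 0, hi, lo)     # simple-set linear maximization
    return float(g @ (x_star - y))

grad = lambda y: 2.0 * y                 # gradient of f(x) = ||x||^2
lo, hi = np.array([-1.0, -1.0]), np.array([2.0, 2.0])
val = pi(np.array([1.0, 1.0]), grad, lo, hi)   # <(2,2), (2,2) - (1,1)> = 4
```

A positive value of π(y) certifies, via Theorem 2.1, that the point whose level set contains y is not yet globally optimal.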

Convergence of the algorithm
We first state the algorithm.

Algorithm 4.1
Step 1. Choose x⁰ ∈ D such that x⁰ ≠ arg min_{x∈R^n} f(x). Set k = 0.
Step 2. To determine the value of Θ(x^k), solve the constrained global maximization problem

π(y) → max, f(y) = f(x^k). (4.1)

Let y^k be a solution of this problem, i.e., π(y^k) = Θ(x^k), and let v^k ∈ ∂f(y^k) be a subgradient at which the maximum in (3.1) is attained. The point x^{k+1} can then be found as a solution of the problem

⟨v^k, x⟩ → max, x ∈ D. (4.2)

Step 3. If Θ(x^k) ≤ 0, then set x* = x^k and stop: x* is a solution. Otherwise, set k := k + 1 and go to Step 2.
Now we show that this algorithm converges to a global maximum of problem (2.1).

Theorem 4.2. The sequence {x^k} generated by Algorithm 4.1 is a maximizing sequence for problem (2.1), i.e., lim_{k→∞} f(x^k) = f*.

Proof. Note that from the construction of the algorithm we may assume that Θ(x^k) > 0 for all k = 0, 1, 2, ...; in fact, otherwise there exists k such that Θ(x^k) ≤ 0, and then, by Theorem 3.2, we conclude that x^k is a solution of problem (2.1) and the proof is complete. Suppose, on the contrary, that {x^k} is not a maximizing sequence of problem (2.1), i.e.,

lim_{k→∞} f(x^k) < f(x*) = f*, (4.7)

where x* is a global maximizer of problem (2.1).

First, we show that the sequence {f(x^k)} is strictly monotonically increasing. By the definition of Θ(x^k), we have

Θ(x^k) = ⟨v^k, x^{k+1} − y^k⟩ > 0. (4.8)

By the convexity of f, this implies that

f(x^{k+1}) − f(x^k) = f(x^{k+1}) − f(y^k) ≥ ⟨v^k, x^{k+1} − y^k⟩ = Θ(x^k) > 0. (4.9)

Hence, we obtain f(x^{k+1}) > f(x^k) for all k = 0, 1, 2, .... Because the sequence {f(x^k)} is bounded by the value f*, there exists a limit A, i.e., lim_{k→∞} f(x^k) = A. Then, recalling (4.8) and (4.9), we obtain lim_{k→∞} Θ(x^k) = 0.

Now we introduce the closed and convex sets

L_k(f, x^k) = {x ∈ R^n : f(x) ≤ f(x^k)}, k = 0, 1, 2, ..., (4.10)

and denote by u^k the projection of the point x* onto L_k(f, x^k). Moreover, u^k can be considered as a solution of the convex programming problem

g(x) = (1/2)‖x − x*‖² → min, f(x) ≤ f(x^k). (4.11)

Then the optimality condition for this problem at the point u^k is

∃y^k ∈ ∂f(u^k) : λ₀ g′(u^k) + λ_k y^k = 0 (4.12)

for all k = 0, 1, 2, .... Now we show that λ₀ ≠ 0 and λ_k ≠ 0. In fact, if λ₀ = 0, then by (4.12) it follows that λ_k > 0 and y^k = 0, f(u^k) = f(x^k). This contradicts the fact that x^k ≠ arg min_{x∈R^n} f(x) for each k = 0, 1, 2, .... Analogously, we show that λ_k > 0: since g′(u^k) = u^k − x* ≠ 0, the case λ_k = 0 is also impossible, and we can put λ₀ = 1 and λ_k > 0. Then we can write (4.12) as follows:

y^k = (1/λ_k)(x* − u^k). (4.13)

Thus, we have

⟨y^k, x* − u^k⟩ = (1/λ_k)‖x* − u^k‖² = ‖y^k‖ ‖x* − u^k‖. (4.14)

On the other hand, from the definition of Θ(x^k), it follows that

Θ(x^k) ≥ ⟨y^k, x* − u^k⟩, (4.15)

since x* ∈ D, f(u^k) = f(x^k), and y^k ∈ ∂f(u^k). Using (4.13), (4.14), and (4.15), we have

0 ≤ ‖y^k‖ ‖u^k − x*‖ ≤ Θ(x^k). (4.16)

From the construction of {x^k}, the sequence {u^k} is such that f(x^k) = f(u^k) and f(u^k) > f(u^{k−1}) for all k = 1, 2, .... Then, by Theorem 3.3, we obtain

∃δ > 0 : ‖y^k‖ ≥ δ for all y^k ∈ ∂f(u^k), k = 0, 1, 2, ....

From (4.16), it follows that 0 ≤ δ‖u^k − x*‖ ≤ Θ(x^k). Taking into account that lim_{k→∞} Θ(x^k) = 0, we have lim_{k→∞} u^k = x*. By the continuity of f on R^n, we conclude that

A = lim_{k→∞} f(x^k) = lim_{k→∞} f(u^k) = f(x*) = f*. (4.17)

This contradicts (4.7). This contradiction implies that the assumption that {x^k} is not a maximizing sequence of problem (2.1) is false. Since D is a compact set, there exists a convergent subsequence, which we relabel {x^k}, such that lim_{k→∞} x^k = x̄. By (4.17), we obtain f(x̄) = f*, which completes the proof of the theorem.
Note. If f : R^n → R is a differentiable convex function, then the function π(y) takes the form

π(y) = max_{x∈D} ⟨f′(y), x − y⟩.

In addition, if f is twice differentiable, then there exists the directional derivative of π(y) at y in the direction h ∈ R^n, which is given by the following formula [2]:

∂π(y)/∂h = ⟨f″(y)h, z − y⟩ − ⟨f′(y), h⟩,

where z is such that ⟨f′(y), z⟩ = max_{x∈D} ⟨f′(y), x⟩. In this case, Algorithm 4.1 is transformed into Algorithm 4.3 (see [2]), which has been implemented numerically.

Algorithm 4.3
Step 1. Choose x 0 ∈ D such that x 0 ≠ arg min x∈R n f (x). Set k = 0.
Step 2. Solve the problem

π(y) → max, f(y) = f(x^k).

Let y^k be a solution (so that π(y^k) = Θ(x^k)), and let x^{k+1} be a solution of the problem ⟨f′(y^k), x⟩ → max, x ∈ D.
Step 3. If Θ(x^k) ≤ 0, then set x* = x^k and stop. Otherwise, set k := k + 1 and go to Step 2.
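Algorithm 4.3 can be sketched end to end on a small instance. Below, our own illustrative choices are f(x) = ‖x‖² over the box D = [−1, 2]² (global maximum f* = 8 at (2, 2)); the level-set problem of Step 2 is approximated by a brute-force grid search over the circle ‖y‖ = ‖x^k‖, which is a simplification of ours and not part of the original algorithm.

```python
import numpy as np

# A runnable sketch of Algorithm 4.3 for f(x) = ||x||^2 over the box
# D = [-1, 2]^2.  Step 2's level-set problem (pi(y) -> max, f(y) = f(x_k))
# is solved here by grid search over the circle ||y|| = ||x_k||.

lo, hi = np.array([-1.0, -1.0]), np.array([2.0, 2.0])
grad = lambda y: 2.0 * y                      # gradient of f(x) = ||x||^2

def pi_and_linmax(y):
    """pi(y) = <f'(y), x_lin - y>, with x_lin solving the linear problem on D."""
    g = grad(y)
    x_lin = np.where(g > 0, hi, lo)           # simple-set linear maximization
    return g @ (x_lin - y), x_lin

x = np.array([1.0, -1.0])                     # Step 1: x_0 in D, x_0 != argmin f
for k in range(50):
    r = np.linalg.norm(x)
    angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
    vals = [pi_and_linmax(r * np.array([np.cos(t), np.sin(t)])) for t in angles]
    Theta_k, x_next = max(vals, key=lambda p: p[0])   # Step 2
    if Theta_k <= 1e-9:                       # Step 3: optimality test
        break
    x = x_next

# x converges to the global maximizer (2, 2), f(x) = 8
```

Starting from x⁰ = (1, −1) with f(x⁰) = 2, a single level-set pass already finds the direction toward (2, 2), and the next test certifies global optimality; in general the grid search would have to be replaced by a genuine global method on the level set.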

Nonconvex optimal control problem.
Consider the following optimal control problem:

J(u) = ϕ(x(t₁)) → max, (5.1)

ẋ(t) = A(t)x(t) + B(t)u(t) + C(t), x(t₀) = x⁰, t ∈ T = [t₀, t₁], (5.2)

u ∈ V = {u(·) piecewise continuous : u(t) ∈ U, t ∈ T}, (5.3)

where x(t) = (x₁(t), x₂(t), ..., x_n(t))^T. Here x⁰ ∈ R^n is an initial state; t₀, t₁, and x⁰ are given. The matrix functions A(t), B(t), and C(t) are piecewise continuous on [t₀, t₁] with dimensions (n × n), (n × r), and (n × 1), respectively; U ⊂ R^r is a simple set, and ϕ : R^n → R is a strongly convex and differentiable function. Problem (5.1), (5.2), and (5.3) belongs to the class of nonconvex optimal control problems, and application of Pontryagin's maximum principle [7,11] can give only stationary processes (x(·), u(·)). There are a number of numerical methods, based on the sufficient optimality conditions of dynamic programming [1,16] and on Krotov's condition [8], devoted to nonconvex optimal control problems. Based on optimality condition (2.2), a global optimal control search "R"-algorithm using a so-called "resolving set" was proposed in [15].
We now show how to apply Algorithm 4.1 or Algorithm 4.3 to the solution of problem (5.1), (5.2), and (5.3). It is well known that every admissible control u ∈ V corresponds to a unique solution x(t) = x(t, u) of the Cauchy problem (5.2) given by the formula [16,17]

x(t) = F(t)[x⁰ + ∫_{t₀}^{t} F⁻¹(s)(B(s)u(s) + C(s)) ds],

where F(t) is a fundamental matrix of dimension (n × n) that satisfies the matrix equation Ḟ(t) = A(t)F(t) on T with F(t₀) = E. Note that a state x(t, u) is an absolutely continuous vector function of time t which satisfies the system (5.2) almost everywhere on T [16]. Let us denote by D̃ the reachable set of the control system (5.2) and (5.3) at time t₁. Under the above assumptions, the set D̃ ⊂ R^n is convex and compact [11,17]. Then we can present problem (5.1), (5.2), and (5.3) in the following form:

ϕ(x) → max, x ∈ D̃. (5.7)

Let a point x* be a solution of problem (5.7); then the admissible control u* = u*(t), t ∈ T, corresponding to x* is a global optimal control for problem (5.1), (5.2), and (5.3). Therefore, Algorithms 4.1 and 4.3 for solving problem (2.1) can also be used for solving problem (5.1), (5.2), and (5.3). As before, we write down the auxiliary function π(y) for problem (5.7):

π(y) = max_{x∈D̃} ⟨ϕ′(y), x − y⟩. (5.8)

We first consider the following linearized optimal control problem:

⟨ϕ′(y), x⟩ → max, x ∈ D̃. (5.9)

In order to solve problem (5.9), we introduce a conjugate state ψ(t) as a solution of the differential system

ψ̇(t) = −A^T(t)ψ(t), t ∈ T, ψ(t₁) = −ϕ′(y). (5.11)

This system has a unique piecewise differentiable solution ψ(t) = ψ(t, y) defined on T. Then a solution of problem (5.9) can be given by the following theorem.
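The Cauchy representation x(t) = F(t)[x⁰ + ∫ F⁻¹(s)(B(s)u(s) + C(s)) ds] can be checked numerically. The sketch below, with arbitrary parameter values of our choosing, does so for the scalar system ẋ = ax + bu + c under a constant control, where F(t) = e^{a(t−t₀)}, and compares against the closed-form solution.

```python
import numpy as np

# Numerical check of the Cauchy formula
#   x(t) = F(t) [ x0 + \int_{t0}^{t} F(s)^{-1} (B(s) u(s) + C(s)) ds ]
# for xdot = a x + b u + c with constant coefficients and constant control,
# where the fundamental matrix is F(t) = exp(a (t - t0)).

a, b, c, x0, u = -0.5, 1.0, 0.2, 1.0, 0.7
t0, t1, n = 0.0, 2.0, 20000

s = np.linspace(t0, t1, n + 1)
F = np.exp(a * (s - t0))                       # fundamental matrix (scalar case)
vals = (b * u + c) / F                         # integrand F(s)^{-1} (B u + C)
integral = np.sum((vals[1:] + vals[:-1]) * np.diff(s)) / 2.0   # trapezoid rule
x_cauchy = F[-1] * (x0 + integral)

# closed-form solution of xdot = a x + k, k = b u + c, for comparison
k_const = b * u + c
x_exact = np.exp(a * (t1 - t0)) * x0 + k_const / a * (np.exp(a * (t1 - t0)) - 1.0)
```

The two values agree to quadrature accuracy; in the vector case F(t) would be obtained by integrating the matrix equation Ḟ = A(t)F itself.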
Theorem 5.1 [17]. Let ψ(t) = ψ(t, y), t ∈ T, be the solution of the conjugate system (5.11) for y ∈ R^n. Then, for an admissible control z(t) = z(t, y) to be optimal in problem (5.9), it is necessary and sufficient that the condition

⟨ψ(t, y), B(t)z(t, y)⟩ = min_{u∈U} ⟨ψ(t, y), B(t)u⟩

be fulfilled for almost every t ∈ T.
Using Theorem 5.1, we can easily show that the reachable set D̃ = D̃(t₁) is a simple set. Therefore, the value π(ỹ) is calculated by the following scheme:
(1) Solve the conjugate system (5.11) for a given ỹ ∈ R^n. Let ψ(t) = ψ(t, ỹ) be its solution.
(2) Find the optimal control z = z(t, ỹ) as a solution of the problem

⟨ψ(t), B(t)u⟩ → min, u ∈ U, (5.12)

at each moment t ∈ T.
(3) Compute the state x(t₁, z) from (5.2) under the control z and set π(ỹ) = ⟨ϕ′(ỹ), x(t₁, z) − ỹ⟩.
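The scheme above can be carried out numerically. In the sketch below, the system, the point ỹ, the Euler discretization, and the terminal condition ψ(t₁) = −ϕ′(ỹ) (a sign convention chosen so that the minimum in (5.12) maximizes ⟨ϕ′(ỹ), x(t₁)⟩) are all modelling choices of ours for a double-integrator example.

```python
import numpy as np

# Scheme (1)-(3) for the double integrator x1' = x2, x2' = u, u in U = [-1, 1],
# T = [0, 1], x(0) = 0, with phi(x) = ||x||^2 and y~ = (0.5, 0.5).

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])                 # single control channel
n, dt = 2000, 1.0 / 2000
y = np.array([0.5, 0.5])
grad_phi = 2.0 * y                       # phi'(y~)

# (1) integrate the conjugate system psi' = -A^T psi backward from t1
psi = np.zeros((n + 1, 2))
psi[-1] = -grad_phi
for i in range(n, 0, -1):
    psi[i - 1] = psi[i] + dt * (A.T @ psi[i])

# (2) pointwise minimization of <psi(t), B u> over U gives a bang-bang control
z = -np.sign(psi @ B)                    # here <psi, B u> = psi_2 * u

# (3) integrate the state under z and evaluate pi(y~) = <phi'(y~), x(t1) - y~>
x = np.zeros(2)
for i in range(n):
    x = x + dt * (A @ x + B * z[i])
pi_y = float(grad_phi @ (x - y))         # analytic value here: 0.5
```

For this instance ψ₂(t) stays negative on T, so z ≡ +1 and x(t₁) = (1/2, 1), giving π(ỹ) = 1/2 > 0: the point ỹ does not yet certify optimality.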

Algorithm 5.2
Step 1. Set k = 0 and let an arbitrary admissible control u^k ∈ V be given. Find x^k = x(t₁, u^k) by solving the system (5.2) for u = u^k.
Step 2. Solve the problem

π(y) → max, ϕ(y) = ϕ(x^k). (5.13)

Let y^k be a solution of this problem, and let z^k = z^k(t, y^k) be a solution of the corresponding linearized problem

⟨ϕ′(y^k), x⟩ → max, x ∈ D̃, (5.14)

that is,

⟨ψ^k(t), B(t)z^k(t)⟩ = min_{u∈U} ⟨ψ^k(t), B(t)u⟩, t ∈ T, (5.15)

where ψ^k(t) = ψ(t, y^k) is the solution of the conjugate system (5.11) for y = y^k.
Step 3. If π(y^k) ≤ 0, then go to Step 5. Otherwise, go to Step 4.
Step 4. Set u^{k+1} = z^k, x^{k+1} = x(t₁, u^{k+1}), k := k + 1, and go to Step 2.
Step 5. Stop: u^k is a global optimal control, and x^k = x(t₁, u^k) solves problem (5.7).
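Algorithm 5.2 can be traced on a one-dimensional instance small enough that every step is explicit. In the sketch below, the scalar system, ϕ(x) = x², and the reduction of Step 2 to comparing the two points ±x^k of the level set are all simplifications of ours for the demo, not part of the general algorithm.

```python
import numpy as np

# Algorithm 5.2 sketch for the scalar system x' = u, u in U = [-1, 1],
# t in [0, 1], x(0) = 0, phi(x) = x^2 (n = r = 1).  The reachable set is
# [-1, 1], and the level set {y : phi(y) = phi(x_k)} is just {x_k, -x_k}.

def terminal_state(u_const):
    return u_const * 1.0                 # x(1) = \int_0^1 u dt for constant u

def linmax(dphi):
    """Maximize dphi * x over the reachable set [-1, 1]: the bang-bang
    control u = sign(dphi), equivalently the minimum condition (5.15)."""
    z = np.sign(dphi) if dphi != 0 else 1.0
    return z, terminal_state(z)

u, x = 0.3, terminal_state(0.3)          # Step 1: an arbitrary admissible control
for k in range(20):
    candidates = []
    for y in (x, -x):                    # Step 2: level set of phi at x_k
        dphi = 2.0 * y
        z, x_lin = linmax(dphi)
        candidates.append((dphi * (x_lin - y), z, x_lin))
    theta, z_best, x_best = max(candidates)
    if theta <= 1e-12:                   # Step 3: pi(y_k) <= 0 -> stop (Step 5)
        break
    u, x = z_best, x_best                # Step 4: update the control

# u = 1 (bang-bang), x(1) = 1, phi = 1 is the global maximum
```

One iteration moves the constant control u ≡ 0.3 to the bang-bang control u ≡ 1, and the next optimality test π(y^k) ≤ 0 stops the algorithm.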
Convergence of the algorithm is established by a statement analogous to Theorem 4.2, whose proof follows the same argument.

Numerical experiments.
To check the efficiency of the algorithm proposed above, a few optimal control test problems have been considered. The proposed algorithm was implemented on an IBM PC/486 microcomputer in Pascal 7. The results of the numerical experiments for these problems are shown in Table 6.1.

Conclusions.
In this paper, we considered some classes of global optimization problems, including a nonconvex optimal control problem.
(1) Based on the global optimality condition, an algorithm for solving the problem of maximizing a convex function on a so-called simple set has been proposed.
(2) The proposed algorithm was generalized to the nonconvex optimal control problem with a terminal functional.
(3) The proposed algorithm was shown to be convergent and was tested numerically on a few nonconvex optimal control problems.
