Optimal Control Based on the Polynomial Least Squares Method

Introduction
Optimal control problems occur in many areas of science and engineering, such as system mechanics, hydrodynamics, elasticity theory, geometrical optics, and aerospace engineering, and they represent one of the many applications and extensions of the calculus of variations.
The beginning of optimal control is represented by the brachistochrone problem formulated by Galileo in 1638: a material point of mass m moves without friction along a vertical curve joining the points (x_0, y_0) and (x_1, y_1). The question is to find the curve for which the travel time is minimal; such a curve is called a brachistochrone. Galileo's attempts to solve it were incorrect [1, 2]. The problem raised great interest at the time and solutions were proposed by many mathematicians such as Bernoulli, Leibniz, l'Hopital, and Newton [1]. These results were published by Euler in 1744, who concluded that "nothing at all takes place in the universe in which some rule of maximum or minimum does not appear." Euler also formulated the problem in general terms as that of finding the curve y(t) over the interval [a, b] (with y(a) and y(b) known) which minimizes

J(y) = ∫_a^b F(t, y(t), y'(t)) dt,

for some given function F(t, y(t), y'(t)), where y' = dy/dt. Euler presented a necessary condition of optimality for the curve y(t):

(d/dt) F_{y'}(t, y(t), y'(t)) = F_y(t, y(t), y'(t)),

where F_y and F_{y'} represent the partial derivatives of F with respect to y and y', respectively. The solution techniques proposed initially had been of a geometric nature until 1755, when Lagrange described an analytical approach based on perturbations or "variations" of the optimal curve and using his "undetermined multipliers," which led directly to Euler's necessary condition, now known as the "Euler-Lagrange equation." Euler also adopted this approach and renamed the subject "the calculus of variations" [3].
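Euler's necessary condition can be illustrated on the brachistochrone itself. The derivation below is a standard textbook sketch added here for illustration, assuming y is measured downward from the starting point and the mass starts from rest under gravity g.

```latex
% Travel time along the curve y(x), starting from rest:
T[y] = \int_{x_0}^{x_1} \sqrt{\frac{1 + y'^2}{2 g\, y}}\, dx .
% The integrand F(y, y') does not depend explicitly on x, so the
% Euler-Lagrange equation admits the Beltrami first integral
F - y' \,\frac{\partial F}{\partial y'} = C ,
% which here evaluates to 1/\sqrt{2 g\, y (1 + y'^2)} = C, i.e.
y \left( 1 + y'^2 \right) = \frac{1}{2 g C^2} = \text{const},
% whose solutions are the cycloids
x(\theta) = r(\theta - \sin\theta), \qquad y(\theta) = r(1 - \cos\theta).
```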
In the years that followed, considerable efforts were made to develop optimal control techniques. A classification of methods for solving optimal control problems is presented by Berkani et al. in [3]. Among the most used ones we mention the following.
(i) The Dynamic Programming method, based on the principle of optimality, was first formulated by Bellman [4] and is often used in the analysis and design of automatic control systems. Bellman's partial differential equation, together with the boundary conditions, provides necessary conditions for obtaining the minimum of the optimal control problem.
(ii) The Pontryagin Minimum Principle [5] is built on defining the Hamiltonian function by introducing adjoint variables. The optimal control law is obtained by solving the canonical differential equations (the Hamilton equations), which are the necessary conditions of optimality according to the minimum principle [6]. The optimality conditions are in general not able to provide the exact optimum, since the resulting two-point boundary value problem is not easy to solve analytically, so computational methods are usually employed [7-9].
In this paper we apply the Polynomial Least Squares Method (PLSM) in order to compute approximate analytical polynomial solutions for optimal control problems. The method was used by C. Bota and B. Căruntu in 2014 to compute approximate analytical solutions for the Brusselator system, a fractional-order system of nonlinear differential equations [10]. In the following years the accuracy of the method was demonstrated by its use in solving several types of differential equations [11-13].
The optimal control problem approached in this paper is the computation of the optimal control law u(t) : [0, t_f] ⊂ R → R which minimizes the performance index

J(u) = ∫_0^{t_f} F(t, y(t), u(t)) dt, (3)

where the state equation is

y'(t) = f(t, y(t), u(t)), (4)

and the state variable y(t) satisfies the constraints y(0) = y_0 and y(t_f) = y_f. We will assume that F is of class C^1, so that the solution of the optimal control problem exists and is unique for the given conditions. The state equation (4) may be linear or nonlinear, but we also assume that u(t) can be explicitly obtained from (4) as a function of y(t) and y'(t). In this case, solving the optimal control problem is equivalent to solving the variational problem of finding the minimum of the functional

J(y) = ∫_0^{t_f} G(t, y(t), y'(t)) dt, (5)

with the boundary conditions

y(0) = y_0, y(t_f) = y_f, (6)

where the relation (5) is obtained from (3) by substituting the expression of u(t) as a function of y(t) and y'(t) given by (4). The necessary condition for the solution of the problem (5)-(6) is that y(t) satisfies the Euler-Lagrange equation

G_y(t, y, y') − (d/dt) G_{y'}(t, y, y') = 0, (7)

where G may be linear or nonlinear. We associate with this equation the operator

D(y) = G_y(t, y, y') − (d/dt) G_{y'}(t, y, y'). (8)

If we denote by ỹ(t) an approximate solution of (8), the error obtained by replacing the exact solution y(t) with the approximation ỹ(t) is given by the remainder

R(t, ỹ) = D(ỹ)(t), t ∈ [0, t_f]. (9)

Taking into account the boundary conditions (6), for ε ∈ R_+ we will compute approximate polynomial solutions ỹ of the problem (8), (6) on the interval [0, t_f] as follows.
We observe that from the hypothesis of the initial problem (8), (6) it follows that there exists a sequence of polynomials P_m(t) which converges to the solution of the problem. We will compute a weak ε-approximate polynomial solution, in the sense of Definition 1 (i.e., one satisfying the boundary conditions (6) together with ∫_0^{t_f} R²(t, ỹ) dt ≤ ε), of the type

ỹ(t) = Σ_{k=0}^m c_k t^k, (14)

where c_0, c_1, ..., c_m are constants which are calculated using the following steps:

(i) By substituting the approximate solution (14) in (8) we obtain the remainder

R(t, ỹ) = D(ỹ)(t). (15)

We remark that if we could find c_0, c_1, ..., c_m such that R(t, ỹ) ≡ 0, ỹ(0) = y_0, ỹ(t_f) = y_f, then by substituting c_0, c_1, ..., c_m in (14) we would obtain the exact solution of the problem (8), (6). This is not generally possible, unless the exact solution is actually a polynomial.

(ii) We attach to the problem (8), (6) the functional

J(c_2, c_3, ..., c_m) = ∫_0^{t_f} R²(t, ỹ) dt, (16)

where c_0, c_1 are computed as functions of c_2, c_3, ..., c_m using the conditions (6).

(iii) We compute the values c_2^0, c_3^0, ..., c_m^0 which give the minimum of the functional (16).
(iv) Using the constants c_2^0, c_3^0, ..., c_m^0 previously determined, we compute the polynomial

T_m(t) = Σ_{k=0}^m c_k^0 t^k, (17)

where c_0^0, c_1^0 are obtained from c_2^0, ..., c_m^0 by means of the conditions (6).

Theorem 3. The sequence of polynomials T_m(t) from (17) satisfies the property

lim_{m→∞} ∫_0^{t_f} R²(t, T_m) dt = 0,

and, moreover, for any ε > 0 there exists m_0 ∈ N such that, for any m > m_0, T_m(t) is a weak ε-approximate polynomial solution of the problem (8), (6).

Proof. Based on the way the polynomials T_m(t) are computed and taking into account the relations (15)-(17), the following inequalities are satisfied:

0 ≤ ∫_0^{t_f} R²(t, T_m) dt ≤ ∫_0^{t_f} R²(t, P_m) dt, ∀m ∈ N,

where P_m(t) is the sequence of polynomials introduced in Definition 2 (polynomials satisfying the conditions (6) and converging to the exact solution of the problem). It follows that

0 ≤ lim_{m→∞} ∫_0^{t_f} R²(t, T_m) dt ≤ lim_{m→∞} ∫_0^{t_f} R²(t, P_m) dt = 0.

We obtain

lim_{m→∞} ∫_0^{t_f} R²(t, T_m) dt = 0.

From this limit we obtain that, ∀ε > 0, ∃m_0 ∈ N such that, ∀m > m_0, ∫_0^{t_f} R²(t, T_m) dt < ε, and it follows that T_m(t) is a weak ε-approximate polynomial solution of the problem (8), (6).
Remark 4. In order to find ε-approximate polynomial solutions of the problem (8), (6) by using the Polynomial Least Squares Method, we first determine weak ε-approximate polynomial solutions ỹ. If |R(t, ỹ)| < ε, then ỹ is also an ε-approximate polynomial solution of the problem.
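The computational core of PLSM is a finite-dimensional least-squares minimization: the boundary conditions fix two coefficients, and the remaining coefficients are chosen to minimize the discretized functional (16). The sketch below is our own illustration (not the authors' code), applied to the hypothetical Euler-Lagrange equation y'' − y = 0 on [0, 1] with y(0) = 0, y(1) = sinh 1, whose exact solution is y = sinh t; the integral of R² is approximated by the trapezoidal rule on a uniform grid.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical test problem: y'' - y = 0 on [0, 1],
# y(0) = 0, y(1) = sinh(1); exact solution y(t) = sinh(t).
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
y0, yf = 0.0, np.sinh(1.0)

def trial_poly(free):
    """Polynomial (14): c0 and c1 follow from the boundary
    conditions (6); the coefficients c2..cm remain free."""
    c0 = y0
    c1 = yf - c0 - np.sum(free)
    return np.polynomial.Polynomial([c0, c1, *free])

def functional(free):
    """Discretized functional (16): trapezoidal integral of R²."""
    p = trial_poly(free)
    R = p.deriv(2)(t) - p(t)          # remainder R(t, y~) of y'' - y = 0
    R2 = R ** 2
    return np.sum(R2[:-1] + R2[1:]) * dt / 2.0

# Step (iii): minimize the functional over the free coefficients c2..c5.
res = minimize(functional, np.zeros(4), method="BFGS")
y_tilde = trial_poly(res.x)
print(np.max(np.abs(y_tilde(t) - np.sinh(t))))   # max pointwise error
```

Because the operator is linear here, the functional is quadratic in the free coefficients, so the minimization is well conditioned; for nonlinear Euler-Lagrange equations the same code applies with a nonlinear residual.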

Application of the Polynomial Least Squares Method for an Optimal Control Problem
We will find the approximate solution of the optimal control problem (3)-(4) using the following steps:
(i) We transform the optimal control problem (3)-(4) into a variational problem (5)-(6), as described in the introduction.
(ii) We compute the Euler-Lagrange equation (7) corresponding to the variational problem (5)-(6).
(iii) We compute the approximate solution ỹ(t) of the Euler-Lagrange equation using PLSM, as described in the previous section. Thus ỹ(t) is an approximation of the state variable y(t) of the optimal control problem.
(iv) Finally, we compute an approximation ũ(t) of the optimal control law u(t) by means of the state equation (4).
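The steps above can be illustrated end to end on a small hypothetical problem (our own example, not one from the cited references): minimize ∫_0^1 (y² + u²) dt subject to y' = u, y(0) = 0, y(1) = 1. Here u = y', so the variational problem is min ∫_0^1 (y² + y'²) dt, the Euler-Lagrange equation is y'' = y, and the exact solution is y = sinh t / sinh 1 with u = cosh t / sinh 1.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem: min J(u) = ∫_0^1 (y² + u²) dt,  y' = u,
# y(0) = 0, y(1) = 1.
# Step (i): substituting u = y' gives min ∫_0^1 (y² + y'²) dt.
# Step (ii): the Euler-Lagrange equation is y'' = y.
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
y0, yf = 0.0, 1.0

def trial_poly(free):
    c0 = y0                            # boundary conditions fix c0, c1
    c1 = yf - c0 - np.sum(free)
    return np.polynomial.Polynomial([c0, c1, *free])

def functional(free):                  # discretized ∫ R² dt, R = y~'' - y~
    p = trial_poly(free)
    R = p.deriv(2)(t) - p(t)
    R2 = R ** 2
    return np.sum(R2[:-1] + R2[1:]) * dt / 2.0

# Step (iii): PLSM approximation of the state variable.
res = minimize(functional, np.zeros(4), method="BFGS")
y_tilde = trial_poly(res.x)

# Step (iv): recover the control from the state equation, u~ = y~'.
u_tilde = y_tilde.deriv(1)

y_exact = np.sinh(t) / np.sinh(1.0)
u_exact = np.cosh(t) / np.sinh(1.0)
print(np.max(np.abs(y_tilde(t) - y_exact)),
      np.max(np.abs(u_tilde(t) - u_exact)))
```

Note that the control approximation inherits one order less polynomial accuracy than the state, since it is obtained by differentiating ỹ.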

Applications
In this section we apply the Polynomial Least Squares Method in order to compute approximate analytical optimal control laws for three optimal control problems.

Application 1.
We consider the optimal control problem from [3], with the state equation and the boundary conditions (24) as given there; the exact solution of this problem is also given in [3]. In order to apply PLSM we follow the steps presented in the previous section. By minimizing the functional (16), J(c_2, c_3, ..., c_9) (too large to be included here), we obtain the values of c_2, c_3, ..., c_9. We compute the corresponding values of c_0 and c_1 using again the conditions (24), and we replace all these values in ỹ(t) to obtain our approximation.
Figure 1: The absolute errors corresponding to the approximations of the state variable ỹ(t) in Application 1: approximate solution from [3] given by VIM (red curve) and approximate solution given by PLSM (blue curve).
(iv) Finally, we can easily compute an approximation ũ(t) (also not included here because of its large size) by means of (26). Table 1 presents the absolute errors (as differences in absolute value between the exact value and the approximate one) corresponding to our approximations of the state variable ỹ(t) and of the optimal control law ũ(t) obtained by using PLSM.
Figures 1 and 2 present the comparison between our results and previous ones computed in [3] by using the Variational Iteration Method (VIM). It can easily be observed that not only is our approximation more precise but, while the error function corresponding to the VIM approximations shows a sizeable increase with t, the error function corresponding to PLSM does not. Moreover, another advantage of PLSM is the fact that the approximation has the simplest possible form, namely, a polynomial, and thus is very easy to use in any further computations. Finally, we mention the fact that by increasing the degree of the polynomial ỹ(t) we can obtain higher accuracy: for example, using a 10th-degree polynomial we obtain an overall error of 10^{-14}.

Application 2.
Our second application is the optimal control problem with the performance index (33), the state equation (34), and the boundary conditions (35): y(0) = 0, y(1) = 0.5.
(i) Replacing the expression of u(t) from (34) in the performance index (33), we obtain the variational problem [6], with the same boundary conditions y(0) = 0, y(1) = 0.5.
The exact solution of this problem is given in [6]. We apply the same steps as in the previous application:
(ii) We compute the corresponding nonlinear Euler-Lagrange equation.
(iii) Using PLSM, we compute an approximate analytical solution of the type (14) of degree 7. From the boundary conditions (35) we obtain c_0 = 0 and c_1 = 1/2 − c_2 − c_3 − c_4 − c_5 − c_6 − c_7.
We compute again the corresponding remainder (15) and, by minimizing the functional (16), we obtain the values of the constants c_2, c_3, ..., c_7.
(iv) Using the state equation (34), we compute an approximation ũ(t) of the optimal control law.
In Figures 3 and 4 we present the absolute errors corresponding to our approximations of the state variable ỹ(t) and of the optimal control law ũ(t) for the problem (33)-(35), obtained by using PLSM.
Figure 4: The absolute error corresponding to the approximation of the optimal control law ũ(t) in Application 2: approximate solution given by PLSM.

Application 3.
Our third application is the well-known linear quadratic regulator (LQR), more precisely the finite-horizon, continuous-time LQR. LQRs have a wide range of applications in engineering, such as trajectory tracking and optimization in robotics, control system design for various types of vehicles, automatic voltage regulators in electrical generators, and optimal controls for various types of motors.
The corresponding optimal control problem may be formulated as the problem (41)-(43), whose state equation is

y'(t) = A y(t) + B u(t). (42)

We consider the particular case of the problem (41)-(43) corresponding to the values A = 1, B = 1, S = 8, P = 3, Q = 0, R = 1, and t_f = 1 [14, 15], with the performance index (44), the state equation (45), and the boundary conditions (46). Approximate solutions for this problem were proposed in [14] using the Homotopy Analysis Method and in [15] using the Optimal Homotopy Analysis Method; the exact solutions of the state variable and of the corresponding control are given there. Using the same steps presented in the previous examples, we computed the approximation (49) of the state variable. In Figures 5 and 6 we present the absolute errors corresponding to our approximations of the state variable ỹ(t) and of the optimal control law ũ(t) for the problem (44)-(46), obtained by using PLSM.
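For the finite-horizon LQR, the solution can be cross-checked numerically through the scalar Riccati equation. The sketch below is our own addition and assumes the standard formulation J = (1/2) S y(t_f)² + (1/2) ∫_0^{t_f} (Q y² + R u²) dt with initial condition y(0) = P (our reading of the parameter list A = 1, B = 1, S = 8, P = 3, Q = 0, R = 1, t_f = 1; the cited papers may state it differently), for which u(t) = −(B/R) p(t) y(t) and p solves the Riccati equation backward from p(t_f) = S.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameter values for the particular LQR case.
A, B, S, P, Q, R, tf = 1.0, 1.0, 8.0, 3.0, 0.0, 1.0, 1.0

def riccati(t, p):
    # Scalar Riccati equation: dp/dt = -(2 A p - B² p² / R + Q),
    # with the terminal condition p(tf) = S.
    return -(2.0 * A * p - B**2 * p**2 / R + Q)

# Integrate the Riccati equation backward in time, from t = tf to t = 0.
sol = solve_ivp(riccati, (tf, 0.0), [S], dense_output=True,
                rtol=1e-10, atol=1e-12)
p = lambda t: sol.sol(t)[0]

# Closed form for this parameter set: p(t) = 2 / (1 - 0.75 e^{2(t-1)}).
p_exact = lambda t: 2.0 / (1.0 - 0.75 * np.exp(2.0 * (t - 1.0)))
print(p(0.0), p_exact(0.0))

# Closed-loop state: y' = (A - B² p(t) / R) y with y(0) = P (assumed),
# and the optimal control law u(t) = -(B / R) p(t) y(t).
state = solve_ivp(lambda t, y: (A - B**2 * p(t) / R) * y,
                  (0.0, tf), [P], dense_output=True,
                  rtol=1e-10, atol=1e-12)
y = lambda t: state.sol(t)[0]
u = lambda t: -(B / R) * p(t) * y(t)
```

Such a Riccati-based solve is a useful independent check on any approximate state and control trajectories for this example.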
The accuracy of our method is emphasized by a comparison with approximate solutions for Application 3 previously computed by means of other well-known methods. Table 2 presents a comparison of the absolute errors corresponding to the approximations of the state variable ỹ(t) obtained by using the Homotopy Analysis Method (HAM [14]), the Optimal Homotopy Analysis Method (OHAM [15]), and PLSM.

Conclusion
In this paper the application of the Polynomial Least Squares Method to optimal control problems is presented.
Mathematical Problems in Engineering
In order to apply PLSM, the optimal control problem is transformed into a variational problem by substituting in the performance index the expression of the control variable given by the state equation. PLSM is able to find accurate approximations of the state variable by computing approximate analytical polynomial solutions of the Euler-Lagrange equation corresponding to the variational problem. The optimal control law is then computed by using the state equation.
The numerical examples included clearly illustrate the accuracy of the method by means of a comparison with solutions previously computed by other methods.

Figure 2: The absolute errors corresponding to the approximations of the optimal control law ũ(t) in Application 1: approximate solution from [3] given by VIM (red curve) and approximate solution given by PLSM (blue curve).

Figure 3: The absolute error corresponding to the approximation of the state variable ỹ(t) in Application 2: approximate solution given by PLSM.

Figure 5: The absolute error corresponding to the approximation of the state variable ỹ(t) in Application 3: approximate solution given by PLSM.

Figure 6: The absolute error corresponding to the approximation of the optimal control law ũ(t) in Application 3: approximate solution given by PLSM.

Table 1: Absolute errors of the approximations of the state variable ỹ(t) and the optimal control law ũ(t) obtained by using PLSM for Application 1.

Table 2: Comparison of the absolute errors corresponding to the approximations of the state variable ỹ(t) obtained by using HAM, OHAM, and PLSM for Application 3.