Optimal Control Problem with Necessary Optimality Conditions for the Viscous Dullin-Gottwald-Holm Equation

imbedded in $C([0,T];H_0^2(\Omega))$ and $C([0,T];L^2(\Omega))$, respectively (cf. Dautray and Lions [16, page 555]). From now on, we omit the integration variables in definite integrals when no confusion arises.

Lemma 1. Let $\varphi$ satisfy the boundary conditions of (5) and $\varphi-\varphi_{xx}\in W(0,T)$. Then one has
$$\|\varphi\|_{S(0,T)}\le C\,\|\varphi-\varphi_{xx}\|_{W(0,T)},\qquad(10)$$
where $C>0$ is a constant.

Proof. According to the boundary conditions of $\varphi$, we have
$$\|\varphi-\varphi_{xx}\|^2_{W(0,T)}=\int_0^T\bigl|\varphi_x-\varphi_{xxx}\bigr|_2^2\,dt.$$


Introduction
Recently, in the study of shallow water waves, Dullin et al. [1] derived a new integrable shallow water wave equation with both linear and nonlinear dispersion:
$$u_t-\alpha^2u_{xxt}+c_0u_x+3uu_x+\gamma u_{xxx}=\alpha^2\left(2u_xu_{xx}+uu_{xxx}\right),\qquad(1)$$
where $u$ is the fluid velocity, $\alpha^2$ and $\gamma/c_0$ are squares of length scales, and $c_0=\sqrt{gh}$ is the linear wave speed for undisturbed water at rest at spatial infinity, where $u$ and its spatial derivatives are taken to vanish. Letting $\alpha^2\to 0$, (1) reduces to the well-known Korteweg-de Vries (KdV) equation (the linear dispersion case), and letting $\gamma\to 0$, (1) reduces to the Camassa-Holm equation of [2] (the nonlinear dispersion case).
Usually we can rewrite (1) as
$$m_t+c_0u_x+um_x+2mu_x+\gamma u_{xxx}=0,\qquad(2)$$
where $m=u-\alpha^2u_{xx}$ is a momentum variable. Physically, the third and fourth terms on the left side of (2) represent the convection and stretching effects, respectively, of unidirectional propagation of shallow water waves over a flat bottom (see [2, 3]).
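Expanding the momentum variable makes the equivalence of the two forms explicit: substituting $m=u-\alpha^2u_{xx}$ into the momentum form and collecting terms recovers the velocity form of the equation.

```latex
% Substituting m = u - \alpha^2 u_{xx} into
%   m_t + c_0 u_x + u m_x + 2 m u_x + \gamma u_{xxx} = 0
\begin{aligned}
m_t + c_0 u_x + u m_x + 2 m u_x + \gamma u_{xxx}
  &= \bigl(u_t - \alpha^2 u_{xxt}\bigr) + c_0 u_x
     + u\bigl(u_x - \alpha^2 u_{xxx}\bigr)
     + 2\bigl(u - \alpha^2 u_{xx}\bigr)u_x + \gamma u_{xxx}\\
  &= u_t - \alpha^2 u_{xxt} + c_0 u_x + 3uu_x + \gamma u_{xxx}
     - \alpha^2\bigl(2u_xu_{xx} + uu_{xxx}\bigr) = 0,
\end{aligned}
```

so the convection term $um_x$ and the stretching term $2mu_x$ together supply the nonlinear dispersion on the right side of (1).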
Many researchers have studied the well-posedness of the Cauchy problem for the Dullin-Gottwald-Holm (DGH) equation, including various properties of its solutions (see [4-6]).
Recently, Shen et al. [7] studied the optimal control problem for the following viscous DGH equation (cf. [3]):
$$y_t+uy_x+2yu_x+c_0u_x+\gamma u_{xxx}=\nu y_{xx},\qquad(3)$$
where $y=u-u_{xx}$ and $\nu>0$ stands for the viscosity constant of the shallow water wave. As explained in Holm and Staley [8], a small viscosity is physically sensible: it accounts for the balance, or relaxation, between the convection and stretching dynamics of the shallow water wave. In [7], Shen et al. studied distributive optimal control problems for (3) (cf. [3]). For this purpose, they modified (3) to a Dirichlet boundary value problem on a short time interval and proved the existence and uniqueness of $u$ in (3) by a weak formulation. However, the well-posedness of (3) with respect to $u$ remained unclear, and the proof in [7] relies on the size of $\nu$, which is an extra condition. Further, in [7] they employed a quadratic cost objective functional, to be minimized within an admissible control set with the distributive observation of $u$ in (3), and discussed only the existence of optimal controls minimizing the quadratic cost. The necessary optimality conditions for optimal controls were not studied there.

Abstract and Applied Analysis
As for the necessary optimality conditions of optimal controls, we can refer to a recent paper by Sun [9]. By employing the Dubovitskii and Milyutin functional analytic approach, Sun established in [9, Theorem 3] the Pontryagin maximum principle of the optimal control for the viscous DGH equation with a quite general cost depending on $u$ but not on $y$. Meanwhile, in this paper, we propose the quadratic cost functional for $y$, which is actually more reasonable than that for $u$, and establish the necessary optimality conditions of optimal controls in the sense of Lions [10] in Theorems 5 and 7 for the physically meaningful observations $z(v)=u(v)$ and $z(v)=y(v)$, respectively. To this end, we characterize the Gâteaux derivative $Du(w)(v-w)$ of $u(v)$ in the direction $v-w\in\mathcal{U}$, where $\mathcal{U}$ is a Hilbert space of control variables and $w$ is an optimal control for the quadratic cost.
Actually, the extension of optimal control theory to quasilinear equations is not easy. Some research has been devoted to optimal control or identification problems for specific quasilinear equations; for instance, see Hwang and Nakagiri [11, 12] and Hwang [13, 14].
The aim of this paper can be summarized as follows. Firstly, we clarify the well-posedness of (3) with respect to $u$ in the Hadamard sense, with an appropriate initial value condition on a short time interval as posed in [7]. Secondly, based on the well-posedness result, we extend the optimal control theory of Lions [10], with emphasis on deriving necessary optimality conditions of optimal controls, in the following distributive control system:
$$y_t(v)+u(v)y_x(v)+2y(v)u_x(v)+c_0u_x(v)+\gamma u_{xxx}(v)-\nu y_{xx}(v)=f+Bv,\qquad(4)$$
where $y(v)=u(v)-u_{xx}(v)$, $f$ is a forcing term, $B$ is a controller, $v$ is a control, and $u(v)$ denotes the state for a given $v\in\mathcal{U}$.
In order to apply the variational approach of Lions [10] to our problem, we propose the quadratic cost functional $J(v)$, as studied in Lions [10], which is to be minimized over $\mathcal{U}_{ad}$, an admissible set of control variables in $\mathcal{U}$. We show the existence of a $w\in\mathcal{U}_{ad}$ which minimizes the quadratic cost functional $J(v)$. Then we establish the necessary conditions of optimality of the optimal control $w$ for some physically meaningful observation cases, employing the associated adjoint systems. For this we prove the Gâteaux differentiability of the nonlinear solution mapping $v\to u(v)$, which is used to define the associated adjoint systems.
Moreover, in this paper we discuss the local uniqueness of the optimal control. As is widely known, verifying the uniqueness of optimal controls in nonlinear control problems is unclear and difficult. By employing the idea given in Ryu [15], we show the strict convexity of the quadratic cost $J(v)$ on a local time interval, utilizing the second-order Gâteaux differentiability of the nonlinear solution mapping $v\to u(v)$. By proving this strict convexity of the quadratic cost with respect to the control variable, we obtain the local uniqueness of the optimal control. This is another novelty of the paper.
We consider the following Dirichlet boundary value problem for the viscous Dullin-Gottwald-Holm (DGH) equation:
$$\begin{aligned}
&y_t+uy_x+2yu_x+c_0u_x+\gamma u_{xxx}-\nu y_{xx}=f &&\text{in }(0,T)\times\Omega,\\
&u=u_x=0 &&\text{on }(0,T)\times\partial\Omega,\\
&u(0,\cdot)=u_0 &&\text{in }\Omega,
\end{aligned}\qquad(5)$$
where $y=u-u_{xx}$, $f$ is a forcing function, and $u_0$ is an initial value.
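As a numerical illustration only (not part of the paper's analysis), a crude finite-difference sketch of problem (5) can be written down; the scheme, the grid, the boundary treatment ($u=0$ at the interval ends), and all parameter values below are assumptions.

```python
import numpy as np

# Crude finite-difference sketch of the Dirichlet problem (5) for the viscous
# DGH equation  y_t + u y_x + 2 y u_x + c0 u_x + gamma u_xxx = nu y_xx,
# with y = u - u_xx.  Explicit Euler in time; u is recovered from y by solving
# the Helmholtz system (I - D2) u = y with u = 0 at the interval ends.  The
# scheme and every parameter value are illustrative assumptions.
def solve_viscous_dgh(u0, L=20.0, T=0.1, c0=1.0, gamma=0.1, nu=0.5, nt=400):
    n = len(u0)
    dx = L / (n - 1)
    dt = T / nt
    x = np.linspace(0.0, L, n)

    def d1(f):                       # centered first derivative, zero at ends
        g = np.zeros_like(f)
        g[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
        return g

    def d2(f):                       # centered second derivative, zero at ends
        g = np.zeros_like(f)
        g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
        return g

    # dense Helmholtz matrix I - D2 on the interior nodes
    main = (1.0 + 2.0 / dx**2) * np.ones(n - 2)
    off = (-1.0 / dx**2) * np.ones(n - 3)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    u = u0.copy()
    y = u - d2(u)
    for _ in range(nt):
        ux = d1(u)
        y = y + dt * (nu * d2(y) - u * d1(y) - 2.0 * y * ux
                      - c0 * ux - gamma * d1(d2(u)))
        y[0] = y[-1] = 0.0
        u = np.zeros(n)
        u[1:-1] = np.linalg.solve(A, y[1:-1])  # recover u from y = u - u_xx
    return x, u

# example run with a Gaussian hump as initial velocity
x0 = np.linspace(0.0, 20.0, 161)
x, u = solve_viscous_dgh(np.exp(-(x0 - 8.0) ** 2))
```

With $\nu>0$ the hump decays as it propagates; for quantitative work one would refine the grid or switch to an implicit scheme.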
In order to define weak solutions of (5), we introduce some Hilbert spaces. First, $S(0,T)$ is defined and endowed with its norm, where $\varphi'$ denotes the first-order distributional derivative of $\varphi$. Further, $W(0,T)$ is defined and endowed with its norm. With these norms, the estimate (10) follows with constants $C_0, C_1>0$, and thus the lemma is proved.
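A finite-dimensional analogue of the estimate (10) can be checked numerically: for the discrete Dirichlet Laplacian $D_2$ on a uniform grid, the symmetric operator $I-D_2$ has eigenvalues bounded below by $1$, so $\|\varphi\|\le\|(I-D_2)\varphi\|$ holds for every grid function. The grid parameters below are illustrative assumptions.

```python
import numpy as np

# Finite-dimensional analogue of estimate (10): for the discrete Dirichlet
# Laplacian D2 on a uniform grid with spacing h, the symmetric operator
# H = I - D2 has eigenvalues 1 + (4/h^2) sin^2(k*pi*h/(2L)) >= 1, hence
# ||phi|| <= ||(I - D2) phi|| for every grid function phi.
n, L = 200, 1.0
h = L / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
H = np.eye(n) - D2

smin = np.linalg.svd(H, compute_uv=False).min()   # smallest singular value

rng = np.random.default_rng(0)
phi = rng.standard_normal(n)
lhs = np.linalg.norm(phi)          # ||phi||
rhs = np.linalg.norm(H @ phi)      # ||(I - D2) phi||
```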
In order to verify the well-posedness of (5), we partially rely on the results of Shen et al. [7]. The well-posedness of (5) in the sense of Hadamard can be stated as follows.
Remark 4. In [7], the well-posedness of (5) is only partially verified, namely for the case in which the viscosity constant $\nu>0$ is large enough. However, as we will see in the Appendix, such an extra assumption can be removed.

Quadratic Cost Optimal Control Problems
In this section we study the quadratic cost optimal control problems for the viscous DGH equation, following the theory of Lions [10]. Let $\mathcal{U}$ be a Hilbert space of control variables, and let $B$ be an operator, called a controller. We consider the following nonlinear control system:
$$y_t(v)+u(v)y_x(v)+2y(v)u_x(v)+c_0u_x(v)+\gamma u_{xxx}(v)-\nu y_{xx}(v)=f+Bv,\qquad(27)$$
where $y(v)=u(v)-u_{xx}(v)$ and $v\in\mathcal{U}$ is a control. By virtue of Theorem 3 and (26), we can uniquely define the solution map $v\to u(v)$ of $\mathcal{U}$ into $S(0,T)$. We call the solution $u(v)$ of (27) the state of the control system (27). The observation of the state is assumed to be given by
$$z(v)=Cu(v),\qquad(28)$$
where $C$ is an operator called the observer and $M$ is a Hilbert space of observation variables. The quadratic cost function associated with the control system (27) is given by
$$J(v)=\|Cu(v)-z_d\|_M^2+(Rv,v)_{\mathcal{U}},\qquad(29)$$
where $z_d\in M$ is a desired value of $u(v)$ and $R\in\mathcal{L}(\mathcal{U},\mathcal{U})$ is symmetric and positive; that is,
$$(Rv,v)_{\mathcal{U}}\ge d\|v\|_{\mathcal{U}}^2\qquad(30)$$
for some $d>0$. Let $\mathcal{U}_{ad}$ be a closed convex subset of $\mathcal{U}$, which is called the admissible set. An element $w\in\mathcal{U}_{ad}$ which attains the minimum of $J(v)$ over $\mathcal{U}_{ad}$ is called an optimal control for the cost (29).
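A finite-dimensional analogue of the cost (29) illustrates why the coercivity assumption (30) gives a well-posed minimization: with a hypothetical linear toy state map $u(v)=Av+f$ standing in for the PDE solution map, the cost is strictly convex and its minimizer solves a linear system. All names and values below are assumptions for illustration.

```python
import numpy as np

# Finite-dimensional analogue of the quadratic cost (29),
#   J(v) = ||C u(v) - z_d||^2 + (R v, v),  with R = d * I,
# using a hypothetical linear toy state map u(v) = A v + f in place of the
# PDE solution map.  The coercive term d||v||^2 from (30) makes J strictly
# convex, and the minimizer solves the normal equations below.
rng = np.random.default_rng(1)
m, n = 8, 5
A = rng.standard_normal((m, n))   # toy solution operator (assumption)
C = rng.standard_normal((m, m))   # observer
f = rng.standard_normal(m)        # forcing contribution
z_d = rng.standard_normal(m)      # desired observation
d = 0.1                           # positivity constant in (30)

def J(v):
    u = A @ v + f
    return np.sum((C @ u - z_d) ** 2) + d * np.sum(v ** 2)

# stationarity: A^T C^T (C(Aw + f) - z_d) + d w = 0
M = A.T @ C.T @ C @ A + d * np.eye(n)
w = np.linalg.solve(M, A.T @ C.T @ (z_d - C @ f))
grad = 2.0 * (A.T @ C.T @ (C @ (A @ w + f) - z_d) + d * w)
```

The gradient of $J$ vanishes at the computed minimizer, and any perturbation of it increases the cost.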
In this section we will characterize the optimal controls by giving necessary conditions for optimality. For this it is necessary to write down the necessary optimality condition
$$DJ(w)(v-w)\ge 0\quad\text{for all }v\in\mathcal{U}_{ad},\qquad(31)$$
and to analyze (31) in view of the proper adjoint state system, where $DJ(w)$ denotes the Gâteaux derivative of $J(v)$ at $v=w$.
We also study the local uniqueness of the optimal control. As indicated in Section 1, we show the existence of an optimal control and give characterizations of it.

Existence of the Optimal Control
Now we show the existence of an optimal control $w$ for the cost (29).
Theorem 5. Assume that the hypotheses of Theorem 3 are satisfied. Then there exists at least one optimal control $w$ for the control problem (27) with the cost (29).
Proof. Set $J_*=\inf_{v\in\mathcal{U}_{ad}}J(v)$. Since $\mathcal{U}_{ad}$ is nonempty, there is a minimizing sequence $\{v_n\}\subset\mathcal{U}_{ad}$ such that $J(v_n)\to J_*$. Obviously $\{J(v_n)\}$ is bounded in $\mathbb{R}^+$. Then by (30) there exists a constant $K_0>0$ such that
$$d\|v_n\|_{\mathcal{U}}^2\le(Rv_n,v_n)_{\mathcal{U}}\le J(v_n)\le K_0.$$
This shows that $\{v_n\}$ is bounded in $\mathcal{U}$. Since $\mathcal{U}_{ad}$ is closed and convex, we can choose a subsequence (denoted again by $\{v_n\}$) and find a $w\in\mathcal{U}_{ad}$ such that $v_n\to w$ weakly in $\mathcal{U}$ as $n\to\infty$. From now on, each state $u_n=u(v_n)\in S(0,T)$ corresponding to $v_n$ is the solution of (35), where $y_n=u_n-u_{n,xx}$. By (33) the term $Bv_n$ is estimated accordingly. Using the fact that the embedding $H_0^1(\Omega)\hookrightarrow L^2(\Omega)$ is compact, and by virtue of (39), we can apply the Aubin-Lions-Temam compact embedding theorem (cf. Temam [17, page 271]) to verify that $\{u_n\}$ is precompact in $L^2(0,T;L^2(\Omega))$. Hence there exists a subsequence $\{u_{n_k}\}\subset\{u_n\}$ such that (41) holds. We use (39)-(41) and apply the Lebesgue dominated convergence theorem to obtain (42) as $k\to\infty$. We replace $u_n$ and $v_n$ by $u_{n_k}$ and $v_{n_k}$, respectively, and take $k\to\infty$ in (35). Then, by the standard argument in Dautray and Lions [16, pages 561-565], we conclude that the limit $u$ satisfies (43) in the weak sense, where $y=u-u_{xx}$. Moreover, the uniqueness of weak solutions of (43) via Theorem 3 enables us to conclude that $u=u(w)$ in $S(0,T)$, which implies $u(v_n)\to u(w)$ weakly in $S(0,T)$. Since $C$ is continuous on $S(0,T)$ and $\|\cdot\|_M$ is lower semicontinuous, it follows that $J(w)\le\liminf_{n\to\infty}J(v_n)=J_*$. But since $J(w)\ge J_*$ by definition, we conclude that $J(w)=\inf_{v\in\mathcal{U}_{ad}}J(v)$. This completes the proof.

Gâteaux Differentiability of Solution Mapping.
In order to characterize the optimal control satisfying the necessary optimality condition (31), we need to prove the Gâteaux differentiability of the mapping $v\to u(v)$ from $\mathcal{U}$ to $S(0,T)$.
The operator $Du(w)$ denotes the Gâteaux derivative of $u(v)$ at $v=w$, and the function $Du(w)(v-w)\in S(0,T)$ is called the Gâteaux derivative in the direction $v-w\in\mathcal{U}$; it plays an important role in the nonlinear optimal control problem.
Theorem 7. The map $v\to u(v)$ of $\mathcal{U}$ into $S(0,T)$ is Gâteaux differentiable at $v=w$, and the Gâteaux derivative of $u(v)$ at $v=w$ in the direction $v-w\in\mathcal{U}$, say $z=Du(w)(v-w)$, is a unique solution of the following problem (47), where $y(w)=u(w)-u_{xx}(w)$ and $Z=z-z_{xx}$.
Hence we can see from (52)-(57) that $z_\lambda\to z=Du(w)(v-w)$ weakly in $S(0,T)$ as $\lambda\to 0$, where $z$ is a solution of (47). This convergence can be improved by showing the strong convergence of $\{z_\lambda\}$ in the topology of $S(0,T)$.
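The convergence of difference quotients asserted in Theorem 7 can be illustrated on a simpler ODE analogue (assumed toy dynamics, not the system (27)): for the state equation $u'=-u^3+v$, the derivative $z$ in a direction $\eta$ solves the linearized equation, and the quotients $(u(w+\lambda\eta)-u(w))/\lambda$ approach $z$ at rate $O(\lambda)$.

```python
import numpy as np

# ODE analogue of the Gateaux differentiability in Theorem 7: for the toy
# state equation  u' = -u^3 + v,  u(0) = 1,  the derivative z = Du(w)(eta)
# solves the linearized problem  z' = -3 u(w)^2 z + eta,  z(0) = 0, and the
# difference quotients (u(w + lam*eta) - u(w)) / lam converge to z.
N = 4000
t = np.linspace(0.0, 1.0, N)
dt = 1.0 / N

def solve_state(v):
    u = np.empty(N + 1)
    u[0] = 1.0
    for k in range(N):
        u[k + 1] = u[k] + dt * (-u[k] ** 3 + v[k])
    return u

def solve_sensitivity(u, eta):
    z = np.zeros(N + 1)
    for k in range(N):
        z[k + 1] = z[k] + dt * (-3.0 * u[k] ** 2 * z[k] + eta[k])
    return z

w = np.sin(2.0 * np.pi * t)       # a control (assumption)
eta = np.cos(2.0 * np.pi * t)     # a direction
u = solve_state(w)
z = solve_sensitivity(u, eta)

# difference quotients approach z as lam -> 0
errs = [np.max(np.abs((solve_state(w + lam * eta) - u) / lam - z))
        for lam in (1e-1, 1e-2, 1e-3)]
```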
Theorem 7 means that the cost $J(v)$ is Gâteaux differentiable at $w$ in the direction $v-w$, and the optimality condition (31) can be rewritten accordingly, where $\Lambda_M$ is the canonical isomorphism of $M$ onto $M'$ and $z_d\in M$ is a desired value.

Necessary Condition of Optimal Control.
In this section we will characterize the optimal controls by giving the necessary condition (71) for optimality for the following physically meaningful observations.
The proof of Proposition 8 is given in the Appendix.
Remark 9. As we will see, there are some merits in taking $(H_0^2(\Omega),L^2(\Omega))$ as the solution space for the adjoint equations. For the observation (72), even though we could pose the adjoint system in the space $S(0,T)$ with additional boundary conditions, we can derive the same necessary optimality condition for optimal controls through the less regular solution $p(w)\in(H_0^2(\Omega),L^2(\Omega))$. Therefore, $(H_0^2(\Omega),L^2(\Omega))$ is the preferred solution space of the adjoint equation for the observation (72). Also, for the observation (73), we need to solve the adjoint equation in $(H_0^2(\Omega),L^2(\Omega))$ because the data there are less regular than those of the observation (72).

Case of the Observation (72).
In this subsection we consider the cost functional (78), expressed with the desired value $z_d\in L^2(Q)$. Let $w$ be the optimal control subject to (27) and (78). Then the optimality condition (71) is represented by (79), where $z$ is the solution of (47). Now we formulate the following adjoint system to describe the optimality condition, with adjoint state $p(w)$ and $P=p(w)-p_{xx}(w)$. By (47) for $z$, we can verify by integration by parts that the left-hand side of (81) yields (82), where $Z=z-z_{xx}$. Therefore, by (81) and (82), the optimality condition (79) is equivalent to the condition stated in the following theorem.
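The role of the adjoint state in turning the optimality condition into a computable gradient can be illustrated in a discrete toy setting (hypothetical dynamics and cost, not the paper's adjoint system): a backward adjoint recursion reproduces the gradient of the cost with respect to every control component, which we compare against finite differences.

```python
import numpy as np

# Discrete toy illustration of the adjoint method:
#   state:  u_{k+1} = a u_k + v_k,  u_0 = 0,
#   cost:   J(v) = sum_{k=1}^N (u_k - z_d)^2.
# Backward adjoint recursion: p_N = 2(u_N - z_d),
#   p_k = a p_{k+1} + 2(u_k - z_d),  and then dJ/dv_k = p_{k+1}.
a, z_d, N = 0.9, 0.5, 30
rng = np.random.default_rng(2)
v = rng.standard_normal(N)

def forward(v):
    u = np.empty(N + 1)
    u[0] = 0.0
    for k in range(N):
        u[k + 1] = a * u[k] + v[k]
    return u

def cost(v):
    u = forward(v)
    return np.sum((u[1:] - z_d) ** 2)

u = forward(v)
p = np.empty(N + 1)
p[N] = 2.0 * (u[N] - z_d)
for k in range(N - 1, 0, -1):
    p[k] = a * p[k + 1] + 2.0 * (u[k] - z_d)
grad_adj = p[1:]                   # dJ/dv_k = p_{k+1}

# central finite differences for comparison (J is quadratic, so these are
# exact up to roundoff)
eps = 1e-6
grad_fd = np.empty(N)
for k in range(N):
    e = np.zeros(N)
    e[k] = eps
    grad_fd[k] = (cost(v + e) - cost(v - e)) / (2.0 * eps)
```

One backward sweep of the adjoint recursion yields the full gradient, whereas finite differences require a forward solve per control component; this is the computational payoff of the adjoint formulation.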

Case of the Observation (73).
We consider the momentum's distributive cost functional (87), expressed with $y(v)=u(v)-u_{xx}(v)$ and the desired value $y_d\in L^2(Q)$. Let $w$ be the optimal control subject to (27) and (87). Then the optimality condition (71) is rewritten as (88), where $Z=z-z_{xx}$ and $z$ is the solution of (47). As before, we formulate the following adjoint system (89) to describe the optimality condition, where $P=p(w)-p_{xx}(w)$ and $y(w)=u(w)-u_{xx}(w)$.
As we did before, we multiply both sides of the weak form of (89) by the solution $z$ of (47) and integrate over $[0,T]$. Then we have (90), where $P=p(w)-p_{xx}(w)$, $y(w)=u(w)-u_{xx}(w)$, and $Z=z-z_{xx}$. By (47) for $z$, integration by parts of the left-hand side of (90) yields (91), where $Z=z-z_{xx}$. Therefore, combining (90) and (91), we can deduce that the optimality condition (88) is equivalent to the condition stated in the following theorem.

Local Uniqueness of an Optimal Control.
We note that the uniqueness of an optimal control in nonlinear problems is not assured. However, referring to the result in [15], we can show the local uniqueness of the optimal control for our problem.
In order to show the uniqueness of the optimal control by using the strict convexity of the quadratic cost (cf. [18]), we consider the following proposition.
Proposition 14. The map $v\to u(v)$ of $\mathcal{U}$ into $S(0,T)$ is second-order Gâteaux differentiable at $v=w$, and the second-order Gâteaux derivative of $u(v)$ at $v=w$ in the direction $v-w\in\mathcal{U}$, say $g=D^2u(w)(v-w,v-w)$, is a unique solution of the following problem, where $G=g-g_{xx}$, $Z=z-z_{xx}$, and $z$ is the solution of (47).
Proof. The proof is similar to that of Theorem 7.
We now prove the local uniqueness of the optimal control.
Theorem 16. When $T$ is small enough, there is a unique optimal control for the problem (29) for the observations (72) and (73).
Proof. We prove the case (73); the same result then follows for the case (72).
We show the local uniqueness by proving the strict convexity of the map $v\in\mathcal{U}_{ad}\to J(v)$. Therefore, as in [18], we need to show the strict positivity of the second-order Gâteaux derivative, $D^2J(u)(v-u,v-u)>0$, for all $u,v\in\mathcal{U}_{ad}$ ($u\ne v$), where $G(v)=g(v)-g_{xx}(v)$.
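The strict-convexity mechanism can be illustrated numerically: with a weakly nonlinear toy state map $u(v)=Av+\varepsilon\tanh(Av)$ (an assumption standing in for the small-time solution map) and the quadratic cost, the coercive term dominates the nonlinearity when $\varepsilon$ is small, the analogue of "$T$ small enough", so the midpoint inequality characterizing strict convexity holds on sampled pairs.

```python
import numpy as np

# Numerical illustration of the strict-convexity argument: for the toy cost
#   J(v) = ||u(v) - z_d||^2 + d ||v||^2,   u(v) = A v + eps * tanh(A v),
# the coercive term d||v||^2 dominates the small nonlinearity, so
#   J((x + y)/2) < (J(x) + J(y))/2   for x != y,
# which is strict convexity on the sampled pairs.  All values are assumptions.
rng = np.random.default_rng(3)
m, n = 6, 4
A = rng.standard_normal((m, n))
z_d = rng.standard_normal(m)
eps, d = 0.02, 1.0

def J(v):
    u = A @ v + eps * np.tanh(A @ v)
    return np.sum((u - z_d) ** 2) + d * np.sum(v ** 2)

gaps = np.array([
    (J(x) + J(y)) / 2.0 - J((x + y) / 2.0)
    for x, y in ((rng.standard_normal(n), rng.standard_normal(n))
                 for _ in range(200))
])
```

Every sampled midpoint gap is strictly positive; increasing $\varepsilon$ (the analogue of a longer time interval) eventually breaks this dominance, which is why only local uniqueness is claimed.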