© Hindawi Publishing Corp. ON A SYSTEM OF HAMILTON-JACOBI-BELLMAN INEQUALITIES ASSOCIATED TO A MINIMAX PROBLEM WITH ADDITIVE FINAL COST

We study a minimax optimal control problem with finite horizon and additive final cost. After introducing an auxiliary problem, we analyze the dynamical programming principle (DPP) and we present a Hamilton-Jacobi-Bellman (HJB) system. We prove the existence and uniqueness of a viscosity solution for this system. This solution is the cost function of the auxiliary problem and it is possible to get the solution of the original problem in terms of this solution.


Introduction.
The optimization of dynamic systems where the criterion is the maximum value of a function is a frequent problem in technology, economics, and industry. This problem appears, for example, when we attempt to minimize the maximum deviation of controlled trajectories with respect to a given "model" trajectory. Minimax problems differ from those usually considered in the optimal control literature, where a cumulative cost is minimized. Since in some cases minimax problems describe more appropriately decision problems arising in controlled systems whose performance is evaluated with a unique scalar parameter, minimax optimization has received considerable attention in recent publications (see, e.g., [4,5,6,7,8,9,10,11,12,13,15,16,17,18,19,22]). Furthermore, minimax problems are related to the design of robust controllers (see [14]).
In addition, from the academic point of view, the minimax optimal control problem is of interest in the area of game theory because minimax problems can be seen as a game (see [18]) where one player applies ordinary controls and the other, using complete and privileged information, chooses a stopping time. Problems of this type lead to the treatment of nonlinear partial differential inequalities akin to those appearing in the obstacle problem (with the obstacle given in explicit or implicit form, see [10]). To find solutions of these systems, we must consider generalized solutions, even discontinuous solutions, since commonly there do not exist classical solutions of such systems (see [2,3]). The treatment of the infinite horizon problem also presents great analytical difficulties because the optimal cost is neither necessarily lower semicontinuous nor upper semicontinuous. Moreover, the optimal cost cannot be approximated with a sequence of finite horizon problems. Studies concerning these issues can be seen in [17,19]. Besides, it is also important to develop numerical methods to compute these solutions in an approximate way because in general it is not possible to find exact analytical solutions. Numerical methods to obtain approximate open-loop optimal controls are analyzed in [24]; for the closed-loop case, see [8,15].
Here, we analyze a minimax optimal control problem where the functional to be optimized not only depends on the maximum of a function along the complete trajectory of the system but also takes into account (in an additive fashion) another function of the final state of the system.
For a problem with such a structure, a dynamical programming principle cannot be formulated merely in terms of the initial time and the initial state. To obtain a DPP, we introduce an auxiliary parameter which "remembers" the past maximum values (the use of a similar procedure can be seen in [4,5]). Using this parameter, we present an auxiliary problem which gives the solution of the original problem when a particular value of the auxiliary parameter is chosen. For this second optimal control problem, we establish the associated dynamical programming principle (DPP) and a Hamilton-Jacobi-Bellman (HJB) equation. Finally, we prove that the optimal cost of the auxiliary problem is the unique solution of the associated HJB equation.
The paper is organized as follows. In Section 2, we present the optimization problem. In Section 3, we describe the auxiliary problem and its relation with the original one. We also establish there the dynamical programming equation. In Section 4, we give the HJB equation associated to the problem and we prove the uniqueness of the solution in the viscosity sense.

The optimization problem
2.1. Presentation of the problem. We consider a minimax optimal control problem with finite horizon and final cost. More specifically, the problem consists in minimizing the functional

J(t, x, α(·)) = ess sup_{s∈[t,T)} f(s, y(s), α(s)) + Ψ(y(T)), (2.2)

where y_α(·; t, x) represents the state of a dynamic system which evolves from the pair (t, x) according to the following differential equation:

y′(s) = g(s, y(s), α(s)), s ∈ (t, T), y(t) = x.

To simplify the notation, and whenever this simplification does not produce misunderstandings, we will write directly y_α(·; t, x) = y(·). The set Ꮽ = L^∞((0,T); A) is the set of admissible control policies and A is the control set.
The optimal cost function is given by

u(t, x) := inf_{α∈Ꮽ} J(t, x, α(·)).

This problem is an extension of another one analyzed by Barron and Ishii [10] and by Di Marco and González [15,18]. The problem with an additive final cost considered here can represent scenarios where the performance of the applied control is measured jointly (in an additive fashion) by the maximum of a function along the trajectory and by a function of the final state.
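As a concrete illustration of the functional being minimized, the following sketch evaluates a discretized version of J along a simulated trajectory. Everything here (the one-dimensional dynamics, the costs, the feedback control, the Euler scheme, and all parameter values) is an illustrative assumption, not taken from the paper:

```python
import math

def minimax_cost(t, x, control, g, f, psi, T=1.0, n=100):
    """Euler discretization of y' = g(s, y, a), y(t) = x, returning the
    minimax cost  max_s f(s, y(s), a(s)) + psi(y(T))."""
    dt = (T - t) / n
    y, running_max = x, -math.inf
    for k in range(n):
        s = t + k * dt
        a = control(s, y)
        running_max = max(running_max, f(s, y, a))
        y = y + dt * g(s, y, a)   # one Euler step of the dynamics
    return running_max + psi(y)

# Illustrative data: steer y toward 0, pay for the largest deviation |y|
# along the way plus a quadratic final cost.
cost = minimax_cost(0.0, 1.0,
                    control=lambda s, y: -y,    # hypothetical feedback a = -y
                    g=lambda s, y, a: a,        # dynamics y' = a
                    f=lambda s, y, a: abs(y),   # running cost to take the max of
                    psi=lambda y: y * y)        # final cost
```

Note how the cost couples a pathwise maximum with a terminal term; this is exactly what prevents a DPP in the variables (t, x) alone.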
To give an example of this type of problem, we consider the following economic case.

Elements of the problem.
(1) the vector y(t) of economic activities (manufacturing, services, etc.);
(2) GDP(y(T)), the gross domestic product at the end of the period [0,T];
(3) a(t), the policy of resource assignments;
(4) f(y(t), a(t)), the unemployment level.

We consider the following functional, which measures the effectiveness of the economic policy. The functional measures both the positive aspects (the GDP) and the negative ones (the unemployment level f(y(t), a(t))) of the economic policy in the following form:

J = GDP(y(T)) / max_{t∈[0,T]} f(y(t), a(t)). (2.5)

The maximization of J is equivalent to the maximization of the functional

log GDP(y(T)) − max_{t∈[0,T]} log f(y(t), a(t)). (2.6)

Hence, if we define Ψ(x) := −log GDP(x), the problem is equivalent to the minimization of the functional

max_{t∈[0,T]} log f(y(t), a(t)) + Ψ(y(T)),

and so we must deal with a problem of type (2.2). As we will see, the new problem presents an additional difficulty because it is not possible to establish a DPP only in terms of the variables (t, x). To avoid this difficulty, we introduce an auxiliary problem which generalizes the problem presented above and we establish the DPP corresponding to the auxiliary problem. We also present an HJB equation associated to the optimal cost. Finally, using a methodology similar to that presented in [20,23], we prove that the optimal cost is the unique solution of this HJB equation.
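The equivalence invoked above is just monotonicity of the logarithm: maximizing the ratio GDP(y(T)) / max_t f is the same as minimizing max_t log f − log GDP. A quick numerical check with two hypothetical policies (the GDP and unemployment figures are invented for the illustration):

```python
import math

# Two hypothetical policies, each summarized by its end-of-period GDP and
# the worst unemployment level reached along its trajectory.
policies = {
    "A": {"gdp": 120.0, "max_unemployment": 8.0},
    "B": {"gdp": 100.0, "max_unemployment": 5.0},
}

def ratio_criterion(p):        # GDP(y(T)) / max_t f  (to be maximized)
    return p["gdp"] / p["max_unemployment"]

def minimax_criterion(p):      # max_t log f - log GDP  (to be minimized)
    return math.log(p["max_unemployment"]) - math.log(p["gdp"])

best_ratio = max(policies, key=lambda k: ratio_criterion(policies[k]))
best_minimax = min(policies, key=lambda k: minimax_criterion(policies[k]))
assert best_ratio == best_minimax   # the two criteria rank policies identically
```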

General assumptions.
Let BUC([0,T] × R^m × A) be the set of bounded and uniformly continuous functions on [0,T] × R^m × A. We assume that the following hypotheses hold:

(H1) f ∈ BUC([0,T] × R^m × A) and there exists L_f > 0 such that |f(t,x,a) − f(t′,x′,a)| ≤ L_f (|t − t′| + |x − x′|) for all t, t′ ∈ [0,T], all x, x′ ∈ R^m, and all a ∈ A;

(H2) g ∈ BUC([0,T] × R^m × A; R^m) and there exists L_g > 0 such that |g(t,x,a) − g(t′,x′,a)| ≤ L_g (|t − t′| + |x − x′|) for all t, t′ ∈ [0,T], all x, x′ ∈ R^m, and all a ∈ A;

(H3) Ψ : R^m → R, Ψ ∈ BUC(R^m), and Ψ is Lipschitz continuous.

Note 2.1. The above-stated hypotheses are not the minimal ones under which the principal results of this paper hold. In particular, the Lipschitz continuity of f and Ψ can be replaced by uniform continuity. We have used (H1), (H2), (H3), and (H4) in order to simplify the proofs of the central results.

Auxiliary problem.
In the problem presented above, it is not possible to establish a DPP merely in terms of the variables (t, x) because the optimal control policy generally depends not only on the current state (t, x) but also on the past values of the trajectory, as the following example shows.
The instantaneous cost f, the final cost Φ, and the dynamics g are given by

Figure 3.1 shows the optimal trajectories corresponding to a couple of different initial points (t, x). So, the optimal control policy depends not only on the current state (t, x) but also on the past values of the trajectory.

Auxiliary variable and problem reformulation.
In order to develop the analytical and numerical treatment of the problem, we extend the state of the system by introducing an auxiliary state y_{m+1}(·) (which is actually y_{m+1}(·) := y_{α,m+1}(·; t, x, ρ)). To simplify the notation, we will define

h_α(t, τ) := ess sup_{s∈[t,τ)} f(s, y(s), α(s)), τ ∈ (t, T]. (3.2)

The auxiliary variable takes the following form, for all τ ∈ [t, T],

y_{m+1}(τ) = max{ρ, h_α(t, τ)}. (3.3)

Note 3.2. The additional variable y_{m+1} "stores" the maximum value of the function f from the initial time t to the current time τ when its initial value is suitably chosen. More precisely, by considering ρ ≤ m_f, from (3.3), it follows that

y_{m+1}(T) = h_α(t, T). (3.4)
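In a discrete setting, the recursion behind the auxiliary variable is simply a running maximum seeded at ρ. A small sketch (the sample values of f along the path are arbitrary):

```python
def augmented_state(rho, f_values):
    """Discrete analogue of (3.3): the auxiliary coordinate at each step is
    max(rho, largest f observed so far). Returns its trajectory."""
    traj, current = [], rho
    for fv in f_values:
        current = max(current, fv)
        traj.append(current)
    return traj

f_along_path = [0.3, 0.9, 0.5, 0.7]   # hypothetical values f(s, y(s), a(s))

# With rho below every value of f, the auxiliary state ends at the running
# maximum of f, as in Note 3.2:
assert augmented_state(0.0, f_along_path)[-1] == 0.9
# With rho above every value of f, it stays constant at rho (the situation
# used later at the boundary rho = M_f):
assert augmented_state(2.0, f_along_path) == [2.0, 2.0, 2.0, 2.0]
```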

In this way, the cost functional becomes

J(t, x, ρ, α(·)) = y_{m+1}(T) + Ψ(y(T)),

and the function v, defined as

v(t, x, ρ) := inf_{α∈Ꮽ} J(t, x, ρ, α(·)), (3.5)

where t ∈ [0,T], x ∈ R^m, and ρ ∈ [m_f, M_f], can be seen as the optimal cost of an ordinary optimal control problem, that is, a problem with pure final cost.

Properties of the optimal cost v.
The following properties give some relations between the optimal costs u and v of the original and auxiliary problems. They are almost evident and can be proved without difficulty using the definitions of the original and auxiliary problems described above. Here, we omit the complete proofs for the sake of brevity and only sketch the lines of the argument.
(1) Let m_f(t,x) := min_{a∈A} f(t,x,a); then, for all t ∈ [0,T] and for all x ∈ R^m, u(t,x) = v(t, x, ρ) for every ρ ≤ m_f(t,x).
(2) The optimal cost v is nondecreasing with respect to the variable ρ.
(3) For ρ = M_f we have v(t, x, M_f) = M_f + û(t, x), where û is the optimal cost of the optimal control problem where the functional to be minimized is J(t, x, α(·)) = Ψ(y(T)). In this case y_{m+1}(s) = M_f for all s ∈ [t, T]; then, by replacing this equality in (3.5), we obtain the stated identity.
(4) The optimal cost v is bounded. This property follows from the definition of v and the boundedness of f and Ψ.
(5) The optimal cost v is Lipschitz continuous with respect to the variables t, x, and ρ.
This property follows from the hypotheses verified by f, g, and Ψ. The proof can be obtained essentially with the techniques used in [21].

Note 3.4. As we have pointed out in Note 2.1, it is possible to consider f and Ψ merely uniformly continuous with respect to their variables. In that case, v is still continuous with respect to the variables t, x, and ρ, and similar results can be obtained.

Dynamical programming equation.
In this new problem with the state augmented by the variable ρ, the dynamical programming equation is given by

v(t, x, ρ) = inf_{α∈Ꮽ} v(s, y(s), y_{m+1}(s)), ∀s ∈ (t, T). (3.10)

We also have the final condition

v(T, x, ρ) = ρ + Ψ(x). (3.11)

Both relations are almost evident and can be proved without difficulty using the classical argument of dynamic programming and the definition of the auxiliary problem described above. Here, we omit the complete proofs for the sake of brevity.
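To make the recursion concrete, here is a minimal discrete sketch of the augmented-state DPP. The one-dimensional model (dynamics y′ = a, running cost |y|, final cost y²), the finite control set, and all grid and step parameters are invented for the illustration; the final condition at time T is taken here as v(T, x, ρ) = ρ + Ψ(x):

```python
# Illustrative 1-D model for the augmented-state dynamic programming step:
# at each node (x, rho), take the best control and carry the running max of
# the instantaneous cost in the auxiliary coordinate.
A = [-1.0, 0.0, 1.0]                          # finite control set
X = [i * 0.25 for i in range(-8, 9)]          # state grid
R = [i * 0.25 for i in range(0, 9)]           # grid for the auxiliary rho
DT, N = 0.1, 10                               # time step, number of steps

f = lambda x, a: abs(x)                       # instantaneous cost
g = lambda x, a: a                            # dynamics y' = a
psi = lambda x: x * x                         # final cost
snap = lambda grid, val: min(grid, key=lambda p: abs(p - val))  # nearest node

# Final condition: v(T, x, rho) = rho + psi(x)
v = {(x, r): r + psi(x) for x in X for r in R}

for _ in range(N):                            # backward in time, as in (3.10)
    v = {(x, r): min(v[(snap(X, x + DT * g(x, a)),
                        snap(R, max(r, f(x, a))))] for a in A)
         for x in X for r in R}

# At x = 0 with rho = 0 the controller can stay at the origin forever,
# so the computed cost is zero.
assert v[(0.0, 0.0)] == 0.0
```

The auxiliary coordinate is updated by max(ρ, f), which is exactly the discrete counterpart of (3.3); the scheme also reproduces the monotonicity of v in ρ noted in Section 3.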

Hamilton-Jacobi-Bellman system.
and the boundary condition

v(t, x, M_f) = M_f + û(t, x)

for all (t, x) ∈ [0,T] × R^m, where û is the optimal cost of the optimal control problem where the functional to be minimized is J(t, x, α(·)) = Ψ(y(T)) (so that û(T, x) = Ψ(x) for all x ∈ R^m).

Viscosity solution.
We say that a function v is a solution in the viscosity sense of system (4.2) when it is both a subsolution and a supersolution in the viscosity sense of (4.2). These concepts are defined as follows.
In order to simplify the writing, we define Ω = (0,T) × R^m × (m_f, M_f). We also give the following definitions.
(1) The function w is a subsolution in the viscosity sense if (i) w is continuous on Ω; (ii) w is upper-bounded on Ω; (iii) w verifies the final condition (4.8) and the boundary condition (4.9).

The optimal cost as a viscosity solution
Theorem 4.4. The optimal cost v is a solution in the viscosity sense of the system (4.2).
Proof. The final condition is trivially verified from (3.11), and in Section 3.2 we have seen that v is continuous, bounded, and nondecreasing with respect to ρ.
From (3.3), for ρ = M_f, the variable ρ remains constant and so we are dealing with an ordinary optimal control problem. Then, with classical arguments (see [1,20]), we obtain that v verifies the upper condition (4.9). Now, it remains to prove the last condition of the subsolution's definition and the last one of the supersolution's definition.
First, we will prove that v is a subsolution of (4.2).
We suppose that f ≤ ρ. We will analyze the effect of this condition on the following relation (valid by virtue of (4.27)): We consider the set of controls Z_ε given by Then, for any n ≥ n_ε and s ∈ [t, t + n^{-1}], it results that α_n(s) ∈ Z_ε. As (t, x, ρ) is a maximum point for the function φ − v, we have Since v is nondecreasing in ρ, from (4.28), we get This inequality contradicts the initial assumption (4.27) and so we obtain, by reductio ad absurdum, that v is a supersolution.

Uniqueness of the viscosity solution
Theorem 4.5. There is a unique viscosity solution of system (4.2).
Proof. Let w be a subsolution and z a supersolution of (4.2). We will prove by reductio ad absurdum that w ≤ z.
In the case m_f = M_f, the result is obvious because in that case the conditions (4.8) and (4.9) verified by w and the conditions (4.9) and (4.11) verified by z imply that w ≤ z (the proof follows classical arguments (see [1,20]) and is omitted here for the sake of brevity). So, we will consider only the case m_f < M_f. We suppose that

r := sup{ w(t, x, ρ) − z(t, x, ρ) : (t, x, ρ) ∈ Ω } > 0; (4.45)

then, there exists (t_r, x_r, ρ_r) ∈ Ω such that For each ε > 0 and γ > 0, we consider the hump function (4.48) The elements of Φ have the following properties: We define the function

φ(s, y, ζ, t, x, ρ) = w(s, y, ζ) − z(t, x, ρ) + Φ(s, y, ζ, t, x, ρ). (4.50)

As w is a subsolution, it must verify H_ρ(t, x, ρ, ∇w) ≥ 0. This condition implies that, for all ρ ∈ [m_f, M_f) and for all ε > 0 such that ρ + ε ≤ M_f, we have w(s, y, ρ + ε) ≥ w(s, y, ρ) (the proof of this intuitive property is essentially contained in [23] and is omitted here). Since ρ_r < M_f, we can suppose, without loss of generality, that ε is small enough to verify ρ_r + √ε < M_f, and so Now, taking into account that Then, inequality (4.57) and the properties of ξ imply that Moreover, in the case m_f < M_f, the conditions (4.8) and (4.9) verified by w and the conditions (4.9) and (4.11) verified by z imply that (the proof is easy (see [1,20]) and is omitted here). We denote From (4.59) and the continuity of w and z, we have that there exists δ_r > 0 such that if x_µ ∈ B_r, y_µ ∈ B_r, and Finally, this condition and (4.58) imply, again for ε small enough, that Consequently, (s_µ, y_µ, ζ_µ, t_µ, x_µ, ρ_µ) is a bounded sequence which verifies (4.58). Then, we can suppose, without loss of generality, that In addition, from (4.58) and (4.65), we have and so, We define It is obvious that χ_{w,z} is an increasing function verifying χ_{w,z}(0) = 0. In addition, from the continuity of ψ, we get lim_{η→0} χ_{w,z}(η) = 0.
From the optimality of (s_ε, y_ε, ζ_ε, t_ε, x_ε, ρ_ε), we have, for all (t, x, ρ), From (4.76), we get (4.77). We define the function φ_1 ∈ C^1((0,T) × R^m × (m_f, M_f)) as follows: we have Letting successively ε and γ go to zero, we obtain r ≤ 0, which contradicts inequality (4.45). Then, z(t, x, ρ) ≥ w(t, x, ρ) for all (t, x, ρ) ∈ Ω. Finally, taking into account that any solution is at the same time a subsolution and a supersolution, we obtain the uniqueness of the solution.

Conclusion.
In this paper, we have analyzed some issues concerning the viscosity solution of an HJB system associated to a minimax optimal control problem with finite horizon and additive final cost.
In the first step, we have introduced an auxiliary problem which generalizes the original one. It is possible to get the solution of the original problem in terms of the solution of the auxiliary problem.
For this auxiliary problem, a dynamical programming principle (DPP) holds. In relation with this DPP, we have presented an HJB system defined in terms of a discontinuous Hamiltonian and we have proved that the optimal cost of the auxiliary problem is the unique viscosity solution of this HJB system.
Although this problem could be seen as a particular (deterministic) case of those treated by Barles, Daher, and Romano [4], there are several differences between the results contained in [4] and those presented in this paper, among them the following ones.
(1) We present direct proofs of the existence and uniqueness of the solution without requiring the treatment of a sequence of L^p problems. They are obtained through a different methodology based on control theory.
(2) In our paper, the HJB equation is deduced without the hypothesis Im(f) = [m_f, M_f]. So, our HJB equation is valid, in particular, for the cases where the controls take a finite number of values. In those special cases, the L^p penalization technique used by Barles, Daher, and Romano does not allow deducing it.
(3) The HJB system obtained here is simpler than the one presented in [4] because no conditions are required at the lower boundary (0,T) × R^m × {m_f}.
