© Hindawi Publishing Corp.

DUALITY MODELS FOR SOME NONCLASSICAL PROBLEMS IN THE CALCULUS OF VARIATIONS

Parametric and nonparametric necessary and sufficient optimality conditions are established for a class of nonconvex variational problems with generalized fractional objective functions and nonlinear inequality constraints containing arbitrary norms. Based on these optimality criteria, ten parametric and parameter-free dual problems are constructed and appropriate duality theorems are proved. These optimality and duality results contain, as special cases, similar results for minmax fractional variational problems involving square roots of positive semidefinite quadratic forms, as well as for variational problems with fractional, discrete max, and conventional objective functions, which are particular cases of the main problem considered in this paper. The duality models presented here subsume various existing duality formulations for variational problems and include variational generalizations of a great variety of cognate dual problems investigated previously in the area of finite-dimensional nonlinear programming by an assortment of ad hoc methods.


Introduction.
In this paper, we establish necessary and sufficient optimality conditions and construct a fairly large number of parametric and parameter-free duality models for the following unorthodox variational problem: (P) for all x satisfying the constraints of (P). Finite-dimensional counterparts of (P) are known as generalized fractional programming problems in the literature of mathematical programming. These problems have arisen in the areas of multiobjective programming [1], approximation theory [2, 3, 12, 16], goal programming [5, 11], and economics [15], among others.
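For orientation, the standard finite-dimensional generalized fractional program has the following shape (a generic sketch given for the reader's convenience; the symbols f_i, g_i, h_j are illustrative and are not the data of (P)):

```latex
\min_{x \in \mathbb{R}^n}\ \max_{1 \le i \le k}\ \frac{f_i(x)}{g_i(x)}
\quad \text{subject to} \quad h_j(x) \le 0, \quad j \in \underline{m},
```

where each g_i is positive on the feasible set; in (P) the ratios are formed from integral functionals whose integrands contain arbitrary norms.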
The notion of duality for generalized linear fractional programming was initially considered by von Neumann [15] in his investigation of economic equilibrium problems. More recently, various optimality criteria, duality formulations, and computational algorithms for several classes of generalized linear and nonlinear fractional programming problems have appeared in the related literature. A fairly extensive list of references pertaining to various aspects of these problems is given in [20].
In contrast to the finite-dimensional case, infinite-dimensional problems of this type and, in particular, variational problems with generalized fractional objective functions have not yet received much attention in the literature of optimization theory; consequently, at present no significant results of any kind are available for these problems.
In the present study, we will establish, under suitable convexity assumptions, both parametric and nonparametric necessary and sufficient optimality conditions, construct several parametric and parameter-free duality models, and prove appropriate duality theorems. Our approach for achieving these goals is based on a set of necessary optimality conditions for a related problem discussed in [4] and two ancillary problems that are intimately linked to (P). These problems will enable us to treat (P) within the framework of convex programming. As pointed out earlier, the optimality and duality results established in the present study improve and extend a number of similar existing results for variational problems and provide continuous analogues of many cognate results previously obtained in the area of nonlinear programming. In particular, they generalize the results of [17] and are closely related to those given in [18, 19].
The rest of this paper is organized as follows. In Section 2 we recall a set of necessary optimality conditions given in [4] for a special case of (P). In Section 3 we utilize these optimality conditions in conjunction with some other auxiliary results to establish both parametric and nonparametric necessary optimality principles for (P). We begin our discussion of duality for (P) in Section 4, where we introduce two parametric duality models and prove weak, strong, and strict converse duality theorems under appropriate convexity assumptions. In Sections 5 and 6 we formulate a total of eight parameter-free duality models for (P) and prove appropriate duality theorems. Finally, in Section 7 we briefly discuss an important special case of (P) which involves square roots of positive semidefinite quadratic forms.
It is evident that all the results obtained for (P) are also applicable, when appropriately specialized, to the following classes of variational problems with fractional, discrete max, and conventional objective functions, which are particular cases of (P): (P1), (P2), and (P3), where F (assumed to be nonempty) is the feasible set of (P). Although different concepts of duality have been discussed for various types of conventional variational (and optimal control) problems (see, e.g., [9, 13] and the references therein), constrained variational problems like (P1) and (P2) with nonstandard objective functions have not received much attention in the area of optimization theory. In contrast, their static analogues have been studied extensively during the last three decades. Recent surveys of fractional programming are given in [6, 14], and a fairly extensive bibliography is included in [14]. Similarly, a detailed account of discrete (and continuous) minmax theory and methods is available in [7].
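In generic terms, and with purely illustrative integrands (the actual displays of (P1)-(P3) also contain norm terms), these three specializations correspond to a single ratio, unit denominators, and both at once:

```latex
\text{(P1)}\colon\ \min_{x \in F}\ \frac{\int_a^b f(t, x, \dot{x})\, dt}{\int_a^b g(t, x, \dot{x})\, dt},
\qquad
\text{(P2)}\colon\ \min_{x \in F}\ \max_{1 \le i \le k} \int_a^b f_i(t, x, \dot{x})\, dt,
\qquad
\text{(P3)}\colon\ \min_{x \in F}\ \int_a^b f(t, x, \dot{x})\, dt.
```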
Evidently, a salient feature of (P) is the presence of arbitrary norms in its objective and constraint functions. Optimization problems involving norms occur in many areas of the decision sciences, applied mathematics, and engineering. These problems are encountered most frequently in location theory, approximation theory, and engineering design. A number of references dealing with various aspects of these problems are given in [17] (see also [4, 18, 19]).

Preliminaries.
In our derivation of optimality conditions for (P) in the next section, we will need an optimality result of [4]. However, it is easily seen from the abstract reformulation of the problem and the proof of [4, Theorem 1] that such integral inequality constraints can indeed be incorporated in the problem under consideration without any difficulty.
In the above theorem, the argument t of the vector-valued functions x, ẋ, x*, and ẋ* was omitted for the sake of notational simplicity. This practice will be continued throughout the sequel.

Optimality conditions.
In this section, we adopt a Dinkelbach-type [8] indirect parametric approach for establishing a set of necessary optimality conditions for (P). The intermediate auxiliary problem making this possible has the following form: (Pλ), where λ ∈ R+ ≡ [0, ∞) is a parameter. It is well known in the area of generalized fractional programming that this problem is closely related to (P). The relationship between (P) and (Pλ) needed for our present purposes is stated in the following lemma, whose proof is straightforward and hence omitted.
Lemma 3.1. Let λ* be the optimal value of (P) and let v(λ) be the optimal value of (Pλ) for any λ ∈ R+ such that (Pλ) has an optimal solution. If x* is an optimal solution of (P), then x* is an optimal solution of (Pλ*) and v(λ*) = 0.
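In the finite-dimensional setting, Lemma 3.1 is the basis of the classical Dinkelbach iteration: one seeks the root of v(λ) = 0 by alternately solving the parametric subproblem and updating λ to the current maximum ratio. A minimal sketch, with hypothetical ratio data and the inner subproblem solved by brute force over a grid (standing in for an exact solver of (Pλ)):

```python
# Dinkelbach-type iteration for min_x max_i f_i(x)/g_i(x), with g_i > 0.
# The inner problem min_x max_i [f_i(x) - lam*g_i(x)] plays the role of (Plam);
# here it is solved by brute force over a finite candidate grid.

def dinkelbach(fs, gs, candidates, tol=1e-10, max_iter=100):
    x = candidates[0]
    lam = max(f(x) / g(x) for f, g in zip(fs, gs))
    for _ in range(max_iter):
        # solve the parametric subproblem at the current lam
        x = min(candidates,
                key=lambda p: max(f(p) - lam * g(p) for f, g in zip(fs, gs)))
        v = max(f(x) - lam * g(x) for f, g in zip(fs, gs))
        if abs(v) <= tol:  # v(lam*) = 0 at the optimal value, as in Lemma 3.1
            break
        lam = max(f(x) / g(x) for f, g in zip(fs, gs))  # update the ratio
    return lam, x

# hypothetical data: minimize max((x^2+1)/(x+2), (x+3)/(x+1)) over [0, 3]
fs = [lambda x: x * x + 1, lambda x: x + 3]
gs = [lambda x: x + 2, lambda x: x + 1]
grid = [i / 1000 for i in range(3001)]
lam_star, x_star = dinkelbach(fs, gs, grid)
```

On this data the iteration stabilizes in a handful of steps, since λ decreases monotonically and v(λ) is strictly decreasing in λ.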
It is clear that (Pλ) is in turn equivalent to the following problem: (EPλ). In view of Lemma 3.1 and the equivalence of (Pλ) and (EPλ), it is evident that if x* is an optimal solution of (P) with optimal value λ*, then (x*, µ*) = (x*, 0) is an optimal solution of (EPλ*). We use this observation in the proof of Theorem 3.2, which is the main result of this section. We first specify our basic assumptions, which will remain in force throughout the sequel.
(a) The functions (b) The constraints of (P) satisfy Slater's constraint qualification (see Theorem 2.1).

Theorem 3.2. Let x* ∈ F be an optimal solution of (P). Then there exist λ* ∈ R+, u*, v*, α*_i, β*_i, i ∈ k, and γ*_j, j ∈ m, such that (3.3), (3.4), (3.5), (3.6), (3.7), and (3.8) hold for all t ∈ [a, b].

Proof. Since x* is an optimal solution of (P), by Lemma 3.1, it is an optimal solution of (Pλ*), where λ* is the optimal value of (P). This implies that (x*, µ*) = (x*, 0) is an optimal solution of (EPλ*). By hypothesis, there exists x̄ ∈ PWS^n[a, b] with x̄(a) = x_a and x̄(b) = x_b at which Slater's constraint qualification is satisfied. Because of the special structure of the constraints of (EPλ*), it is obvious that for some μ̄ ∈ R, Slater's constraint qualification holds for (EPλ*) at (x̄, μ̄). Therefore, by Theorem 2.1 (applied to (EPλ*)), there exist u*, v*, α*_i, β*_i, i ∈ k, and γ*_j, j ∈ m, as specified above, such that (3.3), (3.4), (3.5), (3.6), (3.7), and (3.8) hold.

In order to demonstrate that the necessary optimality conditions of Theorem 3.2 are also sufficient for optimality of x*, we need the generalized Cauchy-Schwarz inequality [10]: for each w, z ∈ R^n, one has

w^T z ≤ ‖w‖ ‖z‖*.  (3.9)

We also need the following lemma (Lemma 3.3), which provides an alternative expression for the objective function of (P); its proof is straightforward and hence omitted.

Theorem 3.4. Let x* ∈ F, let λ* = ϕ(x*), and assume that there exist u*, v*, α*_i, β*_i, i ∈ k, and γ*_j, j ∈ m, such that (3.3), (3.4), (3.5), (3.6), (3.7), and (3.8) hold for all t ∈ [a, b]. Then x* is an optimal solution of (P).
Proof. Let x be an arbitrary feasible solution of (P). Keeping in mind that u* ≥ 0 and λ* ≥ 0, by (3.9) and integration by parts we obtain the chain of relations (3.12), whose final expression is nonnegative by the feasibility of x. Now using this inequality and Lemma 3.3, we see that (3.13) holds. Since x was an arbitrary feasible solution, we conclude from the above inequality that x* is an optimal solution of (P).
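Inequality (3.9) pairs an arbitrary norm ‖·‖ with its dual norm ‖z‖* = sup{z^T w : ‖w‖ ≤ 1}; for example, the ℓ1- and ℓ∞-norms are dual to each other. A quick numerical sanity check on this dual pair (random vectors, purely illustrative):

```python
import random

def norm1(v):       # ||v||_1, the l1-norm
    return sum(abs(c) for c in v)

def norm_inf(v):    # ||v||_inf, the dual norm of ||.||_1
    return max(abs(c) for c in v)

def dot(w, z):
    return sum(a * b for a, b in zip(w, z))

random.seed(0)
vecs = [([random.uniform(-1, 1) for _ in range(5)],
         [random.uniform(-1, 1) for _ in range(5)]) for _ in range(1000)]

# inequality (3.9): w^T z <= ||w|| ||z||_* for the dual pair (l1, l-inf)
ok = all(dot(w, z) <= norm1(w) * norm_inf(z) + 1e-12 for w, z in vecs)

# equality is attained: take w = +/- e_j at a maximal component of z
z = vecs[0][1]
j = max(range(5), key=lambda i: abs(z[i]))
w = [0.0] * 5
w[j] = 1.0 if z[j] >= 0 else -1.0
tight = abs(dot(w, z) - norm1(w) * norm_inf(z)) < 1e-12
```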
An examination of the above proof suggests the following modification of Theorem 3.4; its proof is almost identical to that of Theorem 3.4.

Theorem 3.5. Consider the assumptions in Theorem 3.4 except that (3.3) is replaced by either one of the inequalities (3.14) and (3.15). Then x* is an optimal solution of (P).
Although Theorems 3.4 and 3.5 have almost identical proofs, it should be stressed that (3.3), (3.14), and (3.15) are essentially different conditions. First, it is evident that any (x, λ, u, v, α_1, ..., α_k, β_1, ..., β_k, γ_1, ..., γ_m) that satisfies the conditions specified in Theorem 3.4 also satisfies the requirements of Theorem 3.5, but the converse is not necessarily true; second, (3.14) and (3.15) are not, in general, transformable to (3.3); and third, (3.3) is a system of n equations, whereas (3.14) and (3.15) are single inequalities. Evidently, from a computational point of view, (3.3) is preferable to (3.14) and (3.15) because of the dependence of the latter two on the feasible set of (P).
The optimality conditions stated in Theorems 3.2, 3.4, and 3.5 contain the parameter λ*, which was introduced as a result of our indirect approach requiring an auxiliary parametric problem. However, reviewing the structure of these optimality conditions, one can easily see that this parameter can, in fact, be eliminated. Indeed, this can be accomplished simply by solving for λ* in (3.5), substituting the result into (3.3), simplifying, and redefining the multiplier vector. This process leads to the following parameter-free versions of Theorems 3.2, 3.4, and 3.5.

Theorem 3.6. A feasible solution x° of (P) is optimal if and only if there exist u°, v°, α°, β°, γ° such that (3.16), (3.17), (3.18), (3.19), (3.20), and (3.21) hold for all t ∈ [a, b], where the quantities appearing in these conditions are defined in (3.22).

Theorem 3.7. A feasible solution x° of (P) is optimal if and only if there exist u°, v°, α°, β°, γ° such that (3.16), (3.17), (3.18), (3.19), (3.20), (3.21), and either one of the following inequalities hold for all t ∈ [a, b]:

Duality model I.

Making use of Theorems 3.2, 3.4, and 3.5, in this section we formulate two parametric dual problems for (P) and prove weak, strong, and strict converse duality theorems. A number of parameter-free dual problems will be discussed in Sections 5 and 6.
Consider the following two problems:

(DI): Maximize λ subject to (4.1), (4.2), (4.3), (4.4), and (4.5);

(D̃I): Maximize λ subject to (4.1), (4.3), (4.4), (4.5), and (4.6)-(4.7).

Evidently, the structures of (DI) and (D̃I) are motivated primarily by the nature and contents of the optimality conditions established in Theorems 3.2, 3.4, and 3.5, which form the basis for the proofs of all the duality relations for (P)-(DI) and (P)-(D̃I).
Comparing (DI) and (D̃I), we observe that (D̃I) is relatively more general than (DI) in the sense that any feasible solution of (DI) is also feasible for (D̃I), but the converse is not necessarily true. Moreover, we see that (4.2) is a system of n equations, whereas (4.6) and (4.7) are two inequalities which in general cannot be expressed as equivalent systems of equations. Evidently, (DI) is preferable to (D̃I) from a computational point of view because of the dependence of the latter on the feasible set of (P). However, despite these apparent differences, it turns out that all the duality results that can be established for (P)-(DI) are also valid for (P)-(D̃I). Therefore, in the sequel we will consider only the pair (P)-(DI).
The next two theorems show that (DI) is a dual problem for (P).
Proof. Keeping in mind that λ ≥ 0 and u ≥ 0, by (3.9) and integration by parts we obtain the chain of relations (4.8), whose final expression is nonnegative by the primal feasibility of x. In view of (4.3), the above inequality reduces to (4.9). Now using this inequality and Lemma 3.3, we see, as in the proof of Theorem 3.4, that ϕ(x) ≥ λ.

Duality model II.
In the remainder of this paper, we will formulate and discuss several parameter-free duality models for (P) whose forms and features are based on Theorems 3.6 and 3.7. We begin with the following pair of dual problems: (DII) and (D̃II).
Throughout this section and the next one, it will be assumed that Φ(y, u) ≥ 0 and Ψ(y, u) > 0 for all (y, u) such that (y, u, v, α, β, γ) is a feasible solution of the dual problem under consideration.
We next proceed to state and prove weak, strong, and strict converse duality theorems for (P)-(DII).
Proof. Suppose to the contrary that x̃(t) ≠ x*(t) on a subset of [a, b] with positive length. From Theorem 5.2 we know that there exist u*, v*, α*, β*, γ* such that (z*, u*, v*, α*, β*, γ*) is an optimal solution of (DII) and ϕ(x*) = ψ(z*). Now proceeding as in the proof of Theorem 5.1 (with x replaced by x* and z by z̃), we arrive at the corresponding strict inequality. Using this inequality and Lemma 3.3, it can be shown, as in the proof of Theorem 5.1, that ϕ(x*) > ψ(z̃), in contradiction to the fact that ϕ(x*) = ψ(z*) = ψ(z̃). Therefore, it follows that x̃(t) = x*(t) for all t ∈ [a, b] and ϕ(x*) = ψ(z̃).
Next, we turn to a brief discussion of certain variants of (DII) and (D̃II). We show that the constraints (5.6) and (5.7) are superfluous and can be deleted without invalidating the foregoing duality results. More precisely, we demonstrate that the following reduced versions of (DII) and (D̃II), namely (DIIA) and (D̃IIA), given by (5.17) and (5.18), where the functions appearing in them are defined in (5.24), are also dual problems for (P). Since it may not be immediately apparent that (DIIA) and (D̃IIA) are dual problems for (P), we provide a proof for (P)-(DIIA).
Proof. The proof is similar to that of Theorem 5.3.

Proof. The proof is similar to that of Theorem 5.3.
We next identify two reduced versions of (DIII) and (D̃III), which are the counterparts of (DIIA) and (D̃IIA) introduced in the preceding section. These problems are obtained by dropping the constraints (6.6) and (6.7) and modifying the remaining constraints and objective functions of (DIII) and (D̃III). They take the following forms, where Γ and Π are as defined in the description of (DIIA) and (D̃IIA).

Problems containing square roots of positive semidefinite quadratic forms.

In this section, we briefly discuss an interesting special case of (P) obtained by choosing all the norms to be the ℓ2-norm.
Let ‖·‖_{L(i)}, ‖·‖_{M(i)}, i ∈ k, and ‖·‖_{N(j)}, j ∈ m, be the ℓ2-norm ‖·‖_2, and define P_i(t) = A_i(t)^T A_i(t), Q_i(t) = B_i(t)^T B_i(t), i ∈ k, and R_j(t) = C_j(t)^T C_j(t), j ∈ m. Then it is clear that P_i(t), Q_i(t), i ∈ k, and R_j(t), j ∈ m, are n × n symmetric positive semidefinite matrices and, consequently, the functions x(t) → [x(t)^T P_i(t) x(t)]^{1/2}, x(t) → [x(t)^T Q_i(t) x(t)]^{1/2}, i ∈ k, and x(t) → [x(t)^T R_j(t) x(t)]^{1/2}, j ∈ m, are convex on R^n. With these choices of the norms and matrices, (P), (P1), (P2), and (P3) take the following forms: (P*), (P1*), (P2*), and (P3*), where F* is the feasible set of (P*). To see more explicitly the changes that will be required in specializing the optimality conditions of Section 3 for (P*), we next combine, modify, and restate Theorems 3.2 and 3.4 for (P*). Mathematical programming problems containing square roots of quadratic forms have been the subject of numerous investigations. These problems have arisen in stochastic programming, multifacility location problems, and portfolio selection problems, among others. Many optimality and duality results for several classes of these problems have appeared in the related literature. A fairly long list of references pertaining to various aspects of these problems is given in [17].
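The identity behind this specialization, ‖A_i(t)x‖_2 = [x^T P_i(t) x]^{1/2} with P_i(t) = A_i(t)^T A_i(t), together with the positive semidefiniteness of P_i(t), is easy to verify numerically (hypothetical matrix data at a fixed t):

```python
import math
import random

random.seed(1)
m, n = 3, 4
# hypothetical matrix A(t) and point x(t) at a fixed t
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]

# left-hand side: ||A x||_2
Ax = [sum(A[r][c] * x[c] for c in range(n)) for r in range(m)]
lhs = math.sqrt(sum(v * v for v in Ax))

# P = A^T A is symmetric positive semidefinite
P = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
     for i in range(n)]
# right-hand side: [x^T P x]^{1/2}
rhs = math.sqrt(sum(x[i] * P[i][j] * x[j]
                    for i in range(n) for j in range(n)))

# P is PSD: y^T P y >= 0 for random y (up to roundoff)
psd_ok = all(
    sum(y[i] * P[i][j] * y[j] for i in range(n) for j in range(n)) >= -1e-12
    for y in [[random.uniform(-1, 1) for _ in range(n)] for _ in range(50)]
)
```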
is strictly convex on [a, b] for at least one index i ∈ k with the corresponding component ũ_i of ũ positive, or h_j(t, ·, ·) is strictly convex throughout [a, b] for at least one j ∈ m with the corresponding component ṽ_j(t) of ṽ(t) positive on [a, b]. Then x̃(t) = x*(t) for all t ∈ [a, b], that is, h_j(t, y, ẏ) + ‖C_j(t) y‖_{N(j)} ≥ 0, t ∈ [a, b],