MINIMIZATION OF NONSMOOTH INTEGRAL FUNCTIONALS

In this paper we examine optimization problems involving multidimensional nonsmooth integral functionals defined on Sobolev spaces. We obtain necessary and sufficient conditions for optimality in convex, finite dimensional problems using techniques from convex analysis, and in nonconvex, finite dimensional problems using the subdifferential of Clarke. We also consider problems with an infinite dimensional state space, and finally we present two examples.


INTRODUCTION.
The importance of the problem of minimizing a multidimensional integral functional defined on a Sobolev space is well documented in the books of Ekeland-Temam [3] and Ladyzhenskaya-Uraltseva [15]. Various problems in the calculus of variations, in the optimal control of distributed parameter systems and in mechanics involve such a minimization.
In this note we obtain some necessary and sufficient conditions for the existence of a minimum or $\varepsilon$-minimum of an integral functional defined on a Sobolev space. However, contrary to most works in the literature, we consider nonsmooth integrands. Using concepts and techniques from nonsmooth analysis, we obtain necessary and sufficient conditions for optimality in finite dimensional, convex problems (see theorem 3.1 and corollary 1), in finite dimensional nonconvex problems (see theorem 4.1) and in infinite dimensional problems (see theorem 5.1). Finally we present two examples illustrating the applicability of our results.

PRELIMINARIES.
In this section we briefly recall some of the basic notions and facts from nonsmooth analysis that we will need in the sequel. For more details we refer to the work of Clarke [2]. Let $X$ be a real normed space and $X^*$ its dual. Consider any function $f : X \to \overline{\mathbb{R}} = \mathbb{R} \cup \{+\infty\}$. The conjugate of $f(\cdot)$ is the function $f^* : X^* \to \overline{\mathbb{R}}$ defined by $f^*(x^*) = \sup\{\langle x^*, x \rangle - f(x) : x \in X\}$. Here $\langle \cdot, \cdot \rangle$ denotes the duality brackets for the pair $(X, X^*)$. It is clear from this definition that for all $x^* \in X^*$ and all $x \in X$ we have $\langle x^*, x \rangle \le f^*(x^*) + f(x)$. This inequality is known as the "Young-Fenchel inequality".
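As a simple illustration of these notions (a standard computation, added here for the reader's convenience), let $X$ be a Hilbert space, identified with its dual, and take $f(x) = \frac{1}{2}\|x\|^2$. Then
\[
f^*(x^*) = \sup_{x \in X}\left\{ \langle x^*, x \rangle - \tfrac{1}{2}\|x\|^2 \right\} = \tfrac{1}{2}\|x^*\|^2,
\]
the supremum being attained at $x = x^*$, and the Young-Fenchel inequality reduces to the elementary estimate $\langle x^*, x \rangle \le \frac{1}{2}\|x\|^2 + \frac{1}{2}\|x^*\|^2$.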
Given a proper convex function $f : X \to \overline{\mathbb{R}}$ (proper meaning that $f(\cdot)$ is not identically $+\infty$), the convex subdifferential of $f(\cdot)$ is the generally multivalued mapping $\partial f : X \to 2^{X^*}$ defined by $\partial f(x) = \{x^* \in X^* : \langle x^*, y - x \rangle \le f(y) - f(x) \text{ for all } y \in X\}$. The elements of $\partial f(x)$ are called subgradients of $f(\cdot)$ at $x$. It is clear that $\partial f(x)$ is always a closed, convex, possibly empty subset of $X^*$. Let $C$ be a closed convex subset of $X$ and let $\delta_C(\cdot)$ be its indicator function; i.e. $\delta_C(x) = 0$ if $x \in C$, $+\infty$ otherwise. Then $\partial \delta_C(x) \ne \emptyset$ if and only if $x \in C$, and $\partial \delta_C(x) = N_C(x) = \{x^* \in X^* : \langle x^*, y - x \rangle \le 0 \text{ for all } y \in C\}$, the normal cone to $C$ at $x$. If $C$ is an affine space parallel to a subspace $V$, then $N_C(x) = V^\perp$. If $f(\cdot)$ is Gateaux differentiable at $x$, then $\partial f(x) = \{\nabla f(x)\}$. Also, using the subdifferential, we have the following generalization of a well known optimality condition concerning the minimum of $f(\cdot)$: for a proper convex function $f(\cdot)$, the (global) minimum of $f(\cdot)$ over $X$ is attained at $x \in X$ if and only if $0 \in \partial f(x)$. Also it is easy to check that $x^* \in \partial f(x)$ if and only if $f(x) + f^*(x^*) = \langle x^*, x \rangle$ ("Young-Fenchel equality"). More generally, given any $\varepsilon \ge 0$ and a proper convex function $f(\cdot)$, the $\varepsilon$-subdifferential of $f(\cdot)$ at $x$ is defined by $\partial_\varepsilon f(x) = \{x^* \in X^* : \langle x^*, y - x \rangle - \varepsilon \le f(y) - f(x) \text{ for all } y \in X\}$. Clearly if $\varepsilon = 0$, we recover the convex subdifferential defined above. If $f \in \Gamma_0(X) = \{\text{proper, lower semicontinuous, convex functions}\}$ and $x \in \operatorname{dom} f = \{x \in X : f(x) < \infty\}$, then $\partial_\varepsilon f(x) \ne \emptyset$ for $\varepsilon > 0$. Dually, we can define $\partial_\varepsilon f(x)$ by saying that $x^* \in \partial_\varepsilon f(x)$ if and only if $f(x) + f^*(x^*) - \langle x^*, x \rangle \le \varepsilon$. Again $\partial_\varepsilon \delta_C(x) = N_C^\varepsilon(x) = \{x^* \in X^* : \langle x^*, y - x \rangle \le \varepsilon \text{ for all } y \in C\}$, the set of $\varepsilon$-normals to $C$ at $x$. Note that $N_C^\varepsilon(x)$, $\varepsilon > 0$, is no longer a cone.
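To fix ideas, consider the standard one dimensional example $f(x) = |x|$ on $X = \mathbb{R}$ (again a well known computation, inserted for illustration). Here $f^*(x^*) = \delta_{[-1,1]}(x^*)$ and
\[
\partial f(x) = \begin{cases} \{-1\} & \text{if } x < 0, \\ [-1, 1] & \text{if } x = 0, \\ \{1\} & \text{if } x > 0, \end{cases}
\]
so $0 \in \partial f(0)$ recovers the fact that the global minimum is attained at $x = 0$. Moreover, using the dual characterization above, for $x > 0$ we get $\partial_\varepsilon f(x) = [\max(-1,\, 1 - \varepsilon/x),\, 1]$, which strictly contains $\partial f(x) = \{1\}$ whenever $\varepsilon > 0$.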
Let $f : X \to \mathbb{R}$ be a locally Lipschitz function. Following Clarke [2], we define $f^0(x; h) = \limsup_{u \to x,\ \lambda \downarrow 0} \frac{f(u + \lambda h) - f(u)}{\lambda}$, the generalized directional derivative of $f(\cdot)$ at $x$ in the direction $h$. It is easy to check that $f^0(x; \cdot)$ is finite, positively homogeneous, subadditive and satisfies $|f^0(x; h)| \le k \|h\|$, with $k$ a Lipschitz constant of $f(\cdot)$ near $x$. So we can define the set $\partial f(x) = \{x^* \in X^* : \langle x^*, h \rangle \le f^0(x; h) \text{ for all } h \in X\}$. We call $\partial f(x)$ the Clarke (or generalized) subdifferential of $f(\cdot)$ at $x \in X$. If $f(\cdot)$ is also convex, then the Clarke and convex subdifferentials coincide.
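A classical example (inserted here as an illustration) is $f(x) = -|x|$ on $X = \mathbb{R}$. The triangle inequality gives $|u| - |u + \lambda h| \le \lambda |h|$, while the choice $u = -\lambda h$ realizes this bound, so
\[
f^0(0; h) = \limsup_{u \to 0,\ \lambda \downarrow 0} \frac{|u| - |u + \lambda h|}{\lambda} = |h|,
\]
and hence $\partial f(0) = [-1, 1]$, even though $f(\cdot)$ has a strict local maximum at the origin.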
Let $f'(x; h) = \lim_{\lambda \downarrow 0} \frac{f(x + \lambda h) - f(x)}{\lambda}$. If $f'(x; h)$ exists for all $h \in X$ and $f'(x; h) = f^0(x; h)$ for all $h \in X$, then $f(\cdot)$ is said to be regular at $x$. This is a fairly large class of functions that includes the continuous convex functions and the functions that are strictly differentiable at $x$.
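The example $f(x) = -|x|$ above also shows that regularity can fail: for $h \ne 0$,
\[
f'(0; h) = \lim_{\lambda \downarrow 0} \frac{-|\lambda h|}{\lambda} = -|h| \ne |h| = f^0(0; h),
\]
so $f(\cdot)$ is not regular at $0$, while every continuous convex function is regular everywhere.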

Q.E.D.
Using partial conjugates of $L(z, \cdot, \cdot)$ with respect to $x$ and $y$ respectively, we can obtain the following necessary condition for optimality concerning $(*)$. By $L^{*1}$ (resp. $L^{*2}$) we will denote the conjugate of $L(z, \cdot, y)$ (resp. of $L(z, x, \cdot)$).
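To illustrate these partial conjugates, take for instance the smooth convex integrand $L(z, x, y) = \frac{1}{2}|x|^2 + \frac{1}{2}\|y\|^2$ (a hypothetical integrand chosen purely for illustration). Then
\[
L^{*1}(z, x^*, y) = \sup_{x}\left\{ x^* x - L(z, x, y) \right\} = \tfrac{1}{2}|x^*|^2 - \tfrac{1}{2}\|y\|^2, \qquad L^{*2}(z, x, y^*) = \tfrac{1}{2}\|y^*\|^2 - \tfrac{1}{2}|x|^2 .
\]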
Q.E.D.

NONCONVEX PROBLEMS.
In this section we drop the convexity hypothesis on $L(z, \cdot, \cdot)$ and instead assume that $L(z, \cdot, \cdot)$ is Lipschitz. Using Clarke's subdifferential, we derive a necessary condition for optimality in this nonconvex problem. So our hypothesis on $L(\cdot, \cdot, \cdot)$ is now the following: $H(L)'$: $L : Z \times \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ is an integrand s.t.

INFINITE DIMENSIONAL EVOLUTIONS.
In this section $n = 1$ (time variable) and the state space is infinite dimensional. So let $T = [0, b]$ and let $X$ be a separable, reflexive Banach space. We consider the following optimization problem:
\[
\min\left\{ \int_0^b L(t, \dot{x}(t))\, dt : x(0) = x_0,\ x(b) = x_1,\ x(\cdot) \in W^{1,p}(T, X) \right\} \quad (**)
\]
Here $W^{1,p}(T, X)$ is the space of all $X$-valued distributions $x(\cdot)$ s.t. $\dot{x} \in L^p(T, X)$, the derivative being taken in the sense of distributions. Recall that $W^{1,p}(T, X)$ is a Banach space with the norm $\|x\|_{W^{1,p}} = \left[ \|x\|_{L^p}^p + \|\dot{x}\|_{L^p}^p \right]^{1/p}$. Also, it is well known that each function in $W^{1,p}(T, X)$ is almost everywhere equal to an absolutely continuous function, and so the boundary conditions in (**) make sense.
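Indeed (a standard fact which we recall for clarity), the absolutely continuous representative of $x(\cdot) \in W^{1,p}(T, X)$ satisfies
\[
x(t) = x(0) + \int_0^t \dot{x}(s)\, ds, \qquad t \in T,
\]
the integral being a Bochner integral, so that $t \mapsto x(t)$ is continuous from $T$ into $X$ and the pointwise constraints $x(0) = x_0$, $x(b) = x_1$ are meaningful.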
We have the following necessary and sufficient condition for (approximate) optimality in (**)' and hence in the equivalent problem (**).
By taking $y = \dot{x}(t)$ for every $t \in T$, we can see that $e(t) \ge 0$. Also observe that, given $\varepsilon \ge 0$, $e(t) > \varepsilon$ if and only if there exists $y \in X$ s.t.
Invoking theorem 3.1 of this paper (special case where the integrand is independent of $(z, x)$), we have that $x(\cdot) \in V$ is a solution of $(***)$ if and only if there exists $v^*(\cdot) \in L^\infty(Z)$ s.t.
If we express these optimality conditions in terms of the subdifferential of the cost integrand $(1 + \|y\|^2)^{1/2}$, we have $v^* \in \partial J(x) = \{\nabla J(x)\}$, i.e. $v^*(z) = Dx(z) / (1 + \|Dx(z)\|^2)^{1/2}$, and so clearly $\operatorname{div} v^* = 0$. These are the optimality conditions obtained by Ekeland-Temam [3], chapter V, section 1.1 and chapter X, section 4.2.
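As a quick consistency check (our computation, based on the formulas above): for the integrand $j(y) = (1 + \|y\|^2)^{1/2}$ we have
\[
\nabla j(y) = \frac{y}{(1 + \|y\|^2)^{1/2}},
\]
so $\operatorname{div} v^* = 0$ is precisely the nonparametric minimal surface equation $\operatorname{div}\left( Dx / (1 + \|Dx\|^2)^{1/2} \right) = 0$; for instance, every affine function $x(z) = \langle a, z \rangle + \beta$ solves it, since then $Dx \equiv a$ and $v^*$ is constant.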