MINIMUM PRINCIPLE AND CONTROLLABILITY FOR MULTIPARAMETER DISCRETE INCLUSIONS VIA DERIVED CONES

We consider a multiparameter discrete inclusion and prove that the reachable set of an associated variational multiparameter discrete inclusion is a derived cone, in the sense of Hestenes, to the reachable set of the discrete inclusion. This result allows us to obtain sufficient conditions for local controllability along a reference trajectory and a new proof of the minimum principle for an optimization problem given by a multiparameter discrete inclusion with endpoint constraints.


Introduction
The concept of a derived cone to an arbitrary subset of a normed space was introduced by Hestenes in [8] and successfully used to obtain necessary optimality conditions in control theory. However, in recent years, this concept has been largely ignored in favor of other concepts of tangent cones that may intrinsically be associated to a point of a given set: the cone of interior directions, the contingent cone, the quasitangent cone and, above all, Clarke's tangent cone.
In our previous papers [3, 4, 5, 6, 7], we identified certain derived cones to the reachable sets of "ordinary" differential inclusions, hyperbolic differential inclusions, and some other classes of discrete inclusions in terms of the variational inclusion associated to the differential or discrete inclusion. These results allowed us to obtain a simple proof of the maximum principle in optimal control and sufficient conditions for local controllability along a reference trajectory.
In the present paper, we consider a multiparameter discrete inclusion that describes the Roesser model and we prove that the reachable set of a certain variational multiparameter discrete inclusion is a derived cone in the sense of Hestenes to the reachable set of the discrete inclusion.
As applications of our main result, we point out the possibility to obtain some refinements of the existing results in the theory of necessary optimality conditions and also in controllability theory for multiparameter discrete inclusions.
Optimal control problems for systems described by discrete inclusions have been studied by many authors ([2, 10, 12], etc.). In the framework of multivalued problems, necessary optimality conditions for a problem without endpoint constraints were obtained in [10] and improved afterwards in [12]. The idea in [12] is to use a special (Warga's) open mapping theorem to obtain a sufficient condition for the discrete inclusion to be locally controllable around a given trajectory and, as a consequence, via a separation result, to obtain the minimum (maximum) principle.
In contrast with the approach in [12], even though the problem studied in the present paper is more difficult because of the endpoint constraints, our method seems conceptually very simple, relying only on a few clear-cut steps and using a minimum of auxiliary results.
The paper is organized as follows. In Section 2 we present the notations and preliminary results to be used in the sequel. Section 3 is devoted to our main result, while in Section 4 we present the above-mentioned applications concerning controllability and necessary optimality conditions.

Preliminaries
For a set that is, in general, neither a differentiable manifold nor a convex set, its infinitesimal properties may be characterized only by tangent cones in a generalized sense, extending the classical concepts of tangent cones in differential geometry and convex analysis, respectively.
From the rather large number of "convex approximations," "tents," "regular tangent cones," and so forth, in the literature, we choose the concept of a derived cone introduced by Hestenes in [8].

Definition 2.1 [8]. A subset M ⊂ R^n is said to be a derived set to X ⊂ R^n at x ∈ X if for any finite subset {v_1, ..., v_k} ⊂ M there exist s_0 > 0 and a continuous mapping a(·) : [0, s_0]^k → X such that a(0) = x and a(·) is differentiable at s = 0 with the derivative given by

Da(0)(θ) = Σ_{j=1}^{k} θ_j v_j, ∀ θ = (θ_1, ..., θ_k) ∈ [0, s_0]^k,

in the sense that

lim_{[0,s_0]^k ∋ θ → 0} ‖a(θ) − a(0) − Σ_{j=1}^{k} θ_j v_j‖ / ‖θ‖ = 0.

A subset C ⊂ R^n is said to be a derived cone of X at x if it is a derived set and also a convex cone.
For the basic properties of derived sets and cones we refer to Hestenes [8]; we recall that if M is a derived set, then M ∪ {0}, as well as the convex cone generated by M, defined by

cone(M) = { Σ_{j=1}^{k} λ_j v_j : λ_j ≥ 0, v_j ∈ M, k ∈ N },

is also a derived set, hence a derived cone.
The fact that the derived cone is a proper generalization of the classical concepts in differential geometry and convex analysis is illustrated by the following results [8]: if X ⊂ R^n is a differentiable manifold and T_x X is the tangent space in the sense of differential geometry to X at x ∈ X, then T_x X is a derived cone; also, if X ⊂ R^n is a convex subset, then the tangent cone in the sense of convex analysis, defined by

TC_x X = cl{ λ(y − x) : λ ≥ 0, y ∈ X },

is also a derived cone. By cl A we denote the closure of the set A ⊂ R^n. Since any convex subcone of a derived cone is also a derived cone, such an object may not be uniquely associated to a point x ∈ X; moreover, simple examples show that even a derived cone that is maximal with respect to set inclusion may not be uniquely defined: if the set X ⊂ R^2 is defined by

X = { (x_1, x_2) ∈ R^2 : x_1 x_2 = 0 },   (2.6)

then C_1 = R × {0} and C_2 = {0} × R are both maximal derived cones of X at the point (0,0) ∈ X.
On the other hand, the up-to-date experience in nonsmooth analysis shows that for some problems, the use of one of the intrinsic tangent cones may be preferable. From the multitude of intrinsic tangent cones in the literature (e.g., [1]), the contingent, the quasitangent, and Clarke's tangent cones, defined, respectively, by

K_x X = { v ∈ R^n : ∃ s_m → 0+, ∃ v_m → v such that x + s_m v_m ∈ X, ∀ m ∈ N },
Q_x X = { v ∈ R^n : ∀ s_m → 0+, ∃ v_m → v such that x + s_m v_m ∈ X, ∀ m ∈ N },
C_x X = { v ∈ R^n : ∀ (x_m, s_m) → (x, 0+) with x_m ∈ X, ∃ y_m ∈ X such that (y_m − x_m)/s_m → v },

seem to be among the most often used in the study of different problems involving nonsmooth sets and mappings.
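As a simple illustration of these three cones (an editorial example, not contained in the original text), consider the graph of the absolute value function:

```latex
% X is the graph of t -> |t|, a nonsmooth "thin" set in the plane:
X=\{(t,|t|)\;:\;t\in\mathbb{R}\}\subset\mathbb{R}^{2},\qquad x=(0,0).
% Approaching x along either branch separately gives
K_{x}X=Q_{x}X=\{(v,|v|)\;:\;v\in\mathbb{R}\},
% while Clarke's condition must hold uniformly for base points x_m on
% BOTH branches, which forces
C_{x}X=\{(0,0)\}.
```

The example shows how strict the inclusion C_x X ⊂ Q_x X can be, and that Q_x X need not be convex even when C_x X is.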
We recall that, in contrast with the contingent and quasitangent cones, Clarke's tangent cone C_x X is always convex, and one has

C_x X ⊂ Q_x X ⊂ K_x X.   (2.7)

It follows from Definition 2.1 and from (2.7) that if C ⊂ R^n is a derived cone to X at x, then C ⊂ Q_x X. On the other hand, example (2.6), for which C_{(0,0)} X = {0}, shows that a derived cone may not be contained in the cone C_x X.
We recall that two cones C_1, C_2 ⊂ R^n are said to be separable if there exists q ∈ R^n \ {0} such that

⟨q, v⟩ ≤ 0 ≤ ⟨q, w⟩, ∀ v ∈ C_1, ∀ w ∈ C_2.   (2.8)

We denote by C^+ the positive dual cone of C ⊂ R^n:

C^+ = { q ∈ R^n : ⟨q, v⟩ ≥ 0, ∀ v ∈ C }.

The negative dual cone of C is C^− = −C^+.

The following "intersection property" of derived cones, obtained by Miricȃ [11], is a key tool in the proof of necessary optimality conditions.

Lemma 2.2 [11]. If C_1 and C_2 are derived cones to the sets X_1 ⊂ R^n and X_2 ⊂ R^n, respectively, at the same point x ∈ X_1 ∩ X_2, and if C_1 and C_2 are not separable, then C_1 ∩ C_2 is a derived cone to X_1 ∩ X_2 at x.   (2.10)

For a mapping g(·) : X ⊂ R^n → R which is not differentiable, the classical (Fréchet) derivative is replaced by some generalized directional derivatives. We recall only the upper right-contingent derivative, defined by

D_K g(x; v) = limsup_{s→0+, v'→v} (g(x + s v') − g(x))/s,   (2.11)

and, in the case when g(·) is locally Lipschitz at x ∈ int(X), Clarke's generalized directional derivative, defined by

g^0(x; v) = limsup_{y→x, s→0+} (g(y + s v) − g(y))/s.   (2.12)

The results in the next section will be expressed, in the case where g(·) is locally Lipschitz at x, in terms of the Clarke generalized gradient, defined by

∂_C g(x) = { q ∈ R^n : ⟨q, v⟩ ≤ g^0(x; v), ∀ v ∈ R^n }.   (2.13)

By 𝒫(R^n) we denote the family of all subsets of R^n.
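For example (a standard computation, added here for illustration), for the absolute value function g(x) = |x| on R at x = 0 one obtains:

```latex
g^{0}(0;v)=\limsup_{y\to 0,\ s\to 0+}\frac{|y+sv|-|y|}{s}=|v|,
\qquad
\partial_{C}g(0)=\{q\in\mathbb{R}\;:\;qv\le|v|\ \forall v\in\mathbb{R}\}=[-1,1],
```

so the generalized gradient collapses to the usual derivative {g'(x)} wherever g is continuously differentiable, and fills in the whole subdifferential interval at the kink.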
Corresponding to each type of tangent cone, say τ_x X, one may introduce (e.g., [1]) a set-valued directional derivative of a multifunction G(·) : X ⊂ R^n → 𝒫(R^n) (in particular, of a single-valued mapping) at a point (x, y) ∈ Graph(G) as follows:

τ G((x, y); v) = { w ∈ R^n : (v, w) ∈ τ_{(x,y)} Graph(G) }, v ∈ τ_x X.   (2.14)

We recall that a set-valued map A(·) : R^n → 𝒫(R^n) is said to be a convex (resp., closed convex) process if Graph(A(·)) ⊂ R^n × R^n is a convex (resp., closed convex) cone. For the basic properties of convex processes we refer to [1], but we will use here only the above definition.
In what follows we are concerned with the discrete inclusion

x_{ij} ∈ F_{ij}(x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}), i, j = 0, ..., N, x_{ij} = 0 for i < 0 or j < 0,   (2.15)

where F_{ij}(·,·,·) : R^{3n} → 𝒫(R^n), i, j = 0, ..., N, are given set-valued maps. Denote by S_F the solution set of inclusion (2.15), that is,

S_F = { x = (x_{ij})_{i,j=0,...,N} : x satisfies (2.15) },

and by R^N_F := { x_{NN} : x ∈ S_F } the reachable set of inclusion (2.15). We consider a solution x = (x_{ij})_{i,j=0,...,N} ∈ S_F of (2.15). In the sequel we will assume the following hypothesis.
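Since (2.15) prescribes each x_{ij} through its three "past" neighbors, the solution set can in principle be enumerated by sweeping the grid in lexicographic order. The following sketch (an editorial illustration with an invented toy scalar inclusion, not from the paper) enumerates S_F and the reachable set R^N_F when every F_{ij} is finite-valued:

```python
from itertools import product

def reachable_set(F, N):
    """Enumerate all solutions of the multiparameter discrete inclusion
    x[i][j] in F(i, j, x[i][j-1], x[i-1][j], x[i-1][j-1]) on the grid
    {0,...,N}^2 (boundary values 0 for negative indices) and collect
    the terminal states x[N][N]."""
    def get(x, i, j):
        return x.get((i, j), 0)  # boundary condition for i < 0 or j < 0

    # lexicographic order guarantees the three predecessors come first
    points = sorted(product(range(N + 1), repeat=2))
    solutions = [{}]
    for (i, j) in points:
        new_solutions = []
        for x in solutions:
            for v in F(i, j, get(x, i, j - 1), get(x, i - 1, j), get(x, i - 1, j - 1)):
                y = dict(x)
                y[(i, j)] = v
                new_solutions.append(y)
        solutions = new_solutions
    return {x[(N, N)] for x in solutions}

# toy scalar inclusion: sum of the three predecessors plus a choice u in {0, 1}
F = lambda i, j, a, b, c: {a + b + c + u for u in (0, 1)}
print(sorted(reachable_set(F, 1)))  # prints [0, 1, 2, 3, 4, 5, 6]
```

The enumeration is exponential in the number of grid points, so it serves only to make the objects S_F and R^N_F concrete on small examples.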
In order to associate the linearized (variational) inclusion to our problem we need the following hypothesis.
Let A_0 be a derived cone to F_00(0,0,0) at x_00. To the problem (2.15) we associate the linearized problem

w_{ij} ∈ A_{ij}(w_{i,j−1}, w_{i−1,j}, w_{i−1,j−1}), i, j = 0, ..., N, i + j > 0, w_00 ∈ A_0,   (2.18)

with the boundary conditions w_{ij} = 0 for i < 0 or j < 0. Denote by S_A the solution set of inclusion (2.18) and by R^N_A the reachable set of inclusion (2.18).
We recall that if A : R^n → 𝒫(R^m) is a set-valued map, then the adjoint of A is the multifunction A* : R^m → 𝒫(R^n) defined by

A*(q) = { p ∈ R^n : ⟨p, u⟩ ≤ ⟨q, v⟩, ∀ (u, v) ∈ Graph(A) }.   (2.19)

In the study of our optimization problem we need the next duality result.
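With this convention (⟨p, u⟩ ≤ ⟨q, v⟩ on the graph, as used for convex processes in [1]), the adjoint of a single-valued linear map reduces to the usual transpose; the following verification is an editorial addition, not part of the original text:

```latex
A(u)=\{Mu\},\quad M\in\mathbb{R}^{m\times n}
\ \Longrightarrow\
A^{*}(q)=\{p\in\mathbb{R}^{n}\;:\;\langle p,u\rangle\le\langle q,Mu\rangle
=\langle M^{T}q,u\rangle\ \ \forall u\in\mathbb{R}^{n}\}=\{M^{T}q\},
% since the inequality applied to both u and -u forces
% <p - M^T q, u> = 0 for every u.
```

This is why, in Corollary 4.4 below, the adjoint operators of the Jacobians appear as matrix transposes.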
Finally, we recall the definition of local controllability.

Definition 2.7. Inclusion (2.15) is said to be locally controllable around the solution x if x_{NN} ∈ int(R^N_F).

The main result
We prove that the reachable set R^N_A of the variational multiparameter inclusion (2.18) is a derived cone to the reachable set R^N_F at x_{NN}.

Theorem 3.1. Let A_0 ⊂ R^n be a derived cone to F_00(0,0,0) at x_00 and assume that Hypotheses 2.3 and 2.4 are satisfied. Then R^N_A is a derived cone to R^N_F at x_{NN}.
From the compactness of F_00(0,0,0) and the fact that the values of F_{ij}(·,·,·) are compact, there exists a mapping r(·) such that

‖x_00 − r_00(x)‖ = d(x_00, F_00(0,0,0)).

Moreover, by the convexity of the values of F_{ij}(·,·,·), the mapping r(·) is continuous. On the other hand, we have (3.13); therefore, there exists l > 0, depending only on L, such that the required estimate holds. Finally, we define the mapping a(·) on S. Obviously, a(·) is continuous on S and satisfies a(0) = x_{NN}.
To end the proof, we need to show that a(·) is differentiable at s_0 = 0 ∈ S and that its derivative is the one required by Definition 2.1, which is equivalent to the limit relation (3.18).

Applications
An important application of Theorem 3.1 concerns the local controllability of the discrete inclusion (2.15) in the sense of Definition 2.7.
Apart from Theorem 3.1, which characterizes a derived cone to the reachable set, the main tool in the study of controllability is the remarkable property ([8, Theorem 4.7.4]) of derived cones, according to which x ∈ int(X) if and only if R^n is a derived cone to X at x. Therefore, a straightforward application of this result and of Theorem 3.1 gives the following result.

We consider next the problem of minimizing the cost

g(x_{NN})   (4.1)

over the solutions of the multiparameter discrete inclusion

x_{ij} ∈ F_{ij}(x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}), i, j = 0, ..., N,   (4.2)

with endpoint constraints of the form

x_{NN} ∈ X_N,   (4.3)

where F_{ij}(·) : R^{3n} → 𝒫(R^n), i, j = 0, ..., N, are given set-valued maps, X_N ⊂ R^n, and g(·) : R^n → R is also a given function.
In what follows we obtain necessary optimality conditions for a solution x ∈ S_F of problem (4.1)-(4.3) in the form of a minimum principle. The proof relies mainly on the "intersection property" of derived cones obtained by Miricȃ (Lemma 2.2 above). A last step uses the duality results of Tuan and Ishizuka in [12], which characterize the positive dual of the solution set of the variational inclusion associated to (4.2) in terms of the adjoint inclusion.

Theorem 4.2. Let X_N ⊂ R^n be a closed set, let x ∈ S_F be an optimal solution for problem (4.1)-(4.3) such that Hypothesis 2.4 is satisfied, and let g(·) : R^n → R be a locally Lipschitz function.
In the case when C_N and C_1 are not separable, we have (4.8). From a simple separation result (e.g., [11, Lemma 5.1]), from the definition of Clarke's generalized gradient, and from (4.8), we obtain the existence of q ∈ ∂_C g(x_{NN}) ∩ ((C_N)^+ + (C_1)^+). Hence there exist q_1 ∈ (C_N)^+ and q_2 ∈ (C_1)^+ such that q = q_1 + q_2. As in the first case, using Corollary 2.6, we deduce the existence of u^1_{ij}, u^2_{ij}, u^3_{ij} ∈ R^n such that u^1_{01} + u^2_{10} + u^3_{11} ∈ A_0^+ and (4.4) holds. As in the first case, from (4.10) we obtain (4.6). We take in this case λ = 1, and (4.7) is also verified.
(i) The function f_{ij}(·,·,·,u_{ij}) is Lipschitz for every fixed u_{ij} ∈ U_{ij}, and the function f_{ij}(x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}, ·) is continuous for every fixed x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}.
(ii) The function f_{ij}(·,·,·,u_{ij}) is differentiable at (x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}).
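Definition (4.12) is not reproduced in the extracted text; presumably it builds F_{ij} from the parametrized dynamics f_{ij} and the control sets U_{ij} as F_{ij}(x, y, z) = { f_{ij}(x, y, z, u) : u ∈ U_{ij} } (a hypothetical reading, stated here only for orientation). A minimal sketch under that assumption, with an invented toy example:

```python
def inclusion_from_control_system(f, U):
    """Build the set-valued map F(x, y, z) = { f(x, y, z, u) : u in U }
    from single-valued dynamics f and a finite control set U.
    (Hypothetical reading of definition (4.12), which is missing from
    the extracted text.)"""
    def F(x, y, z):
        return {f(x, y, z, u) for u in U}
    return F

# toy scalar example: additive controls u in {0, 1}
f = lambda x, y, z, u: x + y + z + u
F = inclusion_from_control_system(f, {0, 1})
```

In this parametrized case the convexity and compactness assumptions on the values of F_{ij} translate into conditions on f_{ij} and U_{ij}, which is what Hypothesis 4.3 records.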
Corollary 4.4. Let x = (x_{ij})_{i,j=0,...,N} ∈ S_F be an optimal solution for problem (4.1)-(4.2), with F_{ij} defined by (4.12), such that Hypothesis 4.3 is satisfied. Consider a closed set X_N ⊂ R^n and a locally Lipschitz function g(·) : R^n → R.
Then, for any derived cone C_1 of X_N at x_{NN}, there exist q ∈ R^n and a solution (p_{ij}), p_{ij} ∈ R^n, such that

p_{ij} = (A^1_{i,j+1})* p_{i,j+1} + (A^2_{i+1,j})* p_{i+1,j} + (A^3_{i+1,j+1})* p_{i+1,j+1}, 0 < i + j < 2N,
p_{ij} = 0, for i > N or j > N,
p_{NN} ∈ λ ∂_C g(x_{NN}) − C_1^+,
⟨p_{ij}, x_{ij}⟩ = min{ ⟨p_{ij}, v⟩ : v ∈ F_{ij}(x_{i,j−1}, x_{i−1,j}, x_{i−1,j−1}) }, 0 < i + j.

Proof. We put A_{ij} = (A^1_{ij}, A^2_{ij}, A^3_{ij}), and thus A_{ij} satisfies Hypothesis 2.4. We apply Theorem 4.2 and find q ∈ R^n and u^1_{ij}, u^2_{ij}, u^3_{ij} ∈ R^n such that

(u^1_{ij}, u^2_{ij}, u^3_{ij}) = A*_{ij}(u^1_{i,j+1} + u^2_{i+1,j} + u^3_{i+1,j+1}), 0 < i + j < 2N,
(u^1_{NN}, u^2_{NN}, u^3_{NN}) = A*_{NN}(q),   (4.15)

and such that the minimum condition (4.6) holds. It remains to put p_{ij} := u^1_{i,j+1} + u^2_{i+1,j} + u^3_{i+1,j+1}, where u^1_{ij} = u^2_{ij} = u^3_{ij} = 0 if i > N or j > N, and the proof is complete.
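In the scalar, single-valued case the adjoint relations of Corollary 4.4 become a linear backward recursion over the antidiagonals of the grid. The following sketch is an editorial illustration only: the coefficient functions a1, a2, a3 (standing for the scalar adjoint operators (A^1_{ij})*, (A^2_{ij})*, (A^3_{ij})*) and the terminal value p_NN are invented for the example.

```python
def adjoint_solution(a1, a2, a3, p_NN, N):
    """Backward sweep for the adjoint system of Corollary 4.4 in the
    scalar (n = 1) case: p_ij is computed from its three 'future'
    neighbors, sweeping the antidiagonals i + j in decreasing order."""
    p = {}
    def get(i, j):
        return p.get((i, j), 0.0)  # boundary condition: p_ij = 0 for i > N or j > N

    p[(N, N)] = p_NN  # transversality value at the endpoint
    for s in range(2 * N - 1, 0, -1):          # antidiagonals with 0 < i + j < 2N
        for i in range(max(0, s - N), min(N, s) + 1):
            j = s - i
            p[(i, j)] = (a1(i, j + 1) * get(i, j + 1)
                         + a2(i + 1, j) * get(i + 1, j)
                         + a3(i + 1, j + 1) * get(i + 1, j + 1))
    return p

# example: constant unit coefficients on the grid {0, 1}^2
p = adjoint_solution(lambda i, j: 1.0, lambda i, j: 1.0, lambda i, j: 1.0, 1.0, 1)
```

Each p_{ij} depends only on antidiagonals already processed, so the sweep visits every grid point exactly once; in the vector case the scalar products would be replaced by transposed Jacobian matrices.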