Optimality Conditions for Nonsmooth Generalized Semi-Infinite Programs

Proposition 1 (see [16]). Let $f$ be proper and lsc around $\bar{x} \in \operatorname{dom} f$. Then

$\bar{\partial} f(\bar{x}) = \operatorname{cl} \operatorname{co} [\partial f(\bar{x}) + \partial^{\infty} f(\bar{x})]$.  (15)

If, in particular, $f$ is Lipschitz continuous at $\bar{x}$, then

$\bar{\partial} f(\bar{x}) = \operatorname{cl} \operatorname{co} \partial f(\bar{x})$.  (16)

The normal cone $N_A$ enjoys the robustness property $N_A(\bar{x}) = \limsup_{x \to \bar{x}} N_A(x)$ provided that the setting is finite-dimensional [17, page 11]. However, this is not true for the convexified cone $\bar{N}_A$; see, for example, Rockafellar [19]. Consider

$A := \{x \in \mathbb{R}^3 \mid x_3 = x_1 x_2 \ \text{or} \ x_3 = -x_1 x_2\}$,  (17)

$\bar{x} = (0, 0, 0)$.  (18)

The normal cone $\bar{N}_A(\bar{x})$ is just the $x_3$-axis, but $\bar{N}_A(x)$ is the $x_2 x_3$-plane for all $x = (x_1, 0, 0)$ with $x_1 \neq 0$. The following proposition is from Rockafellar [19].

Proposition 2 (see [19]). If $A$ is convex, or if $\bar{N}_A(\bar{x})$ is pointed, then the multifunction $\bar{N}_A$ is closed at $\bar{x}$; that is, for all $x_k \to \bar{x}$ and $y_k \in \bar{N}_A(x_k)$ with $y_k \to y$, one has $y \in \bar{N}_A(\bar{x})$.

Proposition 3. The Clarke normal cone has the robustness property
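The failure of robustness in the example (17)-(18) can be checked numerically. The sketch below (ours, for illustration; function names are hypothetical) computes the gradients of the two defining functions $x_3 \mp x_1 x_2$, each of which spans the normal line to the corresponding smooth sheet of $A$, and verifies that at $(t, 0, 0)$ with $t \neq 0$ the two normal lines span a two-dimensional plane (the $x_2 x_3$-plane), while at the origin they collapse to the single $x_3$-axis.

```python
def normal_dirs(x1, x2):
    # Gradients of h_plus(x) = x3 - x1*x2 and h_minus(x) = x3 + x1*x2
    # evaluated at a point with coordinates (x1, x2, x3); x3 does not
    # enter the gradients. Each spans the normal line to one sheet of A.
    g_plus = (-x2, -x1, 1.0)
    g_minus = (x2, x1, 1.0)
    return g_plus, g_minus

def independent(u, v):
    # two 3-vectors are linearly independent iff their cross product != 0
    c = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    return any(abs(t) > 1e-12 for t in c)

# At x = (0.5, 0, 0): two distinct normal lines, whose convex hull
# (the Clarke normal cone) is the 2-D x2x3-plane.
gp, gm = normal_dirs(0.5, 0.0)
assert independent(gp, gm)

# At xbar = (0, 0, 0): both gradients equal (0, 0, 1), so the Clarke
# normal cone is only the x3-axis -- robustness fails in the limit.
gp0, gm0 = normal_dirs(0.0, 0.0)
assert not independent(gp0, gm0)
```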


Introduction
The generalized semi-infinite programming problem (GSIP) is of the form

$\min f(x)$ s.t. $x \in \mathbb{R}^n$, $g(x, y) \le 0$ for all $y \in Y(x)$,  (1)

where $Y(x) := \{y \in \mathbb{R}^m \mid v(x, y) \le 0\}$. GSIP differs from standard semi-infinite programming in that its index set $Y(x)$ depends on $x$. The first systematic study of GSIP was Hettich and Still [1], where the reduction method was used to reduce GSIP to standard nonlinear programming problems and second-order optimality conditions were derived. Necessary optimality conditions at an optimal solution $\bar{x}$ of (1) with differentiable data are as follows: there exist nonnegative numbers $\lambda_0, \ldots, \lambda_k$, not all zero, such that

$\lambda_0 \nabla f(\bar{x}) + \sum_{i=1}^{k} \lambda_i \nabla_x L(\bar{x}, y_i, \alpha_i, \gamma_i) = 0$,

where $L(x, y, \alpha, \gamma) = \alpha g(x, y) - \langle \gamma, v(x, y) \rangle$ and each $(\alpha_i, \gamma_i)$ is the usual FJ multiplier of the lower level problem at its optimal solution $y_i \in Y(\bar{x})$:

$\max_y g(\bar{x}, y)$ s.t. $y \in Y(\bar{x})$.

This condition was first derived by Jongen et al. [2] in an elementary way, without any constraint qualifications or any kind of reduction approach. They also proposed a constraint qualification under which $\lambda_0 > 0$ and discussed some geometrical properties of the feasible set which do not appear in the standard semi-infinite case. The optimality conditions were further explored by Rückmann and Shapiro [3] and Stein [4]. GSIP has a complex and exclusive structure, featuring possible nonclosedness of the feasible set, nonconvexity, nonsmoothness, and a bilevel structure, and is thus a difficult problem to solve; see, for example, [2, 5, 6]. We also refer to [7-10] for some recent studies on the structure of GSIP.
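To illustrate how the $x$-dependence of the index set can produce a nonclosed feasible set, consider the following minimal one-dimensional example (ours, for illustration; not taken from [2]):

```latex
% Take g(x,y) \equiv 1 and v(x,y) = (y, \, x - y), so that
%   Y(x) = \{ y \in \mathbb{R} \mid y \le 0, \; x - y \le 0 \}
%        = [x, 0] for x \le 0, and Y(x) = \emptyset for x > 0.
\[
  M := \{ x \in \mathbb{R} \mid g(x,y) \le 0 \ \forall y \in Y(x) \}
     = \{ x \mid Y(x) = \emptyset \} = (0, \infty).
\]
% Whenever Y(x) \ne \emptyset the constraint 1 \le 0 fails, while for
% x > 0 it holds vacuously; hence M is open and, in particular, not closed.
```

The nonclosedness here comes entirely from the points where the index set becomes empty, a phenomenon that cannot occur in standard semi-infinite programming with a fixed compact index set.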
It is obvious that GSIP can be rewritten equivalently as the nonlinear programming problem

$\min f(x)$ s.t. $\theta(x) \le 0$,

where $\theta(x)$ is the optimal value of $Q(x)$. Then we can relate GSIP to the min-max problem

$\min_x \max \{ f(x) - f(\bar{x}), \, \theta(x) \}$;  (5)

see [3] for more details. On the other hand, GSIP can be related to the following bilevel problem:

$\min_{(x,y)} f(x)$ s.t. $g(x, y) \le 0$, $y$ solves $Q(x)$.  (6)

Problem (6) is a special bilevel optimization problem in that its upper level constraint function is the same as the objective function of its lower level problem. However, there is a slight difference between GSIP problem (1) and problem (6): the feasible set of (6) projects onto a subset of the feasible set of (1), since the feasible set of (1) is the union of this projection and the complement of the domain of the lower level solution mapping. For more comparisons between GSIP and bilevel problems, see Stein and Still [11].
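The min-max reformulation (5) can be checked numerically on a toy instance. In the sketch below (ours; all names are illustrative, not from the source) we take $f(x) = -x$, $g(x,y) = y - 1$, and $Y(x) = [0, x]$ (empty for $x < 0$), so that $\theta(x) = x - 1$ on $x \ge 0$ and $\theta(x) = -\infty$ otherwise, the feasible set is $(-\infty, 1]$, and $\bar{x} = 1$; we verify on a grid that $\bar{x}$ also minimizes the min-max function.

```python
def f(x):
    return -x

def theta(x):
    # optimal value of the lower level problem max{g(x,y) : y in Y(x)},
    # with the convention sup over the empty set = -infinity
    return x - 1.0 if x >= 0 else float("-inf")

xbar = 1.0  # the GSIP minimizer of this toy instance

def phi(x):
    # the min-max objective from (5)
    return max(f(x) - f(xbar), theta(x))

grid = [i / 1000.0 for i in range(-2000, 3001)]  # grid on [-2, 3]
x_star = min(grid, key=phi)

assert abs(x_star - xbar) < 1e-9   # the min-max minimizer is xbar
assert abs(phi(x_star)) < 1e-9     # with optimal value 0
```

The design choice of comparing $f(x)$ against $f(\bar{x})$ makes the min-max value exactly zero at a GSIP solution, which is what the equivalence in [3] exploits.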
For general references on bilevel optimization, see [13].
In this paper we concentrate on optimality conditions for nonsmooth GSIP whose defining functions are Lipschitz continuous. Similar works are [14, 15]. We proceed via the lower level optimal value function reformulation and then derive necessary optimality conditions via generalized differentiation. One of the key steps is to estimate the generalized gradients of the lower level optimal value function, which involves parametric optimization. We consider two cases, with different approaches, corresponding to the two reformulations of GSIP mentioned above. First, we develop optimality conditions via the min-max formulation when the lower level optimal value function is Lipschitz continuous. Second, we develop optimality conditions via the bilevel formulation under the assumption of partial calmness.

Preliminaries
In this section, we present some basic definitions and results from variational analysis [16, 17]. Given a set $A$ in $\mathbb{R}^n$, the regular normal cone $\hat{N}_A$ of $A$ at $\bar{x} \in A$ is defined by

$\hat{N}_A(\bar{x}) := \{ v \mid \langle v, x - \bar{x} \rangle \le o(\|x - \bar{x}\|) \ \text{for} \ x \in A \}$.

The (general) normal cone $N_A$ of $A$ at $\bar{x}$ is defined by

$N_A(\bar{x}) := \limsup_{x \xrightarrow{A} \bar{x}} \hat{N}_A(x)$.

Given a function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ and a point $\bar{x}$ with $f(\bar{x})$ finite, denote by $\operatorname{epi} f$ the epigraph of $f$. The regular subdifferential of $f$ at $\bar{x}$ is defined by

$\hat{\partial} f(\bar{x}) := \{ v \mid f(x) \ge f(\bar{x}) + \langle v, x - \bar{x} \rangle + o(\|x - \bar{x}\|) \}$.

The general (basic, limiting) and singular subdifferentials of $f$ at $\bar{x}$ are defined, respectively, by

$\partial f(\bar{x}) := \{ v \mid (v, -1) \in N_{\operatorname{epi} f}(\bar{x}, f(\bar{x})) \}, \qquad \partial^{\infty} f(\bar{x}) := \{ v \mid (v, 0) \in N_{\operatorname{epi} f}(\bar{x}, f(\bar{x})) \}$.

The upper regular subdifferential of $f$ at $\bar{x}$ is defined by $\hat{\partial}^{+} f(\bar{x}) := -\hat{\partial}(-f)(\bar{x})$, and the upper subdifferential of $f$ is defined by $\partial^{+} f(\bar{x}) := \limsup_{x \xrightarrow{f} \bar{x}} \hat{\partial}^{+} f(x)$.

The Clarke (convexified) normal cone can be defined by two different approaches. On the one hand, it can be defined as the polar cone of Clarke's tangent cone

$T_A(\bar{x}) = \liminf_{x \xrightarrow{A} \bar{x}, \, t \downarrow 0} \dfrac{A - x}{t}$,

or via the generalized directional derivative of the (Lipschitzian) distance function $\operatorname{dist}(\cdot, A)$; see Clarke [18]. On the other hand, it can be defined as the closed convex hull of the (general) normal cone:

$\bar{N}_A(\bar{x}) = \operatorname{cl} \operatorname{co} N_A(\bar{x})$.

For this definition, and also for the equivalence of the two definitions, see, for example, Rockafellar and Wets [16]. The Clarke subgradients and Clarke horizon subgradients of $f$ at $\bar{x}$ are defined by

$\bar{\partial} f(\bar{x}) := \{ v \mid (v, -1) \in \bar{N}_{\operatorname{epi} f}(\bar{x}, f(\bar{x})) \}, \qquad \bar{\partial}^{\infty} f(\bar{x}) := \{ v \mid (v, 0) \in \bar{N}_{\operatorname{epi} f}(\bar{x}, f(\bar{x})) \}$.

The relationship between the Clarke subdifferentials and the basic subdifferentials is also referred to by Mordukhovich [17, Theorem 3.57].
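The convex-hull description of the Clarke subdifferential in the Lipschitz case can be seen on the simple function $f(x) = -|x|$ (our example): the basic subdifferential at $0$ is the two-point set $\{-1, 1\}$ (the limits of nearby gradients), while the Clarke subdifferential is its closed convex hull $[-1, 1]$. The sketch below approximates the limiting gradients by central differences near $0$.

```python
def f(x):
    return -abs(x)

def grad(x, h=1e-8):
    # central-difference approximation of f'(x), valid at points x != 0
    return (f(x + h) - f(x - h)) / (2 * h)

# gradients at points approaching 0 from either side: f'(x) = -sign(x)
limits = {round(grad(t)) for t in (1e-3, -1e-3, 1e-4, -1e-4)}
assert limits == {-1, 1}      # basic subdifferential of f at 0

# the Clarke subdifferential is the closed convex hull, the interval [-1, 1]
clarke = (min(limits), max(limits))
assert clarke == (-1, 1)
```

Note that the regular subdifferential of $-|x|$ at $0$ is empty, so this example also shows how the limiting construction genuinely enlarges the regular one.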
The following definitions are required for further development.
(i) Given $\bar{y} \in S(\bar{x})$, we say that the (set-valued) mapping $S$ is inner semicontinuous at $(\bar{x}, \bar{y})$ if for every sequence $x_k \to \bar{x}$ there is a sequence $y_k \in S(x_k)$ converging to $\bar{y}$ as $k \to \infty$.
(ii) $S$ is inner semicompact at $\bar{x}$ if for every sequence $x_k \to \bar{x}$ there is a sequence $y_k \in S(x_k)$ that contains a convergent subsequence as $k \to \infty$.
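The two notions are genuinely different, as the following toy solution mapping (ours, not from the source) shows: $S(x) = \operatorname{argmax}\{xy : y \in [-1,1]\}$, so $S(x) = \{1\}$ for $x > 0$, $S(x) = \{-1\}$ for $x < 0$, and $S(0) = [-1,1]$. This $S$ is inner semicompact at $\bar{x} = 0$ (all selections stay in the compact set $[-1,1]$), but it is not inner semicontinuous at $(0, 1)$: approaching $0$ from the left forces $y_k = -1$, which does not converge to $\bar{y} = 1$.

```python
def S(x):
    # argmax{x*y : y in [-1, 1]} as a finite list of representatives;
    # [-1.0, 1.0] stands in for the full interval S(0) = [-1, 1]
    if x > 0:
        return [1.0]
    if x < 0:
        return [-1.0]
    return [-1.0, 1.0]

xk = [-1.0 / k for k in range(1, 50)]   # x_k -> 0 from the left
yk = [S(x)[0] for x in xk]              # the only selection available

assert all(abs(y) <= 1 for y in yk)     # bounded: inner semicompactness holds
assert all(y == -1.0 for y in yk)       # no selection converges to ybar = 1
```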
Here the concept of inner semicontinuity/semicompactness is important for our considerations. It is typical that the value function $\theta$ of the lower level problem $Q(x)$ of GSIP is not continuous, and it may even take the value $-\infty$ (at points where the index set is empty).
The following two results concern continuity properties and estimates of subdifferentials of marginal functions, which are crucial to our analysis of GSIP problems.