Exact Penalization in Stochastic Programming—Calmness and Constraint Qualification

Abstract
We deal with conditions which ensure exact penalization in stochastic programming problems under finite discrete distributions. We give several sufficient conditions for problem calmness, including graph calmness, existence of an error bound, and a generalized Mangasarian-Fromowitz constraint qualification. We propose a new version of the theorem on asymptotic equivalence of local minimizers of chance constrained problems and problems with an exact penalty objective. We apply the theory to a problem with a stochastic vanishing constraint.


Introduction
Constraints of real-life optimization problems often depend on a random vector. If we know its probability distribution, or if we have a good estimate of it, we can use various stochastic programming formulations to solve the problem and to get a highly reliable solution with respect to the realizations of the random vector. In chance constrained programming problems, a probability for fulfilling the random constraints is prescribed; see Prékopa [1, 2] for an extensive review of the main results. In general, chance constrained problems are difficult to solve. Below, we give an overview of the main recent results in this area. P-level efficient points (pLEPs) introduced by Prékopa [3] can be seen as generalizations of the quantiles for multivariate probability distributions and have been shown to be useful for solving chance constrained problems with random right-hand sides. The new definition of pLEPs proposed by Lejeune [4] uses elements from the combinatorial pattern recognition field. He demonstrated that the collection of pLEPs can be represented as a disjunctive normal form (DNF) and proposed a mixed-integer programming formulation allowing for the construction of the DNF. Bruni et al. [5] presented a heuristic approach, EXPRESS, for solving mixed-integer programming formulations of chance constrained problems. The heuristic is based on alternating phases of exploration using simplified integer problems and feasibility repairing using penalization. Computational results on a probabilistically constrained version of the multiknapsack problem confirmed good solution quality and time performance of the heuristic. Dentcheva and Martinez [6] analyzed nonlinear stochastic optimization problems with probabilistic constraints described by continuously differentiable nonconvex functions. They described the tangent and the normal cone to the level sets of the probability function and formulated first- and second-order optimality conditions. Moreover, they developed an augmented Lagrangian method for discrete distributions. Henrion and Möller [7] provided an explicit gradient formula for linear chance constraints under a multivariate Gaussian distribution and used the formula to solve a transportation network problem with stochastic demands. An important property of the feasibility set is convexity, which can be ensured, for example, by log-concavity of the underlying distribution; see Shapiro et al. [8] for a review and generalizations. A recent result in this area has been proposed by Ninh and Prékopa [9], who investigated log-concavity of compound distributions, which have many applications in insurance, finance, hydrology, and so forth.

Advances in Decision Sciences
Sample approximation techniques were studied by Luedtke and Ahmed [10] and Branda [11], who derived exponential rates of convergence of the approximated solutions to the true one. Convergence of approximated probabilistic functions expressed as quantiles is investigated also by Miao et al. [12]. The underlying probability distribution is usually not known precisely, and the possible influence of its improper choice has to be investigated. Dupačová and Kopa [13] assessed robustness of the probabilistic risk functions using the contamination technique and derived an approximate contamination bound for the optimal value.
In this paper, we investigate an approach which incorporates penalized random constraints into the objective function. It can be verified that, under some conditions, we obtain an optimal solution with the desirable reliability by increasing the penalty parameter and solving the resulting problems. It can even be shown that under mild conditions the penalty approach and the chance constrained problem are asymptotically equivalent; see Branda [14, 15], Branda and Dupačová [16], and Ermoliev et al. [17]. Recently, the equivalence was stated under exact penalization, where a finite penalty parameter is sufficient to obtain a local optimal solution which is permanently feasible with respect to the random constraints; see Branda [18]. The exact penalization property is ensured by a very general calmness property of the underlying optimization problem; compare Burke [19, 20] and Clarke [21]. However, it is difficult or even impossible to verify the problem calmness directly. Thus, in this paper, verifiable sufficient conditions for calmness of stochastic programming problems are introduced and discussed. These conditions are based on the relations between constraint set calmness, the local error bound property, and constraint qualification conditions. Moreover, we perform the whole analysis on problems with general abstract constraints. This setting is important, for example, for problems with vanishing constraints, which are extended in this paper to take randomness into account. The deterministic case was considered, for example, by Hoheisel et al. [22].
There has been an extensive development of exact penalty methods for deterministic optimization in recent years. Antczak [23] established the exactness of the l1 penalty function method used for solving nonsmooth constrained optimization problems with both inequality and equality constraints under locally Lipschitz invexity assumptions. Antczak [24] used the exact minimax penalty function method for solving a general nondifferentiable extremum problem with both inequality and equality constraints. Lian and Han [25] proposed a method to smooth the square-order exact penalty function for inequality constrained optimization and showed convergence of their algorithm under mild conditions. Liu et al. [26] solved nonconvex optimization problems using a penalty algorithm based on smooth functions that approximate the usual exact penalty function. Under a generalized Mangasarian-Fromovitz constraint qualification condition, they proved the convergence of the generated points to Karush-Kuhn-Tucker points of the original problem. Zaslavski [27] developed an approach to prove the existence of an exact penalty involving the Palais-Smale condition and the nonexistence of a solution which is a critical point of the constraint map. In his recent paper, Zaslavski [28] studied a class of constrained minimization problems on complete metric spaces; he established the generalized exact penalty property and obtained an estimate of the exact penalty. Applications of the exact penalty method in stochastic programming are limited to problems with second-order stochastic dominance constraints; see Meskarian et al. [29] and Sun et al. [30].
The paper is organized as follows. The basic notation and problem formulations are given in Section 2. In Section 3, sufficient conditions for the problem calmness are summarized. In Section 4, we propose a theorem which states an asymptotic equivalence of chance constrained problems and problems with an exact penalty objective. An example of a problem with a stochastic vanishing constraint is given in Section 5. Section 6 concludes the paper.

Problem Statement
Let ξ be a real random vector on the probability space (Ω, F, P) with values in R^d and known probability measure P. A general chance constrained problem with an abstract random constraint can be formulated as

min f(x)  s.t.  P({ξ : g(x, ξ) ∈ Λ}) ≥ 1 − ε,   (1)

where f : R^n → R, g : R^n × R^d → R^m, g(⋅, ξ) is continuous for arbitrary ξ and g(x, ⋅) is measurable for arbitrary x, Λ ⊆ R^m is closed, and ε ∈ [0, 1] is a small prescribed level. We will focus on the case when the distribution is finite discrete with realizations ξ_1, …, ξ_S and positive probabilities p_1, …, p_S such that ∑_{s=1}^S p_s = 1. Then the problem can be rewritten as

min f(x)  s.t.  ∑_{s=1}^S p_s ⋅ 1[g(x, ξ_s) ∈ Λ] ≥ 1 − ε,   (2)

where 1[⋅] denotes the indicator function, which is equal to one if the condition in the brackets is satisfied and to zero otherwise. We will use a modified l1-norm on R^{mS}, which is necessary for the further steps and to show the asymptotic equivalence of the considered stochastic programming problems; compare Branda [18] for its precise definition. An important role is played by the problem where all random constraints are fulfilled. To simplify the notation, we set F_S(x) = (g(x, ξ_1), …, g(x, ξ_S)) and Λ_S = Λ × ⋯ × Λ (S times). For u ∈ R^{mS}, we formulate a perturbed mathematical program:

min f(x)  s.t.  F_S(x) + u ∈ Λ_S.   (5)

The unperturbed problem, for which u = 0, corresponds to the case when ε = 0 in the problem (2); that is, the feasible solutions are permanently feasible with respect to the considered realizations of the random vector ξ. We denote by dist(F_S(x), Λ_S) the distance of the point F_S(x) from the set Λ_S, which will serve as the penalty function later on.
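To make the discrete-distribution objects above concrete, the following minimal Python sketch (our illustration, not part of the paper; the constraint map g(x, ξ) = ξ − x, the scenario data, and the choice Λ = (−∞, 0]^m are assumptions) evaluates the left-hand side of the probabilistic constraint (2) and an l1-type distance penalty at a candidate point x.

```python
import numpy as np

# Illustrative scenario data (not from the paper): S = 3 realizations of xi
# with probabilities p_s, constraint map g(x, xi) = xi - x (componentwise),
# and Lambda = (-inf, 0]^m, so the random constraint reads x >= xi.
xi = np.array([[0.2, 0.5],
               [0.4, 0.1],
               [0.9, 0.3]])
p = np.array([0.5, 0.3, 0.2])

def g(x, xi_s):
    return xi_s - x

def probability(x):
    # sum_s p_s * 1[g(x, xi_s) in Lambda], the left-hand side of (2)
    return sum(p_s for p_s, xi_s in zip(p, xi) if np.all(g(x, xi_s) <= 0.0))

def penalty(x):
    # an l1-type distance of F_S(x) from Lambda_S: the sum of componentwise
    # constraint violations over all scenarios (in the spirit of the
    # modified l1-norm; see Branda [18] for the exact definition used there)
    return float(np.sum(np.maximum(xi - x[None, :], 0.0)))

x = np.array([0.5, 0.6])
print(probability(x))  # scenarios 1 and 2 are satisfied: approximately 0.8
print(penalty(x))      # only scenario 3 is violated, by 0.9 - 0.5 = 0.4
```

Any point x with probability(x) = 1 is permanently feasible, and exactly these points have penalty(x) = 0.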
We will employ two concepts of calmness. The first is related directly to the perturbed problem (5).

Definition 1. Let x* be feasible for the unperturbed problem, that is, for u = 0. Then the problem (5) is said to be calm at x* if there exist constants Ñ ≥ 0 (modulus) and δ > 0 (radius) such that for all (x, u) ∈ R^n × R^{mS} satisfying x ∈ B_δ(x*) and F_S(x) + u ∈ Λ_S, one has

f(x) + Ñ ‖u‖ ≥ f(x*).

Note that if the problem (5) is calm at x*, then x* is necessarily a local solution to the unperturbed problem.
The following proposition was formulated by Burke [19, Theorem 1.1] and gives the equivalence of the exact penalty property and problem calmness.

Proposition 2. Let x* be feasible for the unperturbed problem, that is, for u = 0. Then the unperturbed problem is calm at x* with modulus Ñ and radius δ > 0 if and only if x* is a local minimum of the function

f(x) + N dist(F_S(x), Λ_S)

over the neighbourhood B_δ(x*) for all N ≥ Ñ.
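Proposition 2 can be observed numerically on a toy problem of our own (not from the paper): minimize f(x) = x subject to x ≥ 1, so x* = 1 and the calmness modulus is Ñ = 1. For any penalty parameter N > 1 the penalized minimizer coincides with x*, while for N < 1 the penalty is not exact.

```python
import numpy as np

# Toy problem (illustration): minimize f(x) = x subject to x >= 1.
# The constrained minimizer is x* = 1 and the calmness modulus is Ntilde = 1.

def penalized(x, N):
    # f(x) + N * dist(x, [1, +inf)), the exact penalty objective
    return x + N * max(1.0 - x, 0.0)

grid = np.linspace(0.0, 2.0, 2001)
xmins = []
for N in (0.5, 2.0, 100.0):
    vals = [penalized(x, N) for x in grid]
    xmins.append(grid[int(np.argmin(vals))])
print(xmins)  # N = 0.5 is not exact (minimizer 0.0); N >= 2 recovers x* = 1.0
```

Note that once N exceeds the calmness modulus, increasing it further does not move the minimizer, which is exactly the finite-parameter behaviour used later in Theorem 9.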
The form of the penalty problem follows from this proposition:

min f(x) + N dist(F_S(x), Λ_S).   (9)

Sufficient Conditions for Problem Calmness
In this section, we give verifiable sufficient conditions for calmness of the problem (5), which can hardly be verified directly from its definition. First, we review the Lipschitz-like properties of set-valued mappings M : R^p ⇒ R^n with graphs defined by gph M = {(u, x) ∈ R^p × R^n : x ∈ M(u)}. The mapping M is said to have

(i) the calmness property at (ū, x̄) ∈ gph M if and only if ∃ L, δ > 0 such that M(u) ∩ B_δ(x̄) ⊆ M(ū) + L‖u − ū‖B for all u ∈ B_δ(ū);
(ii) the Aubin property at (ū, x̄) ∈ gph M if and only if ∃ L, δ > 0 such that M(u) ∩ B_δ(x̄) ⊆ M(u′) + L‖u − u′‖B for all u, u′ ∈ B_δ(ū);
(iii) the local upper Lipschitz property at ū if and only if ∃ L, δ > 0 such that M(u) ⊆ M(ū) + L‖u − ū‖B for all u ∈ B_δ(ū);
(iv) the local Lipschitz property around ū if and only if ∃ L, δ > 0 such that M(u) ⊆ M(u′) + L‖u − u′‖B for all u, u′ ∈ B_δ(ū),

where B denotes the closed unit ball; see Rockafellar and Wets [32]. It can be seen directly from the definitions that the calmness is the weakest of the conditions and is implied by the local upper Lipschitz property or by the Aubin property, whereas these two conditions are ensured by the local Lipschitz property.
For u ∈ R^{mS}, denote by M(u) = {x ∈ R^n : F_S(x) + u ∈ Λ_S} the perturbation of the (original) constraint set M(0) = (F_S)^{−1}(Λ_S). Proposition 4 gives us an important result which shows that the graph calmness of M together with the local Lipschitz continuity of the objective function is a sufficient condition for the problem calmness. The proposition shows that sufficient conditions for calmness of the constraint set are desirable. Global graph calmness is ensured in the important case stated in Proposition 5.

In general, it is not possible to obtain such a strong result. Thus, we have to focus on conditions which ensure the calmness at least locally. These conditions are based on constraint qualifications; see, for example, Bazaraa et al. [33] and Kruger et al. [34] for a review. We start with the definitions of the important cones. The Fréchet normal cone to Π ⊆ R^n at x ∈ cl Π is defined as

N̂_Π(x) = {v ∈ R^n : limsup_{y → x, y ∈ Π} ⟨v, y − x⟩ / ‖y − x‖ ≤ 0}.

Note that in the finite dimensional setting the Fréchet normal cone corresponds to the polar of the tangent cone. The limiting normal cone N_Π(x) is obtained as the upper limit of the Fréchet normal cones at points of Π converging to x. We say that the generalized Mangasarian-Fromowitz constraint qualification (GMFCQ) is fulfilled at a feasible point x* if the only vector v ∈ N_{Λ_S}(F_S(x*)) satisfying ∇F_S(x*)^T v = 0 is v = 0.

The relevance of the GMFCQ to our analysis is confirmed by the following proposition.
Proposition 7. Let F_S be continuously differentiable. If the generalized Mangasarian-Fromowitz constraint qualification is fulfilled at x*, then the perturbation map M is calm at (0, x*). Moreover, if f is continuously differentiable, then the problem is calm at x*.

Proof. For the first part, see Flegel et al. [35, Corollary 5]. Note that Mordukhovich [36] even showed that the GMFCQ implies the stronger Aubin property. The second part follows directly from Proposition 4.
Another important property, which is equivalent to the graph calmness, is the existence of a local error bound for the constraint set.

Proposition 8. Calmness of the constraint system (15) at (0, x*) is equivalent to the existence of a local error bound

dist(x, M(0)) ≤ L dist(F_S(x), Λ_S)  for all x ∈ B_δ(x*)   (18)

for some constants L > 0 and δ > 0.
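The role of the error bound can be seen on two scalar systems (an illustration of ours, with Λ = {0} and x* = 0): for F(x) = x the ratio dist(x, M(0)) / dist(F(x), Λ) stays bounded, so a local error bound holds, whereas for F(x) = x² the ratio blows up near x*, so no local error bound exists and, by Proposition 8, the system is not calm at (0, 0).

```python
# Illustration: error bound for two scalar constraint systems F(x) in Lambda
# with Lambda = {0}, so the feasible set is M(0) = {0} and x* = 0.
#   (a) F(x) = x    : dist(x, M(0)) = |F(x)|, error bound holds with L = 1
#   (b) F(x) = x**2 : the ratio below equals 1/|x|, unbounded near x* = 0

def ratio(F, x):
    # dist(x, M(0)) / dist(F(x), Lambda) with M(0) = {0}, Lambda = {0}
    return abs(x) / abs(F(x))

xs = [10.0 ** (-k) for k in range(1, 8)]
ratios_a = [ratio(lambda t: t, x) for x in xs]
ratios_b = [ratio(lambda t: t * t, x) for x in xs]
print(max(ratios_a))  # bounded: 1.0
print(max(ratios_b))  # blows up like 1/|x| as x approaches x* = 0
```

The unbounded ratio in case (b) is exactly the failure of the inequality (18) for every finite L.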

An Asymptotic Equivalence of Stochastic Programming Problems
In this section, we propose a new version of the theorem which concerns the asymptotic equivalence of chance constrained problems and problems with a penalty objective. We use the notion of optimal values for local optimal values; that is, in Theorem 9 the vector of decision variables x is restricted to a neighbourhood B_δ(x*) of some local optimal solution x* of (5). Under the assumptions of Theorem 9, for any prescribed ε ∈ (0, 1) there always exists a penalty parameter N ≤ Ñ large enough so that minimization (9) generates local optimal solutions x_N which also satisfy the probabilistic constraint with the given ε, where Ñ denotes the calmness modulus of the problem (9).

Moreover, mutual bounds between the local optimal value φ_ε of (1) and the local optimal value ψ_N of (9) can be constructed along arbitrary sequences of local optimal solutions x_ε of the chance constrained problem and x_N of the problem with the penalty objective, provided that ε < min_s p_s; see Branda [18] for the explicit form of these bounds.
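The mechanism behind the theorem can be sketched as follows (toy one-dimensional data of our own; a grid search stands in for a real solver): solve the penalized problem (9) for increasing N and stop once the generated solution satisfies the probabilistic constraint at the prescribed level ε.

```python
import numpy as np

# Toy one-dimensional setting (our illustration): f(x) = x, random constraint
# x >= xi_s with scenarios xi_s and probabilities p_s, Lambda = (-inf, 0].
xi = np.array([0.3, 0.6, 0.8, 1.1])
p = np.array([0.4, 0.3, 0.2, 0.1])
eps = 0.15  # prescribed probability level

grid = np.linspace(0.0, 2.0, 4001)

def penalized_solution(N):
    # minimize f(x) + N * sum_s p_s * max(xi_s - x, 0) over the grid
    vals = grid + N * (p * np.maximum(xi[None, :] - grid[:, None], 0.0)).sum(axis=1)
    return grid[int(np.argmin(vals))]

def prob(x):
    # probability that the random constraint is satisfied at x
    return p[xi <= x + 1e-9].sum()

N = 1.0
while prob(penalized_solution(N)) < 1.0 - eps:
    N *= 2.0  # increase the penalty parameter
x_N = penalized_solution(N)
print(N, x_N, prob(x_N))  # stops at N = 4 with x_N near 0.8, probability ~ 0.9
```

Choosing ε below the smallest scenario probability (here min_s p_s = 0.1) would force the covering of all scenarios, that is, permanent feasibility, matching the role of the condition ε < min_s p_s above.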
Proof. The proof is analogous to that of Theorem 3 proposed by Branda [18] once we realize that the GMFCQ together with differentiability implies calmness of the problem (5) and that the local solution of (5) is permanently feasible.
If the GMFCQ is fulfilled and the objective function is continuously differentiable, then the problem (5) is calm and the corresponding problem with the penalty objective derived above is exact. Thus, in this case, Theorem 9 applies to the problem with a stochastic vanishing constraint.
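A toy stochastic vanishing constraint (our own construction, not the paper's Section 5 example; the objective, scenario data, and penalty parameter are all assumptions) illustrates how the penalty approach handles the set Λ = {(a, b) : b ≥ 0, ab ≤ 0}, whose distance function splits over the two branches {b = 0} and {a ≤ 0, b ≥ 0}.

```python
import numpy as np

# Toy stochastic vanishing constraint: minimize f(x) = (x1-1)^2 + (x2-1)^2
# subject to, for each scenario s,
#   H(x) = x2 >= 0  and  G_s(x) * H(x) <= 0  with  G_s(x) = x1 - xi_s,
# i.e. (G_s(x), H(x)) in Lambda = {(a, b) : b >= 0, a*b <= 0}.
xi = np.array([0.2, 0.5])
p = np.array([0.6, 0.4])

def dist_Lambda(a, b):
    # Lambda = {b = 0} union {a <= 0, b >= 0}; the distance is the minimum
    # of the distances to the two pieces
    d_line = abs(b)
    d_quad = np.hypot(max(a, 0.0), max(-b, 0.0))
    return min(d_line, d_quad)

def penalized(x1, x2, N):
    pen = sum(p_s * dist_Lambda(x1 - xi_s, x2) for p_s, xi_s in zip(p, xi))
    return (x1 - 1.0) ** 2 + (x2 - 1.0) ** 2 + N * pen

grid = np.linspace(-0.5, 1.5, 201)
N = 50.0  # large enough for exactness in this toy instance
best = min(((x1, x2) for x1 in grid for x2 in grid),
           key=lambda pt: penalized(pt[0], pt[1], N))
print(best)  # close to (0.2, 1.0): the branch G_s <= 0, H >= 0 is selected
```

The penalized minimizer stays on the branch x1 ≤ min_s ξ_s, x2 ≥ 0, which is permanently feasible for both scenarios, rather than on the cheaper-looking but worse branch x2 = 0.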

Conclusions
In this paper, we have discussed conditions which ensure exact penalization in stochastic programming. We have started with a general calmness condition of the underlying problem, which is equivalent to the exact penalty property. However, this condition cannot be verified directly; therefore, verifiable sufficient conditions have been proposed. We have shown that the local Lipschitz continuity of the objective function together with the calmness of the constraint set, which is implied, for example, by the generalized Mangasarian-Fromovitz constraint qualification, is sufficient. These findings have enabled us to propose a new version of the theorem concerning the asymptotic equivalence of the chance constrained problems and the problems with an exact penalty objective, which is the main contribution of this paper. The theoretical results have been applied to a problem with a stochastic vanishing constraint, where a sufficient condition for fulfilling the GMFCQ has been formulated.

Proposition 4.
Let x* be a local minimizer such that the perturbation map M is calm at (0, x*), and let the objective function f be locally Lipschitz continuous around x*. Then the original problem is calm at x* with a modulus equal to the product of the moduli of the constraint set mapping and the objective function.

Proof. See Hoheisel et al. [22, Proposition 3.5 and its proof].

Proposition 5.
The set-valued mapping M is calm at each point of its graph whenever this graph is polyhedral.

Proof. See Rockafellar and Wets [32, Example 9.57].

Theorem 9.
One considers the chance constrained problem (1) and the problem with the penalty objective (9). One assumes that

(i) the problem (5) has a local solution x*;
(ii) f and g(⋅, ξ_s), s = 1, …, S, are continuously differentiable around the local solution x*;
(iii) the GMFCQ is satisfied at the local solution x*.