Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty

We develop a convex approximation for distributionally robust joint chance constrained problems under uncertainty in the first- and second-order moments. The robust chance constraints are approximated by Worst-Case CVaR constraints, which can be reformulated as semidefinite programs, so the chance constrained problem itself can be presented as a semidefinite program. We also show that this approximation of the robust joint chance constraint has an equivalent individual quadratic approximation form.


Introduction
Chance constraints, also called probabilistic constraints in the literature, have a long history in stochastic programming and are the most direct way to treat stochastic data uncertainty. A large class of problems can be formulated in the following form:

$$\min_{\mathbf{x} \in X} \ \mathbf{c}^\top \mathbf{x} \quad \text{s.t.} \quad \mathbb{P}\left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \le 0, \ \forall i = 1, 2, \ldots, m \right) \ge 1 - \epsilon, \tag{1}$$

where $\tilde{\mathbf{w}}$ is the random vector, $\mathbf{x} \in X$ is the decision vector, $X \subseteq \mathbb{R}^n$ is a bounded, closed convex set which can be represented by a set of additional deterministic semidefinite constraints, and $\mathbf{c} \in \mathbb{R}^n$ is the deterministic cost vector. The chance constraint in problem (1) requires all $m$ uncertainty-affected constraints to be jointly feasible with probability at least $1 - \epsilon$, where $\epsilon \in (0, 1)$ is a desired safety factor given by the decision-maker. Problem (1) is classified as an individual chance constrained problem when $m = 1$ and as a joint chance constrained problem when $m \ge 2$. This problem has been considered by Charnes et al. [1], Miller and Wagner [2], and Prékopa [3]. Because the feasible set of problem (1) is typically nonconvex and sometimes even disconnected, and because full and accurate information about the probability distribution $\mathbb{P}$ is rarely available, the problem long failed to find wide interest and application in theory and practice.
One interesting issue for chance constrained problems is to determine distributional conditions under which the problem can be reformulated as tractable convex programming. Alizadeh and Goldfarb [4], Calafiore and El Ghaoui [5], Erdogan and Iyengar [8], and Zymler et al. [6] showed that the chance constraint can be reformulated as tractable convex and conic constraints under certain exact distributional information. In the general case, however, the computation of chance constrained problems is intractable: Nemirovski and Shapiro [7] pointed out that computing the probability that a weighted sum of uniformly distributed variables is nonpositive is already NP-hard. The intractability of chance constrained problems under exact information has spurred recent interest in robust optimization, in which data uncertainties are controlled through several types of uncertainty sets [9, 10]. Moreover, robust optimization generally needs only partial information on the probability distribution, such as known supports and covariances. Zymler et al. [6] gave an exact LMI reformulation of the chance constrained problem, computable by solving a tractable SDP, under known first- and second-order moment information. In practice, however, one usually has only limited information about the probability distribution driving the uncertain parameters involved in the decision-making process, so exact moment information cannot be obtained. In this paper, we extend the framework of Zymler et al.
[6] to the case of inexactly known moment information. We use the following two constraints, parameterized by $\gamma = (\gamma_1, \gamma_2)$ with $\gamma_1, \gamma_2 \ge 0$, which rely on empirical estimates of the mean $\mu_0$ and covariance matrix $\Sigma_0$ of the random vector $\tilde{\mathbf{w}}$ to construct the distributional information:

$$\left( \mathbb{E}[\tilde{\mathbf{w}}] - \mu_0 \right)^\top \Sigma_0^{-1} \left( \mathbb{E}[\tilde{\mathbf{w}}] - \mu_0 \right) \le \gamma_1, \qquad \mathbb{E}\left[ (\tilde{\mathbf{w}} - \mu_0)(\tilde{\mathbf{w}} - \mu_0)^\top \right] \preceq \gamma_2 \Sigma_0. \tag{2}$$

The first constraint describes how likely $\tilde{\mathbf{w}}$ is to be close to $\mu_0$, controlled by the vector $\gamma$; at the same time, the parameter $\gamma$ provides a natural means of quantifying one's confidence in $\mu_0$ and $\Sigma_0$. In what follows, we consider the problem under the distributional set

$$\mathcal{D}(\mathbb{R}^k, \mu_0, \Sigma_0, \gamma) = \left\{ F \in \mathcal{D} : F \text{ satisfies (2)} \right\}, \tag{3}$$

where $\mathcal{D}$ is the set of all probability distributions on the measurable space $(\mathbb{R}^k, \mathcal{B})$ and $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^k$. To this end, let $\mathcal{P}$ denote the set of all probability measures corresponding to $\mathcal{D}(\mathbb{R}^k, \mu_0, \Sigma_0, \gamma)$. Now let us consider the following distributionally robust chance constraint:

$$\inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \le 0, \ \forall i = 1, \ldots, m \right) \ge 1 - \epsilon. \tag{4}$$

It is easy to verify that the feasible set of the above inequality is a subset of the feasible set of problem (1). This yields the following distributionally robust chance constrained program:

$$\min_{\mathbf{x} \in X} \ \mathbf{c}^\top \mathbf{x} \quad \text{s.t. constraint (4) holds}, \tag{5}$$

which constitutes a conservative approximation of problem (1) in the sense that it has the same objective function but a smaller feasible set.
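As a numerical illustration of our own (not part of the paper's development), the two moment conditions above can be checked for an empirical distribution given by samples; the function name, sample sizes, and tolerance below are illustrative choices.

```python
import numpy as np

def in_ambiguity_set(samples, mu0, Sigma0, gamma1, gamma2):
    """Check the two moment conditions (mean ellipsoid and covariance
    dominance) for the empirical distribution of `samples` (rows are
    observations of the random vector)."""
    mu = samples.mean(axis=0)
    d = mu - mu0
    # (E[w] - mu0)^T Sigma0^{-1} (E[w] - mu0) <= gamma1
    mean_ok = d @ np.linalg.solve(Sigma0, d) <= gamma1
    # E[(w - mu0)(w - mu0)^T] <= gamma2 * Sigma0 in the Loewner order,
    # i.e., gamma2 * Sigma0 minus the second-moment matrix is PSD.
    S = (samples - mu0).T @ (samples - mu0) / len(samples)
    cov_ok = np.linalg.eigvalsh(gamma2 * Sigma0 - S).min() >= -1e-9
    return bool(mean_ok and cov_ok)

rng = np.random.default_rng(0)
mu0, Sigma0 = np.zeros(2), np.eye(2)
w = rng.standard_normal((5000, 2))
print(in_ambiguity_set(w, mu0, Sigma0, gamma1=0.05, gamma2=1.5))
```

Shifting the samples far from the estimated mean (e.g., `w + 5.0`) violates the first condition, so the empirical distribution falls outside the set.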
In this paper, we discuss approximations for distributionally robust joint chance constraints under inexact information on the first- and second-order moments, which extends the framework of Zymler et al. [6] from exactly known first- and second-order moments. We prove that the robust joint chance constraint can be conservatively approximated by a Worst-Case CVaR constraint, which can be represented as a semidefinite program. We then show that the distributionally robust joint chance constrained problem has an equivalent quadratic approximation form. The advantage of the new framework lies in the limited distributional information it requires and in the tractability of the resulting convex approximation.

Distributionally Robust Joint Chance Constraints for LP
Let $f(\mathbf{x}, \tilde{\mathbf{w}})$ denote the constraint function, with decision vector $\mathbf{x}$ and random vector $\tilde{\mathbf{w}}$. We first consider the general distributionally robust individual chance constraint, whose feasible set is denoted by

$$\Pi_{\mathrm{ICC}} = \left\{ \mathbf{x} \in \mathbb{R}^n : \inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\left( f(\mathbf{x}, \tilde{\mathbf{w}}) \le 0 \right) \ge 1 - \epsilon \right\}. \tag{6}$$

Shapiro et al. [11] showed that this feasible set is convex if the probability distribution function is $\alpha$-concave and $f(\mathbf{x}, \tilde{\mathbf{w}})$ is quasiconcave jointly in both arguments. Unfortunately, in general the above chance constraint is not necessarily convex in the decision variables $\mathbf{x}$.
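For intuition, the probability appearing in the individual chance constraint can be estimated by Monte Carlo for any one candidate distribution (the robust constraint then requires this for every distribution in the ambiguity set). The affine form of $f$, the numbers, and the sampling distribution below are illustrative assumptions of our own.

```python
import numpy as np

# Illustrative affine constraint f(x, w) = x0 + w^T x; we estimate
# P(f(x, w) <= 0) under one candidate distribution (standard normal).
rng = np.random.default_rng(1)
x0, x = -3.0, np.array([1.0, 1.0])
w = rng.standard_normal((100_000, 2))    # samples of the random vector
p_feasible = (x0 + w @ x <= 0.0).mean()  # estimate of P(f(x, w) <= 0)
eps = 0.05
print(p_feasible >= 1 - eps)             # chance constraint holds for this P
```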
It is well known that the CVaR method, popularized by Rockafellar and Uryasev [12], yields the tightest convex approximation to the general individual probabilistic constraint (see, e.g., [7, 13]). Using the CVaR method, we therefore obtain a tractable convex approximation of the individual chance constraint above. The corresponding conditional value-at-risk is defined as follows:

$$\mathbb{P}\text{-CVaR}_{\epsilon}\left( f(\mathbf{x}, \tilde{\mathbf{w}}) \right) = \inf_{\beta \in \mathbb{R}} \left\{ \beta + \frac{1}{\epsilon} \mathbb{E}_{\mathbb{P}}\left[ \left( f(\mathbf{x}, \tilde{\mathbf{w}}) - \beta \right)^+ \right] \right\}, \tag{7}$$

where $(\cdot)^+ = \max\{\cdot, 0\}$. Next, we show that CVaR can be used to construct a convex approximation of the general chance constraint. By definition (7), CVaR dominates the value-at-risk (the $(1-\epsilon)$-quantile) of $f(\mathbf{x}, \tilde{\mathbf{w}})$, so $\mathbb{P}\text{-CVaR}_{\epsilon}(f(\mathbf{x}, \tilde{\mathbf{w}})) \le 0$ implies $\mathbb{P}(f(\mathbf{x}, \tilde{\mathbf{w}}) \le 0) \ge 1 - \epsilon$. Taking the worst case over all distributions, we obtain

$$\sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\text{-CVaR}_{\epsilon}\left( f(\mathbf{x}, \tilde{\mathbf{w}}) \right) \le 0 \ \Longrightarrow \ \inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\left( f(\mathbf{x}, \tilde{\mathbf{w}}) \le 0 \right) \ge 1 - \epsilon.$$

Therefore, the worst-case CVaR constraint on the left-hand side constitutes a conservative approximation of the distributionally robust chance constraint on the right-hand side. This discussion leads us to define the feasible set

$$\Theta_{\mathrm{ICC}} = \left\{ \mathbf{x} \in \mathbb{R}^n : \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\text{-CVaR}_{\epsilon}\left( f(\mathbf{x}, \tilde{\mathbf{w}}) \right) \le 0 \right\}.$$

Theorem 1. The feasible set $\Theta_{\mathrm{ICC}}$ constitutes a conservative approximation of $\Pi_{\mathrm{ICC}}$; that is, $\Theta_{\mathrm{ICC}} \subseteq \Pi_{\mathrm{ICC}}$.

Lemma 2. For any fixed $\beta$, let $f(\mathbf{x}, \tilde{\mathbf{w}}) : \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}$ be a measurable function that is $F$-integrable for all $F \in \mathcal{D}(\mathbb{R}^k, \mu_0, \Sigma_0, \gamma)$. Define the worst-case expectation problem

$$\Psi(\mathbf{x}, \beta) = \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{E}_{\mathbb{P}}\left[ \left( f(\mathbf{x}, \tilde{\mathbf{w}}) - \beta \right)^+ \right].$$

Then $\Psi(\mathbf{x}, \beta)$ can be computed as the optimal value of an equivalent semidefinite program.

Proof. The worst-case expectation $\Psi(\mathbf{x}, \beta)$ can equivalently be unfolded as a linear maximization problem over the measures satisfying the moment constraints in (2). Its Lagrangian dual problem can be formulated with dual variables $t_0 \in \mathbb{R}$, $\mathbf{y}_1, \mathbf{y}_2 \in \mathbb{R}^k$, and $\mathbf{Y}_1, \mathbf{Y}_2 \in \mathbb{S}^k$ associated with the respective constraints. The conditions $\gamma \ge 0$ and $\Sigma_0 \succ 0$ are sufficient to ensure that the Dirac measure $\delta_{\mu_0}$ lies in the relative interior of the feasible set of the original problem, so strong duality holds and $\Psi(\mathbf{x}, \beta)$ equals the optimal value of the dual problem. Rewriting the dual problem in the form of LMIs and splitting the first inequality constraint into two parts completes the proof.
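The CVaR construction above can be sketched numerically. The sample-based evaluation below (our own illustration) uses the fact that, for an empirical distribution, the infimum in the Rockafellar-Uryasev formula is attained at the $(1-\epsilon)$-quantile; it also checks the implication that a nonpositive CVaR certifies a violation probability of at most $\epsilon$.

```python
import numpy as np

def sample_cvar(losses, eps):
    """Sample CVaR_eps via the Rockafellar-Uryasev representation
    CVaR_eps(L) = inf_beta { beta + E[(L - beta)^+] / eps }; for an
    empirical distribution the infimum is attained at the
    (1 - eps)-quantile, where we evaluate the objective."""
    beta = np.quantile(losses, 1 - eps)
    return beta + np.maximum(losses - beta, 0.0).mean() / eps

rng = np.random.default_rng(2)
L = rng.standard_normal(200_000) - 3.0   # loss samples playing f(x, w)
eps = 0.05
c = sample_cvar(L, eps)
# CVaR_eps(L) <= 0 certifies P(L > 0) <= eps (conservative approximation).
print(c <= 0.0, (L > 0).mean() <= eps)
```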
We define the feasible set $\Pi_{\mathrm{JCC}}$ of the distributionally robust joint chance constraint as

$$\Pi_{\mathrm{JCC}} = \left\{ \mathbf{x} \in \mathbb{R}^n : \inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \le 0, \ \forall i = 1, \ldots, m \right) \ge 1 - \epsilon \right\}. \tag{17}$$

A popular approximation of $\Pi_{\mathrm{JCC}}$ is based on Bonferroni's inequality, which divides the violation probability $\epsilon$ among the $m$ individual chance constraints; this method can be overly conservative even if $\epsilon$ is divided optimally, and Chen et al. [14] give an example which highlights this shortcoming. To mitigate the potential overconservatism of the Bonferroni approximation, we propose the following approximation, which combines the $m$ constraints through scaling parameters $\alpha_i > 0$ into a single constraint:

$$\inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\left( \max_{i = 1, \ldots, m} \left\{ \alpha_i \left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \right) \right\} \le 0 \right) \ge 1 - \epsilon. \tag{18}$$

The problem thus becomes an individual chance constraint, which can be conservatively approximated by a Worst-Case CVaR constraint. We define the approximated set as

$$\Theta_{\mathrm{JCC}} = \left\{ \mathbf{x} \in \mathbb{R}^n : \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\text{-CVaR}_{\epsilon}\left( \max_{i} \left\{ \alpha_i \left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \right) \right\} \right) \le 0 \right\}, \tag{19}$$

which is a tight approximation of $\Pi_{\mathrm{JCC}}$. The following theorem shows that the set $\Theta_{\mathrm{JCC}}$ has a tractable reformulation in terms of LMIs and therefore yields a convex approximation of $\Pi_{\mathrm{JCC}}$.
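The looseness of the Bonferroni approach mentioned above can be seen in a small Monte Carlo experiment (an illustration of our own, not the example of Chen et al. [14]): for two constraints driven by the same randomness, the true joint satisfaction probability clearly exceeds the Bonferroni lower bound, which subtracts both individual violation probabilities.

```python
import numpy as np

# Two illustrative scalar constraints driven by the same random w.
rng = np.random.default_rng(3)
w = rng.standard_normal(500_000)
A1 = w <= 1.645          # constraint 1 holds; individual prob about 0.95
A2 = 0.9 * w <= 1.5      # constraint 2 holds; individual prob about 0.95
joint = (A1 & A2).mean()                        # true joint probability
bonferroni = 1.0 - (~A1).mean() - (~A2).mean()  # union-bound lower bound
print(joint > bonferroni)  # the bound is strictly loose here
```

Here the joint probability stays near 0.95 while the Bonferroni bound drops to about 0.90, because the two violation events almost coincide.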
Theorem 3. The feasible set $\Theta_{\mathrm{JCC}}$ admits an exact representation in terms of LMIs.

Proof. Observe that the constraint $\mathbf{x} \in \Theta_{\mathrm{JCC}}$ coincides with $\Upsilon(\mathbf{x}) \le 0$, where

$$\Upsilon(\mathbf{x}) = \min_{\beta \in \mathbb{R}} \left\{ \beta + \frac{1}{\epsilon} \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{E}_{\mathbb{P}}\left[ \left( \max_{i} \left\{ \alpha_i \left( y_i^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}_i(\mathbf{x}) \right) \right\} - \beta \right)^+ \right] \right\}. \tag{22}$$

For any fixed $\alpha$ and $\beta$, as shown in Lemma 2, the subordinate maximization problem in (22) can be rewritten as a semidefinite program, and the last inequality constraint of that program can be expanded into $m$ simpler inequalities, one for each piece of the max function. Substituting this representation of the subordinate worst-case expectation problem back into (22) yields the exact LMI representation of $\Theta_{\mathrm{JCC}}$. This completes the proof.
Remark 4. When $m = 1$, we set $\gamma = 0$, which implies that we have exact first- and second-order moment information for the individual chance constrained problem. In this case we obtain an exact representation of $\Pi_{\mathrm{JCC}}$, in which a matrix variable $\mathbf{M} \in \mathbb{S}^{k+1}$ appears. This representation is exactly the one in [6]; hence the exact optimal value of problem (5) (with $m = 1$) can be obtained by solving the resulting SDP.
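For completeness, the exact-moment worst-case CVaR reformulation from [6] that this remark refers to can be written, for a single affine constraint $y^0(\mathbf{x}) + \tilde{\mathbf{w}}^\top \mathbf{y}(\mathbf{x})$ with known mean $\mu_0$ and covariance $\Sigma_0$, as the following SDP (stated here in our notation):

```latex
\sup_{\mathbb{P}\in\mathcal{P}} \mathbb{P}\text{-CVaR}_{\epsilon}\!\left(y^{0}(\mathbf{x})+\tilde{\mathbf{w}}^{\top}\mathbf{y}(\mathbf{x})\right)
 \;=\; \min_{\beta\in\mathbb{R},\,\mathbf{M}\in\mathbb{S}^{k+1}}\;
 \beta+\frac{1}{\epsilon}\,\langle\boldsymbol{\Omega},\mathbf{M}\rangle
\quad\text{s.t.}\quad
\mathbf{M}\succeq 0,\qquad
\mathbf{M}-
\begin{pmatrix}
\mathbf{0} & \tfrac{1}{2}\,\mathbf{y}(\mathbf{x})\\
\tfrac{1}{2}\,\mathbf{y}(\mathbf{x})^{\top} & y^{0}(\mathbf{x})-\beta
\end{pmatrix}\succeq 0,
\qquad
\boldsymbol{\Omega}=
\begin{pmatrix}
\Sigma_{0}+\mu_{0}\mu_{0}^{\top} & \mu_{0}\\
\mu_{0}^{\top} & 1
\end{pmatrix}.
```

Requiring the optimal value of this SDP to be nonpositive recovers the exact feasible-set representation discussed in the remark.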
By now, the feasible set $\Theta_{\mathrm{JCC}}$ can be computed by solving a tractable SDP. Many modern interior-point methods can solve such a convex problem to any given precision in polynomial time, and in practice we can use YALMIP [16] to solve it.
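While solving the SDP itself requires a solver (e.g., YALMIP with an SDP back end, or CVXPY in Python), a candidate point returned by a solver can be verified against an LMI with a few lines of linear algebra; the helper below is a generic illustration of our own, not code from the paper.

```python
import numpy as np

def lmi_feasible(M, tol=1e-9):
    """Check the LMI M >= 0: a symmetric matrix is positive
    semidefinite iff its smallest eigenvalue is nonnegative."""
    M = np.asarray(M, dtype=float)
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

print(lmi_feasible([[2.0, 1.0], [1.0, 2.0]]))  # eigenvalues 1 and 3 -> True
print(lmi_feasible([[1.0, 2.0], [2.0, 1.0]]))  # eigenvalues -1 and 3 -> False
```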
Consider the robust individual chance constraint (18), which represents the robust joint chance constraint (17). In convex programming, one frequently approximates a convex function by the maximum of piecewise linear functions; here, conversely, we use a quadratic function to approximate the max function in the chance constraint (18) from above. To this end, let $h(\mathbf{w}) = \mathbf{w}^\top \mathbf{H} \mathbf{w} + \mathbf{w}^\top \mathbf{h} + h_0$ be a quadratic function satisfying

$$h(\mathbf{w}) \ge \max_{i = 1, \ldots, m} \left\{ \alpha_i \left( y_i^0(\mathbf{x}) + \mathbf{w}^\top \mathbf{y}_i(\mathbf{x}) \right) \right\} \quad \text{for all } \mathbf{w} \in \mathbb{R}^k. \tag{27}$$

For ease of argument, we define

$$\Theta_{\mathrm{JCC}}^{q} = \left\{ \mathbf{x} \in \mathbb{R}^n : \exists (\mathbf{H}, \mathbf{h}, h_0) \text{ satisfying (27) with } \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\text{-CVaR}_{\epsilon}\left( h(\tilde{\mathbf{w}}) \right) \le 0 \right\}. \tag{28}$$

It is easy to see that the feasible set $\Theta_{\mathrm{JCC}}^{q}$ constitutes a conservative approximation of $\Theta_{\mathrm{JCC}}$, that is, $\Theta_{\mathrm{JCC}}^{q} \subseteq \Theta_{\mathrm{JCC}}$.
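A one-dimensional sketch of our own of the quadratic-majorant idea above: the piecewise-linear function $\max(w, -w) = |w|$ is dominated everywhere by the quadratic $h(w) = (w^2 + 1)/2$, since $(|w| - 1)^2 \ge 0$, and the two functions touch at $w = \pm 1$.

```python
import numpy as np

w = np.linspace(-4.0, 4.0, 1001)
max_part = np.abs(w)             # max(w, -w), a two-piece max function
h = (w**2 + 1.0) / 2.0           # quadratic majorant of the max function
gap = h - max_part
print(bool(np.all(gap >= -1e-12)))  # h dominates the max (up to rounding)
print(gap.min())                    # the majorant is tight near w = +/-1
```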
Theorem 5. The feasible set $\Theta_{\mathrm{JCC}}^{q}$ can be written in terms of LMIs. Moreover, $\Theta_{\mathrm{JCC}}^{q} = \Theta_{\mathrm{JCC}}$.
Proof. By a discussion similar to the one preceding Theorem 1, the robust quadratic chance constraint in (28) is equivalent to a Worst-Case CVaR constraint, and this inequality can be reformulated as LMIs. Note also that the semi-infinite constraint (27) is equivalent to an LMI in $(\mathbf{H}, \mathbf{h}, h_0)$. Combining the two reformulations yields the tractable form of $\Theta_{\mathrm{JCC}}^{q}$ stated in Theorem 5. Finally, comparing the LMIs of Theorem 5 with those of Theorem 3, the two representations coincide when the middle matrix formed by the components of $h(\mathbf{w})$ vanishes, and we conclude that $\Theta_{\mathrm{JCC}}^{q} = \Theta_{\mathrm{JCC}}$.
This theorem shows that approximating a distributionally robust joint chance constraint by a Worst-Case CVaR constraint is equivalent to approximating the max function implied by the joint chance constraint by a quadratic majorant.