Robust Linear Programming with Norm Uncertainty



Introduction
Robust optimization is a rapidly developing methodology for addressing optimization problems under uncertainty. Compared with sensitivity analysis and stochastic programming, the robust optimization approach can handle cases where fluctuations of the data may be large and can guarantee satisfaction of hard constraints, which is required in some practical settings. The advantage of robust optimization is that it protects the optimal solution against any realization of the uncertainty in a given bounded uncertainty set. The robust linear optimization problem with uncertain data was first introduced by Soyster [1]. The basic idea is to assume that the vector of uncertain data can be any point (scenario) in the uncertainty set, to find a solution that satisfies all the constraints for every possible scenario from the uncertainty set, and to optimize the worst-case value of the objective function. Ben-Tal and Nemirovski [2, 3] and El Ghaoui et al. [4, 5] addressed the overconservatism of robust solutions by allowing the uncertainty sets for the data to be ellipsoids and proposed efficient algorithms to solve convex optimization problems under data uncertainty. Bertsimas et al. [6, 7] proposed a different approach to control the level of conservatism of the solution, which has the advantage of leading to a linear optimization model. For more about robust optimization, we refer to [8–15].
Consider the following linear programming problem:

    min  c^T x
    s.t. Ãx ≤ b,
         x ∈ X,                                            (1)

where c ∈ ℝ^{n×1}, Ã ∈ ℝ^{m×n} is an uncertain matrix which belongs to an uncertainty set U, b ∈ ℝ^{m×1}, and X is a given set. The robust counterpart of problem (1) is

    min  c^T x
    s.t. Ãx ≤ b   for all Ã ∈ U,
         x ∈ X.                                            (2)

An optimal solution x* is said to be a robust solution if and only if it satisfies all the constraints for every Ã ∈ U.
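To make problems (1) and (2) concrete, here is a minimal numerical sketch. The data c, A, b, the box radii, and the nonnegativity of x are hypothetical choices for illustration only; with x ≥ 0 and a simple interval uncertainty set, the worst case of each constraint is attained at A + Â, so this particular robust counterpart is again a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical nominal data for problem (1): min c^T x  s.t.  A x <= b, x >= 0.
c = np.array([-2.0, -3.0])            # linprog minimizes c^T x
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([8.0, 9.0])

# Nominal solution.
nominal = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Interval (box) uncertainty: entry (i, j) may deviate by at most A_hat[i, j].
# For x >= 0 the worst case of (A + dA) x <= b is attained at A + A_hat,
# so the robust counterpart (2) reduces to a linear program with matrix A + A_hat.
A_hat = 0.1 * np.abs(A)
robust = linprog(c, A_ub=A + A_hat, b_ub=b, bounds=[(0, None)] * 2)

print("nominal x*:", nominal.x, "objective:", nominal.fun)
print("robust  x*:", robust.x, "objective:", robust.fun)
```

As expected, the robust objective is no better than the nominal one; the gap is the price paid for feasibility under every admissible realization of the data.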
In this paper, we consider the linear optimization problem (1) with an uncertainty set described by the (p, w)-norm. This choice not only avoids the drawback of giving the same weight to all possible values of the uncertain parameters, but also accounts for the robust cost of the robust optimization model mentioned in [6]. We show that the resulting robust counterpart of problem (1) is a computationally tractable convex optimization problem. We also provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertain coefficients obey independent and identically distributed normal distributions.
The structure of this paper is as follows. In Section 2, we introduce the (p, w)-norm and its dual norm and compare them with the Euclidean norm. In Section 3, we show that the linear optimization problem (1) with the uncertainty set described by the (p, w)-norm is equivalent to a convex programming problem. In Section 4, we provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertainty set U is described by the (p, w)-norm.

The (p, w)-Norm
In this section, we introduce the (p, w)-norm and its dual norm. Furthermore, we give worst-case bounds on the proximity of the (p, w)-norm to the Euclidean norm considered in Ben-Tal and Nemirovski [2, 10] and El Ghaoui et al. [4, 5].
2.1. The (p, w)-Norm and Its Dual. We consider the ith constraint of problem (1), ã_i^T x ≤ b_i. We denote by J_i the set of coefficients a_{ij} that are subject to uncertainty; each ã_{ij}, j ∈ J_i, takes values in the interval [a_{ij} − â_{ij}, a_{ij} + â_{ij}] according to a symmetric distribution with mean equal to the nominal value a_{ij}. For every i, we introduce a parameter p_i, which takes values in the interval [0, |J_i|]. It is unlikely that all of the a_{ij}, j ∈ J_i, will change, as is assumed in [1]. Our goal is to protect against the cases in which up to ⌈p_i⌉ of these coefficients are allowed to change and take their worst-case values at the same time. Next, we introduce the following definition of the (p, w)-norm.

Definition 1. For a given nonzero vector w ∈ ℝ^n with w_j > 0, j = 1, ..., n, and p ∈ [1, n], we define the (p, w)-norm of y ∈ ℝ^n as

    ‖y‖_{p,w} = max_{ {S ∪ {t} : S ⊆ N, |S| ≤ ⌊p⌋, t ∈ N ∖ S} } ( Σ_{j∈S} w_j |y_j| + (p − ⌊p⌋) w_t |y_t| ),

where N = {1, 2, ..., n}.

(1) If w = (1, ..., 1)^T, then the (p, w)-norm degenerates into the Γ-norm studied by Bertsimas and Sim [6].

(2) If w = (1, ..., 1)^T and p = n, then the (p, w)-norm degenerates into l_1; that is, ‖y‖_{p,w} = ‖y‖_1 = Σ_{j=1}^n |y_j|.
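As a quick sanity check on Definition 1 as reconstructed above (the weight placement w_j|y_j| is an assumption of this sketch), the (p, w)-norm can be evaluated by sorting: take the ⌊p⌋ largest weighted magnitudes in full plus a (p − ⌊p⌋) fraction of the next one. The degenerate cases (1) and (2) can then be checked numerically.

```python
import numpy as np

def pw_norm(y, p, w):
    """(p, w)-norm: sum of the floor(p) largest weighted magnitudes w_j*|y_j|,
    plus the fractional part (p - floor(p)) times the next largest one."""
    v = np.sort(np.abs(np.asarray(y)) * np.asarray(w))[::-1]   # descending order
    k = int(np.floor(p))
    total = v[:k].sum()
    if k < len(v):                      # fractional contribution of one more term
        total += (p - k) * v[k]
    return total

y = np.array([3.0, -1.0, 0.5, -2.0])
w = np.ones(4)

# With w = (1,...,1) and p = n the norm reduces to the l1-norm,
# and with p = 1 it reduces to the l-infinity norm.
print(pw_norm(y, 4, w), np.abs(y).sum())   # both 6.5
print(pw_norm(y, 1, w), np.abs(y).max())   # both 3.0
```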
Next, we derive the dual norm; we show that

    ‖s‖*_{p,w} = max { max_{j∈N} |s_j| / w_j , (1/p) Σ_{j∈N} |s_j| / w_j }.

Proof. The norm ‖y‖_{p,w} is equivalent to the linear program

    ‖y‖_{p,w} = max { Σ_{j∈N} w_j |y_j| z_j : Σ_{j∈N} z_j ≤ p, 0 ≤ z_j ≤ 1, j ∈ N }.

According to linear programming strong duality, we have

    ‖y‖_{p,w} = min { p r + Σ_{j∈N} t_j : r + t_j ≥ w_j |y_j|, j ∈ N, r ≥ 0, t_j ≥ 0 }.

Then ‖y‖_{p,w} ≤ 1 if and only if the system

    p r + Σ_{j∈N} t_j ≤ 1,   r + t_j ≥ w_j |y_j|, j ∈ N,   r ≥ 0,   t_j ≥ 0

is feasible. The dual norm ‖s‖*_{p,w} is given by

    ‖s‖*_{p,w} = max { s^T y : ‖y‖_{p,w} ≤ 1 }.

Using LP duality again, our goal is to maximize a convex function over a polytope, so there exists an extreme point optimal solution of the above problem. It suffices to consider the |N| + 1 extreme points

    y = (sign(s_j) / w_j) e_j, j ∈ N,   and   y = (1/p) (sign(s_1)/w_1, ..., sign(s_n)/w_n)^T,

where e_j is the unit vector with the jth element equal to one and the rest equal to zero. Evaluating s^T y at these points, the problem attains the optimum value max{ max_{j∈N} |s_j|/w_j, (1/p) Σ_{j∈N} |s_j|/w_j }, which is the stated dual norm.

In particular, when the (p, w)-norm degenerates into l_1, its dual norm is l_∞; when the (p, w)-norm degenerates into l_∞, its dual norm is l_1.
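The closed form derived above can be cross-checked numerically. The sketch below assumes the reconstructed formulas for both norms: it verifies the Hölder-type inequality |s^T y| ≤ ‖s‖*_{p,w} ‖y‖_{p,w} on random points and checks that the candidate extreme points attain the dual-norm value.

```python
import numpy as np

def pw_norm(y, p, w):
    # (p, w)-norm: floor(p) largest weighted magnitudes plus a fractional term.
    v = np.sort(np.abs(y) * w)[::-1]
    k = int(np.floor(p))
    return v[:k].sum() + ((p - k) * v[k] if k < len(v) else 0.0)

def pw_dual_norm(s, p, w):
    # Dual norm in the form derived above: max of the largest weighted
    # magnitude |s_j|/w_j and the weighted l1-sum scaled by 1/p.
    r = np.abs(s) / w
    return max(r.max(), r.sum() / p)

rng = np.random.default_rng(0)
n, p = 5, 2.5
w = rng.uniform(0.5, 2.0, n)
s = rng.normal(size=n)

# Hoelder-type inequality |s^T y| <= ||s||*_{p,w} * ||y||_{p,w} on random points.
ratio = max(abs(s @ y) / pw_norm(y, p, w) for y in rng.normal(size=(20000, n)))
print(ratio, "<=", pw_dual_norm(s, p, w))

# Equality is attained at the |N| + 1 candidate extreme points
# sign(s_j) e_j / w_j and sign(s) / (p * w).
cands = [np.sign(s[j]) * np.eye(n)[j] / w[j] for j in range(n)]
cands.append(np.sign(s) / (p * w))
print(max(float(s @ y) for y in cands), "==", pw_dual_norm(s, p, w))
```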

Robust Counterpart
In this section, we show that the robust formulation of (1) with the (p, w)-norm uncertainty set is equivalent to a linear programming problem.
We need the following proposition to reformulate (36) as a linear programming problem.

Proposition 8. Given a vector x*, the protection function of the ith constraint, given in (38), is equivalent to the linear programming problem (39).

Proof. An optimal solution of problem (39) obviously consists of ⌈p_i⌉ variables at 1, which corresponds to choosing a subset of ⌈p_i⌉ indices from J_i. The objective function of problem (39) therefore reduces to a maximum over such subsets, which is equivalent to problem (38).
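To illustrate the kind of equivalence asserted in Proposition 8, the sketch below works in the special case w = (1, ..., 1)^T, where the construction reduces to the Γ-model of Bertsimas and Sim [6]: the protection of one constraint computed combinatorially (the ⌊Γ⌋ largest terms â_ij |x*_j| plus a fractional one) coincides with the optimal value of a small LP over 0 ≤ z_ij ≤ 1 with Σ_j z_ij ≤ Γ. All data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, Gamma = 6, 2.5
a_hat = rng.uniform(0.1, 1.0, n)       # hypothetical deviation magnitudes \hat{a}_ij
x_star = rng.normal(size=n)            # a fixed candidate solution x*
d = a_hat * np.abs(x_star)             # per-coefficient worst-case contributions

# Combinatorial form: floor(Gamma) largest terms plus a fraction of the next one.
v = np.sort(d)[::-1]
k = int(np.floor(Gamma))
beta_comb = v[:k].sum() + (Gamma - k) * (v[k] if k < n else 0.0)

# LP form: max d^T z  s.t.  sum_j z_j <= Gamma, 0 <= z_j <= 1  (linprog minimizes).
res = linprog(-d, A_ub=np.ones((1, n)), b_ub=[Gamma], bounds=[(0, 1)] * n)
beta_lp = -res.fun

print(beta_comb, beta_lp)              # the two values agree
```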
Next, we reformulate problem (36) as a linear programming problem.

Proof. First, we consider the dual problem of (39).
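The reformulation rests on taking the LP dual of the protection problem, so that the inner maximization can be merged into the outer minimization as ordinary linear constraints. The sketch below checks strong duality for a small instance of the unweighted protection LP; the data are hypothetical, and in the weighted case only the constraint coefficients would change.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([0.9, 0.4, 0.7, 0.2])     # hypothetical coefficients \hat{a}_ij |x*_j|
Gamma = 1.8
n = len(d)

# Primal protection LP: max d^T z  s.t.  sum_j z_j <= Gamma, 0 <= z_j <= 1.
primal = linprog(-d, A_ub=np.ones((1, n)), b_ub=[Gamma], bounds=[(0, 1)] * n)

# Its LP dual: min Gamma*theta + sum_j lam_j  s.t.  theta + lam_j >= d_j,
# theta >= 0, lam >= 0, with variables ordered as [theta, lam_1, ..., lam_n].
c_dual = np.concatenate(([Gamma], np.ones(n)))
A_dual = np.hstack((-np.ones((n, 1)), -np.eye(n)))   # encodes -theta - lam_j <= -d_j
dual = linprog(c_dual, A_ub=A_dual, b_ub=-d, bounds=[(0, None)] * (n + 1))

# Equal optimal values; this is what lets the inner maximization be absorbed
# into the outer problem, so the robust counterpart becomes a single LP.
print(-primal.fun, dual.fun)
```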

Probabilistic Guarantees
In this section, we provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertainty set U is described by the (p, w)-norm.
Proposition 11. Denote by S*_i and t*_i the set and the index, respectively, that achieve the maximum for β_i(x*, p_i) in (38), and assume that x* is an optimal solution of problem (42). Then the probability that the ith constraint is violated admits an upper bound obtained by applying Markov's inequality, the independence and symmetry of the distributions, and the fact that the scaled deviations are bounded by 1.
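As a numerical illustration of the violation probability studied in Proposition 11, the sketch below estimates Pr(Σ_j ã_ij x*_j > b_i) by Monte Carlo under the data model of Section 2.1 (independent, symmetric, bounded deviations). The coefficients, the protection level, and the choice of b_i are all hypothetical and only indicate how such a probability can be checked empirically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a = rng.uniform(1.0, 2.0, n)            # hypothetical nominal coefficients a_ij
a_hat = 0.3 * a                         # hypothetical deviation magnitudes \hat{a}_ij
x_star = rng.uniform(0.0, 1.0, n)       # a hypothetical candidate solution x*
Gamma = 2.0                             # protection level: guard the 2 largest deviations

# Choose b_i so that the constraint is protected against Gamma worst-case deviations.
v = np.sort(a_hat * x_star)[::-1]
b = a @ x_star + v[:int(Gamma)].sum()

# eta_ij independent and symmetric on [-1, 1]; a~_ij = a_ij + a^_ij * eta_ij.
trials = 200_000
eta = rng.uniform(-1.0, 1.0, size=(trials, n))
lhs = (a + a_hat * eta) @ x_star
print("estimated violation probability:", np.mean(lhs > b))
```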
Remark 12. Clearly, Ξ_i is related to p_i and w = (w_1, w_2, ..., w_n)^T; the role of the parameter p_i (for a given vector w) is to adjust the robustness of the proposed method against the level of conservatism of the solution. We call Ξ_i and p_i the robust cost and the protection level, respectively; they control the tradeoff between the probability of violation and the effect on the objective function of the nominal problem. Naturally, we want to bound the probability Pr(Σ_{j∈J_i} w_j η_{ij} ≥ Ξ_i). The following result provides a bound that is independent of the solution x*: let η_{ij}, j ∈ J_i, be independent and symmetrically distributed random variables in [−1, 1]; the desired bound then follows by estimating the moment generating function E[exp(θ Σ_{j∈J_i} w_j η_{ij})] term by term and applying Markov's inequality.