Smoothing Techniques and Augmented Lagrangian Method for Recourse Problem of Two-Stage Stochastic Linear Programming

The augmented Lagrangian method can be used to solve the recourse problems arising in two-stage stochastic linear programming and to obtain their normal (least 2-norm) solutions. The augmented Lagrangian objective function of a stochastic linear problem is not twice differentiable, which precludes the use of a Newton method. In this paper, we apply smoothing techniques and a fast Newton-Armijo algorithm to an unconstrained smooth reformulation of this problem. Computational results and comparisons are given to show the effectiveness and speed of the algorithm.


Introduction
In stochastic programming, some of the data are random variables with a specified probability distribution [1]. Stochastic programming was first introduced by Dantzig, the originator of linear programming, in [2].
In this paper, we consider the following two-stage stochastic linear program (SLP) with recourse, which involves the calculation of an expectation over a discrete set of scenarios:

\min_{x \in \mathbb{R}^{n_1}} \; c^T x + Q(x) \quad \text{s.t.} \quad Ax \ge b, \qquad (1)

where

Q(x) = E_{\omega}[Q(x, \omega)] \qquad (2)

denotes the expectation of the recourse function Q(x, \omega), which depends on the random variable \omega. The function Q(x, \omega) is defined as follows:

Q(x, \omega) = \min_{y \in \mathbb{R}^{n_2}} \{\, q(\omega)^T y \mid W(\omega) y \ge h(\omega) - T(\omega) x \,\}, \qquad (3)

where A \in \mathbb{R}^{m_1 \times n_1}, b \in \mathbb{R}^{m_1}, and c \in \mathbb{R}^{n_1}. Also, in the problem (3), the vector of coefficients q(\omega) \in \mathbb{R}^{n_2}, the matrix of coefficients W(\omega) \in \mathbb{R}^{m_2 \times n_2}, the demand vector h(\omega) \in \mathbb{R}^{m_2}, and the matrix T(\omega) \in \mathbb{R}^{m_2 \times n_1} depend on the random vector \omega with support space \Omega. The problems (1) and (3) are called the master and recourse problems of stochastic programming, respectively. We assume that the problem (3) has a solution for each x \in X and \omega \in \Omega.
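As a concrete illustration of (1)-(3), the sketch below evaluates the expected recourse cost Q(x) over a small discrete scenario set by solving one linear program of the form (3) per scenario with SciPy. The data (two scenarios, a fixed identity recourse matrix, and scenario-dependent demand vectors) are invented purely for the example and are not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_value(x, q, W, h, T):
    """Q(x, w): solve  min q^T y  s.t.  W y >= h - T x,  y free (problem (3))."""
    rhs = h - T @ x
    # linprog expects A_ub y <= b_ub, so negate the >= constraint.
    res = linprog(c=q, A_ub=-W, b_ub=-rhs,
                  bounds=[(None, None)] * len(q), method="highs")
    if not res.success:
        raise RuntimeError("scenario LP failed: " + res.message)
    return res.fun

def expected_recourse(x, scenarios, probs):
    """Q(x) = sum_i p_i Q(x, w_i) over a discrete scenario set (problem (2))."""
    return sum(p * recourse_value(x, *s) for p, s in zip(probs, scenarios))

# Toy data: y_i >= h_i - x with unit costs, two equally likely demand vectors.
q = np.array([1.0, 1.0])
W = np.eye(2)
T = np.array([[1.0], [1.0]])
scen = [(q, W, np.array([2.0, 4.0]), T),
        (q, W, np.array([0.0, 2.0]), T)]
Qx = expected_recourse(np.array([1.0]), scen, [0.5, 0.5])
```

Here the first scenario LP attains value 4 and the second value 0, so Qx = 2.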
In general, the recourse function Q(x) is not differentiable everywhere. Therefore, traditional methods resort to nonsmooth optimization techniques [3][4][5]. In the last decade, however, smoothing methods have been proposed for the recourse function in the standard form of the recourse problem [6][7][8][9][10][11]. In this paper, we apply a smooth approximation technique to the recourse function in the case where the recourse problem has linear inequality constraints; see Section 2 for details. The approximated problem is based on the least 2-norm solution of the recourse problem. This paper uses the augmented Lagrangian method to obtain the least 2-norm solution (Section 3). For convenience, the Euclidean least 2-norm solution of a linear programming problem is called the normal solution. This effective method involves solving an unconstrained quadratic problem whose objective function is not twice differentiable. To apply a fast Newton method, we use a smoothing technique and replace the plus function by an accurate smooth approximation [12, 13]. In Section 4, the smoothing algorithm and the numerical results are presented. Concluding remarks are given in Section 5.

Approximation of Recourse Function
As mentioned above, the objective function of (1) is nondifferentiable, and this disadvantage originates in the recourse function. In this section, we approximate it by a differentiable function.
Using the dual of the problem (3), the function Q(x, \omega) can be written as follows:

Q(x, \omega) = \max_{u \in \mathbb{R}^{m_2}} \{\, (h(\omega) - T(\omega)x)^T u \mid W(\omega)^T u = q(\omega), \; u \ge 0 \,\}. \qquad (4)

Unlike the linear recourse function, the quadratic recourse function is differentiable. Thus, in this paper, the approximation is based on the following quadratic problem with helpful properties:

Q_{\varepsilon}(x, \omega) = \max_{u \in \mathbb{R}^{m_2}} \left\{\, (h(\omega) - T(\omega)x)^T u - \frac{\varepsilon}{2}\|u\|^2 \;\middle|\; W(\omega)^T u = q(\omega), \; u \ge 0 \,\right\}. \qquad (5)

The next theorem shows that, for sufficiently small \varepsilon > 0, the solution of this problem is the normal solution of the problem (4).
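The regularization idea behind (5) can be checked numerically: when the optimal set of a linear program is not a singleton, subtracting (\varepsilon/2)\|u\|^2 from the objective selects the least 2-norm optimal point once \varepsilon is small. The toy problem below (maximize u1 + u2 subject to u1 + u2 <= 1, u >= 0, whose optimal face is a whole segment with normal solution (0.5, 0.5)) is invented for illustration and solved with SciPy's SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

def regularized_solution(eps):
    """max  u1 + u2 - (eps/2)||u||^2   s.t.  u1 + u2 <= 1,  u >= 0."""
    obj = lambda u: -(u[0] + u[1]) + 0.5 * eps * (u @ u)  # minimize the negative
    cons = [{"type": "ineq", "fun": lambda u: 1.0 - u[0] - u[1]}]
    res = minimize(obj, x0=np.zeros(2), method="SLSQP",
                   bounds=[(0.0, None)] * 2, constraints=cons)
    return res.x

u_eps = regularized_solution(1e-3)
# Every point of the segment u1 + u2 = 1, u >= 0 solves the unregularized LP;
# the regularized problem picks the minimum-norm point, near (0.5, 0.5).
```

For this example the regularized optimum is exactly (0.5, 0.5) for every \varepsilon > 0, a trivial instance of the exact-regularization behavior the theorem describes.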
(c) The gradient of the function Q_{\varepsilon}(x, \omega) at the point x is

\nabla_x Q_{\varepsilon}(x, \omega) = -T(\omega)^T u^*_{\varepsilon}(x, \omega), \qquad (6)

in which u^*_{\varepsilon}(x, \omega) is the solution of the problem (5).
Using the approximated recourse function Q_{\varepsilon}(x, \omega), we can define a differentiable approximation of the objective function of (1):

f_{\varepsilon}(x) = c^T x + E_{\omega}[Q_{\varepsilon}(x, \omega)].

By (6), the gradient of this function exists and is given by

\nabla f_{\varepsilon}(x) = c - E_{\omega}[T(\omega)^T u^*_{\varepsilon}(x, \omega)].

This approximation paves the way for applying optimization algorithms to the master problem (1) with the objective function replaced by f_{\varepsilon}(x). In [7], an SLP problem with inequality constraints in the master problem and equality constraints in the recourse problem is considered. Moreover, Theorem 2.3 of [7] shows that a solution of the approximated problem is a good approximation to a solution of the master problem. Here we can state a similar theorem for the problem (1) by using the technique in the proof of Theorem 2.3 of [7].
Theorem 2. Consider the problem (1). Then, for any x \in X, there exists an \varepsilon(x) > 0 such that, for any \varepsilon \in (0, \varepsilon(x)], the error of the smooth approximation at x is controlled, where X is the feasible set of (1). Let x^* be a solution of (1) and x^*_{\varepsilon} a solution of the approximated problem (11). Then, there exists an \bar{\varepsilon} > 0 such that the approximation holds for any 0 < \varepsilon \le \bar{\varepsilon}. Further, if one assumes that f or f_{\varepsilon} is strongly convex on X with modulus \mu > 0, then x^*_{\varepsilon} converges to x^* as \varepsilon \to 0.

According to Theorem 1, to obtain the gradient of the function f_{\varepsilon}(x) at each iteration, the normal solutions of the scenario linear programming problems (4) are needed. In this paper, the augmented Lagrangian method [17] is used for this purpose.

Smooth Approximation and Augmented Lagrangian Method
In the augmented Lagrangian method, an unconstrained maximization problem is solved, which yields the projection of a point onto the solution set of the problem (4). Assume that \hat{z} is an arbitrary vector, and consider the problem of finding the least 2-norm projection \hat{z}^* of \hat{z} onto the solution set Z^* of the problem (4):

\min_{z \in Z^*} \; \frac{1}{2}\|z - \hat{z}\|^2. \qquad (16)

In this problem, the vector x and the random variable \omega are constant; therefore, for simplicity, we set b = h(\omega) - T(\omega)x, and the function \bar{Q}(z) is defined so that \bar{Q}(z) = Q(x, \omega).
Since the objective function of the problem (16) is strictly convex, its solution is unique. Let us introduce the Lagrangian function for the problem (16), where u \in \mathbb{R}^{m_2} and \lambda \in \mathbb{R} are the Lagrange multipliers and x, \hat{z} are constant. Therefore, the dual problem of (16) becomes a max-min problem over the Lagrangian (18).
By solving the inner minimization of the problem (18), the dual of the problem (16) is obtained, with the dual function (20) defined accordingly. The following theorem states that, if \lambda is sufficiently large, solving the inner maximization of (19) gives the solution of the problem (16).
Theorem 3 (see [17]). Moreover, under special conditions, the solution of the problem (3) can also be obtained, and the following theorem expresses this issue.
According to the theorems mentioned above, the augmented Lagrangian method yields the iterative process (23) for solving the problem (16), where u^0 is an arbitrary vector; here we use the zero vector as the initial vector to obtain the normal solution of the problem (4). We note that the problem (23) is a concave problem whose objective function is piecewise quadratic and not twice differentiable. Applying smoothing techniques [18, 19] and replacing the plus function (\cdot)_+ by a smooth approximation, we transform this problem into a twice continuously differentiable problem.
Chen and Mangasarian [19] introduced a family of smoothing functions, which is built as follows. Let d : \mathbb{R} \to [0, \infty) be a piecewise continuous density function satisfying

\int_{-\infty}^{\infty} d(s)\,ds = 1, \qquad \int_{-\infty}^{\infty} |s|\,d(s)\,ds < \infty.

It is obvious that the derivative of the plus function is the step function; that is, (x)_+ = \int_{-\infty}^{x} \sigma(s)\,ds, where the step function \sigma(x) equals 1 if x > 0 and 0 if x \le 0. Therefore, a smoothing approximation of the plus function is defined by

p(x, \alpha) = \int_{-\infty}^{x} \hat{\sigma}(s, \alpha)\,ds,

where \hat{\sigma}(x, \alpha) is a smoothing approximation of the step function, defined as

\hat{\sigma}(x, \alpha) = \int_{-\infty}^{\alpha x} d(s)\,ds.

By choosing specific densities d, particular smoothing functions are obtained; for instance, the sigmoid density d(s) = e^{-s}/(1 + e^{-s})^2 yields

p(x, \alpha) = x + \frac{1}{\alpha}\log(1 + e^{-\alpha x}). \qquad (28)

The function p with smoothing parameter \alpha is used here to replace the plus function of (22), which yields the smooth problem (29). Therefore, we have the iterative process (30) instead of (23). It can be shown that, as the smoothing parameter \alpha approaches infinity, any solution of the smooth problem (29) approaches the solution of the equivalent problem (22) (see [19]).
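A minimal numerical sanity check of the smoothing function (28): the snippet below implements p(x, \alpha) = x + (1/\alpha)\log(1 + e^{-\alpha x}) in a numerically stable way and confirms that it overestimates (x)_+ by at most \log 2 / \alpha, with the worst case at x = 0.

```python
import numpy as np

def smooth_plus(x, alpha):
    """Chen-Mangasarian smooth plus function p(x, a) = x + log(1 + e^{-a x}) / a.

    logaddexp(0, a*x) / a is the same quantity, written stably for large |a*x|.
    """
    return np.logaddexp(0.0, alpha * x) / alpha

alpha = 10.0
xs = np.linspace(-5.0, 5.0, 10001)          # grid containing x = 0
gap = smooth_plus(xs, alpha) - np.maximum(xs, 0.0)
# p(x, a) >= (x)_+ everywhere; the gap peaks at x = 0 with value log(2)/a.
max_gap = gap.max()
```

Increasing alpha shrinks the maximum gap like 1/alpha, which is the mechanism by which solutions of (29) approach those of (22).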
We begin with a simple lemma that bounds the squared difference between the plus function (x)_+ and its smooth approximation p(x, \alpha).
Lemma 5 (see [13]). For x \in \mathbb{R} with |x| < \rho,

p(x, \alpha)^2 - ((x)_+)^2 \le \left(\frac{\log 2}{\alpha}\right)^2 + \frac{2\rho}{\alpha}\log 2,

where p(x, \alpha) is the p function of (28) with smoothing parameter \alpha > 0.

The twice differentiability of the objective function of the problem (32) allows us to use a quadratically convergent Newton algorithm with an Armijo stepsize [20], which makes the algorithm globally convergent.

Numerical Results and Algorithm
At each iteration of the process (30), one concave, quadratic, unconstrained maximization problem is solved, for which a fast Newton method can be used.
In the algorithm, the Hessian matrix may be singular; thus, we use a modified Newton method. The direction d^k at each iteration for solving (30) is obtained from the relation

(H_k - \delta I_{m_2})\, d^k = -g_k,

where H_k and g_k are the Hessian and gradient of the smoothed objective of (30), \delta is a small positive number, I_{m_2} is the identity matrix of order m_2, and \lambda_k is the step length determined by the Armijo algorithm (see Algorithm 1).
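The modified Newton-Armijo step described above can be sketched as follows; the function names and the quadratic test objective are illustrative, not the paper's Algorithm 1. The direction solves (H - \delta I) d = -g, which is well defined even when the (negative semidefinite) Hessian H of the concave objective is singular, and Armijo backtracking guarantees ascent.

```python
import numpy as np

def newton_armijo_max(f, grad, hess, u0, delta=1e-8, sigma=1e-4,
                      tol=1e-10, max_iter=100):
    """Maximize a smooth concave f by modified Newton with Armijo steps."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) < tol:
            break
        # H is negative semidefinite; H - delta*I is negative definite,
        # so the linear system always has a unique solution.
        d = np.linalg.solve(hess(u) - delta * np.eye(u.size), -g)
        lam = 1.0
        while f(u + lam * d) < f(u) + sigma * lam * (g @ d):
            lam *= 0.5  # Armijo backtracking on the ascent condition
        u = u + lam * d
    return u

# Illustrative concave objective -0.5||u - a||^2 with maximizer a.
a = np.array([1.0, -2.0])
u_star = newton_armijo_max(lambda u: -0.5 * np.sum((u - a) ** 2),
                           lambda u: a - u,
                           lambda u: -np.eye(2),
                           u0=np.zeros(2))
```

On a concave quadratic the method converges in a couple of steps, mirroring the fast local behavior claimed for the smoothed problem (30).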
The proposed algorithm was applied to solve several recourse problems. Table 1 compares this algorithm with the CPLEX v12.1 solver on the quadratic convex programming problems (5). As is evident from Table 1, most of the recourse problems could be solved more successfully by the algorithm based on the smooth augmented Lagrangian Newton method (SALN) than by the CPLEX package (for illustration, see problems 21-25 in Table 1). This algorithm gives high accuracy and the minimum-norm solution in suitable time (see the last column of Table 1). We can also see that CPLEX outperforms the proposed algorithm on some recourse problems in which the matrices are approximately square (e.g., lines 5-12 of Table 1).
The test generator produces recourse problems. These problems are generated using the MATLAB code shown in Algorithm 2.
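Since Algorithm 2 itself is not reproduced here, the following is only a hypothetical Python counterpart of such a generator: it draws a sparse m2 x n2 recourse matrix W with a prescribed density, together with random q, h, and T of the dimensions used in problem (3). All distributions are assumptions made for illustration.

```python
import numpy as np
import scipy.sparse as sp

def generate_recourse_problem(m2, n2, n1, density, seed=0):
    """Random data (q, W, h, T) for a recourse problem of the form (3).

    W is sparse with the given density; the value distributions are
    illustrative choices, not those of the paper's Algorithm 2.
    """
    rng = np.random.default_rng(seed)
    W = sp.random(m2, n2, density=density, format="csr", random_state=seed)
    q = rng.uniform(1.0, 10.0, size=n2)
    h = rng.standard_normal(m2)
    T = rng.standard_normal((m2, n1))
    return q, W, h, T

q, W, h, T = generate_recourse_problem(m2=40, n2=20, n1=5, density=0.2)
```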
The algorithm was run, on the several recourse problems considered, on a computer with a 2.5 GHz dual-core CPU and 4 GB of memory in the MATLAB 7.8 programming environment. In the generated problems, the recourse matrix W is a sparse m_2 \times n_2 matrix with density d. The two constants of the above algorithm appearing in (44) were selected as 1 and 10^{-8}, respectively.

Conclusion
In this paper, a smooth reformulation process based on the augmented Lagrangian algorithm was proposed for obtaining the normal solution of the recourse problem of a two-stage stochastic linear program. This smooth iterative process allows us to use a quadratically convergent Newton algorithm, which accelerates obtaining the normal solution.
Table 1 shows that the proposed algorithm is fast on most of the problems. This can be observed, in particular, on recourse problems whose coefficient matrix has noticeably more constraints than variables. Problems whose coefficient matrix is nearly square (the numbers of constraints and variables close to each other) are more challenging, and the algorithm needs more time to solve them.
In the problem (16), assume that the set Z^* is nonempty and that the rank of the submatrix W_B of W corresponding to the nonzero components of \hat{z}^* is m_2. In such a case, there is a \lambda^* such that, for all \lambda \ge \lambda^*, \hat{z}^* = (\hat{z} + \lambda W^T u(\lambda))_+ is the unique and exact solution of the problem (16), where u(\lambda) is the point obtained from solving the problem (21).