A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method

An implementable nonlinear Lagrange algorithm for stochastic minimax problems, based on the sample average approximation (SAA) method, is presented in this paper. In the second step of the algorithm, a nonlinear Lagrange function built from sample average approximations of the original functions is minimized, and a sample average approximation of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the solution and multiplier sequences generated by the proposed algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the results indicate that the algorithm is promising.


The remainder of this paper is organized as follows. Preliminaries are given in Section 2. The SAA method-based nonlinear Lagrange algorithm and its convergence analysis are established in Section 3. Section 4 reports the numerical results obtained by using the proposed algorithm to solve five test examples. Finally, conclusions are drawn in Section 5.

Preliminaries
This section serves as a preparation for the convergence analysis of the proposed SAA method-based nonlinear Lagrange algorithm. The assumptions on problem (1) are provided first. Furthermore, some results that are essential to our discussion are listed. Finally, we recall the nonlinear Lagrange algorithm in [7].
holds with probability one.
That is, these vectors form a linearly independent set.
where  > 0 is a constant.
which shows that Lemma 3 holds.
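Before Algorithm 4 is recalled, the display below fixes some orientation under illustrative notation that need not coincide with the paper's own: a stochastic minimax problem of the type considered together with its SAA counterpart, and one commonly used exponential-type nonlinear Lagrange function with the associated two-step iteration. The specific Lagrange function and multiplier update employed in [7] and for problem (1) may differ from this sketch.

```latex
% (i) A stochastic minimax problem and its SAA counterpart (illustrative notation):
\min_{x\in\mathbb{R}^n}\ \max_{1\le i\le m}\ \mathbb{E}\bigl[f_i(x,\xi)\bigr],
\qquad
\min_{x\in\mathbb{R}^n}\ \max_{1\le i\le m}\ \hat f_{i,N}(x),
\quad
\hat f_{i,N}(x)=\frac{1}{N}\sum_{j=1}^{N} f_i\bigl(x,\xi^{j}\bigr).

% (ii) One exponential-type nonlinear Lagrange function for minimax problems
%      (an assumption for illustration), with controlling parameter t > 0 and
%      multipliers u_i > 0, \sum_i u_i = 1:
G(x,u,t)=t\,\ln\!\Bigl(\sum_{i=1}^{m} u_i\,\exp\bigl(\mathbb{E}[f_i(x,\xi)]/t\bigr)\Bigr).

% Generic iteration: minimize the Lagrangian in x, then update the multiplier.
x^{(k+1)}\in\arg\min_{x} G\bigl(x,u^{(k)},t\bigr),
\qquad
u_i^{(k+1)}=\frac{u_i^{(k)}\exp\bigl(\mathbb{E}[f_i(x^{(k+1)},\xi)]/t\bigr)}
                 {\sum_{j=1}^{m} u_j^{(k)}\exp\bigl(\mathbb{E}[f_j(x^{(k+1)},\xi)]/t\bigr)}.
```

The SAA-based Algorithm 5 in Section 3 replaces the expectation functions in such a scheme by their sample average approximations.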
Algorithm 4. We have the following.

The SAA Method-Based Nonlinear Lagrange Algorithm and Its Convergence
In view of the difficulty of numerical computation in Algorithm 4 and motivated by the SAA method, we first provide the following implementable nonlinear Lagrange algorithm based on the SAA method. We then establish the convergence analysis of the SAA method-based algorithm under assumptions (A1)-(A7) in this section. The implementable SAA method-based Algorithm 5 is presented as follows.
Algorithm 5. We have the following.
Step 2. Solve the subproblem $\min_{x \in B(x^*, \delta)} \hat{G}_N(x, \hat{u}_N^{(k)}, t)$ and obtain the optimal solution $\hat{x}_N^{(k)}$.
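Only a fragment of Algorithm 5 is reproduced above, so the following Python sketch merely illustrates how an SAA-based nonlinear Lagrange iteration of this kind might be organized. It assumes the exponential-type Lagrange function sketched in Section 2 and a stopping test on successive multipliers, both of which are illustrative assumptions rather than the paper's actual formulas; all names (saa_nonlinear_lagrange, f_hat, G_hat, and so on) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize


def saa_nonlinear_lagrange(f, m, samples, x0, t=0.1, max_iter=100, tol=1e-8):
    """Hedged sketch of an SAA-based nonlinear Lagrange iteration for
    min_x max_i E[f_i(x, xi)].  Here f(x, xi) returns the m component values
    f_1(x, xi), ..., f_m(x, xi), and `samples` is the fixed sample xi_1, ..., xi_N.
    The exponential-type Lagrangian below is an illustrative assumption and
    need not coincide with the function minimized in Algorithm 5."""

    def f_hat(x):
        # SAA component functions: (1/N) * sum_j f_i(x, xi_j), i = 1, ..., m
        return np.mean([f(x, xi) for xi in samples], axis=0)

    def G_hat(x, u):
        # SAA nonlinear Lagrangian (log-sum-exp form), evaluated stably
        z = f_hat(x) / t
        zmax = z.max()
        return t * (zmax + np.log(np.dot(u, np.exp(z - zmax))))

    u = np.full(m, 1.0 / m)            # uniform initial multiplier
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Step 2: minimize the SAA Lagrangian in x (BFGS, as in the experiments)
        x_new = minimize(lambda y: G_hat(y, u), x, method="BFGS", tol=1e-6).x
        # Multiplier update associated with the exponential Lagrangian
        fx = f_hat(x_new)
        w = u * np.exp((fx - fx.max()) / t)   # shift for numerical stability
        u_new = w / w.sum()
        # Assumed Step 3 test: stop when successive multipliers are close
        if np.linalg.norm(u_new - u) < tol:
            return x_new, u_new
        x, u = x_new, u_new
    return x, u
```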
Proof. (i) We prove statement (i) by mathematical induction.
According to (a) and (b), we have that statement (i) holds.
(ii) From statement (i) and Theorem 6, we obtain that statement (ii) is true.
(iii) From statement (ii) and Theorem 5.3 in [11], one has that statement (iii) holds.
The above theorem shows that the sample average approximation Lagrange multiplier $\hat{u}_N^{(k)}$ converges to its counterpart $u^{(k)}$ with probability one, and that the optimal value and optimal solutions of the subproblem $\min_{x \in B(x^*, \delta)} \hat{G}_N(x, \hat{u}_N^{(k)}, t)$ converge to their counterparts for the subproblem $\min_{x \in B(x^*, \delta)} G(x, u^{(k)}, t)$ with probability one under some mild conditions. Next we analyze the convergence of Algorithm 5 under some mild conditions.
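In this notation, the conclusion can be summarized by the following limits, which hold with probability one as the sample size $N \to \infty$; the corresponding sets of optimal solutions of the two subproblems converge in the same sense.

```latex
\hat{u}_N^{(k)} \longrightarrow u^{(k)},
\qquad
\min_{x\in B(x^*,\delta)} \hat{G}_N\bigl(x,\hat{u}_N^{(k)},t\bigr)
\longrightarrow
\min_{x\in B(x^*,\delta)} G\bigl(x,u^{(k)},t\bigr).
```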
Furthermore, since the preceding relation holds, the conclusion is obtained.
Remark 9. Theorem 8 shows that, under some mild assumptions, the sequence pair $(\hat{x}_N^{(k)}, \hat{u}_N^{(k)})$ generated by Algorithm 5 locally tends to the K-T pair $(x^*, u^*)$ of the original problem (1) with probability one as $k \to \infty$ and $N \to \infty$, provided that the controlling parameter $t$ is less than its threshold.

Numerical Results
The numerical results for five test examples obtained with Algorithm 5 are presented in this section; the five test problems are constructed from the deterministic optimization problems in the literature [17, 18]. The numerical experiments are implemented in the Matlab 7.1 runtime environment on the same computer, with an Intel Core i3-2310M CPU at 2.10 GHz and 2 GB of memory. In the experiments, the sample $\xi^1, \ldots, \xi^N$ of size $N$ is generated by the random number generator in Matlab 7.1. For each problem, we choose $N = 10^2$, $10^3$, $10^4$, $10^5$, $10^6$, and $10^7$, respectively, for comparison. The initial value $u^{(0)} = (1/m, \ldots, 1/m)^{T}$ for each example, where $m$ is the number of component functions of problem (1). The unconstrained minimization problem in Step 2 of Algorithm 5 is solved by the BFGS quasi-Newton method combined with the Wolfe inexact line search rule, and the control precision in this step is $10^{-6}$. The stopping criterion in Step 3 uses the tolerance $10^{-8}$. The obtained numerical results are reported in Tables 1-5, in which $N$, $1/t$, iter., $\|x^* - \hat{x}_N^{(k)}\|$, and $\|V_N^{(k)} - V^*\|$ denote the sample size, the value of the controlling parameter, the number of iterations, the error between the solution $\hat{x}_N^{(k)}$ obtained by Algorithm 5 and the optimal solution $x^*$ of problem (1), and the error between the optimal value $V_N^{(k)}$ obtained by Algorithm 5 and the optimal value $V^*$ of problem (1), respectively.
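As a rough Python analogue of this setup (the original experiments run in Matlab 7.1; the toy problem, the uniform sampling scheme, and the fixed seed below are illustrative assumptions, and the toy problem is not one of the paper's Examples 1-5), a driver might look as follows, reusing the hypothetical saa_nonlinear_lagrange sketch from Section 3.

```python
import numpy as np

# Toy stochastic minimax problem (not one of the paper's Examples 1-5):
# minimize over x in R the maximum of E[(x - 1 + xi)^2] and E[(x + 1 + xi)^2]
# with xi uniform on [-0.5, 0.5]; by symmetry the optimal solution is x* = 0.
def f(x, xi):
    return np.array([(x[0] - 1.0 + xi) ** 2, (x[0] + 1.0 + xi) ** 2])

rng = np.random.default_rng(0)                  # fixed seed, for reproducibility only
for N in (10**2, 10**3, 10**4):                 # the paper goes up to N = 10^7
    samples = rng.uniform(-0.5, 0.5, size=N)    # sampling scheme is an assumption
    x_N, u_N = saa_nonlinear_lagrange(f, m=2, samples=samples,
                                      x0=[0.5], t=0.1, tol=1e-8)
    print(N, abs(x_N[0] - 0.0))                 # reports the error ||x* - x_N||
```

As the sample size grows, the reported error should shrink, mirroring the behavior described in Remarks 10 and 11.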
From the above numerical results, we make the following remarks.
Remark 10. The preliminary numerical results show that Algorithm 5 is feasible and promising.
Remark 11. Comparing the numerical results for the same test example under different sample sizes $N$, the accuracy of the optimal solution and the optimal value obtained by Algorithm 5 improves as the sample size increases, which coincides with the theoretical analysis in Section 3.

Conclusions
This paper investigates a nonlinear Lagrange algorithm for solving stochastic minimax problems based on the sample average approximation method. The convergence theory of the proposed algorithm is established under a set of mild assumptions. Furthermore, preliminary numerical results are reported to demonstrate the feasibility and effectiveness of the algorithm. Future work includes refining the numerical experiments to obtain solutions of higher precision and performing experiments on large-scale test examples. Applying the proposed algorithm to practical problems is also of interest.

Table 1: The numerical results for Example 1.

Table 3: The numerical results for Example 3.

Table 4: The numerical results for Example 4.

Table 5: The numerical results for Example 5.