Stochastic Methods Based on VU-Decomposition Methods for Stochastic Convex Minimax Problems

This paper applies the sample average approximation (SAA) method, based on VU-space decomposition theory, to solve stochastic convex minimax problems (SCMP for short). Under moderate conditions, the SAA solution converges to its true counterpart with probability approaching one, and the convergence is exponentially fast as the sample size increases. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem.

The SCMP is a natural extension of the deterministic convex minimax problem (CMP for short). The CMP has a number of important applications in operations research, engineering, and economics. While many practical problems involve only deterministic data, there are important instances where the problem data contain uncertainties, and SCMP models are proposed to reflect these uncertainties.
A blanket assumption is made that, for every $x \in \mathbb{R}^n$, the expectations $\mathbb{E}[f_i(x,\xi)]$, $i = 0, \ldots, m$, are well defined. Let $\xi^1, \ldots, \xi^N$ be a sampling of $\xi$. A well-known approach based on sampling is the so-called SAA method: the sample average of $f_i(x,\xi)$ is used to approximate its expected value, because the classical law of large numbers for random functions ensures that the sample average of $f_i(x,\xi)$ converges with probability 1 to $\mathbb{E}[f_i(x,\xi)]$ when the sampling is independent and identically distributed (i.i.d. for short). Specifically, the SAA of our SCMP (1) can be written as
$$\min_{x \in \mathbb{R}^n} f^N(x), \qquad f^N(x) := \max_{0 \le i \le m} f_i^N(x), \tag{3}$$
where
$$f_i^N(x) := \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j).$$
Problem (3) is called the SAA problem and (1) the true problem.
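As a concrete illustration of the sample average construction above, the following minimal sketch (not the paper's code; the toy functions and the name `saa_objective` are illustrative) evaluates the SAA objective $\max_i \frac{1}{N}\sum_j f_i(x,\xi^j)$ at a point:

```python
import numpy as np

def saa_objective(x, samples, fs):
    """Sample average approximation of max_i E[f_i(x, xi)].

    fs is a list of structure functions f_i(x, xi); samples holds
    i.i.d. draws xi^1, ..., xi^N.  (Illustrative names only.)
    """
    # Monte Carlo estimate of each expectation E[f_i(x, xi)]
    means = [np.mean([f(x, xi) for xi in samples]) for f in fs]
    # The outer maximum gives the (nonsmooth) SAA objective f^N(x)
    return max(means)

# Toy instance: f_0 = (x - xi)^2, f_1 = |x| + xi, with xi ~ N(0, 1)
rng = np.random.default_rng(0)
samples = rng.normal(size=1000)
fs = [lambda x, xi: (x - xi) ** 2, lambda x, xi: abs(x) + xi]
val = saa_objective(0.5, samples, fs)
```

For larger instances one would vectorize over the sample axis, but the loop form mirrors the definition of $f_i^N$ term by term.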
The SAA method has been a hot topic of research in stochastic optimization. Pagnoncelli et al. [1] present the SAA method for chance constrained programming. Shapiro et al. [2] consider stochastic generalized equations by using the SAA method. Xu [3] develops the SAA method for a class of stochastic variational inequality problems. Liu et al. [4] give penalized SAA methods for stochastic mathematical programs with complementarity constraints. Chen et al. [5] discuss SAA methods based on Newton's method for stochastic variational inequality problems with constraints. Since the objective functions of the SAA problems in the references cited above are smooth, they can be solved by Newton-type methods.
More recently, new conceptual schemes have been developed based on the VU-theory introduced in [6]; see also [7-11]. The idea is to decompose $\mathbb{R}^n$ into two orthogonal subspaces $\mathcal{V}$ and $\mathcal{U}$ at a point $\bar{x}$, where the nonsmoothness of $f$ is concentrated essentially on $\mathcal{V}$ and the smoothness of $f$ appears on the $\mathcal{U}$ subspace. More precisely, for a given $g \in \partial f(\bar{x})$, where $\partial f(\bar{x})$ denotes the subdifferential of $f$ at $\bar{x}$ in the sense of convex analysis, $\mathbb{R}^n$ can be decomposed into the direct sum of two orthogonal subspaces, that is, $\mathbb{R}^n = \mathcal{U} \oplus \mathcal{V}$, where $\mathcal{V} = \operatorname{lin}(\partial f(\bar{x}) - g)$ and $\mathcal{U} = \mathcal{V}^{\perp}$. As a result, an algorithm frame can be designed for the SAA problem that makes a step in the $\mathcal{V}$-space, followed by a $\mathcal{U}$-Newton step, in order to obtain superlinear convergence. A VU-space decomposition method for solving a constrained nonsmooth convex program is presented in [12]. A decomposition algorithm based on a proximal bundle-type method with inexact data is presented for minimizing an unconstrained nonsmooth convex function in [13].
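The decomposition $\mathcal{V} = \operatorname{lin}(\partial f(\bar{x}) - g)$, $\mathcal{U} = \mathcal{V}^\perp$ can be computed numerically once the extreme subgradients at $\bar{x}$ are available. The sketch below (an illustrative helper, not from the paper) builds orthonormal bases of both subspaces from the subgradient differences via an SVD:

```python
import numpy as np

def vu_decompose(subgradients, tol=1e-10):
    """Split R^n into V = lin(subgradients - g) and U = V^perp.

    subgradients: rows are extreme subgradients g_0, ..., g_m at xbar
    (for a max-function, the active gradients).  Returns orthonormal
    bases (V_basis, U_basis), each with rows spanning the subspace.
    """
    G = np.asarray(subgradients, dtype=float)
    n = G.shape[1]
    g = G[0]                      # any fixed subgradient g in ∂f(xbar)
    D = G[1:] - g                 # differences spanning V
    if D.size == 0:               # smooth point: V = {0}, U = R^n
        return np.zeros((0, n)), np.eye(n)
    # Right singular vectors with nonzero singular value span V;
    # the remaining ones span the orthogonal complement U.
    _, s, Vt = np.linalg.svd(D)
    rank = int(np.sum(s > tol))
    return Vt[:rank], Vt[rank:]

# Example: f(x) = |x1| on R^2, with subgradients (1,0) and (-1,0):
# the nonsmooth direction e1 spans V, and U is spanned by e2.
Vb, Ub = vu_decompose([[1.0, 0.0], [-1.0, 0.0]])
```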
In this paper, the objective function in (1) is nonsmooth, but it has a structure that lends itself to VU-space decomposition. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem. The rest of the paper is organized as follows. In the next section, the SCMP is transformed into a nonsmooth problem, and it is proved that the approximate solution set converges to the true solution set in the sense of Hausdorff distance. In Section 3, the VU-theory of the SAA problem is given. In the final section, the VU-decomposition algorithm frame of the SAA problem is designed.

Convergence Analysis of SAA Problem
In this section, we discuss the convergence of (3) to (1) as $N$ increases. Specifically, we show that the solution of the SAA problem (3) converges to its true counterpart as $N \to \infty$. We first state the basic assumptions for the SAA method.
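The convergence statement can be illustrated empirically. In the toy instance below (not from the paper), $f_0(x,\xi) = (x-1-\xi)^2$ and $f_1(x,\xi) = (x+1-\xi)^2$ with $\xi \sim N(0,1)$: the two expectations cross at $x = 0$, so the true minimax solution is $x^* = 0$, while the SAA minimizer sits at the sample mean of $\xi$ and drifts to $0$ as $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def saa_solution(N):
    """Minimize the SAA objective max(f_0^N, f_1^N) by crude grid
    search on [-1, 1] (illustrative; any convex solver would do)."""
    xi = rng.normal(size=N)
    grid = np.linspace(-1.0, 1.0, 2001)
    vals = [max(np.mean((x - 1 - xi) ** 2), np.mean((x + 1 - xi) ** 2))
            for x in grid]
    return grid[int(np.argmin(vals))]

# Distance of the SAA solution to the true solution x* = 0
errors = [abs(saa_solution(N)) for N in (10, 100, 10000)]
```

With probability one the error vanishes as $N \to \infty$; individual runs need not decrease monotonically, which is why the theory speaks of convergence in probability and exponential tail bounds rather than per-sample monotonicity.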
We now move on to discuss the exponential rate of convergence of the SAA problem (3) to the true problem (1) as the sample size increases.
Proof. Let $\varepsilon > 0$ be any small positive number. By Theorem ..., the proof is complete.

The VU-Theory of the SAA Problem
In the following sections, we give the VU-theory, the VU-decomposition algorithm frame, and the convergence analysis of the SAA problem. The subdifferential of $f^N$ at a point $x \in \mathbb{R}^n$ can be computed in terms of the gradients of the structure functions that are active at $x$. More precisely,
$$\partial f^N(x) = \operatorname{conv}\{\nabla f_i^N(x) : i \in I(x)\},$$
where $I(x) = \{i : f_i^N(x) = f^N(x)\}$ is the set of active indices at $x$. Let $\bar{x} \in \mathbb{R}^n$ be a solution of (3). By continuity of the structure functions, there exists a ball $B_\varepsilon(\bar{x}) \subseteq \mathbb{R}^n$ such that $I(x) \subseteq I(\bar{x})$ for all $x \in B_\varepsilon(\bar{x})$. For convenience, we assume that the cardinality of $I(\bar{x})$ is $m_1 + 1$ ($m_1 \le m$) and reorder the structure functions so that $I(\bar{x}) = \{0, \ldots, m_1\}$. From now on, we consider only these active structure functions. The following assumption will be used in the rest of this paper.
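Numerically, the active index set has to be identified with a tolerance that absorbs floating-point round-off; a minimal sketch (illustrative helper, not the paper's code):

```python
import numpy as np

def active_indices(x, fs, tol=1e-8):
    """Active index set I(x) = { i : f_i(x) = max_j f_j(x) }.

    fs: list of smooth structure functions of x; tol absorbs
    round-off.  The subdifferential of f = max_i f_i at x is the
    convex hull of the gradients of the active f_i.
    """
    vals = np.array([f(x) for f in fs])
    return [i for i, v in enumerate(vals) if vals.max() - v <= tol]

# f(x) = max(x^2, 2x - 1): both pieces touch at x = 1, so both
# indices are active there and f is (still) differentiable only
# because the two gradients happen to coincide at that point.
idx = active_indices(1.0, [lambda x: x * x, lambda x: 2 * x - 1])
```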
Assumption 4. The set $\{\nabla f_i^N(\bar{x}) - \nabla f_0^N(\bar{x}) : i = 1, \ldots, m_1\}$ is linearly independent.
Theorem 5. Suppose Assumption 4 holds. Then $\mathbb{R}^n$ can be decomposed at $\bar{x}$: $\mathbb{R}^n = \mathcal{U} \oplus \mathcal{V}$, where $\mathcal{V} = \operatorname{lin}\{\nabla f_i^N(\bar{x}) - \nabla f_0^N(\bar{x}) : i = 1, \ldots, m_1\}$ and $\mathcal{U} = \mathcal{V}^{\perp}$. Proof. The proof follows directly from Assumption 4 and the definitions of the spaces $\mathcal{V}$ and $\mathcal{U}$.
Given a subgradient $g \in \partial f^N(\bar{x})$ with $\mathcal{V}$-component $g_{\mathcal{V}} = \bar{g}_{\mathcal{V}}$, the $\mathcal{U}$-Lagrangian of $f^N$, depending on $g_{\mathcal{V}}$, is defined by
$$L_U(u; g_{\mathcal{V}}) := \min_{v \in \mathcal{V}} \{ f^N(\bar{x} + u \oplus v) - \langle g_{\mathcal{V}}, v \rangle \}.$$
The associated set of $\mathcal{V}$-space minimizers is defined by
$$W(u; g_{\mathcal{V}}) := \{ v \in \mathcal{V} : L_U(u; g_{\mathcal{V}}) = f^N(\bar{x} + u \oplus v) - \langle g_{\mathcal{V}}, v \rangle \}.$$
Theorem 6. Suppose Assumption 4 holds. Let $\chi(u) = \bar{x} + u \oplus v(u)$ be a trajectory leading to $\bar{x}$, and let $H := \nabla^2 L_U(0; 0)$. Then for all $u$ sufficiently small the following hold: (i) the nonlinear system, with variable $v$ and parameter $u$, has a unique solution $v = v(u)$; and (ii) the Jacobian $J\chi(u)$ exists and is continuous. Proof. Item (i) follows from the assumption that the $f_i$ are $C^2$, by applying a second-order implicit function theorem (see [14], Theorem 2.1). Since $v(u)$ is $C^2$, $\chi(u)$ is $C^2$, and the Jacobians $Jv(u)$ exist and are continuous. Differentiating the primal track with respect to $u$, we obtain the expression of $J\chi(u)$, and item (ii) follows.

Algorithm and Convergence Analysis
Supposing $0 \in \partial f^N(\bar{x})$, we give an algorithm frame which can solve (3). The algorithm makes a step in the $\mathcal{V}$-subspace, followed by a $\mathcal{U}$-Newton step, in order to obtain a superlinear convergence rate.
Step 2. Find the active index set $I(x^k)$.
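The V-step/U-Newton pattern can be sketched in code. The example below is a conceptual toy under strong assumptions, not the paper's algorithm frame: the V-space minimizer map `v_min` and the U-Hessian `hess_U` of the U-Lagrangian are supplied in closed form (a practical bundle method would approximate both), and the decomposition is fixed rather than re-estimated each iteration.

```python
import numpy as np

def vu_step(x, grad_f, hess_U, V_basis, U_basis, v_min):
    """One conceptual VU iteration on a fixed splitting R^n = U ⊕ V.

    V-step: jump to the V-space minimizer for the current U-component.
    U-step: Newton step on the smooth U-Lagrangian.
    All names are illustrative; rows of V_basis/U_basis are orthonormal.
    """
    u = U_basis @ x
    # V-step: corrector onto the "primal track" at this u
    x_v = U_basis.T @ u + V_basis.T @ v_min(u)
    # U-step: Newton on L_U, whose gradient is the U-projection of a
    # subgradient along the track
    g_u = U_basis @ grad_f(x_v)
    u_new = u - np.linalg.solve(hess_U(u), g_u)
    return U_basis.T @ u_new + V_basis.T @ v_min(u_new)

# Toy objective f(x) = |x1| + (x2 - 1)^2 on R^2:
# V = span{e1} carries the kink, U = span{e2} is the smooth part.
V = np.array([[1.0, 0.0]])
U = np.array([[0.0, 1.0]])
grad = lambda x: np.array([np.sign(x[0]), 2.0 * (x[1] - 1.0)])
x_next = vu_step(np.array([0.3, 3.0]), grad,
                 lambda u: np.array([[2.0]]),      # U-Hessian of (u-1)^2
                 V, U, lambda u: np.array([0.0]))  # V-minimizer: x1 = 0
# One iteration lands on the minimizer (0, 1): the U-part is quadratic,
# so the U-Newton step is exact, which is the mechanism behind the
# superlinear rate in the general frame.
```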