Smoothing Nonmonotone Barzilai-Borwein Gradient Method and Its Application to Stochastic Linear Complementarity Problems

A new algorithm for nonsmooth box-constrained minimization is introduced. The method is a smoothing nonmonotone Barzilai-Borwein (BB) gradient method. All iterates generated by this method are feasible. We apply the method to stochastic linear complementarity problems. Numerical results show that our method is promising.


Introduction
In this paper, we consider the box-constrained problem min f(x) subject to x ∈ X, where X = {x ∈ ℝ^n : l ≤ x ≤ u} and f : ℝ^n → ℝ. If f is differentiable on the feasible set, there are many methods for (1), such as the trust region method [1], the projected gradient method [2], the projected Barzilai-Borwein method [3], the Newton method [4], and the active-set projected trust region method [5]. If f is semismooth, there are few methods for (1). In this paper, we only assume that f is locally Lipschitzian, but not necessarily differentiable.
In [6], a smoothing projected gradient method (SPG) was introduced for nonsmooth optimization problems over a nonempty closed convex set. This algorithm is easy to implement. At each iteration, the authors approximate the objective function by a smooth function with a fixed smoothing parameter and employ the classical projected gradient method to obtain a new point. If a certain criterion is satisfied, the smoothing parameter is then updated at the new point for the next iteration.
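The step just described — a gradient step on the smoothed function followed by a projection onto the feasible set — can be sketched as follows for the box-constrained case, where the projection reduces to componentwise clipping. The function name and signature are illustrative, not the notation of [6].

```python
import numpy as np

def projected_gradient_step(x, grad, t, l, u):
    """One illustrative step of the smoothing projected gradient idea of [6]:
    take a gradient step of length t on the smoothed function, then project
    back onto the box [l, u]. For a box the projection is cheap componentwise
    clipping; for a general closed convex set it can require solving a
    subproblem, which is the cost the active-set BB method here avoids."""
    return np.clip(x - t * grad, l, u)
```

For example, a step that would leave the box is simply clipped back to the nearest bound.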
The main motivation for the current work comes from the numerical results of [6]. Analyzing the algorithm of [6], we find that computing the projection takes a lot of time when the test problems are large-scale. To avoid this shortcoming, we propose a smoothing nonmonotone BB gradient method. Our method uses an active set and a nonmonotone line search strategy. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. We apply the method to stochastic linear complementarity problems.
Throughout this paper, ‖ ⋅ ‖ denotes the Euclidean norm. For x ∈ ℝ^n, the orthogonal projection of x onto a set X is denoted by P_X(x). For a given matrix A = [a_ij] ∈ ℝ^{m×n}, A_i⋅ denotes the i-th row of A.
The paper is outlined as follows. In Section 2, we describe our method. In Section 3, stochastic linear complementarity problems are introduced. In Section 4, we apply our method to stochastic linear complementarity problems, and the numerical results are illustrated and discussed. Finally, we make some concluding remarks in Section 5.

Smoothing Nonmonotone BB Gradient Method
In this section, we propose a smoothing nonmonotone BB gradient method for (1), where f is a general locally Lipschitz continuous function.
where α_min ≤ α_k ≤ α_max (0 < α_min < α_max). The set L(x^k) ∪ U(x^k) is an estimate of the active set at the point x^k. For simplicity, we abbreviate L(x^k), U(x^k), and F(x^k) defined by (5) as L^k, U^k, and F^k, respectively. We determine the direction d^k by the following process. It is easy to show that x^k + d^k ∈ X. Now, we state the algorithm as follows.
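The direction splitting described above — bound-related components set directly, free components scaled by a safeguarded BB step size, with the result kept feasible — can be sketched as follows. The active-set test (a simple proximity-and-gradient-sign rule) and the clipping are assumptions for illustration; the paper's exact rule (5) is not reproduced in this excerpt.

```python
import numpy as np

def bb_step(x, x_prev, g, g_prev, l, u,
            alpha_min=1e-5, alpha_max=1e5, eps=1e-6):
    """One illustrative BB step on the box [l, u].

    The active-set estimate and direction splitting are a simplified
    sketch of the scheme described in the text, not the paper's rule."""
    s = x - x_prev
    y = g - g_prev
    # BB1 step size, safeguarded to [alpha_min, alpha_max]
    denom = s @ y
    alpha = (s @ s) / denom if denom > 0 else alpha_max
    alpha = min(max(alpha, alpha_min), alpha_max)
    # crude active-set estimate: nearly binding bounds with a pushing gradient
    L = (x - l <= eps) & (g > 0)   # lower-active components
    U = (u - x <= eps) & (g < 0)   # upper-active components
    d = -alpha * g                 # free components: BB gradient direction
    d[L] = l[L] - x[L]             # active components: step onto the bound
    d[U] = u[U] - x[U]
    # clip so that x + d stays feasible
    d = np.clip(x + d, l, u) - x
    return x + d
```

By construction every iterate stays in the box, matching the feasibility claim in the abstract.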
The convergence analysis of Algorithm 2 is similar to that in [6, 7], so we omit it here.

Stochastic Linear Complementarity Problems
Let (Ω, F, P) be a probability space with Ω a subset of ℝ^m. Suppose that the probability distribution P is known.
In general, there is no x satisfying (12) for almost every ω ∈ Ω. A deterministic formulation of the SLCP provides a decision vector which is optimal in a certain sense; different deterministic formulations may yield different solutions that are optimal in different senses. Two reformulations of (12) have been proposed: the expected value (EV) formulation [15] and the expected residual minimization (ERM) formulation [8]. In this paper, we concentrate on ERM. The ERM formulation seeks a vector x ∈ ℝ^n_+ that minimizes the expected residual of the SLCP (M(ω), q(ω)); that is, min E[‖Φ(x, ω)‖^2], where Φ : ℝ^n × Ω → ℝ^n is defined by . . .
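The definition of Φ is not reproduced in this excerpt; a common choice in the ERM literature is the componentwise "min" residual Φ_i(x, ω) = min(x_i, (M(ω)x + q(ω))_i), and the sketch below assumes that choice. The function names are illustrative.

```python
import numpy as np

def lcp_residual(x, M, q):
    """Componentwise 'min' residual of the LCP(M, q): it is zero exactly
    when x >= 0, Mx + q >= 0, and x^T (Mx + q) = 0. This choice of Phi is
    an assumption here; the excerpt omits the paper's exact definition."""
    return np.minimum(x, M @ x + q)

def expected_residual(x, Ms, qs):
    """Sample-average estimate of E[||Phi(x, omega)||^2] over the
    realizations (M_j, q_j) of the random data."""
    return float(np.mean([np.sum(lcp_residual(x, M, q) ** 2)
                          for M, q in zip(Ms, qs)]))
```

For instance, with M(ω) = I and q(ω) = -e, the vector x = e has zero residual in every scenario.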
In [6], the authors use the sample average approximation (SAA) [21], which replaces (14) by its sample-average approximation. Here, the sample ω^1, . . ., ω^N is generated by the Monte Carlo sampling method, following the same probability distribution as ω. A smoothing function of ĝ(x) is then formed. In the next section, we apply Algorithm 2 to (20).

Numerical Results
The test problems are randomly generated. The procedure for generating the test problems is taken from [16, 20], so we omit it here. All problems are tested in Matlab (version 7.5). Several parameters are needed to generate a problem, and a vector x̄ ∈ ℝ^n_+ is randomly generated. When a certain generation parameter is zero, x̄ is the unique global solution of the test problem and g(x̄) = 0; when it is positive, the global solution is unknown. We terminate the iteration if one of the following conditions is satisfied: . . .

We start from the same randomly generated initial point and compare our method with SPG. The numerical results can be seen in Tables 1-2. Let r(x) = ‖min(x, ∇g(x))‖ (25) whenever g is differentiable at x. In Tables 1-2, n denotes the number of variables, and the values of r(⋅) and g(⋅) are reported at the final iterates of Algorithm 2 and of SPG, respectively. The results reported in Tables 1-2 show that the smoothing nonmonotone BB gradient method is quite promising. Firstly, Algorithm 2 requires less time on all test problems. Secondly, the function values of Algorithm 2 and SPG are close for most test problems. Through analysis, we find that Algorithm 2 decreases the objective faster than SPG at each step, which is mainly attributed to the active set and the BB step.
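The merit quantity r(x) in (25) is straightforward to evaluate; a minimal sketch follows, with r(x) = 0 exactly at a stationary point of min g(x) subject to x ≥ 0 (when g is differentiable there).

```python
import numpy as np

def stationarity_residual(x, grad):
    """r(x) = ||min(x, grad g(x))|| from (25): zero exactly when, for each
    component, either x_i = 0 with a nonnegative partial derivative, or
    x_i > 0 with a zero partial derivative."""
    return float(np.linalg.norm(np.minimum(x, grad)))
```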

Concluding Remarks
In this paper, we have presented a smoothing nonmonotone Barzilai-Borwein gradient method for nonsmooth box-constrained minimization. The main idea is to use a parametric smoothing approximation of the objective within a nonmonotone BB gradient method. To obtain fast convergence, we compute two step sizes; the BB method combined with an inexact line search keeps the algorithm efficient. We apply the method to stochastic linear complementarity problems, and numerical results show that it is promising.
Remark 4. Algorithm 2 employs a two-loop approach: the outer loop updates the smoothing parameter μ, and the inner loop computes a stationary point of the smoothed function g(⋅, μ). In Step 4, the alternate BB step is used to compute the direction d^k.

Remark 5.
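The two-loop structure of Remark 4 can be sketched as follows. The inner solver (a plain gradient step standing in for the BB line-search step), the stopping rule, and the update factor are assumptions; the excerpt gives only the loop structure, not the exact rules of Algorithm 2.

```python
import numpy as np

def smoothing_bb_solve(grad_mu, x0, mu0=1.0, sigma=0.5,
                       mu_min=1e-8, max_outer=50):
    """Illustrative two-loop scheme: the outer loop shrinks the smoothing
    parameter mu, the inner loop drives x toward a stationary point of the
    smoothed function g(., mu). grad_mu(x, mu) returns that gradient."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_outer):
        # inner loop: approximately minimize g(., mu)
        for _ in range(200):
            g = grad_mu(x, mu)
            if np.linalg.norm(g) <= mu:   # assumed mu-dependent tolerance
                break
            x = x - 0.1 * g               # placeholder for the BB step
        if mu <= mu_min:
            break
        mu *= sigma                       # outer loop: shrink mu
    return x, mu
```

The key design point mirrored here is that the inner tolerance tightens with μ, so the accuracy of the smoothed subproblem improves as the smoothing vanishes.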