An Improved Two-Step Method for Generalized Variational Inequalities

We propose an improved two-step extragradient algorithm for pseudomonotone generalized variational inequalities. It requires only two projections at each iteration and allows different stepsize rules. Moreover, from a geometric point of view, it is shown that the new method takes a long stepsize, which guarantees a large decrease in the distance from the next iterate to the solution set. Under mild conditions, we show that the method is globally convergent, and we then prove that it converges R-linearly if a projection-type error bound holds locally.


Introduction
Let F be a multivalued mapping from ℝⁿ into 2^{ℝⁿ} with nonempty values, where ℝⁿ is a Euclidean space. Let C be a nonempty, closed, and convex subset of ℝⁿ. The generalized variational inequality, abbreviated as GVI, is to find a vector x* ∈ C such that there exists ξ* ∈ F(x*) satisfying

⟨ξ*, y − x*⟩ ≥ 0, ∀y ∈ C, (1)

where ⟨⋅, ⋅⟩ stands for the inner product of vectors in ℝⁿ. The solution set of problem (1) is denoted by C*. If the multivalued mapping F is a single-valued mapping from ℝⁿ to ℝⁿ, then the GVI collapses to the classical variational inequality problem [1–4].
The GVI plays a significant role in economics, transportation equilibrium, engineering sciences, and so forth, and it has received considerable attention in the past decades [1, 2, 5–11]. Solution methods for the GVI have been studied extensively. They can be roughly categorized into two popular approaches to the solution existence problem of the GVI. The first is the analytic approach: instead of solving the problem directly, it reformulates the GVI as a well-studied mathematical problem and then invokes an existence theorem for the latter problem [12]. The second is the constructive approach, in which existence is verified by the behavior of a proposed method; this is the approach considered in this paper.
To the best of our knowledge, the extragradient method [2, 13], proposed by Korpelevich [13], is a popular constructive approach. It has been proved that the method has a contraction property; that is, the sequence {x^k} generated by the method satisfies

‖x^{k+1} − x*‖ ≤ ‖x^k − x*‖, ∀k ≥ 0,

for any solution x* of the GVI. It should be noted that the proximal point algorithm also possesses this property [14].
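To make this contraction property concrete, the following minimal sketch (not the method proposed in this paper) applies the classical Korpelevich extragradient iteration to a single-valued affine variational inequality; the mapping F, the feasible set C, the stepsize, and the starting point are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the classical (Korpelevich) extragradient iteration for a
# single-valued VI: find x* in C with <F(x*), y - x*> >= 0 for all y in C.
# The data below (affine monotone F, nonnegative orthant C, stepsize tau) are
# illustrative assumptions, not taken from the paper.
M = np.array([[4.0, -1.0], [1.0, 2.0]])
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q
proj_C = lambda x: np.maximum(x, 0.0)   # projection onto the nonnegative orthant

tau = 0.1                               # fixed stepsize, small relative to the Lipschitz constant of F
x = np.array([5.0, 5.0])
for k in range(500):
    y = proj_C(x - tau * F(x))          # predictor step (first projection)
    x_next = proj_C(x - tau * F(y))     # corrector step (second projection)
    if np.linalg.norm(x - x_next) < 1e-10:
        x = x_next
        break
    x = x_next
print("approximate solution:", x)       # in theory, ||x^{k+1} - x*|| <= ||x^k - x*|| for every solution x*
```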
In [15], the authors proposed a new type of extragradient projection method for variational inequalities (VI). The method proposed in [15] requires only two projections at each iteration and allows different stepsize rules. Moreover, it was shown that this method takes a long stepsize, which guarantees a large decrease in the distance from the next iterate to the solution set. Some elementary numerical experiments showed its efficiency. A question is now posed naturally: as the GVI is an extension of the VI, can this theory be extended to the GVI? This constitutes the main motivation of the paper.
In this paper, inspired by [15], we present an improved extragradient method for the GVI. Under mild conditions, we first show that the sequence generated by the proposed method converges globally to a solution of the problem, and then we show that the method is R-linearly convergent if, in addition, a projection-type error bound holds locally. The rest of this paper is organized as follows. In Section 2, we give some related concepts and conclusions needed in the subsequent analysis. In Section 3, we present the designed algorithm and establish its convergence and convergence rate.

Preliminaries
In this section, we first give some related concepts and conclusions which are useful in the subsequent analysis. Let x ∈ ℝⁿ and let C be a nonempty, closed, and convex set in ℝⁿ. A point x₀ ∈ C is said to be the orthogonal projection of x onto C if it is the closest point to x in C; that is,

x₀ = argmin {‖y − x‖ : y ∈ C},

and we denote x₀ by P_C(x). The well-known properties of the projection operator are as follows.
Lemma 1 (see [16]). Let C be a nonempty, closed, and convex subset of ℝⁿ. Then, for any x, y ∈ ℝⁿ and z ∈ C, the following statements hold:

(i) ⟨P_C(x) − x, z − P_C(x)⟩ ≥ 0;
(ii) ‖P_C(x) − P_C(y)‖ ≤ ‖x − y‖;
(iii) ‖P_C(x) − z‖² ≤ ‖x − z‖² − ‖P_C(x) − x‖².

Remark 2. In fact, (i) in Lemma 1 also provides a sufficient condition for a vector u ∈ C to be the projection of x onto C; that is, u = P_C(x) if and only if ⟨u − x, z − u⟩ ≥ 0 for all z ∈ C.
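As a concrete illustration of the projection operator and of the properties in Lemma 1, the following small sketch uses two sets for which P_C has a closed form (a box and a Euclidean ball); the sets and test points are illustrative assumptions.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """Projection onto the Euclidean ball with given center and radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

x = np.array([3.0, -2.0])
y = np.array([0.5, 4.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

Px, Py = proj_box(x, lo, hi), proj_box(y, lo, hi)
# (ii) nonexpansiveness: ||P_C(x) - P_C(y)|| <= ||x - y||
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
# (i) variational characterization: <P_C(x) - x, z - P_C(x)> >= 0 for any z in C
z = np.array([0.2, 0.7])
assert np.dot(Px - x, z - Px) >= -1e-12

print(proj_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0))  # -> [0.6, 0.8]
```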

Main Results
For any x ∈ C and ξ ∈ F(x), set

r(x, ξ) := x − P_C(x − ξ).

Then the projection residue r(x, ξ) characterizes the solution set of the GVI [17].
Proposition 8. Let x ∈ C and ξ ∈ F(x). Then x solves problem (1) if and only if r(x, ξ) = 0.

The basic idea of the designed algorithm is as follows. At each step, compute the projection residue r(x^k, ξ^k) at the iterate x^k. If r(x^k, ξ^k) = 0, then stop, with x^k being a solution of the GVI; otherwise, find a trial point y^k by a back-tracking search at x^k along the residue r(x^k, ξ^k), and obtain the new iterate by a projection. Repeat this process until the projection residue is a zero vector. We now describe our algorithmic framework for solving the GVI.

Algorithm 9. Choose σ, γ ∈ (0, 1), x^0 ∈ C, and set k = 0.
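The following Python sketch mirrors the framework just described for a single-valued instance of F (so the selection ξ^k ∈ F(x^k) is simply F(x^k)). The back-tracking test and the projection update follow a standard hyperplane-projection variant; they, together with the parameters and problem data, are illustrative assumptions, since the paper's exact rules (14)–(16) are not reproduced here.

```python
import numpy as np

def solve_gvi_sketch(F, proj_C, x0, sigma=0.5, gamma=0.7, tol=1e-8, max_iter=1000):
    """Residue-based projection framework in the spirit of Algorithm 9 (illustrative sketch only)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        xi = F(x)
        r = x - proj_C(x - xi)                     # projection residue r(x, xi)
        if np.linalg.norm(r) <= tol:
            return x, k                            # residue (nearly) zero: x solves the problem
        # Back-tracking search along the residue for a trial point y = x - gamma**m * r.
        m = 0
        while np.dot(F(x - gamma**m * r), r) < sigma * np.dot(r, r):
            m += 1
        y = x - gamma**m * r
        Fy = F(y)
        # New iterate: project x, shifted along F(y), back onto C.
        rho = np.dot(Fy, x - y) / max(np.dot(Fy, Fy), 1e-16)
        x = proj_C(x - rho * Fy)
    return x, max_iter

# Example use with a made-up affine monotone map and the nonnegative orthant:
M = np.array([[4.0, -1.0], [1.0, 2.0]])
q = np.array([-1.0, -2.0])
x_star, iters = solve_gvi_sketch(lambda x: M @ x + q,
                                 lambda x: np.maximum(x, 0.0),
                                 np.ones(2))
print(x_star, iters)
```

The back-tracking loop terminates whenever r ≠ 0, since ⟨F(x), r⟩ ≥ ‖r‖² > σ‖r‖² and F(x − γ^m r) → F(x) as m → ∞.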
First, we give a conclusion which addresses the feasibility of the stepsize rule (14), that is, the existence of the trial point y^k.

Lemma 10. If x^k is not a solution of problem (1), then there exists a smallest nonnegative integer satisfying (14) under Assumption 7.
Proof. By the definition of r(x^k, ξ^k) and Lemma 1, it follows that  Since γ ∈ (0, 1), we get lim  Combining this with the fact that  is lower semicontinuous, we know that there exists  Hence, by (18), one has  This completes the proof.

Now, for the sake of convenience, we define  (25)  Since x* ∈ C*, it follows that there exists  Combining this and the fact that F is pseudomonotone, one has  On the other hand,  By (27), we have  It is obvious that  So, by the definition of  , we obtain  and the proof is completed.

To prove the existence of the stepsize α_k in Step 2 of Algorithm 9, we first consider the following optimization problem:  which is necessary for the feasibility proof of α_k. By Lemma 4 and the definition of P_C(·), it follows that  Note that Φ_k(0) = 0 and  where the first inequality follows from (14). Then  if the maximal value exists. By Lemma 3, we know that Φ'_k(α) is nonincreasing and continuous for α ≥ 0. So, if Φ'_k(α) = 0 is solvable on (0, +∞), then its solution coincides with the solution of the optimization problem

max {Φ_k(α) | α ≥ 0}. (36)

Next, from a geometric point of view, we will show that the equation Φ'_k(α) = 0 is solvable on (0, +∞).
Lemma 12. If x^k is not a solution of problem (1), then the equation Φ'_k(α) = 0 is solvable for α ≥ 0 under Assumption 7.
Proof. For the sake of simplicity, we first define the halfspaces H_1^k and H_2^k as follows:  where   is the same as in (14). Since x^k ∉ C*, by the iterative process of Algorithm 9, one has  It is obvious that  By the fact that ⟨ ,  −  −  ⟩ is nonincreasing for α > 0, we have  for α >  . Now, let p be any point in H_1^k ∩ C and let q be any point in H_2^k ∩ C. In the triangle formed by the points p, q, and  , we denote the inner angles at p and q by θ_p and θ_q, respectively. By geometric considerations, if α >   is sufficiently large, we obtain  By the arbitrariness of q ∈ H_2^k ∩ C and the definition of the projection, there exists  On the other hand, by (39), it follows that  which implies that P_C( ) ∈ H_2^k. Then, by the continuity of the projection operator, there exists a point in (0,  ) at which Φ'_k vanishes, which means that the equation Φ'_k(α) = 0 is solvable, and the desired result follows.
In order to maintain consistency in the sequel, we denote the smallest positive solution of the equation Φ'_k(α) = 0 by α_k, and (15) holds. The desired result follows.
Since Φ'_k(α) > 0 for all α ∈ [0,  ), we know that (15) and (16) hold for any α_k ∈ [ ,  ), where x* is chosen from C*. Hence, the sequence {‖x^k − x*‖} is nonincreasing, and so {‖x^k − x*‖} is bounded. Then it follows that lim  from which we obtain lim  Since F is continuous with compact values, Proposition 3.11 in [18] implies that {F(x) : x ∈  } is a bounded set, and so is the sequence  By the iterative process of Algorithm 9, and since  we have lim  Without loss of generality, if lim_{k→∞}   ≠ 0, then by (60) one has lim  and the desired result can be obtained. On the other hand, suppose lim_{k→∞}   = 0. Since {‖ ‖} is bounded, it has a convergent subsequence, whose limit we denote by  ; that is, lim  Therefore,  Since F is continuous on C, for all  ∈ F( ) we know that there exist  and there exist   ∈ (

The study of the following results is in the spirit of the convergence rate results in [19, 20], which are based on error bounds. The research on error bounds is a large topic in mathematical programming. One can refer to the survey [21] for sufficient conditions ensuring the existence of error bounds and for the roles played by error bounds in the convergence analysis of iterative algorithms. We first give the definition of Lipschitz continuity for a multivalued mapping. The convergence rate of projection methods for the GVI has been considered by many researchers [20, 22], and the following assumption is needed.
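For the reader's convenience, the two notions invoked here, a projection-type error bound and R-linear convergence, are commonly stated as follows; these are the standard forms and may differ in minor detail from the exact Assumption and Definition used in the paper.

```latex
% Projection-type local error bound (standard form):
\exists\, c > 0,\ \delta > 0:\quad
\operatorname{dist}(x, C^{*}) \le c\,\lVert x - P_{C}(x - \xi)\rVert
\quad \text{whenever } x \in C,\ \xi \in F(x),\ \lVert x - P_{C}(x - \xi)\rVert \le \delta .

% R-linear convergence (standard form):
\{x^{k}\} \to x^{*} \ \text{R-linearly if } \exists\, M > 0,\ q \in (0,1):\quad
\lVert x^{k} - x^{*} \rVert \le M\, q^{k} \quad \forall k \ge 0 .
```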
and it is obvious that {x^k} converges R-linearly to x*.

Discussion
The proposed extragradient method for the GVI has good theoretical properties: it requires only two projections at each iteration and allows different stepsize rules. Moreover, from a geometric point of view, it is shown that the new method takes a long stepsize, which guarantees a large decrease in the distance from the next iterate to the solution set. However, the proposed algorithm is not easy to realize in practice, since the residue and the trial point are not easy to compute. This is an interesting topic for further research.