1. Introduction
Let Ω be a nonempty closed convex subset of a real Hilbert space H, and let f:Ω→Ω be a continuous mapping. The variational inequality problem, denoted by VI(f,Ω), is to find a vector x*∈Ω, such that
(1)〈x-x*,f(x*)〉≥0, ∀ x∈Ω.
Throughout the paper, let Ω* denote the solution set of VI(f,Ω), which is assumed to be nonempty. In the special case when Ω is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector x* such that
(2)x*≥0, f(x*)≥0, f(x*)Tx*=0.
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas; see [1–16] and the references therein. Many algorithms have been proposed to solve (1) that employ the projection onto the feasible set Ω of the variational inequality, or onto related sets, in order to iteratively reach a solution. Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set Ω with the intersection of two sets related to VI(f,Ω). In each iteration of their algorithm, the new vector xk+1 is calculated according to the following scheme. Given the current vector xk, compute r(xk)=xk-PΩ(xk-f(xk)); if r(xk)=0, stop; otherwise, compute
(3)zk=xk-ηkr(xk),
where δ,γ∈(0,1) are fixed parameters, ηk=γmk, and mk is the smallest nonnegative integer m satisfying
(4)〈f(xk-γmr(xk)),r(xk)〉≥δ∥r(xk)∥2,
and then compute
(5)xk+1=PΩ∩Hk(xk),
where Hk={x∈H∣〈x-zk,f(zk)〉≤0}.

On the other hand, Nadezhkina and Takahashi [11] obtained xk+1 by the following iterative formula:
(6)xk+1=αkxk+(1-αk)SPΩ(xk-λkf(xk)),
where {αk}k=0∞ is a sequence in (0,1), {λk}k=0∞ is a sequence of positive step sizes, and S:Ω→Ω is a nonexpansive mapping. Denoting the fixed-point set of S by F(S) and assuming F(S)∩Ω*≠∅, they proved that the sequence {xk}k=0∞ converges weakly to some x*∈F(S)∩Ω*.
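For intuition, iteration (6) is straightforward to simulate numerically. The sketch below is purely illustrative and not from [11]: Ω is taken to be the nonnegative orthant (so PΩ is a componentwise clamp), αk and λk are held constant, and S defaults to the identity mapping, which is nonexpansive.

```python
import numpy as np

def nt_iteration(f, x0, S=lambda x: x, alpha=0.5, lam=0.5,
                 tol=1e-10, max_iter=1000):
    # Iteration (6) with constant alpha_k = alpha and lambda_k = lam;
    # Omega is the nonnegative orthant, so P_Omega is a componentwise clamp.
    proj = lambda v: np.maximum(v, 0.0)
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x_new = alpha * x + (1 - alpha) * S(proj(x - lam * f(x)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy monotone mapping: f(x) = x - b with b >= 0, so VI(f, Omega) has x* = b
b = np.array([1.0, 2.0])
x_star = nt_iteration(lambda x: x - b, [5.0, -3.0])
```

On this toy monotone mapping the iterates approach x*=b, consistent with the weak convergence asserted above.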

Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence of the three sequences generated by our method.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.

2. Preliminary and Basic Results
Let H be a real Hilbert space with 〈x,y〉 denoting the inner product of the vectors x,y. Weak convergence and strong convergence of the sequence {xk}k=0∞ to a point x are denoted by xk⇀x and xk→x, respectively. The identity mapping from Ω to itself is denoted by I.

For a vector x∈H, the orthogonal projection of x onto Ω, denoted by PΩ(x), is defined as
(7)PΩ(x):=arg min{∥y-x∥∣y∈Ω}.
The following lemma states some well-known properties of the orthogonal projection operator.

Lemma 1.
One has
(8)(1) 〈x-PΩ(x),PΩ(x)-y〉≥0, ∀ x∈H, ∀ y∈Ω;
(9)(2) ∥PΩ(x)-PΩ(y)∥≤∥x-y∥, ∀ x,y∈H;
(10)(3) ∥PΩ(x)-y∥2≤∥x-y∥2-∥x-PΩ(x)∥2, ∀ x∈H, ∀ y∈Ω;
(11)(4) ∥PΩ(x)-PΩ(y)∥2≤∥x-y∥2-∥PΩ(x)-x+y-PΩ(y)∥2, ∀ x,y∈H.
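For Ω the nonnegative orthant (so that PΩ is a componentwise clamp), these four inequalities can be spot-checked numerically; the sketch below is only an illustration of the lemma, with small tolerances added for floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = lambda v: np.maximum(v, 0.0)   # P_Omega for the nonnegative orthant

for _ in range(1000):
    x, u = rng.normal(size=3), rng.normal(size=3)
    y = proj(u)                        # an arbitrary point of Omega
    px, pu = proj(x), proj(u)
    # (8): obtuse-angle characterization of the projection
    assert np.dot(x - px, px - y) >= -1e-12
    # (9): P_Omega is nonexpansive
    assert np.linalg.norm(px - pu) <= np.linalg.norm(x - u) + 1e-12
    # (10): quantitative version against a point of Omega
    assert (np.linalg.norm(px - y)**2
            <= np.linalg.norm(x - y)**2 - np.linalg.norm(x - px)**2 + 1e-9)
    # (11): firm nonexpansiveness
    assert (np.linalg.norm(px - pu)**2
            <= np.linalg.norm(x - u)**2
               - np.linalg.norm(px - x + u - pu)**2 + 1e-9)
```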

A mapping f is called monotone if
(12)〈x-y,f(x)-f(y)〉≥0, ∀ x,y∈Ω.
A mapping f is called Lipschitz continuous if there exists a constant L≥0 such that
(13)∥f(x)-f(y)∥≤L∥x-y∥, ∀ x,y∈Ω.
The graph of f, denoted by G(f), is defined by
(14)G(f):={(x,y)∈Ω×Ω∣y=f(x)}.
A mapping S:Ω→Ω is called nonexpansive if
(15)∥S(x)-S(y)∥≤∥x-y∥, ∀ x,y∈Ω,
and the fixed point set of a mapping S, denoted by F(S), is defined by
(16)F(S):={x∈Ω∣S(x)=x}.
We denote the normal cone of Ω at v∈Ω by
(17)NΩ(v):={w∈H∣〈v-u,w〉≥0, ∀ u∈Ω},
and define the set-valued operator T by
(18)T(v):={f(v)+NΩ(v) if v∈Ω, ∅ if v∉Ω.
Then T is maximal monotone, and it is well known that 0∈T(v) if and only if v∈Ω*. For more details, see, for example, [9] and the references therein. The following lemma, established in Hilbert space, is well known as the Opial condition.

Lemma 2.
For any sequence {xk}k=0∞⊂H that converges weakly to x(xk⇀x), one has
(19)liminfk→∞∥xk-x∥<liminfk→∞∥xk-y∥, ∀y≠x.

The next lemma is proposed in [10].

Lemma 3 (Demiclosedness principle).
Let Ω be a closed, convex subset of a real Hilbert space H, and let S:Ω→H be a nonexpansive mapping. Then I-S is demiclosed at y∈H; that is, for any sequence {xk}k=0∞⊂Ω, such that xk⇀x~, x~∈Ω and (I-S)xk→y, one has (I-S)x~=y.

3. An Algorithm and Its Convergence Analysis
In this section, we present our algorithm and then discuss its convergence. First, we need the following definition.

Definition 4.
For a vector x∈Ω, the projected residual function is defined as
(20)r(x):=x-PΩ(x-f(x)).
Obviously, x∈Ω* if and only if r(x)=0. We now describe our algorithm.

Algorithm A.
Step 0. Take δ∈(0,1), γ∈(0,1), x0∈Ω, and k=0.

Step 1. For the current iterative point xk∈Ω, compute
(21)yk=PΩ(xk-f(xk)),(22)zk=(1-ηk)xk+ηkyk,
where ηk=γnk and nk is the smallest nonnegative integer n satisfying
(23)〈f(xk-γnr(xk)),r(xk)〉≥δ∥r(xk)∥2.
Compute
(24)tk=PHk∩Ω(xk),(25)xk+1=αkxk+(1-αk)Stk,
where {αk}⊂(a,b) (a,b∈(0,1)), Hk={x∈Ω∣〈x-zk,f(zk)〉≤0}, and S:Ω→Ω is a nonexpansive mapping.

Step 2. If ∥r(xk+1)∥=0, stop; otherwise, set k:=k+1 and go to Step 1.

Remark 5.
The iterative point tk in Algorithm A is well defined according to [3] and can be interpreted as follows: if (23) is well defined, then tk can be obtained by the following two-step scheme: compute
(26)x̄k=PHk(xk), tk=PHk∩Ω(x̄k).
For more details, see [3, 4].
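A compact numerical sketch of Algorithm A is given below. It is an illustration, not the authors' implementation: Ω is taken as the nonnegative orthant, S defaults to the identity, αk is held constant, and the exact projection onto Hk∩Ω in (24) is approximated by Dykstra's alternating projections; all of these choices are assumptions made for the demo.

```python
import numpy as np

def proj_orthant(x):
    # P_Omega for Omega = the nonnegative orthant (an illustrative choice)
    return np.maximum(x, 0.0)

def proj_halfspace(x, g, z):
    # projection onto the half-space Hk = {x : <x - z, g> <= 0}
    viol = np.dot(x - z, g)
    return x if viol <= 0 else x - (viol / np.dot(g, g)) * g

def proj_intersection(x, g, z, iters=100):
    # Dykstra's algorithm as a numerical stand-in for P_{Hk ∩ Omega} in (24)
    y, p, q = x.copy(), np.zeros_like(x), np.zeros_like(x)
    for _ in range(iters):
        u = proj_halfspace(y + p, g, z)
        p = y + p - u
        y = proj_orthant(u + q)
        q = u + q - y
    return y

def algorithm_A(f, x0, S=lambda x: x, delta=0.5, gamma=0.5, alpha=0.5,
                tol=1e-8, max_iter=500):
    # Sketch of Algorithm A with constant alpha_k and S = identity by default
    x = proj_orthant(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        y = proj_orthant(x - f(x))                # (21)
        r = x - y                                 # projected residual (20)
        if np.linalg.norm(r) < tol:
            break
        eta = 1.0                                 # eta_k = gamma**n_k
        while np.dot(f(x - eta * r), r) < delta * np.dot(r, r):
            eta *= gamma                          # line search (23)
        z = (1.0 - eta) * x + eta * y             # (22)
        t = proj_intersection(x, f(z), z)         # t_k ≈ P_{Hk∩Omega}(x_k), (24)
        x = alpha * x + (1.0 - alpha) * S(t)      # (25)
    return x

# toy problem: f(x) = x - b with b >= 0, so x* = b solves VI(f, Omega)
b = np.array([1.0, 2.0])
x_sol = algorithm_A(lambda v: v - b, [5.0, 5.0])
```

On this toy problem the iterates converge to b and the residual (20) vanishes, in line with Theorem 9.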

Now we investigate the weak convergence of our algorithm. First, we recall the following result, which was proposed by Schu [17].

Lemma 6.
Let H be a real Hilbert space, let {αk}k=0∞⊂(a,b) (a,b∈(0,1)) be a sequence of real numbers, and let {vk}k=0∞⊂H and {wk}k=0∞⊂H be such that
(27)limsupk→∞∥vk∥≤c,limsupk→∞∥wk∥≤c,limsupk→∞∥αkvk+(1-αk)wk∥=c,
for some c≥0. Then one has
(28)limk→∞∥vk-wk∥=0.

The following theorem is crucial in proving the boundedness of the sequence {xk}k=0∞.

Theorem 7.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, suppose F(S)∩Ω*≠∅, and let x*∈Ω*. Then for any sequences {xk}k=0∞ and {tk}k=0∞ generated by Algorithm A, one has
(29)∥tk-x*∥2≤∥xk-x*∥2-∥xk-tk∥2+(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈zk-tk,f(zk)〉.

Proof.
Setting x=x̄k and y=x* in inequality (10) of Lemma 1, we obtain
(30)∥PHk∩Ω(x̄k)-x*∥2≤∥x̄k-x*∥2-∥x̄k-PHk∩Ω(x̄k)∥2,
that is,
(31)∥tk-x*∥2≤∥x̄k-x*∥2-∥x̄k-tk∥2.
From (20)–(23) in Algorithm A, we get 〈xk-zk,f(zk)〉>0, which means xk∉Hk. So, by the definition of the projection operator and [3], we obtain
(32)x̄k=PHk(xk)=xk-(〈xk-zk,f(zk)〉/∥f(zk)∥2)f(zk)=xk-(ηk〈r(xk),f(zk)〉/∥f(zk)∥2)f(zk).
Substituting (32) into (31), we have
(33)∥tk-x*∥2≤∥xk-(ηk〈r(xk),f(zk)〉/∥f(zk)∥2)f(zk)-x*∥2-∥xk-(ηk〈r(xk),f(zk)〉/∥f(zk)∥2)f(zk)-tk∥2=∥xk-x*∥2-∥xk-tk∥2-(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈tk-x*,f(zk)〉=∥xk-x*∥2-∥xk-tk∥2+(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈zk-tk,f(zk)〉+(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈x*-zk,f(zk)〉.
Since f is monotone, 〈zk-x*,f(zk)-f(x*)〉≥0; combining this with (1), we obtain
(34)〈zk-x*,f(zk)〉≥0.
Thus
(35)∥tk-x*∥2≤∥xk-x*∥2-∥xk-tk∥2+(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈zk-tk,f(zk)〉,
which completes the proof.

Theorem 8.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, suppose F(S)∩Ω*≠∅, and let x*∈F(S)∩Ω*. Then for any sequences {xk}k=0∞, {yk}k=0∞, and {tk}k=0∞ generated by Algorithm A, one has
(36)∥xk+1-x*∥2≤∥xk-x*∥2-(1-αk)(ηkδ/∥f(zk)∥)2∥xk-yk∥4,
∥xk+1-x*∥2≤∥xk-x*∥2-((1-αk)/2)∥xk-tk∥2.
Furthermore,
(37)limk→∞∥xk-yk∥=0,limk→∞∥xk-tk∥=0,limk→∞∥yk-tk∥=0.

Proof.
Using (22), we have
(38)(ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈zk-xk,f(zk)〉=(ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈-ηkr(xk),f(zk)〉=-(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2.
By the Cauchy-Schwarz inequality,
(39)(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈xk-tk,f(zk)〉≤(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2+∥xk-tk∥2.
Hence, by (23)
(40)∥tk-x*∥2≤∥xk-x*∥2-∥xk-tk∥2-(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2+∥xk-tk∥2=∥xk-x*∥2-(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2≤∥xk-x*∥2-(ηkδ/∥f(zk)∥)2∥xk-yk∥4.
Then we have
(41)∥xk+1-x*∥2=∥αkxk+(1-αk)Stk-x*∥2=∥αk(xk-x*)+(1-αk)(Stk-x*)∥2≤αk∥xk-x*∥2+(1-αk)∥tk-x*∥2≤αk∥xk-x*∥2+(1-αk)∥xk-x*∥2-(1-αk)(ηkδ/∥f(zk)∥)2∥xk-yk∥4=∥xk-x*∥2-(1-αk)(ηkδ/∥f(zk)∥)2∥xk-yk∥4≤∥xk-x*∥2,
where the first inequality follows from the convexity of ∥·∥2 and the fact that S is nonexpansive with Sx*=x*.

This means that {xk}k=0∞ is bounded, and so is {zk}k=0∞. Since f is Lipschitz continuous and {zk}k=0∞ is bounded, there exists a constant M>0 such that ∥f(zk)∥≤M for all k, and hence
(42)∥xk+1-x*∥2≤∥xk-x*∥2-(1-αk)(δ/M)2ηk2∥xk-yk∥4.
So we know that there exists ξ≥0 such that limk→∞∥xk-x*∥=ξ, and hence
(43)limk→∞ηk∥xk-yk∥=0,
which implies that limk→∞∥xk-yk∥=0 or limk→∞ηk=0.

If limk→∞∥xk-yk∥=0, we get the conclusion.

If limk→∞ηk=0, then for all sufficiently large k the line-search inequality (23) in Algorithm A is not satisfied for nk-1; that is, there exists k0 such that, for all k≥k0,
(44)〈f(xk-γ-1ηkr(xk)),r(xk)〉<δ∥r(xk)∥2.
Applying (8) by setting x=xk-f(xk),y=xk leads to
(45)〈xk-f(xk)-PΩ(xk-f(xk)),PΩ(xk-f(xk))-xk〉≥0.
Therefore
(46)〈f(xk),r(xk)〉≥∥r(xk)∥2 as r(xk)=xk-PΩ(xk-f(xk)).
Passing to the limit in (44) and (46) (using that ηk→0 and f is Lipschitz continuous), we get δlimsupk→∞∥xk-yk∥2≥limsupk→∞∥xk-yk∥2; since δ∈(0,1), we obtain limk→∞∥xk-yk∥=0.

On the other hand, using Cauchy-Schwarz inequality again, we have
(47)(2ηk〈r(xk),f(zk)〉/∥f(zk)∥2)〈xk-tk,f(zk)〉≤2(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2+(1/2)∥xk-tk∥2.
Therefore,
(48)∥tk-x*∥2≤∥xk-x*∥2-∥xk-tk∥2-2(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2+2(ηk〈r(xk),f(zk)〉/∥f(zk)∥)2+(1/2)∥xk-tk∥2≤∥xk-x*∥2-(1/2)∥xk-tk∥2.
Then we have
(49)∥xk+1-x*∥2=∥αkxk+(1-αk)Stk-x*∥2≤αk∥xk-x*∥2+(1-αk)∥tk-x*∥2≤αk∥xk-x*∥2+(1-αk)∥xk-x*∥2-((1-αk)/2)∥xk-tk∥2=∥xk-x*∥2-((1-αk)/2)∥xk-tk∥2.
Noting that αk∈(a,b) (a,b∈(0,1)), it easily follows that 0<(1-b)/2<(1-αk)/2<(1-a)/2<1, which implies that
(50)limk→∞∥xk-tk∥=0.
By the triangle inequality, we have
(51)∥yk-tk∥=∥yk-xk+xk-tk∥≤∥yk-xk∥+∥xk-tk∥.
Passing onto the limit in (51), we conclude
(52)limk→∞∥yk-tk∥=0.
The proof is complete.

Theorem 9.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and F(S) ∩ Ω*≠∅. Then the sequences {xk}k=0∞, {yk}k=0∞, {tk}k=0∞ generated by Algorithm A converge weakly to the same point x*∈F(S)∩Ω*, where x*=limk→∞PF(S)∩Ω*(xk).

Proof.
By Theorem 8, we know that {xk}k=0∞ is bounded, so there exists a subsequence {xki}i=0∞ of {xk}k=0∞ that converges weakly to some point x*. We now show that x*∈F(S)∩Ω*.

First, we show that x*∈F(S).

Let x′∈F(S)∩Ω*. Since S is a nonexpansive mapping and Sx′=x′, from (48) we have
(53)∥Stk-x′∥≤∥tk-x′∥≤∥xk-x′∥.
Taking the limit superior in (53), where ξ:=limk→∞∥xk-x′∥ exists by (41), we obtain
(54)limsupk→∞∥Stk-x′∥≤ξ.
Then by (25) we have
(55)limk→∞∥αk(xk-x′)+(1-αk)(Stk-x′)∥ =limk→∞∥xk+1-x′∥=ξ.
From Lemma 6, it follows that
(56)limk→∞∥Stk-xk∥=0.
By the triangle inequality, we have
(57)∥Sxk-xk∥≤∥Sxk-Stk∥+∥Stk-xk∥≤∥xk-tk∥+∥Stk-xk∥,
and then passing onto the limit in (57), we deduce that
(58)limk→∞∥Sxk-xk∥=0,
which implies that x*∈F(S), by applying Lemma 3 to the subsequence {xki}i=0∞.

Second, we show that x*∈Ω*.

Since xki⇀x*, from (37) in Theorem 8 it follows that tki⇀x* and yki⇀x*.

Letting (v,u)∈G(T), we have
(59)u∈T(v)=f(v)+NΩ(v), u-f(v)∈NΩ(v),
thus,
(60)〈v-w,u-f(v)〉≥0, ∀ w∈Ω.
Applying (8) by letting x=xk-f(xk),y=v, we have
(61)〈xk-f(xk)-PΩ(xk-f(xk)),PΩ(xk-f(xk))-v〉≥0,
that is,
(62)〈xk-f(xk)-yk,yk-v〉≥0.
Note that u-f(v)∈NΩ(v) and tki∈Ω; then
(63)〈v-tki,u〉≥〈v-tki,f(v)〉≥〈v-tki,f(v)〉-〈xki-f(xki)-yki,yki-v〉=〈v-tki,f(v)-f(tki)〉+〈v-tki,f(tki)〉-〈v-yki,yki-xki〉-〈v-yki,f(xki)〉≥〈v-tki,f(tki)〉-〈v-yki,yki-xki〉-〈v-yki,f(xki)〉,
where the last inequality follows from the monotonicity of f.

Since f is Lipschitz continuous, by (37) we have
(64)limi→∞(f(xki)-f(yki))=0, limi→∞(xki-tki)=0.
Passing onto the limit in (63), we obtain
(65)〈v-x*,u〉≥0.
As T is maximal monotone and (65) holds for every (v,u)∈G(T), we get 0∈T(x*), that is, x*∈T-1(0), which implies that x*∈Ω*.

Finally, we show that such an x* is unique.

Let {xkj}j=0∞ be another subsequence of {xk}k=0∞ such that xkj⇀xΔ*. Then we conclude that xΔ*∈F(S)∩Ω*. Suppose xΔ*≠x*; by Lemma 2 we have
(66)limk→∞∥xk-x*∥=liminfi→∞∥xki-x*∥<liminfi→∞∥xki-xΔ*∥=limk→∞∥xk-xΔ*∥=liminfj→∞∥xkj-xΔ*∥<liminfj→∞∥xkj-x*∥=limk→∞∥xk-x*∥,
which implies that limk→∞∥xk-x*∥<limk→∞∥xk-x*∥, a contradiction. Thus x*=xΔ*, and the proof is complete.

4. Further Study
In this section, we propose an extension of Algorithm A that is effective in practice. As in Section 3, for a constant τ>0 we define a scaled projected residual function as follows:
(67)r(x,τ):=x-PΩ(x-τf(x)).
It is clear that the projected residual function (67) reduces to (20) by setting τ=1.

Algorithm B.
Step 0. Take δ∈(0,1), γ∈(0,1), η-1>0, θ>1, x0∈Ω, and k=0.

Step 1. For the current iterative point xk∈Ω, compute
(68)yk=PΩ(xk-τkf(xk)),zk=(1-ηk)xk+ηkyk,
where τk=min{θηk-1,1}, ηk=γnkτk, and nk is the smallest nonnegative integer n satisfying
(69)〈f(xk-γnτkr(xk,τk)),r(xk,τk)〉≥δτk ∥r(xk,τk)∥2.
Compute
(70)tk=PHk∩Ω(xk),xk+1=αkxk+(1-αk)Stk,
where {αk}⊂(a,b) (a,b∈(0,1)) and Hk={x∈Ω∣〈x-zk,f(zk)〉≤0}.

Step 2. If ∥r(xk+1,τk)∥=0, stop; otherwise, set k:=k+1 and go to Step 1.
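The only new ingredient relative to Algorithm A is the coupled update of τk and ηk. Below is a minimal sketch of this step-size rule (an illustration only; proj stands for PΩ, and the literal form δτk∥r(xk,τk)∥2 of (69) is kept as written above):

```python
import numpy as np

def step_sizes_B(f, proj, x, eta_prev, theta=2.0, gamma=0.5, delta=0.5):
    # tau_k = min{theta * eta_{k-1}, 1}, as in Step 1 of Algorithm B
    tau = min(theta * eta_prev, 1.0)
    r = x - proj(x - tau * f(x))            # scaled residual (67)
    n = 0
    # line search (69): smallest nonnegative integer n with the inequality
    while np.dot(f(x - gamma**n * tau * r), r) < delta * tau * np.dot(r, r):
        n += 1
    return tau, gamma**n * tau              # (tau_k, eta_k)

b = np.array([1.0, 2.0])
f = lambda v: v - b                         # a monotone, 1-Lipschitz mapping
proj = lambda v: np.maximum(v, 0.0)         # Omega = nonnegative orthant
tau_k, eta_k = step_sizes_B(f, proj, np.array([5.0, 0.0]), eta_prev=0.3)
```

Because τk is allowed to grow by the factor θ>1 between iterations (capped at 1), the step size can recover after a sequence of small line-search steps, which is the practical advantage claimed for Algorithm B.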

In the rest of this section, we discuss the weak convergence of Algorithm B.

Lemma 10.
For any τ>0, one has
(71)x* is a solution of VI(f,Ω)⟺x*=PΩ(x*-τf(x*)).

Therefore, solving the variational inequality is equivalent to finding a zero of the projected residual function r(·,τ). Moreover, r(x,τ) is a continuous function of x, since the projection mapping is nonexpansive.

Lemma 11.
For any x∈H and τ1≥τ2>0, it holds that
(72)∥r(x,τ1)∥≥∥r(x,τ2)∥.
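Lemma 11 is easy to spot-check numerically; the sketch below uses the nonnegative orthant and an affine monotone mapping as illustrative assumptions, with a small tolerance for floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(2)
proj = lambda v: np.maximum(v, 0.0)          # Omega = nonnegative orthant
b = rng.normal(size=4)
f = lambda x: x - b                          # an affine monotone mapping
res = lambda x, tau: np.linalg.norm(x - proj(x - tau * f(x)))

# Lemma 11: tau1 >= tau2 > 0 implies ||r(x, tau1)|| >= ||r(x, tau2)||
for _ in range(1000):
    x = proj(rng.normal(size=4))
    t2, t1 = sorted(rng.uniform(0.01, 5.0, size=2))   # t1 >= t2
    assert res(x, t1) >= res(x, t2) - 1e-12
```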

Theorem 12.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, suppose F(S)∩Ω*≠∅, and let x*∈Ω*. Then for any sequences {xk}k=0∞ and {tk}k=0∞ generated by Algorithm B, one has
(73)∥tk-x*∥2≤∥xk-x*∥2-∥xk-tk∥2+(2ηk〈r(xk,τk),f(zk)〉/∥f(zk)∥2)〈zk-tk,f(zk)〉.

Proof.
The proof is similar to that of Theorem 7, so we omit it.

Theorem 13.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, suppose F(S)∩Ω*≠∅, and let x*∈F(S)∩Ω*. Then for any sequences {xk}k=0∞, {yk}k=0∞, and {tk}k=0∞ generated by Algorithm B, one has
(74)∥xk+1-x*∥2≤∥xk-x*∥2-((1-αk)/2)∥xk-tk∥2.
Furthermore,
(75)limk→∞∥xk-yk∥=0,limk→∞∥xk-tk∥=0,limk→∞∥yk-tk∥=0.

Proof.
The proof is similar to that of Theorem 8; the only difference is that (44) is replaced by
(76)〈f(xk-γ-1ηkr(xk,τk)),r(xk,τk)〉<δτk ∥r(xk,τk)∥2,
where (76) follows from Lemma 11 with τ1=1 and τ2=τk.

Theorem 14.
Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and F(S)∩Ω*≠∅. Then the sequences {xk}k=0∞, {yk}k=0∞, {tk}k=0∞ generated by Algorithm B converge weakly to the same point x*∈F(S)∩Ω*, where x*=limk→∞PF(S)∩Ω*(xk).