Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 531912
doi:10.1155/2013/531912

Research Article

An Extension of Subgradient Method for Variational Inequality Problems in Hilbert Space

Xueyong Wang, Shengjie Li, and Xipeng Kou

College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China

Academic Editor: Guanglu Zhou

Received 26 October 2012; Revised 30 January 2013; Accepted 1 February 2013; Published 12 March 2013

Copyright © 2013 Xueyong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which combines the current iterate with a fixed point of a nonexpansive mapping, is generated. A weak convergence theorem is obtained for the three sequences generated by the iterative process under some mild conditions.

1. Introduction

Let Ω be a nonempty closed convex subset of a real Hilbert space H, and let f: Ω → H be a continuous mapping. The variational inequality problem, denoted by VI(f, Ω), is to find a vector x* ∈ Ω such that
(1) \langle x - x^*, f(x^*) \rangle \ge 0, \quad \forall x \in \Omega.
Throughout the paper, let Ω* be the solution set of VI(f, Ω), which is assumed to be nonempty. In the special case when Ω is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector x* such that
(2) x^* \ge 0, \quad f(x^*) \ge 0, \quad f(x^*)^T x^* = 0.
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1] and the references therein. Many algorithms that employ the projection onto the feasible set Ω of the variational inequality, or onto some related sets, in order to iteratively reach a solution have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set Ω with the intersection of two sets related to VI(f, Ω). In each iteration of their algorithm, the new vector x^{k+1} is calculated according to the following scheme. Given the current vector x^k, compute r(x^k) = x^k − P_Ω(x^k − f(x^k)); if r(x^k) = 0, stop; otherwise, compute
(3) z^k = x^k - \eta_k r(x^k),
where η_k = γ^{m_k} for some γ ∈ (0, 1) and m_k is the smallest nonnegative integer m satisfying
(4) \langle f(x^k - \gamma^m r(x^k)), r(x^k) \rangle \ge \delta \| r(x^k) \|^2,
and then compute
(5) x^{k+1} = P_{\Omega \cap H_k}(x^k),
where H_k = {x ∈ R^n : ⟨x − z^k, f(z^k)⟩ ≤ 0}.
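The Solodov–Svaiter step is easy to exercise numerically. The sketch below is a toy illustration of our own (the box Ω = [0,1]², the affine monotone operator f(x) = Ax + b, and all parameter values are assumptions, not data from the paper); it computes the residual r(x^k), the Armijo step size of (4), and the auxiliary point z^k of (3):

```python
import numpy as np

# Toy monotone VI on the box Omega = [0,1]^2 with the affine operator
# f(x) = A x + b; A is symmetric positive definite, so f is monotone.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b
proj = lambda x: np.clip(x, 0.0, 1.0)          # P_Omega for a box is a clip

def residual(x):
    """r(x) = x - P_Omega(x - f(x)); r(x) = 0 iff x solves VI(f, Omega)."""
    return x - proj(x - f(x))

def armijo_eta(x, delta=0.5, gamma=0.5, max_m=60):
    """Step size eta = gamma**m with the smallest m satisfying (4)."""
    r = residual(x)
    for m in range(max_m):
        if f(x - gamma**m * r) @ r >= delta * (r @ r):
            return gamma**m
    raise RuntimeError("Armijo search failed")

x = np.array([1.0, 1.0])
eta = armijo_eta(x)
z = x - eta * residual(x)                      # the auxiliary point z^k of (3)
```

For this data the solution of VI(f, Ω) is x* = (1/3, 1/3) (the unconstrained zero of f, which lies inside the box), and the residual vanishes there.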

On the other hand, Nadezhkina and Takahashi [11] obtained x^{k+1} by the following iterative formula:
(6) x^{k+1} = \alpha_k x^k + (1 - \alpha_k) S P_\Omega(x^k - \lambda_k f(x^k)),
where {α_k} is a sequence in (0, 1), {λ_k} is a sequence of positive step sizes, and S: Ω → Ω is a nonexpansive mapping. Denoting the fixed point set of S by F(S) and assuming F(S) ∩ Ω* ≠ ∅, they proved that the sequence {x^k} converges weakly to some x* ∈ F(S) ∩ Ω*.
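With S taken to be the identity mapping (which is nonexpansive with F(S) = Ω, so F(S) ∩ Ω* = Ω*) and a constant step λ, the iteration (6) reduces to a damped projected-gradient scheme. The toy problem below is our own illustration under those assumptions, not an experiment from the paper:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b                 # monotone (A is positive definite)
P = lambda x: np.clip(x, 0.0, 1.0)      # projection onto Omega = [0,1]^2
S = lambda x: x                         # identity: nonexpansive, F(S) = Omega

x = np.array([1.0, 0.0])
alpha, lam = 0.5, 0.3                   # constant alpha_k and lambda_k (assumed)
for _ in range(200):
    x = alpha * x + (1 - alpha) * S(P(x - lam * f(x)))
# x approaches x* = (1/3, 1/3), the unique solution of VI(f, Omega)
```

The fixed points of this averaged map are exactly the points with x = P_Ω(x − λf(x)), i.e., the solutions of the variational inequality.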

Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence properties of the three sequences generated by our method.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.

2. Preliminaries and Basic Results

Let H be a real Hilbert space, with ⟨x, y⟩ denoting the inner product of the vectors x, y and ‖·‖ the induced norm. Weak convergence and strong convergence of a sequence {x^k} to a point x are denoted by x^k ⇀ x and x^k → x, respectively. The identity mapping from Ω to itself is denoted by I.

For a vector x ∈ H, the orthogonal projection of x onto Ω, denoted by P_Ω(x), is defined as
(7) P_\Omega(x) := \arg\min \{ \| y - x \| : y \in \Omega \}.
The following lemma states some well-known properties of the orthogonal projection operator.

Lemma 1.

One has:
(1) \langle x - P_\Omega(x), P_\Omega(x) - y \rangle \ge 0, \quad \forall x \in H, \ y \in \Omega; (8)
(2) \| P_\Omega(x) - P_\Omega(y) \| \le \| x - y \|, \quad \forall x, y \in H; (9)
(3) \| P_\Omega(x) - y \|^2 \le \| x - y \|^2 - \| x - P_\Omega(x) \|^2, \quad \forall x \in H, \ y \in \Omega; (10)
(4) \| P_\Omega(x) - P_\Omega(y) \|^2 \le \| x - y \|^2 - \| P_\Omega(x) - x + y - P_\Omega(y) \|^2, \quad \forall x, y \in H. (11)
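The properties of Lemma 1 can be checked numerically for a concrete Ω. The snippet below is our own illustration: Ω is the box [0,1]³, for which P_Ω is a componentwise clip, and (8)–(10) are verified at sampled points:

```python
import numpy as np

rng = np.random.default_rng(0)
P = lambda v: np.clip(v, 0.0, 1.0)        # projection onto Omega = [0,1]^3

x = rng.normal(size=3) * 3                # an arbitrary point of H = R^3
y = P(rng.normal(size=3) * 3)             # some point of Omega

# (8): <x - P(x), P(x) - y> >= 0 for every y in Omega
assert (x - P(x)) @ (P(x) - y) >= 0.0
# (9): the projection is nonexpansive (here with P(y) = y since y is in Omega)
assert np.linalg.norm(P(x) - P(y)) <= np.linalg.norm(x - y)
# (10): ||P(x) - y||^2 <= ||x - y||^2 - ||x - P(x)||^2
assert (np.linalg.norm(P(x) - y) ** 2
        <= np.linalg.norm(x - y) ** 2 - np.linalg.norm(x - P(x)) ** 2 + 1e-9)
```

These inequalities hold for every closed convex Ω; the box is chosen only because its projection has a one-line closed form.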

A mapping f is called monotone if
(12) \langle x - y, f(x) - f(y) \rangle \ge 0, \quad \forall x, y \in \Omega.
A mapping f is called Lipschitz continuous if there exists an L > 0 such that
(13) \| f(x) - f(y) \| \le L \| x - y \|, \quad \forall x, y \in \Omega.
The graph of f, denoted by G(f), is defined by
(14) G(f) := \{ (x, y) \in \Omega \times \Omega : y = f(x) \}.
A mapping S: Ω → Ω is called nonexpansive if
(15) \| S(x) - S(y) \| \le \| x - y \|, \quad \forall x, y \in \Omega,
and the fixed point set of a mapping S, denoted by F(S), is defined by
(16) F(S) := \{ x \in \Omega : S(x) = x \}.
We denote the normal cone of Ω at v ∈ Ω by
(17) N_\Omega(v) := \{ w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in \Omega \},
and define the operator T by
(18) T(v) := \begin{cases} f(v) + N_\Omega(v), & v \in \Omega, \\ \emptyset, & v \notin \Omega. \end{cases}
Then T is maximal monotone. It is well known that 0 ∈ T(v) if and only if v ∈ Ω*; see, for example, [9] and the references therein. The following lemma, established in Hilbert space, is well known as the Opial condition.

Lemma 2.

For any sequence {x^k} ⊂ H that converges weakly to x (x^k ⇀ x), one has
(19) \liminf_{k \to \infty} \| x^k - x \| < \liminf_{k \to \infty} \| x^k - y \|, \quad \forall y \ne x.

The next lemma was proposed in [10].

Lemma 3 (Demiclosedness principle).

Let Ω be a closed, convex subset of a real Hilbert space H, and let S: Ω → H be a nonexpansive mapping. Then I − S is demiclosed at y ∈ H; that is, for any sequence {x^k} ⊂ Ω such that x^k ⇀ x̃ with x̃ ∈ Ω and (I − S)x^k → y, one has (I − S)x̃ = y.

3. An Algorithm and Its Convergence Analysis

In this section, we give our algorithm, and then discuss its convergence. First, we need the following definition.

Definition 4.

For a vector x ∈ Ω, the projected residual function is defined as
(20) r(x) := x - P_\Omega(x - f(x)).
Obviously, x ∈ Ω* if and only if r(x) = 0. Now we describe our algorithm.
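The equivalence "x ∈ Ω* if and only if r(x) = 0" can be observed on a toy instance of our own choosing (affine monotone f on a box; not data from the paper):

```python
import numpy as np

# For f(x) = A x + b with A positive definite and Omega = [0,1]^2, the
# solution of VI(f, Omega) is x* = A^{-1}(-b) whenever that point lies
# inside the box.
A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b
P = lambda x: np.clip(x, 0.0, 1.0)
r = lambda x: x - P(x - f(x))            # projected residual of (20)

x_star = np.linalg.solve(A, -b)          # = (1/3, 1/3), interior solution
assert np.linalg.norm(r(x_star)) < 1e-12          # r vanishes at the solution
assert np.linalg.norm(r(np.array([1.0, 1.0]))) > 0.1   # nonzero elsewhere
```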

Algorithm A.

Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), x^0 ∈ Ω, and set k = 0.

Step 1. For the current iterate x^k ∈ Ω, compute
(21) y^k = P_\Omega(x^k - f(x^k)),
(22) z^k = (1 - \eta_k) x^k + \eta_k y^k,
where η_k = γ^{n_k} and n_k is the smallest nonnegative integer n satisfying
(23) \langle f(x^k - \gamma^n r(x^k)), r(x^k) \rangle \ge \delta \| r(x^k) \|^2.
Compute
(24) t^k = P_{H_k \cap \Omega}(x^k),
(25) x^{k+1} = \alpha_k x^k + (1 - \alpha_k) S t^k,
where {α_k} ⊂ (a, b) with a, b ∈ (0, 1), H_k = {x ∈ Ω : ⟨x − z^k, f(z^k)⟩ ≤ 0}, and S: Ω → Ω is a nonexpansive mapping.

Step 2. If r(x^{k+1}) = 0, stop; otherwise, set k := k + 1 and go to Step 1.
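Algorithm A can be sketched end to end on a toy problem. Everything concrete below is an assumption of ours: Ω is the box [0,1]², f(x) = Ax + b is affine and monotone, S is the identity (so F(S) = Ω), and the projection onto H_k ∩ Ω is approximated by Dykstra's alternating-projection scheme rather than computed exactly:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b                    # monotone, Lipschitz operator
P_box = lambda x: np.clip(x, 0.0, 1.0)     # P_Omega for Omega = [0,1]^2

def proj_halfspace(x, z, g):
    """Projection onto H = {w : <w - z, g> <= 0}."""
    viol = (x - z) @ g
    return x - viol / (g @ g) * g if viol > 0 else x

def proj_intersection(x, z, g, iters=50):
    """Dykstra's algorithm approximating P_{H ∩ Omega}(x)."""
    p = np.zeros_like(x); q = np.zeros_like(x); y = x
    for _ in range(iters):
        w = proj_halfspace(y + p, z, g); p = y + p - w
        y = P_box(w + q);                q = w + q - y
    return y

def algorithm_A(x, delta=0.5, gamma=0.5, alpha=0.5, max_iter=300):
    for _ in range(max_iter):
        r = x - P_box(x - f(x))            # projected residual (20)
        if np.linalg.norm(r) < 1e-10:      # Step 2 stopping test
            break
        m = 0                              # Armijo rule (23)
        while m < 60 and f(x - gamma**m * r) @ r < delta * (r @ r):
            m += 1
        eta = gamma**m
        z = x - eta * r                    # z^k of (22)
        t = proj_intersection(x, z, f(z))  # t^k = P_{H_k ∩ Omega}(x^k)
        x = alpha * x + (1 - alpha) * t    # (25) with S = identity
    return x

x_out = algorithm_A(np.array([1.0, 1.0]))  # converges to x* = (1/3, 1/3)
```

Dykstra's scheme is used because the projection onto an intersection rarely has a closed form; when the halfspace projection already lands in the box (as happens for this data) it is exact after the first sweep.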

Remark 5.

The iterate t^k in Algorithm A is well defined according to [3] and can be interpreted as follows: if (23) is well defined, then t^k can be derived by the iterative scheme
(26) \bar{x}^k = P_{H_k}(x^k), \quad t^k = P_{H_k \cap \Omega}(\bar{x}^k).
For more details, see [3, 4].
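The first stage of (26) has a closed form: for a point x violating the halfspace constraint H = {w : ⟨w − z, g⟩ ≤ 0}, the projection is x − (⟨x − z, g⟩/‖g‖²)g, with g playing the role of f(z^k). A quick numerical check with toy data of our own:

```python
import numpy as np

z = np.array([0.5, 0.5])
g = np.array([1.0, 2.0])                  # stands in for f(z^k)
x = np.array([2.0, 1.0])                  # <x - z, g> = 2.5 > 0, so x is not in H
xbar = x - ((x - z) @ g) / (g @ g) * g    # closed-form P_H(x)

assert abs((xbar - z) @ g) < 1e-12        # xbar lies on the hyperplane bounding H
# Optimality: xbar is no farther from x than another point of H, e.g. z itself.
assert np.linalg.norm(x - xbar) <= np.linalg.norm(x - z)
```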

Now we investigate the weak convergence properties of our algorithm. First we recall the following result, which was proposed by Schu [17].

Lemma 6.

Let H be a real Hilbert space, let {α_k} ⊂ (a, b) with a, b ∈ (0, 1) be a sequence of real numbers, and let {v^k} ⊂ H and {w^k} ⊂ H be such that
(27) \limsup_{k \to \infty} \| v^k \| \le c, \quad \limsup_{k \to \infty} \| w^k \| \le c, \quad \lim_{k \to \infty} \| \alpha_k v^k + (1 - \alpha_k) w^k \| = c,
for some c ≥ 0. Then one has
(28) \lim_{k \to \infty} \| v^k - w^k \| = 0.

The following theorem is crucial in proving the boundedness of the sequence {x^k}.

Theorem 7.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ Ω*. Then for the sequences {x^k}, {t^k} generated by Algorithm A, one has
(29) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - \| x^k - t^k \|^2 + \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle z^k - t^k, f(z^k) \rangle.

Proof.

Let x = x̄^k and y = x*. It follows from (10) in Lemma 1 that
(30) \| P_{H_k \cap \Omega}(\bar{x}^k) - x^* \|^2 \le \| \bar{x}^k - x^* \|^2 - \| \bar{x}^k - P_{H_k \cap \Omega}(\bar{x}^k) \|^2,
that is,
(31) \| t^k - x^* \|^2 \le \| \bar{x}^k - x^* \|^2 - \| \bar{x}^k - t^k \|^2.
From (20)–(23) in Algorithm A we get ⟨x^k − z^k, f(z^k)⟩ > 0, which means x^k ∉ H_k. So, by the definition of the projection operator and [7], we obtain
(32) \bar{x}^k = P_{H_k}(x^k) = x^k - \frac{\langle x^k - z^k, f(z^k) \rangle}{\| f(z^k) \|^2} f(z^k) = x^k - \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} f(z^k).
Substituting (32) into (31), we have
(33) \| t^k - x^* \|^2 \le \left\| x^k - \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} f(z^k) - x^* \right\|^2 - \left\| x^k - \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} f(z^k) - t^k \right\|^2 = \| x^k - x^* \|^2 - \| x^k - t^k \|^2 - \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle t^k - x^*, f(z^k) \rangle = \| x^k - x^* \|^2 - \| x^k - t^k \|^2 + \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle z^k - t^k, f(z^k) \rangle + \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle x^* - z^k, f(z^k) \rangle.
Since f is monotone, combining with (1) we obtain
(34) \langle z^k - x^*, f(z^k) \rangle \ge 0,
so the last term of (33) is nonpositive. Thus
(35) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - \| x^k - t^k \|^2 + \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle z^k - t^k, f(z^k) \rangle,
which completes the proof.

Theorem 8.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ F(S) ∩ Ω*. Then for the sequences {x^k}, {y^k}, {t^k} generated by Algorithm A, one has
(36) \| x^{k+1} - x^* \|^2 \le \| x^k - x^* \|^2 - (1 - \alpha_k) \left( \frac{\eta_k \delta}{\| f(z^k) \|} \right)^2 \| x^k - y^k \|^4, \qquad \| x^{k+1} - x^* \|^2 \le \| x^k - x^* \|^2 - \frac{1 - \alpha_k}{2} \| x^k - t^k \|^2.
Furthermore,
(37) \lim_{k \to \infty} \| x^k - y^k \| = 0, \quad \lim_{k \to \infty} \| x^k - t^k \| = 0, \quad \lim_{k \to \infty} \| y^k - t^k \| = 0.

Proof.

Using (22), we have
(38) \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle z^k - x^k, f(z^k) \rangle = \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \bigl( -\eta_k \langle r(x^k), f(z^k) \rangle \bigr) = - \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2.
By the Cauchy–Schwarz inequality and the elementary inequality 2ab ≤ a² + b²,
(39) \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle x^k - t^k, f(z^k) \rangle \le \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + \| x^k - t^k \|^2.
Hence, splitting ⟨z^k − t^k, f(z^k)⟩ = ⟨z^k − x^k, f(z^k)⟩ + ⟨x^k − t^k, f(z^k)⟩ in (29) and using (38), (39), and the Armijo rule (23),
(40) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - \| x^k - t^k \|^2 - 2 \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + \| x^k - t^k \|^2 = \| x^k - x^* \|^2 - \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 \le \| x^k - x^* \|^2 - \left( \frac{\eta_k \delta}{\| f(z^k) \|} \right)^2 \| x^k - y^k \|^4,
where the last step uses ⟨r(x^k), f(z^k)⟩ ≥ δ‖r(x^k)‖² = δ‖x^k − y^k‖². Then we have
(41) \| x^{k+1} - x^* \|^2 = \| \alpha_k (x^k - x^*) + (1 - \alpha_k)(S t^k - x^*) \|^2 \le \alpha_k \| x^k - x^* \|^2 + (1 - \alpha_k) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - (1 - \alpha_k) \left( \frac{\eta_k \delta}{\| f(z^k) \|} \right)^2 \| x^k - y^k \|^4 \le \| x^k - x^* \|^2,
where the first inequality follows from the convexity of ‖·‖² together with the nonexpansiveness of S and Sx* = x*.

This means that {x^k} is bounded, and so is {z^k}. Since f is Lipschitz continuous, there exists a constant M > 0 such that ‖f(z^k)‖ ≤ M for all k. We thus have
(42) \| x^{k+1} - x^* \|^2 \le \| x^k - x^* \|^2 - (1 - \alpha_k) \left( \frac{\delta}{M} \right)^2 \eta_k^2 \| x^k - y^k \|^4.
So there exists ξ ≥ 0 such that lim_{k→∞} ‖x^k − x*‖ = ξ, and hence
(43) \lim_{k \to \infty} \eta_k \| x^k - y^k \|^2 = 0,
which implies that lim_{k→∞} ‖x^k − y^k‖ = 0 or lim_{k→∞} η_k = 0.

If lim_{k→∞} ‖x^k − y^k‖ = 0, we get the conclusion.

If lim_{k→∞} η_k = 0, then for all large k the inequality (23) in Algorithm A is not satisfied with the exponent n_k − 1; that is, there exists k_0 such that for all k ≥ k_0,
(44) \langle f(x^k - \gamma^{-1} \eta_k r(x^k)), r(x^k) \rangle < \delta \| r(x^k) \|^2.
Applying (8) with x = x^k − f(x^k) and y = x^k leads to
(45) \langle x^k - f(x^k) - P_\Omega(x^k - f(x^k)), P_\Omega(x^k - f(x^k)) - x^k \rangle \ge 0.
Therefore, since r(x^k) = x^k − P_Ω(x^k − f(x^k)),
(46) \langle f(x^k), r(x^k) \rangle \ge \| r(x^k) \|^2.
Passing to the limit in (44) and (46), and using the continuity of f together with η_k → 0 and r(x^k) = x^k − y^k, we get
\lim_{k \to \infty} \| x^k - y^k \|^2 \le \delta \lim_{k \to \infty} \| x^k - y^k \|^2;
since δ ∈ (0, 1), we obtain lim_{k→∞} ‖x^k − y^k‖ = 0.

On the other hand, using the Cauchy–Schwarz inequality again, this time with 2ab ≤ 2a² + b²/2,
(47) \frac{2 \eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle x^k - t^k, f(z^k) \rangle \le 2 \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + \frac{1}{2} \| x^k - t^k \|^2.
Therefore,
(48) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - \| x^k - t^k \|^2 - 2 \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + 2 \left( \frac{\eta_k \langle r(x^k), f(z^k) \rangle}{\| f(z^k) \|} \right)^2 + \frac{1}{2} \| x^k - t^k \|^2 = \| x^k - x^* \|^2 - \frac{1}{2} \| x^k - t^k \|^2.
Then we have
(49) \| x^{k+1} - x^* \|^2 \le \alpha_k \| x^k - x^* \|^2 + (1 - \alpha_k) \| t^k - x^* \|^2 \le \alpha_k \| x^k - x^* \|^2 + (1 - \alpha_k) \| x^k - x^* \|^2 - \frac{1 - \alpha_k}{2} \| x^k - t^k \|^2 = \| x^k - x^* \|^2 - \frac{1 - \alpha_k}{2} \| x^k - t^k \|^2.
Noting that α_k ∈ (a, b) with a, b ∈ (0, 1), we have 0 < (1 − b)/2 < (1 − α_k)/2 < (1 − a)/2 < 1, which together with lim_{k→∞} ‖x^k − x*‖ = ξ implies
(50) \lim_{k \to \infty} \| x^k - t^k \| = 0.
By the triangle inequality,
(51) \| y^k - t^k \| \le \| y^k - x^k \| + \| x^k - t^k \|.
Passing to the limit in (51), we conclude
(52) \lim_{k \to \infty} \| y^k - t^k \| = 0.
The proof is complete.

Theorem 9.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S) ∩ Ω* ≠ ∅. Then the sequences {x^k}, {y^k}, {t^k} generated by Algorithm A converge weakly to the same point x* ∈ F(S) ∩ Ω*, where x* = lim_{k→∞} P_{F(S) ∩ Ω*}(x^k).

Proof.

By Theorem 8 we know that {x^k} is bounded, so there exists a subsequence {x^{k_i}} of {x^k} that converges weakly to some point x*. We show that x* ∈ F(S) ∩ Ω*.

First, we show that x* ∈ F(S).

Let x ∈ F(S) ∩ Ω*. Since S is a nonexpansive mapping, from (48) we have
(53) \| S t^k - x \| \le \| t^k - x \| \le \| x^k - x \|.
Recalling from the proof of Theorem 8 that lim_{k→∞} ‖x^k − x‖ = ξ exists, and passing to the upper limit in (53), we obtain
(54) \limsup_{k \to \infty} \| S t^k - x \| \le \xi.
Then by (25) we have
(55) \lim_{k \to \infty} \| \alpha_k (x^k - x) + (1 - \alpha_k)(S t^k - x) \| = \lim_{k \to \infty} \| x^{k+1} - x \| = \xi.
From Lemma 6 it follows that
(56) \lim_{k \to \infty} \| S t^k - x^k \| = 0.
By the triangle inequality and the nonexpansiveness of S,
(57) \| S x^k - x^k \| \le \| S x^k - S t^k \| + \| S t^k - x^k \| \le \| x^k - t^k \| + \| S t^k - x^k \|,
and passing to the limit in (57) we deduce that
(58) \lim_{k \to \infty} \| S x^k - x^k \| = 0,
which implies x* ∈ F(S) by Lemma 3.

Second, we show that x* ∈ Ω*.

Since x^{k_i} ⇀ x*, Theorem 8 implies t^{k_i} ⇀ x* and y^{k_i} ⇀ x*.

Let (v, u) ∈ G(T). Then
(59) u \in T(v) = f(v) + N_\Omega(v), \quad u - f(v) \in N_\Omega(v);
thus, by the definition of the normal cone,
(60) \langle v - x, u - f(v) \rangle \ge 0, \quad \forall x \in \Omega.
Applying (8) with x = x^k − f(x^k) and y = v, we have
(61) \langle x^k - f(x^k) - P_\Omega(x^k - f(x^k)), P_\Omega(x^k - f(x^k)) - v \rangle \ge 0,
that is,
(62) \langle x^k - f(x^k) - y^k, y^k - v \rangle \ge 0.
Noting that u − f(v) ∈ N_Ω(v) and t^{k_i} ∈ Ω, we then have
(63) \langle v - t^{k_i}, u \rangle \ge \langle v - t^{k_i}, f(v) \rangle \ge \langle v - t^{k_i}, f(v) \rangle - \langle x^{k_i} - f(x^{k_i}) - y^{k_i}, y^{k_i} - v \rangle = \langle v - t^{k_i}, f(v) - f(t^{k_i}) \rangle + \langle v - t^{k_i}, f(t^{k_i}) \rangle - \langle v - y^{k_i}, y^{k_i} - x^{k_i} \rangle - \langle v - y^{k_i}, f(x^{k_i}) \rangle \ge \langle v - t^{k_i}, f(t^{k_i}) \rangle - \langle v - y^{k_i}, y^{k_i} - x^{k_i} \rangle - \langle v - y^{k_i}, f(x^{k_i}) \rangle,
where the last inequality follows from the monotonicity of f.

Since f is continuous, by (37) we have
(64) \lim_{i \to \infty} \| f(x^{k_i}) - f(y^{k_i}) \| = 0, \quad \lim_{i \to \infty} \| x^{k_i} - t^{k_i} \| = 0.
Passing to the limit in (63), we obtain
(65) \langle v - x^*, u \rangle \ge 0.
Since T is maximal monotone, this gives x* ∈ T^{-1}(0), which implies x* ∈ Ω*.

Finally, we show that the weak limit x* is unique.

Let {x^{k_j}} be another subsequence of {x^k} such that x^{k_j} ⇀ x̄*. The argument above shows x̄* ∈ F(S) ∩ Ω*. Suppose x̄* ≠ x*; then by Lemma 2 (the Opial condition),
(66) \lim_{k \to \infty} \| x^k - x^* \| = \liminf_{i \to \infty} \| x^{k_i} - x^* \| < \liminf_{i \to \infty} \| x^{k_i} - \bar{x}^* \| = \lim_{k \to \infty} \| x^k - \bar{x}^* \| = \liminf_{j \to \infty} \| x^{k_j} - \bar{x}^* \| < \liminf_{j \to \infty} \| x^{k_j} - x^* \| = \lim_{k \to \infty} \| x^k - x^* \|,
a contradiction. Thus x* = x̄*, and the proof is complete.

4. Further Study

In this section we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant τ > 0 we define a new projected residual function as
(67) r(x, \tau) := x - P_\Omega(x - \tau f(x)).
Clearly, the new projected residual function (67) reduces to (20) when τ = 1.

Algorithm B.

Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), η_{-1} > 0, θ > 1, x^0 ∈ Ω, and set k = 0.

Step 1. For the current iterate x^k ∈ Ω, compute
(68) y^k = P_\Omega(x^k - \tau_k f(x^k)), \quad z^k = (1 - \eta_k) x^k + \eta_k y^k,
where τ_k = min{θη_{k−1}, 1}, η_k = γ^{n_k} τ_k, and n_k is the smallest nonnegative integer n satisfying
(69) \langle f(x^k - \gamma^n \tau_k r(x^k, \tau_k)), r(x^k, \tau_k) \rangle \ge \frac{\delta}{\tau_k} \| r(x^k, \tau_k) \|^2.
Compute
(70) t^k = P_{H_k \cap \Omega}(x^k), \quad x^{k+1} = \alpha_k x^k + (1 - \alpha_k) S t^k,
where {α_k} ⊂ (a, b) with a, b ∈ (0, 1) and H_k = {x ∈ Ω : ⟨x − z^k, f(z^k)⟩ ≤ 0}.

Step 2. If r(x^k, τ_k) = 0, stop; otherwise, set k := k + 1 and go to Step 1.
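Algorithm B can be sketched under the same toy assumptions we used for Algorithm A (box Ω = [0,1]², affine monotone f, S = identity, Dykstra's scheme approximating P_{H_k ∩ Ω}); reading the Armijo threshold in (69) as (δ/τ_k)‖r(x^k, τ_k)‖² is our reconstruction of the garbled source:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b
P_box = lambda x: np.clip(x, 0.0, 1.0)
r = lambda x, tau: x - P_box(x - tau * f(x))   # scaled residual (67)

def proj_intersection(x, z, g, iters=50):
    """Dykstra's scheme approximating P_{H ∩ Omega}(x), H = {w : <w-z,g> <= 0}."""
    p = np.zeros_like(x); q = np.zeros_like(x); y = x
    for _ in range(iters):
        v = (y + p - z) @ g
        w = (y + p) - v / (g @ g) * g if v > 0 else (y + p)
        p = y + p - w
        y = P_box(w + q); q = w + q - y
    return y

def algorithm_B(x, delta=0.5, gamma=0.5, alpha=0.5, theta=2.0,
                eta_prev=1.0, max_iter=400):
    for _ in range(max_iter):
        tau = min(theta * eta_prev, 1.0)       # tau_k = min{theta*eta_{k-1}, 1}
        rk = r(x, tau)
        if np.linalg.norm(rk) < 1e-10:
            break
        n = 0                                  # Armijo rule (69)
        while n < 60 and f(x - gamma**n * tau * rk) @ rk < (delta / tau) * (rk @ rk):
            n += 1
        eta = gamma**n * tau                   # eta_k = gamma^{n_k} tau_k
        z = x - eta * rk                       # z^k of (68)
        t = proj_intersection(x, z, f(z))
        x = alpha * x + (1 - alpha) * t        # S = identity
        eta_prev = eta
    return x

x_out = algorithm_B(np.array([1.0, 1.0]))      # approaches x* = (1/3, 1/3)
```

The adaptive τ_k lets the scale of the residual track the step sizes actually accepted, which in practice reduces the number of Armijo trials per iteration.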

In the rest of this section, we discuss the weak convergence properties of Algorithm B.

Lemma 10.

For any τ > 0, one has
(71) x^* \ \text{solves} \ VI(f, \Omega) \iff x^* = P_\Omega(x^* - \tau f(x^*)).

Therefore, solving the variational inequality is equivalent to finding a zero of the projected residual function r(·, τ). Meanwhile, r(x, τ) is a continuous function of x, since the projection mapping is nonexpansive.

Lemma 11.

For any x ∈ H and τ₁ ≥ τ₂ > 0, it holds that
(72) \| r(x, \tau_1) \| \ge \| r(x, \tau_2) \|.
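Lemma 11 says ‖r(x, ·)‖ is nondecreasing on (0, ∞). A quick numerical illustration with toy data of our own (the same affine operator and box used above):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]]); b = np.array([-1.0, -1.0])
f = lambda x: A @ x + b
P = lambda x: np.clip(x, 0.0, 1.0)          # Omega = [0,1]^2
rnorm = lambda x, tau: np.linalg.norm(x - P(x - tau * f(x)))

x = np.array([0.9, 0.2])
taus = [0.1, 0.25, 0.5, 1.0, 2.0]
norms = [rnorm(x, t) for t in taus]
assert norms == sorted(norms)               # ||r(x, tau)|| grows with tau
```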

Theorem 12.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ Ω*. Then for the sequences {x^k}, {t^k} generated by Algorithm B, one has
(73) \| t^k - x^* \|^2 \le \| x^k - x^* \|^2 - \| x^k - t^k \|^2 + \frac{2 \eta_k \langle r(x^k, \tau_k), f(z^k) \rangle}{\| f(z^k) \|^2} \langle z^k - t^k, f(z^k) \rangle.

Proof.

The proof is similar to that of Theorem 7, so we omit it.

Theorem 13.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ F(S) ∩ Ω*. Then for the sequences {x^k}, {y^k}, {t^k} generated by Algorithm B, one has
(74) \| x^{k+1} - x^* \|^2 \le \| x^k - x^* \|^2 - \frac{1 - \alpha_k}{2} \| x^k - t^k \|^2.
Furthermore,
(75) \lim_{k \to \infty} \| x^k - y^k \| = 0, \quad \lim_{k \to \infty} \| x^k - t^k \| = 0, \quad \lim_{k \to \infty} \| y^k - t^k \| = 0.

Proof.

The proof is similar to that of Theorem 8. The only difference is that (44) is replaced by
(76) \langle f(x^k - \gamma^{-1} \eta_k r(x^k, \tau_k)), r(x^k, \tau_k) \rangle < \frac{\delta}{\tau_k} \| r(x^k, \tau_k) \|^2,
and the passage to the limit uses Lemma 11 with τ₁ = 1 and τ₂ = τ_k.

Theorem 14.

Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and let F(S) ∩ Ω* ≠ ∅. Then the sequences {x^k}, {y^k}, {t^k} generated by Algorithm B converge weakly to the same point x* ∈ F(S) ∩ Ω*, where x* = lim_{k→∞} P_{F(S) ∩ Ω*}(x^k).

5. Conclusions

In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established its weak convergence theorem. Algorithm B is effective in practice. Moreover, we showed that the weak limit produced by our algorithm is also a fixed point of a given nonexpansive mapping.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant 11171362) and the Fundamental Research Funds for the Central Universities (Grant CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.

References

[1] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, Mass, USA, 1992.
[2] G. M. Korpelevich, "The extragradient method for finding saddle points and other problems," Matecon, vol. 12, pp. 747–756, 1976.
[3] M. V. Solodov and B. F. Svaiter, "A new projection method for variational inequality problems," SIAM Journal on Control and Optimization, vol. 37, no. 3, pp. 765–776, 1999.
[4] M. V. Solodov, "Stationary points of bound constrained minimization reformulations of complementarity problems," Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 449–467, 1997.
[5] Y. J. Wang, N. H. Xiu, and J. Z. Zhang, "Modified extragradient method for variational inequalities and verification of solution existence," Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 167–183, 2003.
[6] W. Takahashi and M. Toyoda, "Weak convergence theorems for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
[7] E. H. Zarantonello, Projections on Convex Sets in Hilbert Space and Spectral Theory, Contributions to Nonlinear Functional Analysis, Academic Press, New York, NY, USA, 1971.
[8] Z. Opial, "Weak convergence of the sequence of successive approximations for nonexpansive mappings," Bulletin of the American Mathematical Society, vol. 73, pp. 591–597, 1967.
[9] R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
[10] F. E. Browder, "Fixed-point theorems for noncompact mappings in Hilbert space," Proceedings of the National Academy of Sciences of the United States of America, vol. 53, pp. 1272–1276, 1965.
[11] N. Nadezhkina and W. Takahashi, "Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 128, no. 1, pp. 191–201, 2006.
[12] B. C. Eaves, "On the basic theorem of complementarity," Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
[13] B. S. He and L. Z. Liao, "Improvements of some projection methods for monotone nonlinear variational inequalities," Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 111–128, 2002.
[14] Y. Censor, A. Gibali, and S. Reich, "The subgradient extragradient method for solving variational inequalities in Hilbert space," Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.
[15] Y. Censor, A. Gibali, and S. Reich, "Two extensions of Korpelevich's extragradient method for solving the variational inequality problem in Euclidean space," 2010.
[16] Y. J. Wang, N. H. Xiu, and C. Y. Wang, "Unified framework of extragradient-type methods for pseudomonotone variational inequalities," Journal of Optimization Theory and Applications, vol. 111, no. 3, pp. 641–656, 2001.
[17] J. Schu, "Weak and strong convergence to fixed points of asymptotically nonexpansive mappings," Bulletin of the Australian Mathematical Society, vol. 43, no. 1, pp. 153–159, 1991.