Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2013, Article ID 965640, doi:10.1155/2013/965640

Research Article: Gap Functions and Algorithms for Variational Inequality Problems

Congjun Zhang, Baoqing Liu, Jun Wei. Academic Editor: Bo-Qing Dong. College of Applied Mathematics, Nanjing University of Finance and Economics, Nanjing, Jiangsu 210023, China

Received 7 July 2013; Accepted 10 October 2013; Published 11 November 2013

Copyright © 2013 Congjun Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We solve several kinds of variational inequality problems through gap functions, give algorithms for the corresponding problems, obtain global error bounds, and carry out the convergence analysis. By means of generalized gap functions and generalized D-gap functions, we derive global error bounds for set-valued mixed variational inequality problems. Moreover, through a gap function, we equivalently transform the generalized variational inequality problem into a constrained optimization problem, give a steepest descent method, and show the convergence of the method.

1. Introduction

The variational inequality problem (VIP) provides a simple, natural, unified, and general framework for studying a wide class of equilibrium problems arising in transportation system analysis [1, 2], regional science [3, 4], elasticity, optimization, and economics. The canonical VIP can be described as follows: find a point x ∈ K ⊆ ℝ^n such that (1) ⟨T(x), y − x⟩ ≥ 0, ∀y ∈ K, where K is a nonempty closed convex subset of ℝ^n, T is a mapping from ℝ^n into itself, and ⟨·,·⟩ denotes the inner product in ℝ^n.
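As a concrete illustration (our construction, not from the paper), the VIP (1) is equivalent to the fixed-point equation x = P_K(x − ρT(x)) for any ρ > 0, where P_K is the metric projection onto K. The sketch below assumes an affine, strongly monotone T and a box K so that P_K is a componentwise clip:

```python
import numpy as np

# Hypothetical illustration (not from the paper): solve the VIP
#   <T(x), y - x> >= 0 for all y in K
# via the classical projection fixed-point iteration x = P_K(x - rho*T(x)),
# for an affine strongly monotone T(x) = A x + b and the box K = [0, 1]^2.

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([-1.0, -2.0])
T = lambda x: A @ x + b
proj_K = lambda z: np.clip(z, 0.0, 1.0)  # projection onto [0, 1]^2

x = np.zeros(2)
rho = 0.1                                # small step keeps the iteration contractive
for _ in range(500):
    x = proj_K(x - rho * T(x))

# x approximates the VIP solution; here the solution is interior, so T(x) ~ 0
for y in [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]:
    assert T(x) @ (y - x) >= -1e-6       # (1) holds at the vertices of K
print(x)
```

Since the solution here lies in the interior of K, it coincides with the root of T; the projection step only matters when the constraint is active.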

In recent years, considerable interest has been shown in developing various useful and important extensions and generalizations of VIP, both for their own sake and for their applications, such as the general variational inequality problem (GVIP) and the set-valued (mixed) variational inequality problem (SMVIP). There have been significant developments of these problems related to multivalued operators, nonconvex optimization, iterative methods, and structural analysis. More recently, much attention has been given to reformulating the VIP as an optimization problem. Gap functions, which yield such equivalent optimization problems, turn out to be very useful in designing new globally convergent algorithms and in analyzing the rate of convergence of some iterative methods. Various gap functions for VIP have been suggested and studied by many authors in [8, 10–13] and the references therein. Error bounds are functions which provide a measure of the distance between the solution set and an arbitrary point. Therefore, error bounds play an important role in the global or local convergence analysis of algorithms for solving VIP.

For the VIP defined in (1), the authors in  provided an equivalent optimization problem formulation through the regularized gap function Gα: H → ℝ defined by (2) Gα(x) = max_{y∈K} {⟨F(x), x − y⟩ − (α/2)‖x − y‖²}, where α is a positive parameter. The authors proved that x solves problem (1) if and only if x is a global minimizer of Gα over K with Gα(x) = 0. In order to extend the definition of the regularized gap function, the authors in  gave the generalized regularized gap function defined by (3) Gα(x) = max_{y∈K} {⟨F(x), x − y⟩ − αϕ(x, y)}, where ϕ is an abstract function satisfying the following conditions:

(C1) ϕ is continuously differentiable on H × H;

(C2) ϕ is nonnegative on H × H;

(C3) ϕ(x, ·) is uniformly strongly convex on H; that is, there exists a positive number λ such that (4) ϕ(x, y₁) − ϕ(x, y₂) ≥ ⟨∇₂ϕ(x, y₂), y₁ − y₂⟩ + λ‖y₁ − y₂‖², ∀x, y₁, y₂ ∈ H;

(C4) ϕ(x, y) = 0 ⟺ x = y;

(C5) ∇₂ϕ(x, ·) is uniformly Lipschitz continuous on H; that is, there exists a constant L > 0 such that (5) ‖∇₂ϕ(x, y₁) − ∇₂ϕ(x, y₂)‖ ≤ L‖y₁ − y₂‖, ∀x, y₁, y₂ ∈ H.

Note that ∇₂ϕ denotes the partial derivative of ϕ with respect to its second argument and that conditions (C1)–(C5) are consistent. One can refer to [10, 14], and so forth, for more details.
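For instance, the canonical choice ϕ(x, y) = ½‖x − y‖² satisfies (C1)–(C5) with λ = 1/2 and L = 1 (so 2λ ≤ L holds with equality), and (3) reduces to (2). The sketch below, under assumptions beyond the paper (affine F, box K), evaluates the regularized gap function (2) in closed form:

```python
import numpy as np

# Sketch with assumptions not in the paper: K = [0, 1]^2 and an affine F.
# With phi(x, y) = 0.5*||x - y||^2 (lambda = 1/2, L = 1), the objective in (2)
# equals -0.5*alpha*||y - (x - F(x)/alpha)||^2 + const, so the maximizer over
# the box K is simply y_alpha(x) = P_K(x - F(x)/alpha).

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
F = lambda x: A @ x + b
proj_K = lambda z: np.clip(z, 0.0, 1.0)

def G(x, alpha=1.0):
    y = proj_K(x - F(x) / alpha)          # maximizer of (2) over K
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

x_star = np.linalg.solve(A, -b)           # interior point with F(x_star) = 0
print(G(x_star))                          # gap vanishes at the solution of (1)
print(G(np.array([1.0, 1.0])))            # positive gap away from it
```

This matches the equivalence stated after (2): the gap is zero exactly at the solution and positive elsewhere.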

Many gap functions have been explored during the past two decades, as shown in  and the references therein. Motivated by this work, in this paper we solve some classes of VIP through gap functions, give algorithms for the corresponding problems, obtain global error bounds, and carry out the convergence analysis. We consider generalized gap functions and generalized D-gap functions for SMVIP and derive global error bounds for the problem through these two functions, respectively. For GVIP, we equivalently transform it into a constrained optimization problem through a gap function, introduce a steepest descent method, and show the convergence of the method.

2. Preliminaries

Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let K be a nonempty closed convex set in H, and let 2^H be the family of all nonempty compact subsets of H.

Let F, f: H → H be nonlinear operators. The GVIP can be described as follows: find x ∈ H with f(x) ∈ K such that (6) ⟨F(x), f(y) − f(x)⟩ ≥ 0, ∀y ∈ H with f(y) ∈ K.

For a single-valued operator f: H → ℝ ∪ {+∞} which is proper, convex, and lower semicontinuous, and for a given multivalued operator T: H → 2^H, the SMVIP can be described as follows: find x ∈ K and w ∈ T(x) such that (7) ⟨w, y − x⟩ + f(y) − f(x) ≥ 0, ∀y ∈ K. Note that when f = 0, problem (7) reduces to a set-valued variational inequality problem; when f = 0 and T is a single-valued operator, problem (7) is exactly the problem discussed in (1).

Recall that the multivalued operator T: K ⊆ H → 2^H is said to be strongly monotone with modulus β > 0 on K if (8) ⟨w − w′, x − x′⟩ ≥ β‖x − x′‖², ∀(x, w), (x′, w′) ∈ graph(T). And T is said to be Lipschitz continuous on a nonempty bounded set B ⊆ K if there exists a positive constant L such that (9) H(T(x), T(y)) ≤ L‖x − y‖, ∀x, y ∈ B, where H(·,·) is the Hausdorff metric on B defined by (10) H(T(x), T(y)) = max{ sup_{r∈T(x)} inf_{s∈T(y)} ‖r − s‖, sup_{s∈T(y)} inf_{r∈T(x)} ‖r − s‖ }, ∀x, y ∈ B.
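For finite sets, the Hausdorff metric (10) can be computed directly from the two sup-inf terms; the following small sketch (our illustration, not from the paper) does this for point clouds in ℝ^n:

```python
import numpy as np

# Illustrative sketch (not from the paper): the Hausdorff metric (10) between
# two finite subsets of R^n, represented as arrays of points (one row per point).

def hausdorff(X, Y):
    # pairwise distances d[i, j] = ||X[i] - Y[j]||
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    # sup over r in X of the distance to the nearest s in Y, and symmetrically
    return max(d.min(axis=1).max(), d.min(axis=0).max())

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(X, Y))   # the two sup-inf terms are 1.0 and 2.0, so H(X, Y) = 2.0
```

Note the asymmetry of the two terms: every point of X is within distance 1 of Y, but the point (3, 0) of Y is at distance 2 from X, and the metric takes the larger value.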

Let F: ℝ^n → ℝ^n. Then F is a P₀-function if max_{1≤i≤n, xᵢ≠yᵢ} (xᵢ − yᵢ)(Fᵢ(x) − Fᵢ(y)) ≥ 0 for all x, y ∈ ℝ^n with x ≠ y. Assume Fμ(·): ℝ^n → ℝ^n (μ > 0). Fμ is called a smoothing approximation function of F if there exists a positive constant k such that (11) ‖Fμ(x) − F(x)‖ ≤ kμ, ∀μ > 0, x ∈ ℝ^n. And Fμ is a uniform approximation if k is independent of x.
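A standard concrete example of a uniform smoothing approximation in the sense of (11) (our choice, not taken from the paper) is the CHKS-type smoothing of the plus function: for F(x) = max(x, 0) componentwise, the smooth map Fμ(x) = (x + √(x² + 4μ²))/2 satisfies ‖Fμ(x) − F(x)‖∞ ≤ μ for every x, so (11) holds with k = 1 independently of x:

```python
import numpy as np

# Sketch of a uniform smoothing approximation in the sense of (11); the concrete
# function is our assumption, not from the paper. For F(x) = max(x, 0)
# (componentwise), F_mu(x) = (x + sqrt(x^2 + 4 mu^2))/2 obeys
# ||F_mu(x) - F(x)||_inf <= mu for all x, i.e. (11) with k = 1.

F = lambda x: np.maximum(x, 0.0)
F_mu = lambda x, mu: 0.5 * (x + np.sqrt(x * x + 4.0 * mu * mu))

x = np.linspace(-5.0, 5.0, 1001)
for mu in (1.0, 0.1, 0.01):
    err = np.max(np.abs(F_mu(x, mu) - F(x)))
    assert err <= mu + 1e-12     # uniform bound, independent of x
    print(mu, err)
```

The worst-case error μ is attained at x = 0, the kink of F, which is exactly where smoothing is needed.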

A matrix M ∈ ℝ^{n×n} is a P₀-matrix if each of its principal minors is nonnegative.

We need the following lemmas. The parameters involved in the lemmas can be found in the following sections.

Lemma 1 (see [<xref ref-type="bibr" rid="B16">11</xref>]).

If the abstract function ϕ satisfies conditions (C1) and (C3), then the following holds: (12) ⟨∇₂ϕ(x, y₁) − ∇₂ϕ(x, y₂), y₁ − y₂⟩ ≥ 2λ‖y₁ − y₂‖², ∀y₁, y₂ ∈ H; that is, ∇₂ϕ(x, ·) is strongly monotone on H, and by (C5) one obtains 2λ ≤ L.

Lemma 2 (see [<xref ref-type="bibr" rid="B21">17</xref>]).

If the abstract function ϕ satisfies conditions (C1)–(C4), then (13) ∇₂ϕ(x, y) = 0 ⟺ x = y.

Lemma 3 (see [<xref ref-type="bibr" rid="B10">18</xref>]).

If the abstract function ϕ satisfies conditions (C1)–(C5) and λ and L are the corresponding coefficients defined above, then one has (14) λ‖x − y‖² ≤ ϕ(x, y) ≤ (L − λ)‖x − y‖², ∀x, y ∈ H.

Lemma 4 (see [<xref ref-type="bibr" rid="B9">19</xref>]).

If the abstract function ϕ satisfies conditions (C1)–(C4), then Gα(x) ≥ αλ‖x − πα(x)‖². Moreover, when Gα(x) = 0, x is a solution of SMVIP.

Lemma 5 (see [<xref ref-type="bibr" rid="B7">10</xref>]).

If the abstract function ϕ satisfies conditions (C1)–(C4), then gα(x) is differentiable and (15) ∇gα(x) = ∇f(x)F(x) + ∇F(x)(f(x) − yα(x)) − α∇ₓϕ(f(x), yα(x)).

Lemma 6 (see [<xref ref-type="bibr" rid="B7">10</xref>, <xref ref-type="bibr" rid="B9">19</xref>]).

If the abstract function ϕ satisfies conditions (C1)–(C4), then gα is nonnegative, and gα(x) = 0 ⟺ x is a solution of GVIP.

Lemma 7 (see [<xref ref-type="bibr" rid="B7">10</xref>]).

Let the abstract function ϕ satisfy conditions (C1)–(C4). If ∇gα(x) = 0 and ∇F(x) is positive definite, then x is a solution of GVIP(F, f).

3. Gap Functions and Error Bounds for SMVIP

In this section, by introducing appropriate gap functions, we give global error bounds for SMVIP. Firstly, we need the following propositions.

Proposition 8.

Let C be a nonempty closed convex set in H and let f be strictly convex on C. Then f has at most one minimum point on C.

Proof.

We use proof by contradiction to show the desired result. Let x₁, x₂ ∈ C be two distinct minimum points of f; that is, f(x₁) = f(x₂) = min_{x∈C} f(x). Since f is strictly convex, one obtains (16) f(αx₁ + (1 − α)x₂) < αf(x₁) + (1 − α)f(x₂) = f(x₁), ∀α ∈ (0, 1). This implies that the point x₃ = αx₁ + (1 − α)x₂ ∈ C satisfies f(x₃) < f(x₁), which is a contradiction. This completes the proof.

Let T, f, and ϕ be defined as above and let K be a nonempty closed convex set in H. We can now introduce the generalized gap function Gα of SMVIP(T, K) defined as follows: (17) Gα(x) = max_{y∈H} Ψα(x, y) = max_{y∈H} {⟨w, x − y⟩ + f(x) − f(y) − αϕ(x, y)}, x ∈ H, α > 0, where w ∈ T(x). From the uniform strong convexity of ϕ(x, ·), one obtains that −Ψα(x, ·) is also uniformly strongly convex on H. By Proposition 8, there exists a unique minimum point πα(x) of −Ψα(x, ·) in H, so that (18) Gα(x) = ⟨w, x − πα(x)⟩ + f(x) − f(πα(x)) − αϕ(x, πα(x)).
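To make (17)–(18) concrete, take the special case (our assumptions, not from the paper) f = 0, a single-valued T, and ϕ(x, y) = ½‖x − y‖², so λ = 1/2. The inner maximization over y ∈ H is then unconstrained and solved by πα(x) = x − w/α, giving the closed form Gα(x) = ‖w‖²/(2α); in this case the bound of Lemma 4 holds with equality:

```python
import numpy as np

# Numerical sanity check (our construction, not from the paper): f = 0, T
# single-valued, phi(x, y) = 0.5*||x - y||^2 (lambda = 1/2) in (17). The
# unconstrained inner maximization gives pi_alpha(x) = x - w/alpha and
# G_alpha(x) = ||w||^2 / (2*alpha).

alpha, lam = 2.0, 0.5
T = lambda x: np.array([[2.0, 0.0], [0.0, 1.0]]) @ x - np.array([1.0, 1.0])

x = np.array([1.0, 3.0])
w = T(x)
pi = x - w / alpha                                   # maximizer of Psi_alpha(x, .)
G = w @ (x - pi) - 0.5 * alpha * np.dot(x - pi, x - pi)

assert np.isclose(G, np.dot(w, w) / (2 * alpha))           # closed form
assert G >= alpha * lam * np.dot(x - pi, x - pi) - 1e-12   # the bound of Lemma 4
print(G, pi)
```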

Proposition 9.

If the abstract function ϕ satisfies conditions (C1)–(C4) and f: H → ℝ ∪ {+∞} is proper, convex, and lower semicontinuous, then for all α > 0, x = πα(x) ⟺ x is a solution of SMVIP(T, K).

Proof.

From the definition of πα(x), one has (19) 0 ∈ ∂(−Ψα)(x, πα(x)) = w + ∂f(πα(x)) + α∇₂ϕ(x, πα(x)). By the definition of the subgradient, we have (20) f(y) ≥ f(πα(x)) − ⟨w + α∇₂ϕ(x, πα(x)), y − πα(x)⟩, which is equivalent to (21) ⟨w, y − πα(x)⟩ + f(y) − f(πα(x)) ≥ α⟨−∇₂ϕ(x, πα(x)), y − πα(x)⟩.

On the one hand, if x = πα(x), then from Lemma 2 one obtains ∇₂ϕ(x, πα(x)) = 0, and hence α⟨−∇₂ϕ(x, πα(x)), y − πα(x)⟩ = 0. So, from (21), we have (22) ⟨w, y − πα(x)⟩ + f(y) − f(πα(x)) ≥ 0; since x = πα(x), this implies that x is a solution of SMVIP(T, K).

On the other hand, if x is a solution of SMVIP(T, K), take y = πα(x) in (7); then we have (23) ⟨w, πα(x) − x⟩ + f(πα(x)) − f(x) ≥ 0. From condition (C3), one has (24) ϕ(x, x) − ϕ(x, πα(x)) ≥ ⟨∇₂ϕ(x, πα(x)), x − πα(x)⟩ + λ‖x − πα(x)‖². And by conditions (C2) and (C4), (25) ϕ(x, x) − ϕ(x, πα(x)) ≤ 0. So we have (26) ⟨∇₂ϕ(x, πα(x)), x − πα(x)⟩ + λ‖x − πα(x)‖² ≤ 0. Taking y = x in (21) and combining with (23) gives ⟨∇₂ϕ(x, πα(x)), x − πα(x)⟩ ≥ 0, which together with (26) yields λ‖x − πα(x)‖² ≤ 0; hence x = πα(x). This completes the proof.

Based on the above discussion, one can obtain the following global error bound.

Theorem 10.

If the abstract function ϕ satisfies conditions (C1)–(C5), f is closed and convex, and T is strongly monotone with modulus β and Lipschitz continuous with respect to the solution x̄ of SMVIP(T, K), then one has (27) ‖x − x̄‖ ≤ ((L + αL)/β) ‖x − πα(x)‖, where, by abuse of notation, the first L is the Lipschitz constant of T in (9) and the second L is the Lipschitz constant of ∇₂ϕ in (5).

Proof.

Since x̄ is a solution of SMVIP(T, K), take w̄ ∈ T(x̄); then we obtain (28) ⟨w̄, y − x̄⟩ + f(y) − f(x̄) ≥ 0, ∀y ∈ K. Let y = πα(x), for x ∈ H. Then inequality (28) reduces to (29) ⟨w̄, πα(x) − x̄⟩ + f(πα(x)) − f(x̄) ≥ 0. Take y = x̄ and ŵ ∈ T(x) in (21) such that ‖ŵ − w̄‖ ≤ H(T(x), T(x̄)). Then inequality (21) becomes (30) ⟨ŵ, x̄ − πα(x)⟩ + f(x̄) − f(πα(x)) ≥ α⟨−∇₂ϕ(x, πα(x)), x̄ − πα(x)⟩. Combining (29) and (30), we have (31) ⟨ŵ − w̄, πα(x) − x̄⟩ ≤ α⟨∇₂ϕ(x, πα(x)), x̄ − πα(x)⟩. And note that (32) α⟨∇₂ϕ(x, πα(x)), x̄ − πα(x)⟩ = α⟨∇₂ϕ(x, πα(x)), x̄ − x⟩ + α⟨∇₂ϕ(x, πα(x)), x − πα(x)⟩ = α⟨∇₂ϕ(x, πα(x)) − ∇₂ϕ(x, x), x̄ − x⟩ − α⟨∇₂ϕ(x, x) − ∇₂ϕ(x, πα(x)), x − πα(x)⟩ ≤ α‖∇₂ϕ(x, πα(x)) − ∇₂ϕ(x, x)‖ ‖x̄ − x‖ − 2αλ‖x − πα(x)‖² ≤ αL‖x − πα(x)‖ ‖x̄ − x‖ − 2αλ‖x − πα(x)‖², where we have used ∇₂ϕ(x, x) = 0 (Lemma 2) and Lemma 1. From (8), one has (33) β‖x − x̄‖² ≤ ⟨ŵ − w̄, x − x̄⟩ = ⟨ŵ − w̄, x − πα(x)⟩ + ⟨ŵ − w̄, πα(x) − x̄⟩ ≤ L‖x − x̄‖ ‖x − πα(x)‖ + αL‖x − πα(x)‖ ‖x − x̄‖ ≤ (L + αL)‖x − x̄‖ ‖x − πα(x)‖, so we have (34) ‖x − x̄‖ ≤ ((L + αL)/β)‖x − πα(x)‖. This completes the proof.

Theorem 11.

If the abstract function ϕ satisfies conditions (C1)–(C5) and T is strongly monotone with modulus β with respect to the solution x̄ of SMVIP and Lipschitz continuous with modulus L, then Gα provides a global error bound for SMVIP; that is, (35) ‖x − x̄‖ ≤ ((L + αL)/β) √(Gα(x)/(αλ)).

Proof.

By Lemma 4 and Theorem 10, one obtains (36) Gα(x) ≥ αλ‖x − πα(x)‖², ‖x − x̄‖ ≤ ((L + αL)/β)‖x − πα(x)‖. So we obtain (37) Gα(x) ≥ (αλβ²/(L + αL)²)‖x − x̄‖², which implies that (38) ‖x − x̄‖ ≤ ((L + αL)/β) √(Gα(x)/(αλ)). This completes the proof.
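The error bound of Theorem 11 can be checked numerically on a toy instance (our assumptions, not from the paper: f = 0, T single-valued affine, K = H = ℝ², ϕ(x, y) = ½‖x − y‖², so λ = 1/2 and the ϕ-constant in (5) is L = 1):

```python
import numpy as np

# Numerical check of the error bound (35) in Theorem 11 on a toy instance
# (assumptions beyond the paper: f = 0, T(x) = A x + b single-valued affine,
# K = H = R^2, phi = 0.5*||x - y||^2, lambda = 1/2, phi-constant L = 1).

A = np.array([[5.0, 1.0], [1.0, 4.0]])      # symmetric positive definite
b = np.array([1.0, -2.0])
T = lambda x: A @ x + b

beta = np.linalg.eigvalsh(A).min()          # strong monotonicity modulus, cf. (8)
L_T = np.linalg.norm(A, 2)                  # Lipschitz constant of T, cf. (9)
lam, L_phi, alpha = 0.5, 1.0, 1.0

x_bar = np.linalg.solve(A, -b)              # the solution: T(x_bar) = 0
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=2) * 10
    pi = x - T(x) / alpha                   # pi_alpha(x) in the unconstrained case
    G = np.dot(T(x), T(x)) / (2 * alpha)    # closed-form gap value
    bound = (L_T + alpha * L_phi) / beta * np.sqrt(G / (alpha * lam))
    assert np.linalg.norm(x - x_bar) <= bound + 1e-9   # inequality (35)
print("error bound (35) verified on 100 random points")
```

In this linear setting the bound is conservative by roughly the factor (L_T + α)·cond-type constant, which is typical of gap-function error bounds.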

Now, we introduce the generalized D-gap function Hαγ for SMVIP, defined by (39) Hαγ(x) = Gα(x) − Gγ(x) = max_{y∈H} Ψα(x, y) − max_{y∈H} Ψγ(x, y) = ⟨w, πγ(x) − πα(x)⟩ + f(πγ(x)) − f(πα(x)) + γϕ(x, πγ(x)) − αϕ(x, πα(x)), where πα(x) and πγ(x) are the minimum points of −Ψα(x, ·) and −Ψγ(x, ·) in H, respectively, and 0 < α < γ. For Hαγ(x), we can conclude the following result.

Proposition 12.

If the abstract function ϕ satisfies condition (C3), then one has (40) (γ − α)ϕ(x, πγ(x)) ≤ Hαγ(x) ≤ (γ − α)ϕ(x, πα(x)).

Proof.

From the definition of Hαγ(x), one obtains (41) Hαγ(x) = max_{y∈H} Ψα(x, y) − max_{y∈H} Ψγ(x, y) = Ψα(x, πα(x)) − Ψγ(x, πγ(x)) ≥ Ψα(x, πγ(x)) − Ψγ(x, πγ(x)) = ⟨w, x − πγ(x)⟩ + f(x) − f(πγ(x)) − αϕ(x, πγ(x)) − [⟨w, x − πγ(x)⟩ + f(x) − f(πγ(x)) − γϕ(x, πγ(x))] = (γ − α)ϕ(x, πγ(x)). The inequality Hαγ(x) ≤ (γ − α)ϕ(x, πα(x)) can be proved similarly. This completes the proof.

From Proposition 12, one has the following.

Proposition 13.

If ϕ satisfies conditions (C1)–(C4), then Hαγ(x) is nonnegative, and Hαγ(x) = 0 ⟺ x is a solution of SMVIP(T, K).

Proof.

From Proposition 12 and the nonnegativity of ϕ(·,·), we have that Hαγ(x) is nonnegative.

On the one hand, if Hαγ(x) = 0, then by Proposition 12 and conditions (C2) and (C4), one has x = πγ(x). Then by Proposition 9 (applied with γ in place of α), we conclude that x is a solution of SMVIP(T, K).

On the other hand, if x is a solution of SMVIP(T, K), by Proposition 9 one obtains x = πα(x). From condition (C4), one has ϕ(x, πα(x)) = 0, so Proposition 12 gives Hαγ(x) ≤ 0. Since Hαγ(x) is nonnegative, we have Hαγ(x) = 0. This completes the proof.

By the generalized D-gap function, we have the following error bound for SMVIP(T,K).

Theorem 14.

Let ϕ satisfy conditions (C1)–(C5), and let T be strongly monotone with modulus β with respect to the solution x̄ of SMVIP and Lipschitz continuous with modulus L. Then Hαγ(x) provides a global error bound for SMVIP; that is, (42) ‖x − x̄‖ ≤ ((L + γL)/β) √(Hαγ(x)/(λ(γ − α))).

Proof.

From Lemma 3, Theorem 10 (applied with γ in place of α), and Proposition 13, we have (43) Hαγ(x) ≥ (γ − α)ϕ(x, πγ(x)) ≥ (γ − α)λ‖x − πγ(x)‖² ≥ λ(γ − α)(β/(L + γL))²‖x − x̄‖², which implies (44) ‖x − x̄‖ ≤ ((L + γL)/β) √(Hαγ(x)/(λ(γ − α))). This completes the proof.
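The sandwich inequality (40) of Proposition 12 is easy to verify numerically in the same simplified setting used earlier (our assumptions: f = 0, T single-valued, unconstrained inner maximization, ϕ(x, y) = ½‖x − y‖²), where πα(x) = x − w/α and Gα(x) = ‖w‖²/(2α) are available in closed form:

```python
import numpy as np

# Sanity check of the sandwich inequality (40) in Proposition 12 on a toy
# instance (assumptions beyond the paper: f = 0, T single-valued, inner
# maximization over all of H, phi(x, y) = 0.5*||x - y||^2).

alpha, gamma = 1.0, 3.0                       # 0 < alpha < gamma
T = lambda x: np.array([[3.0, 1.0], [1.0, 2.0]]) @ x - np.array([2.0, 0.0])
phi = lambda u, v: 0.5 * np.dot(u - v, u - v)

x = np.array([2.0, -1.0])
w = T(x)
pi_a, pi_g = x - w / alpha, x - w / gamma     # pi_alpha(x) and pi_gamma(x)
H = np.dot(w, w) / (2 * alpha) - np.dot(w, w) / (2 * gamma)   # D-gap value (39)

assert (gamma - alpha) * phi(x, pi_g) <= H + 1e-12            # lower bound in (40)
assert H <= (gamma - alpha) * phi(x, pi_a) + 1e-12            # upper bound in (40)
print(H)
```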

4. Steepest Descent Method for GVIP

In this section, by introducing an appropriate generalized gap function, the original GVIP(F, f) in (6) is transformed into a constrained optimization problem. When one designs algorithms to solve the optimization problem, the gradient of the objective function is unavoidable. We design a new algorithm, constructing a class of descent directions, to solve the optimization problem. In the following, we set H = ℝ^n, and we introduce the following generalized gap function for GVIP(F, f): (45) gα(x) = max_{f(y)∈K} Ψα(x, y) = max_{f(y)∈K} {⟨F(x), f(x) − f(y)⟩ − αϕ(f(x), f(y))} = ⟨F(x), f(x) − yα(x)⟩ − αϕ(f(x), yα(x)), where yα(x) is the minimum point of −Ψα(x, ·), α is a positive parameter, and ϕ satisfies conditions (C1)–(C5) stated above. For gα, we have the following useful results:

gα(x) is nonnegative on K;

gα(x) = 0 for some x ∈ K ⟺ x is a solution of GVIP;

yα(x) is the unique minimum point of −Ψα(x, ·) in K.

And similar to the discussion in [10, 11], we also make the following two assumptions:

(a) ∇F(x) is positive definite for all x ∈ K;

(b) ∇ₓϕ(x, y) = −∇_yϕ(x, y).

From Lemmas 5–7, we obtain that the original GVIP (6) is equivalent to the following optimization problem: (46) min gα(x) s.t. x ∈ K. For problem (46), we give the following algorithm.

Algorithm 15.

Step 0. Choose an initial point x₀ with f(x₀) ∈ K and ε, t ∈ (0, 1), and put k = 0.

Step 1. If gα(x_k) ≤ ε, stop.

Step 2. Compute yα(x_k), and let (47) d_k = yα(x_k) − f(x_k).

Step 3. Let m_k be the minimal nonnegative integer m such that (48) gα(x_k + t^m d_k) ≤ gα(x_k) − t^{2m}‖d_k‖².

Step 4. Let f(x_{k+1}) = f(x_k) + t^{m_k} d_k and k = k + 1; go to Step 1.
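A runnable sketch of Algorithm 15 is given below under simplifying assumptions not made in the paper: f = identity (so GVIP reduces to the classical VIP), K = [0, 1]², ϕ(x, y) = ½‖x − y‖², and an affine F with positive definite Jacobian. Then yα(x) = P_K(x − F(x)/α), and gα has a simple closed form:

```python
import numpy as np

# Sketch of Algorithm 15 under assumptions beyond the paper: f = identity
# (so the update in Step 4 acts on x directly), K = [0, 1]^2,
# phi(x, y) = 0.5*||x - y||^2, and an affine monotone F. Then
# y_alpha(x) = P_K(x - F(x)/alpha), cf. the closed form used for (2).

A = np.array([[4.0, 1.0], [0.0, 3.0]])       # Jacobian of F, positive definite
b = np.array([-2.0, -1.0])
F = lambda x: A @ x + b
proj_K = lambda z: np.clip(z, 0.0, 1.0)
alpha, eps, t = 2.0, 1e-8, 0.5

def y_alpha(x):
    return proj_K(x - F(x) / alpha)

def g_alpha(x):
    y = y_alpha(x)
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

x = np.array([1.0, 1.0])                     # Step 0: initial point in K
for _ in range(200):
    if g_alpha(x) <= eps:                    # Step 1: stopping test
        break
    d = y_alpha(x) - x                       # Step 2: search direction (47)
    m = 0                                    # Step 3: line search (48)
    while g_alpha(x + t**m * d) > g_alpha(x) - t**(2 * m) * np.dot(d, d):
        m += 1
    x = x + t**m * d                         # Step 4: update (f = identity here)

print(x, g_alpha(x))
```

On this instance the iterates approach the interior solution of the VIP (the root of F inside K), and the gap value decreases geometrically once the line search settles on a fixed step.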

Proposition 16.

Let {x_k} be the sequence generated by Algorithm 15. If x_k is not a solution of GVIP(F, f), then (49) ∇gα(x_k)ᵀ d_k < 0; that is, d_k is a descent direction of gα at x_k, where d_k is defined in (47).

Proof.

To begin, we show that f(x_k) ∈ K for every nonnegative integer k. From Algorithm 15, one obtains f(x₀) ∈ K. We prove the claim by induction: assume f(x_k) ∈ K; we only need to show that f(x_{k+1}) ∈ K. Since f(x_k), yα(x_k) ∈ K, t_k := t^{m_k} ∈ (0, 1), and K is convex, we have (50) f(x_{k+1}) = f(x_k) + t_k d_k = (1 − t_k) f(x_k) + t_k yα(x_k) ∈ K. For simplicity, yα(x_k) and x_k are replaced by yα and x, respectively. From Lemma 5, one has (51) ∇gα(x)ᵀ d = {∇f(x)F(x) + ∇F(x)(f(x) − yα) − α∇ₓϕ(f(x), yα)}ᵀ d = (f(x) − yα)ᵀ ∇F(x)(yα − f(x)) + {∇f(x)F(x) − α∇ₓϕ(f(x), yα)}ᵀ (yα − f(x)). Since x is not a solution, d = yα − f(x) ≠ 0, so (f(x) − yα)ᵀ ∇F(x)(yα − f(x)) < 0 by assumption (a), and we only need to show that {∇f(x)F(x) − α∇ₓϕ(f(x), yα)}ᵀ (yα − f(x)) ≤ 0. Since yα is the unique minimum point of −Ψα(x, ·) in K, we have (52) ⟨−∇_yΨα(x, yα), u − yα⟩ = ⟨∇f(x)F(x) + α∇_yϕ(f(x), yα), u − yα⟩ ≥ 0, ∀u ∈ K. Let u = f(x) ∈ K in (52). One has (53) {∇f(x)F(x) + α∇_yϕ(f(x), yα)}ᵀ (yα − f(x)) ≤ 0. From assumption (b), we have (54) {∇f(x)F(x) − α∇ₓϕ(f(x), yα)}ᵀ (yα − f(x)) ≤ 0. This completes the proof.

Now, we are in a position to show the global convergence result for Algorithm 15.

Theorem 17.

Let {x_k} be the sequence generated by Algorithm 15, and let x* be a cluster point of {x_k}. Then x* is a solution of GVIP(F, f).

Proof.

Let {x_k} ⊆ K (relabeling a convergent subsequence if necessary) converge to x*. If gα(x*) = 0, then from Lemma 6, x* is a solution of GVIP. Suppose, to obtain a contradiction, that gα(x*) ≠ 0. From continuity, one obtains yα(x_k) → yα(x*), which implies that (55) d_k → d* := yα(x*) − f(x*), and d* ≠ 0, since d* = 0 would give gα(x*) = 0. On the one hand, from Proposition 16, one has (56) ∇gα(x*)ᵀ d* < 0. On the other hand, from Step 3 of Algorithm 15, {gα(x_k)} is monotonically decreasing and bounded below; that is, the sequence {gα(x_k)} is convergent, and (57) 0 ≤ t^{2m_k}‖d_k‖² ≤ gα(x_k) − gα(x_{k+1}) → 0 as k → ∞. Hence lim_{k→∞} t^{2m_k}‖d_k‖² = 0, and since d* ≠ 0, (58) lim_{k→∞} t^{2m_k} = 0. In particular m_k → ∞, so we may assume m_k ≥ 1 for all k; then, by the minimality of m_k in Step 3, the integer m_k − 1 fails the test (48); that is, (59) gα(x_k + t^{m_k−1} d_k) > gα(x_k) − t^{2(m_k−1)}‖d_k‖², ∀k, or, equivalently, (60) (gα(x_k + t^{m_k−1} d_k) − gα(x_k)) / t^{m_k−1} > −t^{m_k−1}‖d_k‖², ∀k. Letting k → ∞ and using (58) together with the continuous differentiability of gα, we obtain (61) ∇gα(x*)ᵀ d* ≥ 0. Inequalities (56) and (61) contradict each other. Hence gα(x*) = 0, and x* is a solution of GVIP. This completes the proof.

Acknowledgments

The authors would like to thank the referees for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (Grant nos. 11071109 and 11371198), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the Foundation for Innovative Program of Jiangsu Province (Grant no. CXZZ12_0383).

[1] M. Florian, "Nonlinear cost network models in transportation analysis," Mathematical Programming Study, vol. 26, pp. 167–196, 1986.
[2] M. J. Smith, "The existence, uniqueness and stability of traffic equilibria," Transportation Research Part B, vol. 13, no. 4, pp. 295–304, 1979.
[3] M. Florian and M. Los, "A new look at static spatial price equilibrium model," Regional Science and Urban Economics, vol. 12, pp. 374–389, 1982.
[4] A. B. Nagurney, "Competitive equilibrium problems, variational inequalities and regional science," Journal of Regional Science, vol. 27, pp. 503–517, 1987.
[5] W. Han and B. D. Reddy, "On the finite element method for mixed variational inequalities arising in elastoplasticity," SIAM Journal on Numerical Analysis, vol. 32, no. 6, pp. 1778–1807, 1995.
[6] J. S. Pang and D. Chan, "Iterative methods for variational and complementarity problems," Mathematical Programming, vol. 24, no. 3, pp. 284–313, 1982.
[7] G. Cohen, "Nash equilibria: gradient and decomposition algorithms," Large Scale Systems, vol. 12, no. 2, pp. 173–184, 1987.
[8] M. A. Noor, "General variational inequalities," Applied Mathematics Letters, vol. 1, no. 2, pp. 119–122, 1988.
[9] S. C. Fang and E. L. Peterson, "Generalized variational inequalities," Journal of Optimization Theory and Applications, vol. 38, no. 3, pp. 363–383, 1982.
[10] M. Fukushima, "Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems," Mathematical Programming, vol. 53, no. 1, pp. 99–110, 1992.
[11] B. Qu, C. Y. Wang, and J. Z. Zhang, "Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function," Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 535–552, 2003.
[12] M. V. Solodov and P. Tseng, "Some methods based on the D-gap function for solving monotone variational inequalities," Computational Optimization and Applications, vol. 17, no. 2-3, pp. 255–277, 2000.
[13] M. V. Solodov, "Merit functions and error bounds for generalized variational inequalities," Journal of Mathematical Analysis and Applications, vol. 287, no. 2, pp. 405–414, 2003.
[14] J. H. Wu, M. Florian, and P. Marcotte, "A general descent framework for the monotone variational inequality problem," Mathematical Programming, vol. 61, no. 3, pp. 281–300, 1993.
[15] J. S. Chen, "On some NCP-functions based on the generalized Fischer-Burmeister function," Asia-Pacific Journal of Operational Research, vol. 24, no. 3, pp. 401–420, 2007.
[16] M. A. Noor, "Merit functions for general variational inequalities," Journal of Mathematical Analysis and Applications, vol. 316, no. 2, pp. 736–752, 2006.
[17] N. Yamashita, K. Taji, and M. Fukushima, "Unconstrained optimization reformulations of variational inequality problems," Journal of Optimization Theory and Applications, vol. 92, no. 3, pp. 439–456, 1997.
[18] L. R. Huang and K. F. Ng, "Equivalent optimization formulations and error bounds for variational inequality problems," Journal of Optimization Theory and Applications, vol. 125, no. 2, pp. 299–314, 2005.
[19] Y. H. Hu, Gap functions and weak sharpness of solutions for variational inequalities [Ph.D. thesis], Southest Normal University, 2010 (Chinese).