Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2013, Article ID 762165, doi:10.1155/2013/762165

Research Article: Methods for Solving Generalized Nash Equilibrium

Biao Qu and Jing Zhao, School of Management, Qufu Normal University, Rizhao, Shandong 276826, China

Academic Editor: Ya Ping Fang

Received 12 October 2012; Revised 5 January 2013; Accepted 17 January 2013; Published 14 February 2013

Copyright © 2013 Biao Qu and Jing Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash equilibrium problem (NEP), in which each player's strategy set may depend on the rival players' strategies. In this paper, we present two descent-type methods. The algorithms are based on a reformulation of the generalized Nash equilibrium problem, via the Nikaido-Isoda function, as an unconstrained optimization problem. We prove that our algorithms are globally convergent, and the convergence analysis is not based on conditions guaranteeing that every stationary point of the optimization problem is a solution of the GNEP.

1. Introduction

The generalized Nash equilibrium problem (GNEP for short) is an extension of the standard Nash equilibrium problem (NEP for short), in which the strategy set of each player depends on the strategies of all the other players as well as on his own strategy. The GNEP has recently attracted much attention due to its applications in various fields such as mathematics, computer science, economics, and engineering. For more details, we refer the reader to a recent survey paper by Facchinei and Kanzow and the references therein.

Let us first recall the definition of the GNEP. There are $N$ players labelled by an integer $v=1,\ldots,N$. Each player $v$ controls the variables $x^v\in\mathbb{R}^{n_v}$. Let $x=(x^1,\ldots,x^N)^T$ be the vector formed by all these decision variables, where $n:=n_1+n_2+\cdots+n_N$. To emphasize the $v$th player's variables within the vector $x$, we sometimes write $x=(x^v,x^{-v})^T\in\mathbb{R}^n$, where $x^{-v}$ denotes all the other players' variables. In the game, each player $v$ controls the variables $x^v$ and tries to minimize a cost function $\theta_v(x^v,x^{-v})$ subject to the constraint $(x^v,x^{-v})^T\in X$ with $x^{-v}$ given exogenously, where $X$ is a common strategy set. A vector $x^*:=(x^{*,1},\ldots,x^{*,N})^T$ is called a solution of the GNEP, or a generalized Nash equilibrium, if for each player $v=1,\ldots,N$, $x^{*,v}$ solves the following optimization problem with $x^{*,-v}$ fixed:
$$\min_{x^v}\ \theta_v(x^v,x^{*,-v})\quad\text{s.t.}\quad (x^v,x^{*,-v})\in X. \tag{1}$$

If $X$ is defined as the Cartesian product of certain sets $X_v\subseteq\mathbb{R}^{n_v}$, that is, $X=X_1\times X_2\times\cdots\times X_N$, then the GNEP reduces to the standard Nash equilibrium problem.

Throughout this paper, we make the following assumption.

Assumption 1.

(a) The set X is nonempty, closed, and convex.

(b) The utility function θv is continuously differentiable and, as a function of xv alone, convex.

A basic tool for both the theoretical analysis and the numerical solution of (generalized) Nash equilibrium problems is the Nikaido-Isoda function, defined as
$$\Psi(x,y)=\sum_{v=1}^{N}\bigl[\theta_v(x^v,x^{-v})-\theta_v(y^v,x^{-v})\bigr]. \tag{2}$$

Sometimes the name Ky Fan function can also be found in the literature; see [12, 13]. In the following, we state a definition which we have taken from .

Definition 1.

$x^*$ is a normalized Nash equilibrium of the GNEP if $\max_{y\in X}\Psi(x^*,y)=0$ holds, where $\Psi(x,y)$ denotes the Nikaido-Isoda function defined in (2).

In order to overcome the nondifferentiability associated with the mapping $\Psi(x,y)$, von Heusinger and Kanzow used a simple regularization of the Nikaido-Isoda function. For a parameter $\alpha>0$, the following regularized Nikaido-Isoda function was considered:
$$\Psi_\alpha(x,y)=\sum_{v=1}^{N}\bigl[\theta_v(x^v,x^{-v})-\theta_v(y^v,x^{-v})\bigr]-\frac{\alpha}{2}\|x-y\|^2. \tag{3}$$
Since, under the given Assumption 1, $\Psi_\alpha(x,y)$ is strongly concave in $y$, the maximization problem
$$\max_{y}\ \Psi_\alpha(x,y)\quad\text{s.t.}\quad y\in X \tag{4}$$
has a unique solution for each $x$, denoted by $y_\alpha(x)$.

The corresponding value function is then defined by
$$V_\alpha(x)=\max_{y\in X}\Psi_\alpha(x,y)=\Psi_\alpha\bigl(x,y_\alpha(x)\bigr). \tag{5}$$
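To make (4) and (5) concrete, here is a minimal Python sketch (an illustration only; the game, the box set $X=[0,1]^2$, and all function names are our assumptions, not the paper's) that computes $y_\alpha(x)$ by projected gradient ascent for a hypothetical two-player game with $\theta_1(x)=x_1^2+x_1x_2$ and $\theta_2(x)=x_2^2-x_1x_2$:

```python
# Hypothetical two-player game (illustration only, not from the paper):
#   theta_1(x) = x1^2 + x1*x2   (convex in x1),
#   theta_2(x) = x2^2 - x1*x2   (convex in x2),
# common strategy set X = [0,1]^2, so projection onto X is a clamp.

def clamp(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

def psi_alpha(x, y, alpha):
    """Regularized Nikaido-Isoda function (3) for the sample game."""
    x1, x2 = x
    y1, y2 = y
    term1 = (x1**2 + x1*x2) - (y1**2 + y1*x2)   # player 1's part
    term2 = (x2**2 - x1*x2) - (y2**2 - x1*y2)   # player 2's part
    return term1 + term2 - 0.5*alpha*((x1 - y1)**2 + (x2 - y2)**2)

def y_alpha(x, alpha, step=0.05, iters=2000):
    """Unique maximizer of (4), approximated by projected gradient ascent
    (Psi_alpha(x, .) is strongly concave, so the iteration converges)."""
    x1, x2 = x
    y1, y2 = x1, x2
    for _ in range(iters):
        g1 = -(2*y1 + x2) + alpha*(x1 - y1)     # d Psi_alpha / d y1
        g2 = -(2*y2 - x1) + alpha*(x2 - y2)     # d Psi_alpha / d y2
        y1 = clamp(y1 + step*g1)
        y2 = clamp(y2 + step*g2)
    return (y1, y2)

def V_alpha(x, alpha):
    """Value function (5): V_alpha(x) = Psi_alpha(x, y_alpha(x))."""
    return psi_alpha(x, y_alpha(x, alpha), alpha)
```

For $x\in X$ one always has $V_\alpha(x)\ge\Psi_\alpha(x,x)=0$, with equality exactly at a normalized equilibrium of this (Cartesian-product) game.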

Let $\beta>\alpha>0$ be a given parameter. The corresponding value function is then defined by
$$V_\beta(x)=\max_{y\in X}\Psi_\beta(x,y)=\Psi_\beta\bigl(x,y_\beta(x)\bigr). \tag{6}$$

Define
$$V_{\alpha\beta}(x)=V_\alpha(x)-V_\beta(x). \tag{7}$$

In , the following important properties of the function Vαβ(x) have been proved.

Theorem 2.

The following statements hold:

(a) $V_{\alpha\beta}(x)\ge 0$ for any $x\in\mathbb{R}^n$;

(b) $x^*$ is a normalized Nash equilibrium of the GNEP if and only if $V_{\alpha\beta}(x^*)=0$;

(c) $V_{\alpha\beta}(x)$ is continuously differentiable on $\mathbb{R}^n$, and
$$\begin{aligned}\nabla V_{\alpha\beta}(x)&=\nabla V_\alpha(x)-\nabla V_\beta(x)\\&=\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]+\begin{pmatrix}\nabla_{x^1}\theta_1(y_\alpha(x)^1,x^{-1})-\nabla_{x^1}\theta_1(y_\beta(x)^1,x^{-1})\\\vdots\\\nabla_{x^N}\theta_N(y_\alpha(x)^N,x^{-N})-\nabla_{x^N}\theta_N(y_\beta(x)^N,x^{-N})\end{pmatrix}\\&\quad-\alpha\bigl(x-y_\alpha(x)\bigr)+\beta\bigl(x-y_\beta(x)\bigr),\end{aligned} \tag{8}$$
where $\nabla\theta_v(y^v,x^{-v})$ denotes the full gradient of $\theta_v$ evaluated at the point $(y^v,x^{-v})$.

From Theorem 2, we know that the normalized Nash equilibria of the GNEP are precisely the global minima of the smooth unconstrained optimization problem (see )
$$\min_{x\in\mathbb{R}^n}V_{\alpha\beta}(x) \tag{9}$$
with zero optimal value.
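As a quick numerical sanity check of Theorem 2 and the reformulation (9), the following sketch uses a toy box-constrained game (our construction, not the paper's example: $\theta_1=x_1x_2$, $\theta_2=-x_1x_2$, $X=[1,9]^2$) for which $y_\alpha(x)$ is available in closed form; it confirms that $V_{\alpha\beta}\ge 0$ everywhere and that $V_{\alpha\beta}$ vanishes at the equilibrium $(1,9)$:

```python
# Toy box-constrained game (an illustration, not the paper's example):
#   theta_1 = x1*x2,  theta_2 = -x1*x2,  X = [1,9] x [1,9].
# For this game y_alpha(x) has a closed form (componentwise clamp of the
# unconstrained stationary point), so V_alpha and V_alphabeta are exact.

def clamp(v, lo=1.0, hi=9.0):
    return max(lo, min(hi, v))

def y_reg(x, a):
    """Unique maximizer of the regularized Nikaido-Isoda function (4)."""
    x1, x2 = x
    return (clamp(x1 - x2/a), clamp(x2 + x1/a))

def psi(x, y, a):
    x1, x2 = x
    y1, y2 = y
    return (x1*x2 - y1*x2) + (-x1*x2 + x1*y2) - 0.5*a*((x1-y1)**2 + (x2-y2)**2)

def V(x, a):
    """Value function (5)."""
    return psi(x, y_reg(x, a), a)

def V_ab(x, a, b):
    """V_alphabeta(x) = V_alpha(x) - V_beta(x); nonnegative, and zero
    exactly at a normalized equilibrium (Theorem 2)."""
    return V(x, a) - V(x, b)
```

The equilibrium of this toy game is $(1,9)$: player 1 pushes $x_1$ to its lower bound and player 2 pushes $x_2$ to its upper bound.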

In this paper, we develop two new descent methods for finding a normalized Nash equilibrium of the GNEP by solving the optimization problem (9). The key to our methods is a strategy for adjusting $\alpha$ and $\beta$ when a stationary point of $V_{\alpha\beta}(x)$ is not a solution of the GNEP. We will show that our algorithms are globally convergent to a normalized Nash equilibrium under an appropriate assumption on the cost functions, which is not stronger than the one considered in .

The organization of the paper is as follows. In Section 2, we state the main assumption underlying our algorithms and present some examples of the GNEP satisfying it. In Section 3, we derive some useful properties of the function Vαβ(x). In Section 4, we formally state our algorithms and prove that they are both globally convergent to a normalized Nash equilibrium.

2. Main Assumption

In order to construct the algorithms and to guarantee their convergence, we make the following assumption.

Assumption 2.

For any $\beta>\alpha>0$ and $x\in\mathbb{R}^n$, if $y_\alpha(x)\ne y_\beta(x)$, we have
$$\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)\ge\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v,x^{-v})-\nabla_{x^v}\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr), \tag{10}$$
where $\nabla\theta_v(y^v,x^{-v})$ denotes the full gradient of $\theta_v$ evaluated at the point $(y^v,x^{-v})$.

We next consider three examples which satisfy Assumption 2.

Example 3.

Let us consider the case in which all the cost functions are separable, that is,
$$\theta_v(x)=f_v(x^v)+g_v(x^{-v}), \tag{11}$$
where $f_v:\mathbb{R}^{n_v}\to\mathbb{R}$ is convex and $g_v:\mathbb{R}^{n-n_v}\to\mathbb{R}$. A simple calculation shows that, for any $x\in\mathbb{R}^n$, we have
$$\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v,x^{-v})-\nabla_{x^v}\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr)=\sum_{v=1}^{N}\bigl[\nabla f_v(y_\beta(x)^v)-\nabla f_v(y_\alpha(x)^v)\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr),$$
$$\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)=\sum_{v=1}^{N}\bigl[\nabla f_v(y_\beta(x)^v)-\nabla f_v(y_\alpha(x)^v)\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr). \tag{12}$$

Hence Assumption 2 holds.

Example 4.

Consider the case where the cost function $\theta_v(x)$ is quadratic, that is,
$$\theta_v(x)=\frac{1}{2}(x^v)^T A_{vv}x^v+\sum_{\mu=1,\,\mu\ne v}^{N}(x^v)^T A_{v\mu}x^\mu \tag{13}$$
for $v=1,\ldots,N$. We have
$$\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v,x^{-v})-\nabla_{x^v}\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr)=\sum_{v=1}^{N}\bigl\langle y_\beta(x)^v-y_\alpha(x)^v,\ A_{vv}\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr)\bigr\rangle,$$
$$\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)=\Bigl\langle y_\beta(x)-y_\alpha(x),\ \begin{pmatrix}A_{11}&A_{12}&\cdots&A_{1N}\\A_{21}&A_{22}&\cdots&A_{2N}\\\vdots&\vdots&\ddots&\vdots\\A_{N1}&A_{N2}&\cdots&A_{NN}\end{pmatrix}\bigl(y_\beta(x)-y_\alpha(x)\bigr)\Bigr\rangle. \tag{14}$$

Therefore, if the matrix
$$\begin{pmatrix}0&A_{12}&\cdots&A_{1N}\\A_{21}&0&\cdots&A_{2N}\\\vdots&\vdots&\ddots&\vdots\\A_{N1}&A_{N2}&\cdots&0\end{pmatrix} \tag{15}$$
is positive semidefinite, then Assumption 2 is satisfied.
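For scalar strategies and two players, the condition on (15) can be made explicit: the matrix is $\begin{pmatrix}0&a_{12}\\a_{21}&0\end{pmatrix}$, its quadratic form is $d_1d_2(a_{12}+a_{21})$, and this is nonnegative for every $d$ exactly when $a_{12}+a_{21}=0$, since $d_1d_2$ takes both signs. A tiny sketch (function names are ours):

```python
# Scalar two-player quadratic game:
#   theta_1 = 0.5*a11*x1^2 + a12*x1*x2,  theta_2 = 0.5*a22*x2^2 + a21*x2*x1.
# The matrix (15) is [[0, a12], [a21, 0]] and its quadratic form is
# d^T B d = d1*d2*(a12 + a21).

def quad_form(a12, a21, d1, d2):
    """d^T B d for B = [[0, a12], [a21, 0]]."""
    return d1*(a12*d2) + d2*(a21*d1)

def assumption2_holds_2p(a12, a21):
    """Positive semidefiniteness of (15) in the scalar 2-player case."""
    return a12 + a21 == 0
```

For instance, the game $\theta_1=x_1x_2$, $\theta_2=-x_1x_2$ (the pattern of Example 5) has $a_{12}=1$, $a_{21}=-1$, so the test passes, while $a_{12}=a_{21}=1$ fails it.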

In the following example, we show the relationship between our assumption and the one considered in , which reads as follows: for any $\beta>\alpha>0$ and a given $x\in\mathbb{R}^n$ with $y_\alpha(x)\ne y_\beta(x)$, the inequality
$$\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)>0 \tag{16}$$
holds.

Example 5.

Consider the GNEP with $N=2$,
$$X=\{x\in\mathbb{R}^2:\ x_1\ge 1,\ x_2\ge 1,\ x_1+x_2\le 10\}, \tag{17}$$
and the cost functions $\theta_1(x)=x_1x_2$ and $\theta_2(x)=-x_1x_2$. The point $x^*=(1,9)^T$ is the unique normalized Nash equilibrium. For any $x\in\mathbb{R}^2$, we have
$$\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)=0,$$
$$\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v,x^{-v})-\nabla_{x^v}\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr)=0. \tag{18}$$

Therefore Assumption 2 holds, but (16) does not hold for any β>α>0.
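The equilibrium claim of Example 5 can be verified by a brute-force best-response scan over the feasible set (a sketch under our own discretization; the helper names are hypothetical):

```python
# Example 5 (sketch): X = {x in R^2 : x1 >= 1, x2 >= 1, x1 + x2 <= 10},
# theta_1 = x1*x2, theta_2 = -x1*x2.  With x2 = 9 fixed, player 1 minimizes
# 9*x1 over the interval forced to the single point x1 = 1; with x1 = 1
# fixed, player 2 minimizes -x2 over 1 <= x2 <= 9, giving x2 = 9.

def feasible(x1, x2):
    return x1 >= 1 and x2 >= 1 and x1 + x2 <= 10

def best_response_1(x2, grid=1000):
    # minimize theta_1(x1, x2) = x1*x2 over feasible x1 on a grid
    cands = [1 + i*(9 - 1)/grid for i in range(grid + 1)]
    cands = [x1 for x1 in cands if feasible(x1, x2)]
    return min(cands, key=lambda x1: x1*x2)

def best_response_2(x1, grid=1000):
    # minimize theta_2(x1, x2) = -x1*x2 over feasible x2 on a grid
    cands = [1 + i*(9 - 1)/grid for i in range(grid + 1)]
    cands = [x2 for x2 in cands if feasible(x1, x2)]
    return min(cands, key=lambda x2: -x1*x2)
```

Both scans return the components of $x^*=(1,9)^T$, confirming that neither player can improve unilaterally.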

3. Properties of $V_{\alpha\beta}(x)$

Lemma 6.

For any $\beta>\alpha>0$ and $x\in\mathbb{R}^n$, we have
$$V_{\alpha\beta}(x)\ge\frac{\beta-\alpha}{2}\|x-y_\beta(x)\|^2+\frac{\alpha}{2}\|y_\alpha(x)-y_\beta(x)\|^2, \tag{19}$$
$$V_{\alpha\beta}(x)\ge\frac{\beta-\alpha}{2}\|x-y_\alpha(x)\|^2-\frac{\beta}{2}\|y_\alpha(x)-y_\beta(x)\|^2. \tag{20}$$

Proof.

Since $y_\alpha(x)$ satisfies the optimality condition of the maximization problem (4), we have
$$\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\alpha(x)^v,x^{-v})-\alpha\bigl(x^v-y_\alpha(x)^v\bigr)\bigr]^T\bigl(y_\beta(x)^v-y_\alpha(x)^v\bigr)\ge 0. \tag{21}$$

In a similar way, it follows that $y_\beta(x)$ satisfies
$$\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v,x^{-v})-\beta\bigl(x^v-y_\beta(x)^v\bigr)\bigr]^T\bigl(y_\alpha(x)^v-y_\beta(x)^v\bigr)\ge 0. \tag{22}$$

Since each $\theta_v(x)$, as a function of $x^v$ alone, is convex, (21) and (22) yield
$$\sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v,x^{-v})-\theta_v(y_\alpha(x)^v,x^{-v})\bigr]-\alpha\bigl(x-y_\alpha(x)\bigr)^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)\ge 0, \tag{23}$$
$$\sum_{v=1}^{N}\bigl[\theta_v(y_\alpha(x)^v,x^{-v})-\theta_v(y_\beta(x)^v,x^{-v})\bigr]-\beta\bigl(x-y_\beta(x)\bigr)^T\bigl(y_\alpha(x)-y_\beta(x)\bigr)\ge 0, \tag{24}$$
respectively. Thus, using the definition of $V_{\alpha\beta}(x)$ and (23), we have
$$\begin{aligned}V_{\alpha\beta}(x)&=\sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v,x^{-v})-\theta_v(y_\alpha(x)^v,x^{-v})\bigr]+\frac{\beta}{2}\|x-y_\beta(x)\|^2-\frac{\alpha}{2}\|x-y_\alpha(x)\|^2\\&\ge\alpha\bigl(x-y_\alpha(x)\bigr)^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)+\frac{\beta}{2}\|x-y_\beta(x)\|^2-\frac{\alpha}{2}\|x-y_\alpha(x)\|^2\\&=\alpha\bigl(x-y_\alpha(x)\bigr)^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)+\frac{\beta}{2}\|x-y_\beta(x)\|^2-\frac{\alpha}{2}\bigl\|\bigl(x-y_\beta(x)\bigr)+\bigl(y_\beta(x)-y_\alpha(x)\bigr)\bigr\|^2\\&=\frac{\beta-\alpha}{2}\|x-y_\beta(x)\|^2+\frac{\alpha}{2}\|y_\alpha(x)-y_\beta(x)\|^2.\end{aligned} \tag{25}$$

Similarly, using the definition of $V_{\alpha\beta}(x)$ and (24), we have
$$V_{\alpha\beta}(x)\ge\frac{\beta-\alpha}{2}\|x-y_\alpha(x)\|^2-\frac{\beta}{2}\|y_\alpha(x)-y_\beta(x)\|^2. \tag{26}$$

The proof is complete.

Lemma 7.

Assume $X$ is bounded. For any $x\in\mathbb{R}^n$, we have
$$\lim_{\beta\to\infty,\,\alpha\to 0}\Bigl(\frac{V_{\alpha\beta}(x)}{\beta-\alpha}-\frac{1}{2}\|x-y_\beta(x)\|^2\Bigr)=0. \tag{27}$$

Proof.

We have from (19) that
$$\frac{2V_{\alpha\beta}(x)}{\beta-\alpha}\ge\|x-y_\beta(x)\|^2+\frac{\alpha}{\beta-\alpha}\|y_\alpha(x)-y_\beta(x)\|^2\ge\|x-y_\beta(x)\|^2. \tag{28}$$

By the definition of $V_{\alpha\beta}(x)$, we have
$$\frac{2V_{\alpha\beta}(x)}{\beta-\alpha}=\frac{2\sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v,x^{-v})-\theta_v(y_\alpha(x)^v,x^{-v})\bigr]}{\beta-\alpha}+\frac{\beta\|x-y_\beta(x)\|^2-\alpha\|x-y_\alpha(x)\|^2}{\beta-\alpha}. \tag{29}$$

Since $y_\alpha(x),y_\beta(x)\in X$ and $X$ is bounded, the first quotient in (29) tends to zero, while $\beta/(\beta-\alpha)\to 1$ and $\alpha/(\beta-\alpha)\to 0$ as $\beta\to\infty$ and $\alpha\to 0$. Hence,
$$\limsup_{\beta\to\infty,\,\alpha\to 0}\Bigl(\frac{V_{\alpha\beta}(x)}{\beta-\alpha}-\frac{1}{2}\|x-y_\beta(x)\|^2\Bigr)\le 0, \tag{30}$$
which, combined with (28), proves the claim.

This completes the proof.

Equation (8) and Assumption 2 yield
$$\begin{aligned}\nabla V_{\alpha\beta}(x)^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)&=\sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v,x^{-v})-\nabla\theta_v(y_\alpha(x)^v,x^{-v})\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)\\&\quad+\begin{pmatrix}\nabla_{x^1}\theta_1(y_\alpha(x)^1,x^{-1})-\nabla_{x^1}\theta_1(y_\beta(x)^1,x^{-1})\\\vdots\\\nabla_{x^N}\theta_N(y_\alpha(x)^N,x^{-N})-\nabla_{x^N}\theta_N(y_\beta(x)^N,x^{-N})\end{pmatrix}^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)\\&\quad-\bigl[\alpha\bigl(x-y_\alpha(x)\bigr)-\beta\bigl(x-y_\beta(x)\bigr)\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)\\&\ge-\bigl[\alpha\bigl(x-y_\alpha(x)\bigr)-\beta\bigl(x-y_\beta(x)\bigr)\bigr]^T\bigl(y_\beta(x)-y_\alpha(x)\bigr)=:e_{\alpha\beta}(x)\ge 0,\end{aligned} \tag{31}$$
where the nonnegativity of $e_{\alpha\beta}(x)$ follows from the inequalities (23) and (24). In particular, either $e_{\alpha\beta}(x)$ is above a tolerance $\varepsilon>0$, in which case $y_\alpha(x)-y_\beta(x)$ is a direction of sufficient descent for $V_{\alpha\beta}$ at $x$, or else, as we show in the lemma below, $x$ is an approximate solution of the GNEP with accuracy depending on $\varepsilon$, $\alpha$, $\beta$. This observation leads to our methods.

Lemma 8.

For any $\beta>\alpha>0$ and $x\in\mathbb{R}^n$, we have
$$\|x-y_\beta(x)\|^2\le\frac{2V_{\alpha\beta}(x)}{\beta-\alpha}, \tag{32}$$
$$\gamma_{\alpha\beta}(x)\le V_\alpha(x)\le\gamma_{\alpha\beta}(x)+e_{\alpha\beta}(x)+\frac{\alpha}{2}\|y_\alpha(x)-y_\beta(x)\|^2, \tag{33}$$
where $\gamma_{\alpha\beta}(x)=\sum_{v=1}^{N}[\theta_v(x^v,x^{-v})-\theta_v(y_\beta(x)^v,x^{-v})]-(\alpha/2)\|x-y_\beta(x)\|^2$.

Proof.

Inequality (32) follows immediately from (19) in Lemma 6.

The definition of $V_\alpha(x)$ implies that
$$V_\alpha(x)\ge\sum_{v=1}^{N}\bigl[\theta_v(x^v,x^{-v})-\theta_v(y_\beta(x)^v,x^{-v})\bigr]-\frac{\alpha}{2}\|x-y_\beta(x)\|^2, \tag{34}$$
which proves the first inequality in (33).

Since $e_{\alpha\beta}(x)$ is the sum of the nonnegative quantity $\sum_{v=1}^{N}[\theta_v(y_\beta(x)^v,x^{-v})-\theta_v(y_\alpha(x)^v,x^{-v})]-\alpha(x-y_\alpha(x))^T(y_\beta(x)-y_\alpha(x))$ and another nonnegative quantity (see (23) and (24)), we have
$$e_{\alpha\beta}(x)\ge\sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v,x^{-v})-\theta_v(y_\alpha(x)^v,x^{-v})\bigr]-\alpha\bigl(x-y_\alpha(x)\bigr)^T\bigl(y_\beta(x)-y_\alpha(x)\bigr). \tag{35}$$

Thus,
$$V_\alpha(x)\le\gamma_{\alpha\beta}(x)+e_{\alpha\beta}(x)+\frac{\alpha}{2}\|y_\alpha(x)-y_\beta(x)\|^2, \tag{36}$$
which is the second inequality in (33). This completes the proof.

4. Two Methods for Solving the GNEP

In this section, we introduce two methods for solving the GNEP, motivated by the D-gap function scheme for solving monotone variational inequalities [14, 15]. We first formally describe our methods below and then analyze their convergence using Lemma 8.

Algorithm 9.

Choose an arbitrary initial point $x^0\in\mathbb{R}^n$ and any $\beta_0>\alpha_0>0$. Choose any sequences of numbers $\varepsilon_k>0$, $\eta_k\ge 0$, $\lambda_k\in[0,1)$, $k=1,2,\ldots$, such that
$$\lim_{k\to\infty}\varepsilon_k=\lim_{k\to\infty}\frac{\eta_k}{1-\lambda_k}=0,\qquad\sum_{k=1}^{\infty}(1-\lambda_k)=\infty. \tag{37}$$

For k=1,2,, we iterate the following.

Iteration $k$. Choose any $0<\alpha_k\le\frac{1}{2}\alpha_{k-1}$. Choose any $\beta_k\ge 2\beta_{k-1}$ and $\bar{x}^k\in\mathbb{R}^n$ satisfying
$$\frac{V_{\alpha_k\beta_k}(\bar{x}^k)}{\beta_k-\alpha_k}\le\eta_k+\lambda_k\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}}. \tag{38}$$

Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $\bar{x}^k$ as the starting point and using $y_{\alpha_k}(x)-y_{\beta_k}(x)$ as a safeguard descent direction at $x$, until the method generates an $x\in\mathbb{R}^n$ satisfying $e_{\alpha_k\beta_k}(x)\le\varepsilon_k$. The resulting $x$ is denoted by $x^k$.
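The following self-contained Python sketch mimics Algorithm 9 on a toy box-constrained game ($\theta_1=x_1x_2$, $\theta_2=-x_1x_2$, $X=[1,9]^2$, equilibrium $(1,9)$) for which $y_\alpha(x)$ has a closed form; the concrete parameter schedule, the inner-iteration cap, and the backtracking line search are our assumptions, not the paper's prescriptions:

```python
# Toy game (illustration only): theta_1 = x1*x2, theta_2 = -x1*x2, X = [1,9]^2.

def clamp(v, lo=1.0, hi=9.0):
    return max(lo, min(hi, v))

def y_reg(x, a):
    """Closed-form maximizer of the regularized Nikaido-Isoda function (4)."""
    x1, x2 = x
    return (clamp(x1 - x2/a), clamp(x2 + x1/a))

def psi(x, y, a):
    x1, x2 = x
    y1, y2 = y
    return (x1*x2 - y1*x2) + (-x1*x2 + x1*y2) - 0.5*a*((x1-y1)**2 + (x2-y2)**2)

def V_ab(x, a, b):
    """V_alphabeta(x) = V_alpha(x) - V_beta(x)."""
    return psi(x, y_reg(x, a), a) - psi(x, y_reg(x, b), b)

def e_ab(x, a, b):
    """e_alphabeta(x) = -[a(x - y_a) - b(x - y_b)]^T (y_b - y_a), cf. (31)."""
    ya, yb = y_reg(x, a), y_reg(x, b)
    return -sum((a*(xi - yi_a) - b*(xi - yi_b))*(yi_b - yi_a)
                for xi, yi_a, yi_b in zip(x, ya, yb))

def algorithm9(x, a=1.0, b=4.0, outer=12, eps0=1e-2):
    """Outer loop: halve alpha, double beta; inner loop: descend V_ab along
    the safeguard direction y_alpha(x) - y_beta(x) until e_ab(x) <= eps_k."""
    for k in range(outer):
        eps = eps0 * 0.5**k
        for _ in range(200):                     # inner descent loop (capped)
            if e_ab(x, a, b) <= eps:
                break
            ya, yb = y_reg(x, a), y_reg(x, b)
            d = (ya[0] - yb[0], ya[1] - yb[1])   # safeguard descent direction
            t, v0 = 1.0, V_ab(x, a, b)
            while t > 1e-12 and V_ab((x[0] + t*d[0], x[1] + t*d[1]), a, b) >= v0:
                t *= 0.5                         # backtracking line search
            x = (x[0] + t*d[0], x[1] + t*d[1])
        a, b = a/2.0, 2.0*b                      # alpha_k <= alpha_{k-1}/2, beta_k >= 2*beta_{k-1}
    return x
```

Starting from, e.g., $x^0=(5,5)$, the iterates drift toward the equilibrium $(1,9)$ as $\beta_k$ grows; no claim of efficiency is intended.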

Theorem 10.

Assume $X$ is bounded. Let $\{x^k,\alpha_k,\beta_k,\varepsilon_k,\eta_k,\lambda_k\}_{k=0,1,2,\ldots}$ be generated by Algorithm 9. Then $\{x^k\}$ is bounded; $\beta_k\to\infty$; $\alpha_k\to 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP.

Proof.

Denote $a_k=V_{\alpha_k\beta_k}(x^k)/(\beta_k-\alpha_k)$. By (38) and the descent property $V_{\alpha_k\beta_k}(x^k)\le V_{\alpha_k\beta_k}(\bar{x}^k)$, we have $a_k\le\eta_k+\lambda_k a_{k-1}$ for $k=1,2,\ldots$, and it follows from (37) that $a_k\to 0$ (, Lemma 3). Moreover, $\beta_k\to\infty$ and $\alpha_k\to 0$ by the construction of the algorithm. For each $k\in\{1,2,\ldots\}$, we have from Lemma 8 that (32) and (33) hold with $\alpha=\alpha_k$, $\beta=\beta_k$, $x=x^k$. This, together with $e_{\alpha_k\beta_k}(x^k)\le\varepsilon_k$ and $\|y_{\alpha_k}(x^k)-y_{\beta_k}(x^k)\|\le\operatorname{diam}(X)$, yields
$$\|x^k-y_{\beta_k}(x^k)\|^2\le 2a_k,\qquad \gamma_k\le V_{\alpha_k}(x^k)\le\gamma_k+\varepsilon_k+\frac{\alpha_k\operatorname{diam}(X)^2}{2}, \tag{39}$$
where $\gamma_k=\sum_{v=1}^{N}[\theta_v(x^{k,v},x^{k,-v})-\theta_v(y_{\beta_k}(x^k)^v,x^{k,-v})]-(\alpha_k/2)\|x^k-y_{\beta_k}(x^k)\|^2$ and $\operatorname{diam}(X)=\max_{x,y\in X}\|x-y\|$. Since $a_k\to 0$ and $y_{\beta_k}(x^k)\in X$ with $X$ bounded, the first inequality in (39) implies that $\{x^k\}$ is bounded; moreover, it also implies $\gamma_k\to 0$.

Since $\alpha_k\to 0$, $\varepsilon_k\to 0$, and $\gamma_k\to 0$, the last two inequalities in (39) yield $V_{\alpha_k}(x^k)\to 0$. Since, for each $y\in X$, we have $V_{\alpha_k}(x^k)\ge\Psi(x^k,y)-(\alpha_k/2)\|x^k-y\|^2$, this yields $0\ge\Psi(\bar{x},y)$ for each cluster point $\bar{x}$ of $\{x^k\}$. Thus, each cluster point $\bar{x}$ is a normalized Nash equilibrium of the GNEP. This completes the proof.

Algorithm 11.

Choose any $x^0\in\mathbb{R}^n$, any $\beta_0>\alpha_0>0$, and two sequences of nonnegative numbers $\rho_k,\eta_k$, $k=1,2,\ldots$, such that
$$\eta_k+\rho_k>0\quad\forall k,\qquad\sum_{k=1}^{\infty}\rho_k<\infty,\qquad\sum_{k=1}^{\infty}\eta_k<\infty. \tag{40}$$

Choose any continuous function $\phi:\mathbb{R}_+\to\mathbb{R}_+$ with $\phi(t)=0\Leftrightarrow t=0$. For $k=1,2,\ldots$, we iterate the following.

Iteration $k$. Choose any $0<\alpha_k\le\frac{1}{2}\alpha_{k-1}$ and then choose $\beta_k\ge 2\beta_{k-1}$ satisfying
$$\frac{V_{\alpha_k\beta_k}(x^{k-1})}{\beta_k-\alpha_k}\le(1+\rho_k)\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}}+\eta_k. \tag{41}$$

Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $x^{k-1}$ as the starting point. We assume the descent method has the property that the amount of descent achieved per step at $x$ is bounded away from zero whenever $x$ is bounded and $\|\nabla V_{\alpha_k\beta_k}(x)\|$ is bounded away from zero. Then, either the method generates in a finite number of steps an $x$ satisfying
$$\|\nabla V_{\alpha_k\beta_k}(x)\|\le\phi\Bigl(\frac{V_{\alpha_k\beta_k}(x)}{\beta_k-\alpha_k}\Bigr), \tag{42}$$
which we denote by $x^k$, or else $V_{\alpha_k\beta_k}(x)$ must decrease towards zero, in which case any cluster point of the generated sequence solves the GNEP.

Theorem 12.

Assume X is bounded. Let {xk,αk,βk,ρk,ηk}k=0,1,2, be generated by Algorithm 11.

(a) Suppose $x^k$ is obtained for all $k$. Then $\{x^k\}$ is bounded; $\beta_k\to\infty$; $\alpha_k\to 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP.

(b) Suppose $x^k$ is not obtained for some $k$. Then the descent method generates a bounded sequence of points $x$ with $V_{\alpha_k\beta_k}(x)\to 0$, so every cluster point of this sequence solves the GNEP.

Proof.

(a) Since we use a descent method at iteration $k$ to obtain $x^k$ from $x^{k-1}$, we have $V_{\alpha_k\beta_k}(x^k)\le V_{\alpha_k\beta_k}(x^{k-1})$, so (41) yields
$$\frac{V_{\alpha_k\beta_k}(x^k)}{\beta_k-\alpha_k}\le(1+\rho_k)\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}}+\eta_k. \tag{43}$$

Denote $a_k=V_{\alpha_k\beta_k}(x^k)/(\beta_k-\alpha_k)$. Then (43) can be written as $a_k\le(1+\rho_k)a_{k-1}+\eta_k$ for $k=1,2,\ldots$. Using $a_k\ge 0$ and (40), it follows that the sequence $\{a_k\}$ converges to some $\bar{a}\ge 0$ (, Lemma 2). Since (32) implies
$$\|x^k-y_{\beta_k}(x^k)\|^2\le 2a_k\quad\forall k, \tag{44}$$
the sequence $\{x^k\}$ is bounded.

We claim that $\bar{a}=0$. Suppose the contrary. Then, for all $k$ sufficiently large, it holds that $a_k\ge\bar{a}/2$, that is,
$$\frac{\bar{a}}{2}\le\frac{\sum_{v=1}^{N}\bigl[\theta_v(y_{\beta_k}(x^k)^v,x^{k,-v})-\theta_v(y_{\alpha_k}(x^k)^v,x^{k,-v})\bigr]}{\beta_k-\alpha_k}+\frac{(\beta_k/2)\|x^k-y_{\beta_k}(x^k)\|^2-(\alpha_k/2)\|x^k-y_{\alpha_k}(x^k)\|^2}{\beta_k-\alpha_k}. \tag{45}$$

Since, by the construction of the algorithm, $\beta_k\to\infty$ and $\alpha_k\to 0$, and $\{x^k\}$ is bounded (as are $\{y_{\alpha_k}(x^k)\}$ and $\{y_{\beta_k}(x^k)\}$), we get
$$0<\frac{\bar{a}}{2}\le\liminf_{k\to\infty}\frac{1}{2}\|x^k-y_{\beta_k}(x^k)\|^2. \tag{46}$$

Then $\lim_{k\to\infty}\beta_k\|x^k-y_{\beta_k}(x^k)\|=\infty$, and hence, in view of (8),
$$\lim_{k\to\infty}\|\nabla V_{\alpha_k\beta_k}(x^k)\|=\infty. \tag{47}$$

Since $x^k$ satisfies (42), $\|\nabla V_{\alpha_k\beta_k}(x^k)\|\le\phi(a_k)$ for all $k$; as $\{a_k\}$ converges and $\phi$ is continuous, $\{\phi(a_k)\}$ converges, which contradicts (47). Hence, $\bar{a}=0$. For each $k\in\{1,2,\ldots\}$, we have from Lemma 8 that (33) holds with $\alpha=\alpha_k$, $\beta=\beta_k$, $x=x^k$, and from (31) that
$$e_{\alpha_k\beta_k}(x^k)\le\varepsilon_k:=\nabla V_{\alpha_k\beta_k}(x^k)^T\bigl(y_{\beta_k}(x^k)-y_{\alpha_k}(x^k)\bigr). \tag{48}$$

This, together with $\|y_{\alpha_k}(x^k)-y_{\beta_k}(x^k)\|\le\operatorname{diam}(X)$, yields
$$\gamma_k\le V_{\alpha_k}(x^k)\le\gamma_k+\varepsilon_k+\frac{\alpha_k\operatorname{diam}(X)^2}{2}, \tag{49}$$
where $\gamma_k=\sum_{v=1}^{N}[\theta_v(x^{k,v},x^{k,-v})-\theta_v(y_{\beta_k}(x^k)^v,x^{k,-v})]-(\alpha_k/2)\|x^k-y_{\beta_k}(x^k)\|^2$ and $\operatorname{diam}(X)=\max_{x,y\in X}\|x-y\|$. Since $a_k\to 0$, (44) implies that $\{x^k\}$ is bounded and that $\gamma_k\to 0$. Also, we have $\|\nabla V_{\alpha_k\beta_k}(x^k)\|\le\phi(a_k)\to 0$, so $\varepsilon_k\to 0$.

From the facts that $\alpha_k\to 0$, (49), $\gamma_k\to 0$, and $\varepsilon_k\to 0$, we get $V_{\alpha_k}(x^k)\to 0$. Since, for each $y\in X$, we have from the definition of $V_\alpha(x)$ that
$$V_{\alpha_k}(x^k)\ge\Psi(x^k,y)-\frac{\alpha_k}{2}\|x^k-y\|^2, \tag{50}$$
this yields $0\ge\Psi(\bar{x},y)$ for each cluster point $\bar{x}$ of $\{x^k\}$. Thus, each cluster point $\bar{x}$ is a normalized Nash equilibrium of the GNEP.

(b) It is easy to prove that $V_{\alpha_k\beta_k}(x)\to 0$. Hence, every cluster point $\bar{x}$ of the generated sequence is a normalized Nash equilibrium of the GNEP.

The proof is complete.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (11271226, 10971118) and the Promotive Research Fund for Excellent Young and Middle-Aged Scientists of Shandong Province (BS2010SF010).

References

[1] A. Dreves and C. Kanzow, "Nonsmooth optimization reformulations characterizing all solutions of jointly convex generalized Nash equilibrium problems," Computational Optimization and Applications, vol. 50, no. 1, pp. 23–48, 2011.
[2] A. Dreves, C. Kanzow, and O. Stein, "Nonsmooth optimization reformulations of player convex generalized Nash equilibrium problems," Journal of Global Optimization, vol. 53, no. 4, pp. 587–614, 2012.
[3] F. Facchinei and C. Kanzow, "Generalized Nash equilibrium problems," Annals of Operations Research, vol. 175, pp. 177–211, 2010.
[4] F. Facchinei, A. Fischer, and V. Piccialli, "On generalized Nash games and variational inequalities," Operations Research Letters, vol. 35, no. 2, pp. 159–164, 2007.
[5] F. Facchinei and C. Kanzow, "Generalized Nash equilibrium problems," 4OR: A Quarterly Journal of Operations Research, vol. 5, no. 3, pp. 173–210, 2007.
[6] D. Han, H. Zhang, G. Qian, and L. Xu, "An improved two-step method for solving generalized Nash equilibrium problems," European Journal of Operational Research, vol. 216, no. 3, pp. 613–623, 2012.
[7] P. T. Harker, "Generalized Nash games and quasi-variational inequalities," European Journal of Operational Research, vol. 54, no. 1, pp. 81–94, 1991.
[8] A. von Heusinger and C. Kanzow, "Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions," Computational Optimization and Applications, vol. 43, no. 3, pp. 353–377, 2009.
[9] B. Panicucci, M. Pappalardo, and M. Passacantando, "On solving generalized Nash equilibrium problems via optimization," Optimization Letters, vol. 3, no. 3, pp. 419–435, 2009.
[10] B. Qu and J. G. Jiang, "On the computation of normalized Nash equilibrium for generalized Nash equilibrium problem," Journal of Convergence Information Technology, vol. 7, no. 22, pp. 16–21, 2012.
[11] J. Zhang, B. Qu, and N. Xiu, "Some projection-like methods for the generalized Nash equilibria," Computational Optimization and Applications, vol. 45, no. 1, pp. 89–109, 2010.
[12] S. D. Flåm and A. S. Antipin, "Equilibrium programming using proximal-like algorithms," Mathematical Programming, vol. 78, no. 1, pp. 29–41, 1997.
[13] S. D. Flåm and A. Ruszczyński, "Noncooperative convex games: computing equilibrium by partial regularization," Working Paper 94-42, Laxenburg, Austria, 1994.
[14] M. V. Solodov and P. Tseng, "Some methods based on the D-gap function for solving monotone variational inequalities," Computational Optimization and Applications, vol. 17, no. 2-3, pp. 255–277, 2000.
[15] B. Qu, C. Y. Wang, and J. Z. Zhang, "Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function," Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 535–552, 2003.
[16] B. T. Polyak, Introduction to Optimization, Optimization Software, New York, NY, USA, 1987.