Journal of Applied Mathematics, Volume 2012, Article ID 816528, doi:10.1155/2012/816528

Research Article

Expected Residual Minimization Method for a Class of Stochastic Quasivariational Inequality Problems

Hui-Qiang Ma and Nan-Jing Huang

Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China

Academic Editor: Xue-Xiang Huang

Received 22 August 2012; Accepted 15 October 2012

Copyright © 2012 Hui-Qiang Ma and Nan-Jing Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider the expected residual minimization method for a class of stochastic quasivariational inequality problems (SQVIP). The regularized gap function for a quasivariational inequality problem (QVIP) is in general not differentiable. We first show that the regularized gap function is differentiable and convex for a class of QVIPs under suitable conditions. We then reformulate the SQVIP as a deterministic minimization problem that minimizes the expected residual of the regularized gap function and solve it by the sample average approximation (SAA) method. Finally, we investigate the limiting behavior of the optimal solutions and stationary points.

1. Introduction

The quasivariational inequality problem is a very important and powerful tool for the study of generalized equilibrium problems. It has been used to study and formulate the generalized Nash equilibrium problem, in which the strategy set of each player depends on the other players' strategies (see [1–3] for more details).

The QVIP is to find a vector $x^* \in S(x^*)$ such that
(1.1) $\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in S(x^*)$,
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a mapping, the symbol $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{R}^n$, and $S : \mathbb{R}^n \to 2^{\mathbb{R}^n}$ is a set-valued mapping such that $S(x)$ is a closed convex set in $\mathbb{R}^n$ for each $x$. In particular, if $S$ is a closed convex set and $S(x) \equiv S$ for each $x$, then QVIP (1.1) reduces to the classical variational inequality problem (VIP): find a vector $x^* \in S$ such that
(1.2) $\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in S$.

In most important practical applications, the function $F$ always involves some random factors or uncertainties. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Taking the randomness into account, we get the stochastic quasivariational inequality problem (SQVIP): find an $x^* \in S(x^*)$ such that
(1.3) $P\{\omega \in \Omega : \langle F(x^*, \omega), x - x^* \rangle \ge 0, \ \forall x \in S(x^*)\} = 1$,
or equivalently,
(1.4) $\langle F(x^*, \omega), x - x^* \rangle \ge 0, \quad \forall x \in S(x^*), \ \omega \in \Omega \ \text{a.s.}$,
where $F : \mathbb{R}^n \times \Omega \to \mathbb{R}^n$ is a mapping and a.s. is the abbreviation for "almost surely" under the given probability measure $P$.

Due to the introduction of randomness, SQVIP (1.4) is more practical and has attracted increasing attention in the recent literature. However, to the best of our knowledge, most publications in the existing literature discuss stochastic complementarity problems and stochastic variational inequality problems, which are two special cases of (1.4). It is well known that quasivariational inequalities are more complicated than variational inequalities and complementarity problems and that they have wide applications. Therefore, it is meaningful and interesting to study the general problem (1.4).

Because of the existence of the random element $\omega$, we cannot generally find a vector $x^* \in S(x^*)$ such that (1.4) holds almost surely. That is, (1.4) is not well defined if we think of solving (1.4) before knowing the realization $\omega$. Therefore, in order to get a reasonable resolution, an appropriate deterministic reformulation for SQVIP becomes an important issue in the study of the considered problem.

Recently, one of the mainstream research approaches to the stochastic variational inequality problem has been the expected residual minimization (ERM) method (see [4, 5, 7, 11–13, 16] and the references therein). Chen and Fukushima [5] formulated the stochastic linear complementarity problem (SLCP) as a minimization problem which minimizes the expectation of a gap function (also called a residual function) for the SLCP. They regarded the optimal solution of this minimization problem as a solution to the SLCP. This method is the so-called expected residual minimization method. Following the ideas of Chen and Fukushima [5], Zhang and Chen [16] considered stochastic nonlinear complementarity problems. Luo and Lin [12, 13] generalized the ERM method to solve the stochastic variational inequality problem.

In this paper, we focus on the ERM method for SQVIP. We first show that the regularized gap function for QVIP is differentiable and convex under suitable conditions. Then, we formulate SQVIP (1.4) as an optimization problem and solve this problem by the SAA method.

The rest of this paper is organized as follows. In Section 2, some preliminaries and the reformulation for SQVIP are given. In Section 3, we give some suitable conditions under which the regularized gap function for QVIP is differentiable and convex. In Section 4, we show that the objective function of the reformulation problem is convex and differentiable under some suitable conditions. Finally, the convergence results of optimal solutions and stationary points are given in Section 5.

2. Preliminaries

Throughout this paper, we use the following notation. $\|\cdot\|$ denotes the Euclidean norm of a vector. For an $n \times n$ symmetric positive definite matrix $G$, $\|\cdot\|_G$ denotes the $G$-norm defined by $\|x\|_G = \sqrt{\langle x, Gx \rangle}$ for $x \in \mathbb{R}^n$, and $\mathrm{Proj}_{S,G}(x)$ denotes the projection of the point $x$ onto the closed convex set $S$ with respect to the norm $\|\cdot\|_G$. For a mapping $F : \mathbb{R}^n \to \mathbb{R}^n$, $\nabla_x F(x)$ denotes the usual gradient of $F(x)$ in $x$. It is easy to verify that
(2.1) $\sqrt{\lambda_{\min}}\,\|x\| \le \|x\|_G \le \sqrt{\lambda_{\max}}\,\|x\|$,
where $\lambda_{\min}$ and $\lambda_{\max}$ are the smallest and largest eigenvalues of $G$, respectively.
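As a quick numerical illustration (not part of the original development), the following Python sketch builds an arbitrary symmetric positive definite $G$ and checks inequality (2.1); the matrix and the test vector are assumed data.

```python
import numpy as np

# Assumed test data: any symmetric positive definite G works.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = A @ A.T + 4 * np.eye(4)          # SPD by construction

def g_norm(x, G):
    """G-norm: ||x||_G = sqrt(<x, Gx>)."""
    return np.sqrt(x @ G @ x)

lam = np.linalg.eigvalsh(G)          # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]

x = rng.standard_normal(4)
# Check sqrt(lam_min)*||x|| <= ||x||_G <= sqrt(lam_max)*||x||, i.e. inequality (2.1).
lo = np.sqrt(lam_min) * np.linalg.norm(x)
hi = np.sqrt(lam_max) * np.linalg.norm(x)
assert lo - 1e-12 <= g_norm(x, G) <= hi + 1e-12
```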

The regularized gap function for QVIP (1.1) is given as follows:
(2.2) $f_\alpha(x) := \max_{y \in S(x)} \left\{\langle -F(x), y - x \rangle - \frac{\alpha}{2}\|y - x\|_G^2\right\}$,
where $\alpha$ is a positive parameter. Let $X \subseteq \mathbb{R}^n$ be defined by $X = \{x \in \mathbb{R}^n : x \in S(x)\}$; it is called the feasible set of QVIP (1.1). For the relationship between the regularized gap function (2.2) and QVIP (1.1), the following result has been shown in [17, 18].

Lemma 2.1.

Let $f_\alpha(x)$ be defined by (2.2). Then $f_\alpha(x) \ge 0$ for all $x \in X$. Furthermore, $f_\alpha(x^*) = 0$ and $x^* \in X$ if and only if $x^*$ is a solution to QVIP (1.1). Hence, problem (1.1) is equivalent to finding a global optimal solution to the problem:
(2.3) $\min_{x \in X} f_\alpha(x)$.
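To make the lemma concrete, here is a minimal Python sketch for the VIP special case $S(x) \equiv S = [0,1]^2$, $G = I$, $\alpha = 1$, with the assumed illustrative choice $F(x) = x - a$ for some $a \in S$; the unique solution is $x^* = a$, where the gap function vanishes. With $G = I$ the inner maximizer is $y = \mathrm{Proj}_S(x - F(x)/\alpha)$, a simple componentwise clip for a box.

```python
import numpy as np

# Assumed instance: S = [0,1]^2, G = I, alpha = 1, F(x) = x - a; the solution is x* = a.
alpha = 1.0
a = np.array([0.3, 0.6])
F = lambda x: x - a
proj = lambda x: np.clip(x, 0.0, 1.0)       # projection onto the box S

def gap(x):
    """Regularized gap function (2.2); the inner maximum is attained at
    y = Proj_S(x - F(x)/alpha) when G = I."""
    y = proj(x - F(x) / alpha)
    return -(F(x) @ (y - x)) - 0.5 * alpha * np.dot(y - x, y - x)

assert abs(gap(a)) < 1e-12                  # f_alpha(x*) = 0 at the solution
assert gap(np.zeros(2)) > 0                 # f_alpha >= 0 on X, positive elsewhere
```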

Although the regularized gap function $f_\alpha(x)$ is directionally differentiable under suitable conditions (see [17, 18]), it is in general nondifferentiable.

The regularized gap function (or residual function) for SQVIP (1.4) is as follows:
(2.4) $f_\alpha(x, \omega) := \max_{y \in S(x)} \left\{\langle -F(x, \omega), y - x \rangle - \frac{\alpha}{2}\|y - x\|_G^2\right\}$,
and the deterministic reformulation for SQVIP is
(2.5) $\min_{x \in X} \Theta(x) := \mathbb{E}[f_\alpha(x, \omega)]$,
where $\mathbb{E}$ denotes the expectation operator.

Note that the objective function $\Theta(x)$ contains a mathematical expectation. Throughout this paper, we assume that $\mathbb{E}[f_\alpha(x, \omega)]$ cannot be evaluated in closed form, so that we have to approximate it through discretization. One of the best-known discretization approaches is the sample average approximation method. In general, for an integrable function $\phi : \Omega \to \mathbb{R}$, we approximate the expected value $\mathbb{E}[\phi(\omega)]$ by the sample average $(1/N_k)\sum_{\omega_i \in \Omega_k} \phi(\omega_i)$, where $\omega_1, \dots, \omega_{N_k}$ are independently and identically distributed random samples of $\omega$ and $\Omega_k := \{\omega_1, \dots, \omega_{N_k}\}$. By the strong law of large numbers, we get the following lemma.

Lemma 2.2.

If $\phi(\omega)$ is integrable, then
(2.6) $\lim_{k \to \infty} \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \phi(\omega_i) = \mathbb{E}[\phi(\omega)]$
holds with probability one.

Let
(2.7) $\Theta_k(x) := \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} f_\alpha(x, \omega_i)$.
Applying the above techniques, we get the following approximation of (2.5):
(2.8) $\min_{x \in X} \Theta_k(x)$.
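The approximation scheme can be sketched numerically. In the toy scalar instance below (an assumption for illustration: $S = [0,1]$, $G = 1$, $\alpha = 1$, $F(x, \omega) = x - \omega$ with $\omega \sim \mathrm{Uniform}(0,1)$), one has $f_\alpha(x, \omega) = \frac{1}{2}(x - \omega)^2$ in closed form, so $\Theta(0.5) = \frac{1}{2}\mathrm{Var}(\omega) = 1/24$; the sample averages $\Theta_k(0.5)$ approach this value as $N_k$ grows, as Lemma 2.2 predicts.

```python
import numpy as np

# Assumed toy SQVIP: S = [0,1], G = 1, alpha = 1, F(x, omega) = x - omega,
# omega ~ Uniform(0,1).  Then f_alpha(x, omega) = 0.5*(x - omega)^2.
rng = np.random.default_rng(42)

def f_alpha(x, omega):
    # Inner maximizer y = Proj_S(x - F(x,omega)/alpha) = clip(omega) = omega here.
    y = np.clip(x - (x - omega), 0.0, 1.0)
    return -(x - omega) * (y - x) - 0.5 * (y - x) ** 2

def theta_k(x, samples):
    """SAA objective (2.7): sample average of the residual."""
    return float(np.mean(f_alpha(x, samples)))

for n_k in (10, 1000, 100000):
    samples = rng.uniform(0.0, 1.0, size=n_k)
    print(n_k, theta_k(0.5, samples))     # tends to Theta(0.5) = 1/24
```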

3. Convexity and Differentiability of $f_\alpha(x)$

In the remainder of this paper, we restrict ourselves to a special case, where $S(x) = S + m(x)$. Here, $S$ is a closed convex set in $\mathbb{R}^n$ and $m : \mathbb{R}^n \to \mathbb{R}^n$ is a mapping. In this case, we can show that $f_\alpha(x)$ is continuously differentiable whenever $F(x)$ and $m(x)$ are. In order to get this result, we need the following lemma (see [19, Chapter 4, Theorem 1.7]).

Lemma 3.1.

Let $S \subseteq \mathbb{R}^n$ be a nonempty closed set and $U \subseteq \mathbb{R}^m$ an open set. Assume that $f : \mathbb{R}^n \times U \to \mathbb{R}$ is continuous and that the gradient $\nabla_u f(\cdot, \cdot)$ is also continuous. If the minimum of the problem $\min_{x \in S} f(x, u)$ is uniquely attained at $x(u)$ for every fixed $u \in U$, then the function $\phi(u) := \min_{x \in S} f(x, u)$ is continuously differentiable and its gradient is given by $\nabla \phi(u) = \nabla_u f(x(u), u)$.

For any $y \in S(x) = S + m(x)$, we can find a vector $z \in S$ such that $y = z + m(x)$. Thus, we can rewrite (2.2) as follows:
(3.1) $f_\alpha(x) = \max_{z \in S} \left\{\langle -F(x), z + m(x) - x \rangle - \frac{\alpha}{2}\|z - (x - m(x))\|_G^2\right\} = -\min_{z \in S} \left\{\langle F(x), z - (x - m(x)) \rangle + \frac{\alpha}{2}\|z - (x - m(x))\|_G^2\right\}$.
The minimization problem in (3.1) is essentially equivalent to the following problem:
(3.2) $\min_{z \in S} \left\|z - [x - m(x) - \alpha^{-1}G^{-1}F(x)]\right\|_G^2$.

It is easy to verify that problem (3.2) has the unique optimal solution $\mathrm{Proj}_{S,G}(x - m(x) - \alpha^{-1}G^{-1}F(x))$, which is therefore also the unique solution of the minimization problem in (3.1). The following result is a natural extension of [20, Theorem 3.2].
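The equivalence between the inner minimum in (3.1) and the projection solving (3.2) can be sanity-checked numerically; the instance below (matrices, box $S$, and $m$ are assumed test data, with $G = I$) compares the projection formula against a brute-force grid minimization of the inner objective.

```python
import numpy as np

# Assumed instance: G = I, alpha = 2, S = [0,1]^2, m(x) = 0.1*x, F affine.
alpha = 2.0
M = np.array([[2.0, 0.5], [0.3, 1.5]])
q = np.array([-1.0, 0.4])
F = lambda x: M @ x + q
m = lambda x: 0.1 * x

def h(x, z):
    """Inner objective of (3.1): <F(x), z - c> + (alpha/2)||z - c||^2, c = x - m(x)."""
    c = x - m(x)
    return F(x) @ (z - c) + 0.5 * alpha * np.dot(z - c, z - c)

x = np.array([0.7, 0.2])
# Candidate minimizer from (3.2): Proj_S(x - m(x) - F(x)/alpha), a clip for a box.
z_proj = np.clip(x - m(x) - F(x) / alpha, 0.0, 1.0)

# Brute-force the minimum of h(x, .) over a fine grid on S.
grid = np.linspace(0.0, 1.0, 101)
z_grid = min(((g1, g2) for g1 in grid for g2 in grid),
             key=lambda z: h(x, np.array(z)))
assert np.allclose(z_proj, z_grid, atol=0.01)   # grid argmin matches the projection
```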

Theorem 3.2.

If $S$ is a closed convex set in $\mathbb{R}^n$ and $m(x)$ and $F(x)$ are continuously differentiable, then the regularized gap function $f_\alpha(x)$ given by (2.2) is also continuously differentiable and its gradient is given by
(3.3) $\nabla f_\alpha(x) = [I - \nabla m(x)]F(x) - [\nabla F(x) - \alpha(I - \nabla m(x))G][z_\alpha(x) - (x - m(x))]$,
where $z_\alpha(x) = \mathrm{Proj}_{S,G}(x - m(x) - \alpha^{-1}G^{-1}F(x))$ and $I$ denotes the $n \times n$ identity matrix.

Proof.

Let us define the function $h : \mathbb{R}^n \times S \to \mathbb{R}$ by
(3.4) $h(x, z) = \langle F(x), z - (x - m(x)) \rangle + \frac{\alpha}{2}\|z - (x - m(x))\|_G^2$.
It is obvious that if $F(x)$ and $m(x)$ are continuous, then $h(x, z)$ is continuous in $(x, z)$. If $F(x)$ and $m(x)$ are continuously differentiable, then
(3.5) $\nabla_x h(x, z) = -[I - \nabla m(x)]F(x) + [\nabla F(x) - \alpha(I - \nabla m(x))G][z - (x - m(x))]$
is continuous in $(x, z)$. By (3.1), we have
(3.6) $f_\alpha(x) = -\min_{z \in S} h(x, z)$.
Since the minimum on the right-hand side of (3.6) is uniquely attained at $z = z_\alpha(x)$, it follows from Lemma 3.1 that $f_\alpha(x)$ is differentiable and its gradient is given by
(3.7) $\nabla f_\alpha(x) = -\nabla_x h(x, z_\alpha(x)) = [I - \nabla m(x)]F(x) - [\nabla F(x) - \alpha(I - \nabla m(x))G][z_\alpha(x) - (x - m(x))]$.
This completes the proof.
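The gradient formula (3.3) can be verified by finite differences. The sketch below uses an assumed affine test instance $F(x) = Mx + q$, $m(x) = Nx$, $G = I$, $S = [0,1]^2$, and writes the Jacobian transposes out explicitly (for affine $F$, $\nabla F(x)$ acts as $M^T$ in this convention).

```python
import numpy as np

# Assumed affine test instance for a finite-difference check of (3.3).
alpha = 1.5
M = np.array([[3.0, 0.4], [0.2, 2.5]])
N = np.array([[0.1, 0.0], [0.05, 0.1]])
q = np.array([-0.8, 0.3])
I = np.eye(2)
F = lambda x: M @ x + q
# z_alpha(x) = Proj_S(x - m(x) - F(x)/alpha) with G = I and box S = [0,1]^2.
z_a = lambda x: np.clip((I - N) @ x - F(x) / alpha, 0.0, 1.0)

def f_alpha(x):
    d = z_a(x) - (I - N) @ x
    return -(F(x) @ d) - 0.5 * alpha * np.dot(d, d)

def grad_f(x):
    """Formula (3.3) for the affine case, with transposes written explicitly."""
    d = z_a(x) - (I - N) @ x
    return (I - N).T @ F(x) - (M.T - alpha * (I - N).T) @ d

x = np.array([0.4, 0.6])
eps = 1e-6
fd = np.array([(f_alpha(x + eps * e) - f_alpha(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])
assert np.allclose(fd, grad_f(x), atol=1e-4)   # central differences match (3.3)
```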

Remark 3.3.

When $m(x) \equiv 0$, we have $S(x) \equiv S$ and so QVIP (1.1) reduces to VIP (1.2). In this case,
(3.8) $\nabla f_\alpha(x) = F(x) - [\nabla F(x) - \alpha G][z_\alpha(x) - x]$,
where
(3.9) $z_\alpha(x) = \mathrm{Proj}_{S,G}(x - \alpha^{-1}G^{-1}F(x))$.
Moreover, when $\alpha = 1$, we have
(3.10) $\nabla f_\alpha(x) = F(x) - [\nabla F(x) - G][z_\alpha(x) - x], \quad z_\alpha(x) = \mathrm{Proj}_{S,G}(x - G^{-1}F(x))$,
which is the same as [20, Theorem 3.2].

Now we investigate the conditions under which fα(x) is convex.

Theorem 3.4.

Suppose that $F(x) = Mx + q$ and $m(x) = Nx$, where $M$ and $N$ are $n \times n$ matrices and $q \in \mathbb{R}^n$ is a vector. Denote by $\beta_{\min}$ the smallest eigenvalue of $M^T(I - N) + (I - N)^T M$ and by $\mu_{\max}$ the largest eigenvalue of $(N - I)^T G (N - I)$. We have the following statements.

(1) If $\mu_{\max} > 0$, $\beta_{\min} \ge 0$, and $\alpha \le \beta_{\min}/\mu_{\max}$, then the function $f_\alpha(x)$ is convex. Moreover, if there exists a constant $\beta > 0$ such that $\alpha \le \beta_{\min}/(\mu_{\max}(1 + \beta))$, then $f_\alpha(x)$ is strongly convex with modulus $\alpha\beta\mu_{\max}$.

(2) If $\mu_{\max} = 0$ and $\beta_{\min} \ge 0$, then the function $f_\alpha(x)$ is convex. Moreover, if $\beta_{\min} > 0$, then $f_\alpha(x)$ is strongly convex with modulus $\beta_{\min}$.

Proof.

Substituting $F(x) = Mx + q$ and $m(x) = Nx$ into (3.1), we have
(3.11) $f_\alpha(x) = \max_{z \in S} \left\{-\langle Mx + q, z + (N - I)x \rangle - \frac{\alpha}{2}\|z - (I - N)x\|_G^2\right\}$.
Define
(3.12) $H(x, z) = -\langle Mx + q, z + (N - I)x \rangle - \frac{\alpha}{2}\|z - (I - N)x\|_G^2$.
Noting that
(3.13) $\nabla_x^2 H(x, z) = M^T(I - N) + (I - N)^T M - \alpha(N - I)^T G(N - I)$,
we have, for any $y \in \mathbb{R}^n$,
(3.14) $y^T \nabla_x^2 H(x, z) y = y^T[M^T(I - N) + (I - N)^T M]y - \alpha y^T(N - I)^T G(N - I)y \ge (\beta_{\min} - \alpha\mu_{\max})\|y\|^2$.

(1) If $\mu_{\max} > 0$, $\beta_{\min} \ge 0$, and $\alpha \le \beta_{\min}/\mu_{\max}$, we have
(3.15) $y^T \nabla_x^2 H(x, z) y \ge (\beta_{\min} - \alpha\mu_{\max})\|y\|^2 \ge 0$.
This implies that the Hessian matrix $\nabla_x^2 H(x, z)$ is positive semidefinite and hence $H(x, z)$ is convex in $x$ for any $z \in S$. In consequence, by (3.11), the regularized gap function $f_\alpha(x)$ is convex. Moreover, if $\alpha \le \beta_{\min}/(\mu_{\max}(1 + \beta))$, then
(3.16) $y^T \nabla_x^2 H(x, z) y \ge (\beta_{\min} - \alpha\mu_{\max})\|y\|^2 \ge \alpha\beta\mu_{\max}\|y\|^2$,
which means that $H(x, z)$ is strongly convex with modulus $\alpha\beta\mu_{\max}$ in $x$ for any $z \in S$. From (3.11), we know that the regularized gap function $f_\alpha(x)$ is strongly convex.

(2) If $\mu_{\max} = 0$ and $\beta_{\min} \ge 0$, we have
(3.17) $y^T \nabla_x^2 H(x, z) y \ge \beta_{\min}\|y\|^2 \ge 0$.
Thus, the regularized gap function $f_\alpha(x)$ is convex. Moreover, if $\beta_{\min} > 0$, then the regularized gap function $f_\alpha(x)$ is strongly convex with modulus $\beta_{\min}$. This completes the proof.
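Statement (1) of the theorem is easy to check numerically on a concrete instance: compute $\beta_{\min}$ and $\mu_{\max}$, take $\alpha \le \beta_{\min}/\mu_{\max}$, and confirm that the Hessian (3.13) is positive semidefinite. The matrices below are assumed test data.

```python
import numpy as np

# Assumed test data for Theorem 3.4(1).
M = np.array([[4.0, 1.0], [0.0, 3.0]])
N = np.array([[0.2, 0.0], [0.0, 0.2]])
G = np.diag([1.0, 2.0])
I = np.eye(2)

A = M.T @ (I - N) + (I - N).T @ M       # beta_min = smallest eigenvalue of A
B = (N - I).T @ G @ (N - I)             # mu_max  = largest eigenvalue of B
beta_min = np.linalg.eigvalsh(A)[0]
mu_max = np.linalg.eigvalsh(B)[-1]
assert beta_min > 0 and mu_max > 0

alpha = beta_min / mu_max               # largest alpha allowed by Theorem 3.4(1)
H = A - alpha * B                       # Hessian (3.13); independent of x and z
assert np.linalg.eigvalsh(H)[0] > -1e-10   # positive semidefinite => f_alpha convex
```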

Remark 3.5.

When $N = 0$, QVIP (1.1) reduces to VIP (1.2). Denote by $\bar{\beta}_{\min}$ and $\bar{\mu}_{\max}$ the smallest eigenvalue of $M^T + M$ and the largest eigenvalue of $G$, respectively. In this case, the function
(3.18) $\bar{f}_\alpha(x) = \max_{z \in S} \left\{\langle -F(x), z - x \rangle - \frac{\alpha}{2}\|z - x\|_G^2\right\}$
is convex when $\bar{\mu}_{\max} > 0$, $\bar{\beta}_{\min} \ge 0$, and $\alpha \le \bar{\beta}_{\min}/\bar{\mu}_{\max}$.

Remark 3.6.

When $N = 0$ and $G = I$, we have $\bar{\mu}_{\max} = 1$. In this case, the function
(3.19) $\hat{f}_\alpha(x) = \max_{z \in S} \left\{\langle -F(x), z - x \rangle - \frac{\alpha}{2}\|z - x\|^2\right\}$
is convex when $\bar{\beta}_{\min} \ge 0$ and $\alpha \le \bar{\beta}_{\min}$. This is consistent with [4, Theorem 2.1].

4. Properties of the Function $\Theta$

In this section, we consider the properties of the objective function Θ(x) of problem (2.5). In what follows we show that Θ(x) is differentiable under some suitable conditions.

Theorem 4.1.

Suppose that $F(x, \omega) := M(\omega)x + Q(\omega)$, where $M : \Omega \to \mathbb{R}^{n \times n}$ and $Q : \Omega \to \mathbb{R}^n$ with
(4.1) $\mathbb{E}[\|M(\omega)\|^2 + \|Q(\omega)\|^2] < +\infty$.
Let $S(x) = S + Nx$. Then the function $\Theta(x)$ is differentiable and
(4.2) $\nabla_x \Theta(x) = \mathbb{E}[\nabla_x f_\alpha(x, \omega)]$.

Proof.

Since $S(x) = S + Nx$, it is easy to see that
(4.3) $f_\alpha(x, \omega) = -\langle F(x, \omega), y_\alpha(x, \omega) - (x - Nx) \rangle - \frac{\alpha}{2}\|y_\alpha(x, \omega) - (x - Nx)\|_G^2$,
where
(4.4) $y_\alpha(x, \omega) = \mathrm{Proj}_{S,G}(x - Nx - \alpha^{-1}G^{-1}F(x, \omega))$.
It follows from Lemma 2.1 that $f_\alpha(x, \omega) \ge 0$ and so
(4.5) $\frac{\alpha}{2}\|y_\alpha(x, \omega) - x + Nx\|_G^2 \le -\langle F(x, \omega), y_\alpha(x, \omega) - x + Nx \rangle \le \|F(x, \omega)\|\,\|y_\alpha(x, \omega) - x + Nx\| \le \frac{1}{\sqrt{\lambda_{\min}}}\|F(x, \omega)\|\,\|y_\alpha(x, \omega) - x + Nx\|_G$.
Thus,
(4.6) $\|y_\alpha(x, \omega) - x + Nx\|_G \le \frac{2}{\alpha\sqrt{\lambda_{\min}}}\|F(x, \omega)\|, \quad \|y_\alpha(x, \omega) - x + Nx\| \le \frac{1}{\sqrt{\lambda_{\min}}}\|y_\alpha(x, \omega) - x + Nx\|_G \le \frac{2}{\alpha\lambda_{\min}}\|F(x, \omega)\|$.

In a similar way to Theorem 3.2, we can show that $f_\alpha(x, \omega)$ is differentiable with respect to $x$ and
(4.7) $\nabla_x f_\alpha(x, \omega) = (I - N)F(x, \omega) - [M(\omega) - \alpha(I - N)G][y_\alpha(x, \omega) - (I - N)x]$.
It follows that
(4.8) $\|\nabla_x f_\alpha(x, \omega)\| \le \|I - N\|\,\|F(x, \omega)\| + \|M(\omega) - \alpha(I - N)G\|\,\|y_\alpha(x, \omega) - (I - N)x\| \le \left\{\|I - N\| + \frac{2}{\alpha\lambda_{\min}}\|M(\omega) - \alpha(I - N)G\|\right\}\|F(x, \omega)\| \le \left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)\|I - N\|\,\|F(x, \omega)\| + \frac{2}{\alpha\lambda_{\min}}\|M(\omega)\|\,\|F(x, \omega)\| \le \left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)\|I - N\|(1 + \|x\|)(\|M(\omega)\| + \|Q(\omega)\|) + \frac{2}{\alpha\lambda_{\min}}(1 + \|x\|)(\|M(\omega)\| + \|Q(\omega)\|)^2 \le \left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)\|I - N\|(1 + \|x\|)(\|M(\omega)\| + \|Q(\omega)\|) + \frac{4}{\alpha\lambda_{\min}}(1 + \|x\|)(\|M(\omega)\|^2 + \|Q(\omega)\|^2)$.
By [21, Theorem 16.8], the function $\Theta$ is differentiable and $\nabla_x \Theta(x) = \mathbb{E}[\nabla_x f_\alpha(x, \omega)]$. This completes the proof.

The following theorem gives some conditions under which Θ(x) is convex.

Theorem 4.2.

Suppose that the assumptions of Theorem 4.1 hold. Let $\beta_0$ satisfy
(4.9) $\beta_0 \le \inf_{\omega \in \Omega \setminus \Omega_0} \lambda_{\min}\left(M(\omega)^T(I - N) + (I - N)^T M(\omega)\right)$,
where $\Omega_0$ is a null subset of $\Omega$ and $\lambda_{\min}(A)$ denotes the smallest eigenvalue of a symmetric matrix $A$. We have the following statements.

(1) If $\mu_{\max} > 0$, $\beta_0 > 0$, and $\alpha \le \beta_0/\mu_{\max}$, then the function $\Theta(x)$ is convex. Moreover, if $\alpha \le \beta_0/(\mu_{\max}(1 + \beta))$ with $\beta > 0$, then $\Theta(x)$ is strongly convex with modulus $\alpha\beta\mu_{\max}$.

(2) If $\mu_{\max} = 0$ and $\beta_0 \ge 0$, then the function $\Theta(x)$ is convex. Moreover, if $\beta_0 > 0$, then $\Theta(x)$ is strongly convex with modulus $\beta_0$.

Proof.

Define
(4.10) $H(x, z, \omega) = -\langle M(\omega)x + Q(\omega), z + (N - I)x \rangle - \frac{\alpha}{2}\|z - (I - N)x\|_G^2$.
Noting that
(4.11) $\nabla_x^2 H(x, z, \omega) = M(\omega)^T(I - N) + (I - N)^T M(\omega) - \alpha(N - I)^T G(N - I)$,
we have, for any $y \in \mathbb{R}^n$,
(4.12) $y^T \nabla_x^2 H(x, z, \omega) y = y^T[M(\omega)^T(I - N) + (I - N)^T M(\omega)]y - \alpha y^T(N - I)^T G(N - I)y \ge (\beta_0 - \alpha\mu_{\max})\|y\|^2$,
where the inequality holds almost surely.

(1) If $\mu_{\max} > 0$, $\beta_0 > 0$, and $\alpha \le \beta_0/\mu_{\max}$, then
(4.13) $y^T \nabla_x^2 H(x, z, \omega) y \ge 0$.
This implies that the Hessian matrix $\nabla_x^2 H(x, z, \omega)$ is positive semidefinite and hence $H(x, z, \omega)$ is convex in $x$ for any $z \in S$. Since
(4.14) $f_\alpha(x, \omega) = \max_{y \in S(x)} \left\{\langle -F(x, \omega), y - x \rangle - \frac{\alpha}{2}\|y - x\|_G^2\right\} = \max_{z \in S} H(x, z, \omega)$,
the regularized gap function $f_\alpha(x, \omega)$ is convex, and so is $\Theta(x)$. Moreover, if $\alpha \le \beta_0/(\mu_{\max}(1 + \beta))$, then
(4.15) $y^T \nabla_x^2 H(x, z, \omega) y \ge \alpha\beta\mu_{\max}\|y\|^2$,
which means that $H(x, z, \omega)$ is strongly convex in $x$ for any $z \in S$. From the definitions of $H(x, z, \omega)$ and $f_\alpha(x, \omega)$, we know that $f_\alpha(x, \omega)$ is strongly convex with modulus $\alpha\beta\mu_{\max}$, and so is $\Theta(x)$.

(2) If $\mu_{\max} = 0$ and $\beta_0 \ge 0$, then
(4.16) $y^T \nabla_x^2 H(x, z, \omega) y \ge \beta_0\|y\|^2 \ge 0$,
which implies that the regularized gap function $f_\alpha(x, \omega)$ is convex, and so is $\Theta(x)$. Moreover, if $\beta_0 > 0$, then $\Theta(x)$ is strongly convex with modulus $\beta_0$. This completes the proof.

It is easy to verify that $X = \{x \in \mathbb{R}^n : x \in S(x)\}$ is a convex set when $S(x) = S + Nx$. Thus, Theorem 4.2 indicates that problem (2.5) is a convex program, and the proof of Theorem 4.2 likewise shows that problem (2.8) is a convex program. Hence a global optimal solution can be obtained by existing solution methods.
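Since (2.8) is then a convex program, a simple projected gradient method already suffices on small instances. The sketch below uses the assumed toy data $N = 0$, $G = I$, $\alpha = 1$, $S = [0,1]^2$, and $F(x, \omega) = x - \omega$ with $\omega \sim \mathrm{Uniform}([0,1]^2)$, for which $f_\alpha(x, \omega) = \frac{1}{2}\|x - \omega\|^2$ and the SAA minimizer is the (feasible) sample mean.

```python
import numpy as np

# Assumed toy instance: N = 0, G = I, alpha = 1, S = [0,1]^2,
# F(x, omega) = x - omega, omega ~ Uniform([0,1]^2).
rng = np.random.default_rng(7)
samples = rng.uniform(0.0, 1.0, size=(500, 2))
proj = lambda x: np.clip(x, 0.0, 1.0)               # projection onto the box X = S

# For this instance Theta_k(x) = (1/N_k) sum 0.5*||x - omega_i||^2, so
# grad Theta_k(x) = x - mean(omega_i).
grad_theta_k = lambda x: x - samples.mean(axis=0)

x = np.array([1.0, 0.0])
for _ in range(200):
    x = proj(x - 0.5 * grad_theta_k(x))             # step length 0.5 < 2/L, L = 1

assert np.allclose(x, samples.mean(axis=0), atol=1e-6)   # SAA minimizer reached
```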

5. Convergence of Solutions and Stationary Points

In this section, we will investigate the limiting behavior of the optimal solutions and stationary points of (2.8).

Note that if the conditions of Theorem 4.1 are satisfied, then the set $X$ is closed and
(5.1) $\mathbb{E}\|M(\omega)\| < \infty, \quad \mathbb{E}[\|M(\omega)\| + \|Q(\omega)\| + c]^2 < \infty$,
where $c$ is a constant.

Theorem 5.1.

Suppose that the conditions of Theorem 4.1 are satisfied. Let $x_k$ be an optimal solution of problem (2.8) for each $k$. If $x^*$ is an accumulation point of $\{x_k\}$, then it is an optimal solution of problem (2.5).

Proof.

Without loss of generality, we assume that $x_k$ itself converges to $x^*$ as $k$ tends to infinity. It is obvious that $x^* \in X$.

We first show that
(5.2) $\lim_{k \to \infty} |\Theta_k(x_k) - \Theta_k(x^*)| = 0$.
It follows from the mean-value theorem that
(5.3) $|\Theta_k(x_k) - \Theta_k(x^*)| = \left|\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}[f_\alpha(x_k, \omega_i) - f_\alpha(x^*, \omega_i)]\right| \le \frac{1}{N_k}\sum_{\omega_i \in \Omega_k}|f_\alpha(x_k, \omega_i) - f_\alpha(x^*, \omega_i)| \le \frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|\nabla_x f_\alpha(y_i^k, \omega_i)\|\,\|x_k - x^*\|$,
where $y_i^k = \gamma_i^k x_k + (1 - \gamma_i^k)x^*$ and $\gamma_i^k \in [0, 1]$. From the proof of Theorem 4.1, we have
(5.4) $\|\nabla_x f_\alpha(y_i^k, \omega_i)\| \le \left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)(1 + \|y_i^k\|)(\|M(\omega_i)\| + \|Q(\omega_i)\|)\|I - N\| + \frac{2}{\alpha\lambda_{\min}}(1 + \|y_i^k\|)(\|M(\omega_i)\| + \|Q(\omega_i)\|)^2$.
Since $\lim_{k \to +\infty} x_k = x^*$, there exists a constant $C$ such that $\|x_k\| \le C$ for each $k$. By the definition of $y_i^k$, we know that $\|y_i^k\| \le C$. Hence,
(5.5) $\|\nabla_x f_\alpha(y_i^k, \omega_i)\| \le \left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)(1 + C)(\|M(\omega_i)\| + \|Q(\omega_i)\|)\|I - N\| + \frac{2}{\alpha\lambda_{\min}}(1 + C)(\|M(\omega_i)\| + \|Q(\omega_i)\|)^2 \le C'(\|M(\omega_i)\| + \|Q(\omega_i)\| + 1)^2$,
where
(5.6) $C' = \max\left\{\left(1 + \frac{2\|G\|}{\lambda_{\min}}\right)(1 + C)\|I - N\|, \ \frac{2}{\alpha\lambda_{\min}}(1 + C)\right\}$.
It follows that
(5.7) $|\Theta_k(x_k) - \Theta_k(x^*)| \le C'\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}(\|M(\omega_i)\| + \|Q(\omega_i)\| + 1)^2\|x_k - x^*\| \to 0$,
which means that (5.2) holds.

Now, we show that $x^*$ is an optimal solution of problem (2.5). It follows from (5.2) and
(5.8) $|\Theta_k(x_k) - \Theta(x^*)| \le |\Theta_k(x_k) - \Theta_k(x^*)| + |\Theta_k(x^*) - \Theta(x^*)|$
that $\lim_{k \to +\infty} \Theta_k(x_k) = \Theta(x^*)$. Since $x_k$ is an optimal solution of problem (2.8) for each $k$, we have, for any $x \in X$,
(5.9) $\Theta_k(x_k) \le \Theta_k(x)$.
Letting $k \to \infty$ above, we get from (5.2) and Lemma 2.2 that
(5.10) $\Theta(x^*) \le \Theta(x)$,
which means that $x^*$ is an optimal solution of problem (2.5). This completes the proof.
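Theorem 5.1 can be illustrated on an assumed toy instance ($N = 0$, $G = I$, $\alpha = 1$, $S = [0,1]^2$, $F(x, \omega) = x - \omega$ with $\omega \sim \mathrm{Uniform}([0,1]^2)$): there $f_\alpha(x, \omega) = \frac{1}{2}\|x - \omega\|^2$, so the SAA minimizer $x_k$ is the sample mean, which approaches the true minimizer $x^* = (0.5, 0.5)$ of (2.5) as $N_k$ grows.

```python
import numpy as np

# Assumed toy instance: the SAA minimizer of (2.8) is the sample mean of the
# omega samples, and the true minimizer of (2.5) is x* = (0.5, 0.5).
rng = np.random.default_rng(123)
x_star = np.array([0.5, 0.5])

errors = []
for n_k in (10, 1000, 100000):
    x_k = rng.uniform(0.0, 1.0, size=(n_k, 2)).mean(axis=0)   # SAA minimizer
    errors.append(np.linalg.norm(x_k - x_star))

# The error shrinks at the Monte Carlo rate O(1/sqrt(N_k)).
assert errors[-1] < 0.01
```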

In general, it is difficult to obtain a global optimal solution of problem (2.8), whereas computation of stationary points is relatively easy. Therefore, it is important to study the limiting behavior of stationary points of problem (2.8).

Definition 5.2.

$x_k$ is said to be stationary to problem (2.8) if
(5.11) $\langle \nabla_x \Theta_k(x_k), y - x_k \rangle \ge 0, \quad \forall y \in X$,
and $x^*$ is said to be stationary to problem (2.5) if
(5.12) $\langle \nabla_x \Theta(x^*), y - x^* \rangle \ge 0, \quad \forall y \in X$.

Theorem 5.3.

Let $x_k$ be stationary to problem (2.8) for each $k$. If the conditions of Theorem 4.1 are satisfied, then any accumulation point $x^*$ of $\{x_k\}$ is a stationary point of problem (2.5).

Proof.

Without loss of generality, we assume that $\{x_k\}$ itself converges to $x^*$.

At first, we show that
(5.13) $\lim_{k \to \infty}\|\nabla_x \Theta_k(x_k) - \nabla_x \Theta_k(x^*)\| = 0$.
It follows from (2.1) and the nonexpansivity of the projection operator that
(5.14) $\|y_\alpha(x_k, \omega) - y_\alpha(x^*, \omega)\| \le \frac{1}{\sqrt{\lambda_{\min}}}\|y_\alpha(x_k, \omega) - y_\alpha(x^*, \omega)\|_G = \frac{1}{\sqrt{\lambda_{\min}}}\left\|\mathrm{Proj}_{S,G}(x_k - Nx_k - \alpha^{-1}G^{-1}F(x_k, \omega)) - \mathrm{Proj}_{S,G}(x^* - Nx^* - \alpha^{-1}G^{-1}F(x^*, \omega))\right\|_G \le \frac{1}{\sqrt{\lambda_{\min}}}\left\|x_k - Nx_k - \alpha^{-1}G^{-1}F(x_k, \omega) - [x^* - Nx^* - \alpha^{-1}G^{-1}F(x^*, \omega)]\right\|_G \le \frac{\sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\min}}}\left\|x_k - Nx_k - \alpha^{-1}G^{-1}F(x_k, \omega) - [x^* - Nx^* - \alpha^{-1}G^{-1}F(x^*, \omega)]\right\| \le \frac{\sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\min}}}\left(\|I - N\| + \alpha^{-1}\|G^{-1}\|\|M(\omega)\|\right)\|x_k - x^*\|$.
Thus,
(5.15) $\|\nabla_x \Theta_k(x_k) - \nabla_x \Theta_k(x^*)\| \le \frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|\nabla_x f_\alpha(x_k, \omega_i) - \nabla_x f_\alpha(x^*, \omega_i)\| = \frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\left\|(I - N)[M(\omega_i)x_k + Q(\omega_i)] - [M(\omega_i) - \alpha(I - N)G][y_\alpha(x_k, \omega_i) - (I - N)x_k] - \{(I - N)[M(\omega_i)x^* + Q(\omega_i)] - [M(\omega_i) - \alpha(I - N)G][y_\alpha(x^*, \omega_i) - (I - N)x^*]\}\right\| \le \|I - N\|\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|M(\omega_i)\|\|x_k - x^*\| + \frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|M(\omega_i) - \alpha(I - N)G\|\left(\|y_\alpha(x_k, \omega_i) - y_\alpha(x^*, \omega_i)\| + \|I - N\|\|x_k - x^*\|\right) \le \left[2 + \frac{\sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\min}}}(\|G\|\|G^{-1}\| + 1)\right]\|I - N\|\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|M(\omega_i)\|\|x_k - x^*\| + \left(1 + \frac{\sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\min}}}\right)\alpha\|I - N\|^2\|G\|\|x_k - x^*\| + \alpha^{-1}\|G^{-1}\|\frac{\sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\min}}}\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\|M(\omega_i)\|^2\|x_k - x^*\| \to 0$,
which means that (5.13) is true.

Next, we show that
(5.16) $\lim_{k \to \infty}\nabla_x \Theta_k(x_k) = \nabla_x \Theta(x^*)$.
It follows from Lemma 2.2 and Theorem 4.1 that
(5.17) $\lim_{k \to \infty}\nabla_x \Theta_k(x^*) = \lim_{k \to \infty}\frac{1}{N_k}\sum_{\omega_i \in \Omega_k}\nabla_x f_\alpha(x^*, \omega_i) = \mathbb{E}[\nabla_x f_\alpha(x^*, \omega)] = \nabla_x \Theta(x^*)$.
By (5.13), we have
(5.18) $\|\nabla_x \Theta_k(x_k) - \nabla_x \Theta(x^*)\| \le \|\nabla_x \Theta_k(x_k) - \nabla_x \Theta_k(x^*)\| + \|\nabla_x \Theta_k(x^*) - \nabla_x \Theta(x^*)\| \to 0$,
which implies that (5.16) is true.

Now we show that $x^*$ is a stationary point of problem (2.5). Since $x_k$ is stationary to problem (2.8), we have, for any $y \in X$,
(5.19) $\langle \nabla_x \Theta_k(x_k), y - x_k \rangle \ge 0$.
Letting $k \to \infty$ above, we get from (5.16) that
(5.20) $\langle \nabla_x \Theta(x^*), y - x^* \rangle \ge 0$.
Thus, $x^*$ is a stationary point of problem (2.5). This completes the proof.

Acknowledgments

This work was supported by the Key Program of NSFC (Grant no. 70831005) and the National Natural Science Foundation of China (111171237, 71101099).

References

[1] P. T. Harker, "Generalized Nash games and quasi-variational inequalities," European Journal of Operational Research, vol. 54, pp. 81–94, 1991.
[2] K. Kubota and M. Fukushima, "Gap function approach to the generalized Nash equilibrium problem," Journal of Optimization Theory and Applications, vol. 144, no. 3, pp. 511–531, 2010. doi:10.1007/s10957-009-9614-4
[3] J. S. Pang and M. Fukushima, "Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games," Computational Management Science, vol. 2, no. 1, pp. 21–56, 2005. doi:10.1007/s10287-004-0010-0
[4] R. P. Agdeppa, N. Yamashita, and M. Fukushima, "Convex expected residual models for stochastic affine variational inequality problems and its application to the traffic equilibrium problem," Pacific Journal of Optimization, vol. 6, no. 1, pp. 3–19, 2010.
[5] X. Chen and M. Fukushima, "Expected residual minimization method for stochastic linear complementarity problems," Mathematics of Operations Research, vol. 30, no. 4, pp. 1022–1038, 2005. doi:10.1287/moor.1050.0160
[6] X. Chen, C. Zhang, and M. Fukushima, "Robust solution of monotone stochastic linear complementarity problems," Mathematical Programming, vol. 117, no. 1-2, pp. 51–80, 2009. doi:10.1007/s10107-007-0163-z
[7] H. Fang, X. Chen, and M. Fukushima, "Stochastic R0 matrix linear complementarity problems," SIAM Journal on Optimization, vol. 18, no. 2, pp. 482–506, 2007. doi:10.1137/050630805
[8] H. Jiang and H. Xu, "Stochastic approximation approaches to the stochastic variational inequality problem," IEEE Transactions on Automatic Control, vol. 53, no. 6, pp. 1462–1475, 2008. doi:10.1109/TAC.2008.925853
[9] G. H. Lin, X. Chen, and M. Fukushima, "New restricted NCP functions and their applications to stochastic NCP and stochastic MPEC," Optimization, vol. 56, no. 5-6, pp. 641–953, 2007. doi:10.1080/02331930701617320
[10] G. H. Lin and M. Fukushima, "New reformulations for stochastic nonlinear complementarity problems," Optimization Methods & Software, vol. 21, no. 4, pp. 551–564, 2006. doi:10.1080/10556780600627610
[11] C. Ling, L. Qi, G. Zhou, and L. Caccetta, "The SC1 property of an expected residual function arising from stochastic complementarity problems," Operations Research Letters, vol. 36, no. 4, pp. 456–460, 2008. doi:10.1016/j.orl.2008.01.010
[12] M. J. Luo and G. H. Lin, "Expected residual minimization method for stochastic variational inequality problems," Journal of Optimization Theory and Applications, vol. 140, no. 1, pp. 103–116, 2009. doi:10.1007/s10957-008-9439-6
[13] M. J. Luo and G. H. Lin, "Convergence results of the ERM method for nonlinear stochastic variational inequality problems," Journal of Optimization Theory and Applications, vol. 142, no. 3, pp. 569–581, 2009. doi:10.1007/s10957-009-9534-3
[14] M. Z. Wang, M. M. Ali, and G. H. Lin, "Sample average approximation method for stochastic complementarity problems with applications to supply chain supernetworks," Journal of Industrial and Management Optimization, vol. 7, no. 2, pp. 317–345, 2011. doi:10.3934/jimo.2011.7.317
[15] H. Xu, "Sample average approximation methods for a class of stochastic variational inequality problems," Asia-Pacific Journal of Operational Research, vol. 27, no. 1, pp. 103–119, 2010. doi:10.1142/S0217595910002569
[16] C. Zhang and X. Chen, "Stochastic nonlinear complementarity problem and applications to traffic equilibrium under uncertainty," Journal of Optimization Theory and Applications, vol. 137, no. 2, pp. 277–295, 2008. doi:10.1007/s10957-008-9358-6
[17] M. Fukushima, "A class of gap functions for quasi-variational inequality problems," Journal of Industrial and Management Optimization, vol. 3, no. 2, pp. 165–171, 2007. doi:10.3934/jimo.2007.3.165
[18] K. Taji, "On gap functions for quasi-variational inequalities," Abstract and Applied Analysis, vol. 2008, Article ID 531361, 7 pages, 2008. doi:10.1155/2008/531361
[19] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, France, 1976.
[20] M. Fukushima, "Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems," Mathematical Programming, vol. 53, no. 1, pp. 99–110, 1992. doi:10.1007/BF01585696
[21] P. Billingsley, Probability and Measure, John Wiley & Sons, New York, NY, USA, 1995.