Monotone variational inequalities capture a variety of concrete applications arising in many areas. In this paper, we develop a new prediction-correction method for monotone variational inequalities with separable structure. The new method is easily implementable: the main computational effort in each iteration is to evaluate the proximal mappings of the involved operators, and the involved subvariational inequalities can be solved in parallel. We establish the global convergence of the proposed method. Preliminary numerical results show that the new method is competitive with the proximal-based decomposition method of Chen and Teboulle (1994).
1. Introduction
The variational inequality problem, denoted VI (Ω,F), in a finite-dimensional space is to determine a vector u∈Ω such that
(1)〈u′-u,F(u)〉≥0,∀u′∈Ω,
where Ω⊆ℜn is a nonempty closed convex subset and F is a continuous mapping from ℜn into itself. The VI (Ω,F) has found applications in a broad spectrum of areas such as traffic equilibrium [1] and network economic problems [2]. For solving (1), the proximal point algorithm (PPA), proposed by Martinet [3] and further studied by Rockafellar [4, 5], generates the new iterate uk+1 via the following procedure:
(2)〈u′-uk+1,F(uk+1)+G(uk+1-uk)〉≥0,∀u′∈Ω,
where G∈ℜn×n is a positive definite matrix playing the role of a proximal regularization parameter. Note that the PPA has to solve a system of nonlinear equations at each iteration, which is quite difficult in many cases. This difficulty has inspired a burst of approximate versions of the PPA, which solve (2) only approximately under a certain “relative error” criterion. These methods include the well-known extragradient-type methods (EGM) as special cases. Assume that F is Lipschitz continuous; that is, there is l∈(0,1) such that
(3)β∥F(uk)-F(u~k)∥≤l∥uk-u~k∥.
Then at each iteration EGM takes the following general form:
(4)〈u′-u~,βF(uk)+u~-uk〉≥0,∀u′∈Ω,〈u′-uk+1,βF(u~)+uk+1-uk〉≥0,∀u′∈Ω.
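The two-step scheme (4) is straightforward to implement once the projection onto Ω is available. The following Python/NumPy sketch applies it to a toy monotone problem (the operator, dimensions, and parameter values are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def extragradient(F, proj, u0, beta=0.1, iters=2000):
    """Extragradient scheme (4): predict with F(u^k), correct with F(u~^k)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        u_tilde = proj(u - beta * F(u))    # prediction step
        u = proj(u - beta * F(u_tilde))    # correction step
    return u

# Toy monotone operator F(u) = M u + q with M skew-symmetric; Omega = R^2,
# so the projection is the identity and the solution satisfies F(u*) = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([1.0, -1.0])
u_star = np.linalg.solve(M, -q)            # here u* = (-1, -1)
u = extragradient(lambda v: M @ v + q, lambda v: v, np.zeros(2))
```

On this skew-symmetric example a plain projected-gradient step u−βF(u) moves away from the solution, whereas the extragradient correction converges; this is the classical motivation for evaluating F a second time at the predictor.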
In this paper, we consider the following variational inequalities: find a vector w∈𝒟 such that
(5)〈w′-w,F(w)〉≥0,∀w′∈𝒟,
with
(6)w:=(x; y), F(w):=(f(x); g(y)), 𝒟={(x,y)∣x∈𝒳, y∈𝒴, Ax+By=b},
where 𝒟⊆ℜn+p is a nonempty closed convex subset and f:𝒳→ℜn and g:𝒴→ℜp are monotone operators. Problem (5) is referred to as a structured variational inequality (SVI) [6].
By attaching a Lagrange multiplier vector λ∈ℜm to the linear constraints Ax+By=b, the VI problem (5) is converted into the following form:
(7)〈x′-x, f(x)-ATλ〉+〈y′-y, g(y)-BTλ〉+〈λ′-λ, Ax+By-b〉≥0, ∀u′∈Ω,
where
(8)Ω=𝒳×𝒴×ℜm.
The compact form is
(9)〈u′-u,F(u)〉≥0,∀u′∈Ω,
with
(10)u:=(x; y; λ), F(u):=(f(x)-ATλ; g(y)-BTλ; Ax+By-b).
For the purpose of parallel computing, the proximal alternating directions method (PADM) generates u~k=(x~k,y~k,λ~k)∈Ω as follows [7, 8]: first find an x~k∈𝒳 such that
(11)〈x′-x~k, f(x~k)-AT[λk-β(Ax~k+Byk-b)]+r(x~k-xk)〉≥0, ∀x′∈𝒳.
Then find a y~k∈𝒴 such that
(12)〈y′-y~k, g(y~k)-BT[λk-β(Ax~k+By~k-b)]+s(y~k-yk)〉≥0, ∀y′∈𝒴.
Finally, update λ~k via
(13)λ~k=λk-β(Ax~k+By~k-b).
Here r≥0 and s≥0 are given proximal parameters, and β>0 is a given penalty parameter for the linear constraints. Note that when r=s=0 in (11)-(12), the classical alternating directions method (ADM) is recovered. To make the PADM (11)–(13) more efficient and flexible, several strategies have been developed: for example, allowing r, s, and β to vary from iteration to iteration according to certain rules [8–10], or producing the new iterate by a minor correction to the predictor. A simple and effective correction scheme is (see, e.g., [11, 12])
(14)uk+1=uk-αk(uk-u~k),
where αk>0 is a chosen step size.
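To make the scheme (11)–(14) concrete, the following NumPy sketch runs PADM with a unit correction step on a tiny equality-constrained quadratic program (all data, dimensions, and parameter values are illustrative assumptions; with 𝒳=ℜn and 𝒴=ℜp the subproblems reduce to linear solves):

```python
import numpy as np

# Tiny instance of min 0.5 x'Px + 0.5 y'Qy s.t. Ax + By = b with X = R^n,
# Y = R^p, so f(x) = Px, g(y) = Qy and (11)-(12) become linear solves.
n = p = m = 2
P, Qm = 2.0 * np.eye(n), 3.0 * np.eye(p)   # Qm: quadratic term in y
A, B, b = np.eye(m), np.eye(m), np.array([1.0, 2.0])
r, s, beta = 0.5, 0.5, 1.0                 # proximal / penalty parameters
alpha = 1.0                                # unit step in the correction (14)
x, y, lam = np.zeros(n), np.zeros(p), np.zeros(m)

for _ in range(2000):
    # (11): (P + beta*A'A + r*I) x~ = r*x + A'*lam - beta*A'*(B*y - b)
    xt = np.linalg.solve(P + beta * A.T @ A + r * np.eye(n),
                         r * x + A.T @ lam - beta * A.T @ (B @ y - b))
    # (12): (Q + beta*B'B + s*I) y~ = s*y + B'*lam - beta*B'*(A*x~ - b)
    yt = np.linalg.solve(Qm + beta * B.T @ B + s * np.eye(p),
                         s * y + B.T @ lam - beta * B.T @ (A @ xt - b))
    lamt = lam - beta * (A @ xt + B @ yt - b)          # (13)
    x, y, lam = (x - alpha * (x - xt),                 # correction (14)
                 y - alpha * (y - yt),
                 lam - alpha * (lam - lamt))

# KKT solution of this instance: x* = 0.6*b, y* = 0.4*b, lam* = 1.2*b
```

With α=1 the correction simply accepts the predictor, recovering plain proximal ADM; smaller or adaptive αk correspond to the strategies cited above.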
The PADM (11)–(13) is often easy to implement when the decomposed subproblems have closed-form solutions or can be solved efficiently to high precision. However, in some cases the matrices A and B are not identity matrices, and the two subproblems (11)-(12) become difficult to solve because the evaluation of (ATA+(1/β)f)-1(Aυ) and (BTB+(1/β)g)-1(Bυ) could be costly. To overcome this difficulty, we propose a new implementable prediction-correction method for the SVI. At each iteration, we first decompose the problem into two small subproblems with respect to x and y, respectively. Both subproblems are easy to solve under the assumption that the resolvent operators of f and g are easy to evaluate, where the resolvent operator of a mapping T is defined as (I+λT)-1(υ). Then we update the Lagrange multipliers and perform a correction step to ensure the algorithm's convergence.
The SVI has been studied extensively, both theoretically and in applications. Recently, Han [13] proposed a hybrid entropic proximal decomposition method for the SVI; this method is based on logarithmic-quadratic functions and is combined with a self-adaptive strategy. He [14] presented a parallel splitting augmented Lagrangian method which can be extended to solve systems of equilibrium problems with three separable operators. Xu et al. [15] proposed two classes of correction methods for the SVI in which the mapping F does not have an explicit form. Xu and Wu [16] studied a class of linearized proximal alternating direction methods and showed that the relaxation factor can have the same restriction region as for the general ADM. Yuan and Li [17] developed a logarithmic-quadratic-proximal- (LQP-) based decomposition method by applying LQP terms to regularize the ADM subproblems; Bnouhachem et al. [18] then studied a new inexact LQP alternating direction method by solving a series of related systems of nonlinear equations.
The rest of this paper is organized as follows. In Section 2, we review some preliminaries which are useful for further analysis. In Section 3, we present the new implementable prediction-correction method for SVI, and the global convergence result is established. Numerical experiments and some conclusions are addressed in Sections 4 and 5, respectively.
2. Preliminaries
In this section, we make some standard assumptions and summarize some basic properties of VI which will be used in the subsequent discussions.
Assumption
𝒳,𝒴 are simple closed convex sets.
A set is said to be simple if the projection onto it is easy to compute; here the projection of a point υ onto the closed convex set Ω, denoted by PΩ(υ), is defined as the nearest point u∈Ω to υ; that is,
(15)PΩ(υ)=argmin{∥u-υ∥∣u∈Ω}.
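For example, boxes and Euclidean balls are simple in this sense, since their projections (15) have closed forms. A small Python sketch for illustration:

```python
import numpy as np

def proj_box(v, lo, hi):
    """Nearest point in the box {u : lo <= u <= hi}: componentwise clipping."""
    return np.clip(v, lo, hi)

def proj_ball(v, radius=1.0):
    """Nearest point in the Euclidean ball {u : ||u|| <= radius}."""
    nv = np.linalg.norm(v)
    return v if nv <= radius else v * (radius / nv)
```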
The mapping F is point-to-point, monotone, and continuous.
A mapping F:ℜn→ℜn is said to be monotone on Ω if
(16)〈u-υ,F(u)-F(υ)〉≥0,∀u,υ∈Ω.
The solution set of SVI (Ω,F), denoted by Ω*, is nonempty.
Properties. Let G be a symmetric positive definite matrix; the G-norm of a vector u is defined by ∥u∥G:=√〈u,Gu〉. In particular, when G=I, ∥u∥:=√〈u,u〉 is the Euclidean norm of u. For a matrix A, ∥A∥ denotes its operator norm ∥A∥:=max{∥Ax∥:∥x∥≤1}.
The following well-known properties of the projection operator will be used in the coming analysis.
Lemma 1.
Let Ω⊆ℜn be a nonempty closed convex set, and let PΩ(·) be the projection operator onto Ω under the G-norm. Then
(17)〈u′-PΩ(u′),G(u-PΩ(u′))〉≤0,∀u′∈ℜn,∀u∈Ω,∥PΩ(u)-PΩ(u′)∥G≤∥u-u′∥G,∀u,u′∈ℜn,∥u-PΩ(u′)∥G2≤∥u-u′∥G2-∥u′-PΩ(u′)∥G2,∀u′∈ℜn,∀u∈Ω.
For an arbitrary positive scalar β and u∈Ω, let e(u,β) denote the residual function associated with the mapping F; that is,
(18)e(u,β)=u-PΩ[u-βF(u)].
Lemma 2.
u* is a solution of the SVI (Ω,F) if and only if e(u*,β)=0 for any given positive constant β (see [2, page 267]).
Lemma 3.
Solving SVI (Ω,F) (7) is equivalent to finding a zero point of the mapping
(19)e(u,β)≔(e1(u,β)e2(u,β)e3(u,β))=(x-P𝒳{x-β[f(x)-ATλ]}y-P𝒴{y-β[g(y)-BTλ]}β(Ax+By-b)).
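As an illustration of Lemma 3, the following Python sketch evaluates (19) for the special case f(x)=Px, g(y)=Qy with 𝒳=ℜn and 𝒴=ℜp (so both projections reduce to the identity; the data are illustrative assumptions) and checks that the residual vanishes at a KKT point:

```python
import numpy as np

def residual(x, y, lam, P, Q, A, B, b, beta=1.0):
    """Residual mapping (19) when X = R^n, Y = R^p (projections = identity)."""
    e1 = x - (x - beta * (P @ x - A.T @ lam))   # = beta*(f(x) - A'*lam)
    e2 = y - (y - beta * (Q @ y - B.T @ lam))   # = beta*(g(y) - B'*lam)
    e3 = beta * (A @ x + B @ y - b)
    return np.concatenate([e1, e2, e3])

# Small instance: solve the KKT system directly, then check e(u*, beta) = 0.
n = p = m = 2
P, Q = 2.0 * np.eye(n), 3.0 * np.eye(p)
A, B, b = np.eye(m), np.eye(m), np.array([1.0, 2.0])
K = np.block([[P, np.zeros((n, p)), -A.T],
              [np.zeros((p, n)), Q, -B.T],
              [A, B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n + p), b]))
x_s, y_s, lam_s = sol[:n], sol[n:n + p], sol[n + p:]
```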
3. The New Algorithm
In this section, we present a new prediction-correction method for SVI (Ω,F) and show its global convergence. To make the description more succinct, we first define some matrices:
(20)H=(rI 0 0; 0 sI 0; 0 0 (1/β)I), M=(I 0 (1/r)AT; 0 I (1/s)BT; 0 0 I), Q=(rI 0 AT; 0 sI BT; 0 0 (1/β)I).
Obviously, H is a symmetric positive definite matrix whenever r>0,s>0, and β>0, and we also have Q=HM.
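The identity Q=HM and the positive definiteness of H are easy to check numerically; the following NumPy sketch builds the blocks (20) for an arbitrary small instance (dimensions, entries, and parameter values are illustrative assumptions):

```python
import numpy as np

n = p = m = 2
r, s, beta = 3.0, 3.0, 1.0
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
I, Z = np.eye(m), np.zeros((m, m))

# Block matrices (20)
H = np.block([[r * I, Z, Z], [Z, s * I, Z], [Z, Z, I / beta]])
M = np.block([[I, Z, A.T / r], [Z, I, B.T / s], [Z, Z, I]])
Q = np.block([[r * I, Z, A.T], [Z, s * I, B.T], [Z, Z, I / beta]])
```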
3.1. Description of the Algorithm
Algorithm 4.
It is a prediction-correction-based algorithm for the SVI (Ω,F).
Phase 1 (initialization step). Given a small tolerance ϵ>0, let γ∈(0,2); the matrices Q, M are defined in (20). Take u0∈ℜn+p+m; set k=0. Choose the parameters r>0, s>0, and β>0 such that
(21)r>2β∥ATA∥,s>2β∥BTB∥.
Phase 2 (prediction step). Generate the predictor x~k via solving the following projection equation:
(22)x~k=P𝒳[xk-(1/r)(f(x~k)-ATλk)].
Then find a y~k∈𝒴 such that
(23)y~k=P𝒴[yk-(1/s)(g(y~k)-BTλk)].
Finally, update λ~k via
(24)λ~k=λk-β(Ax~k+By~k-b).
Phase 3 (correction step). Correct the predictor, and generate the new iterate uk+1 via
(25)uk+1=uk-αkM(uk-u~k),
where
(26)αk=γαk*,αk*=〈uk-u~k,Q(uk-u~k)〉∥M(uk-u~k)∥H2.
Phase 4 (convergence verification). If ∥uk-uk+1∥≤ϵ, stop; otherwise set k:=k+1; go to Phase 2.
Remark 5.
Note that (22) does not involve y~k and that (23) does not depend on the x~k generated by (22). Hence the two projections (22) and (23) are eligible for parallel computation.
Remark 6.
It is easy to check that u~k=(x~k,y~k,λ~k) is a solution of SVI (Ω,F) if and only if Axk=Ax~k, Byk=By~k, and λk=λ~k. Thus, it is reasonable to take ∥uk-uk+1∥≤ϵ as the stopping criterion.
Remark 7.
The strategy for choosing the step size αk in the correction step coincides with that in He's papers (see, e.g., [19]); it will be explained in detail in the following subsection.
Remark 8.
Our method and the methods proposed in [6, 15, 20] are all in the prediction-correction algorithmic framework, where at each iteration they make a prediction step to produce a predictor and a correction step to generate the new iterate via correcting this predictor.
3.2. Contractive Properties
Now, we start to prove some properties of the sequence {u~k}. The first lemma quantifies the discrepancy between the point u~k and a solution point of SVI (Ω,F).
Lemma 9.
Let {u~k} be generated by (22)–(24), and let the matrix Q be given in (20). Then one has
(27)〈u′-u~k, F(u~k)-Q(uk-u~k)〉≥0, ∀u′∈Ω.
Proof.
Note that the x~k, y~k, and λ~k generated by (22)–(24) are actually solutions of the following VIs:
(28)〈x′-x~k, f(x~k)-ATλk-r(xk-x~k)〉≥0, ∀x′∈𝒳,(29)〈y′-y~k, g(y~k)-BTλk-s(yk-y~k)〉≥0, ∀y′∈𝒴,(30)〈λ′-λ~k, Ax~k+By~k-b-(1/β)(λk-λ~k)〉≥0, ∀λ′∈ℜm.
Combining (28)–(30) together, we have
(31)〈x′-x~k, f(x~k)-ATλ~k-AT(λk-λ~k)-r(xk-x~k)〉+〈y′-y~k, g(y~k)-BTλ~k-BT(λk-λ~k)-s(yk-y~k)〉+〈λ′-λ~k, Ax~k+By~k-b-(1/β)(λk-λ~k)〉≥0, ∀u′∈Ω.
Using the notation of F (see (10)) and Q (see (20)), this inequality can be rewritten as
(32)〈u′-u~k, F(u~k)-Q(uk-u~k)〉≥0, ∀u′∈Ω.
The assertion (27) is thus proved.
The following lemma plays a key role in proving the convergence of the algorithm.
Lemma 10.
Let the matrices Q, H be defined in (20), and let the parameters r>0, s>0, and β>0 in (22)–(24) satisfy
(33)r>2β∥ATA∥,s>2β∥BTB∥.
Then for the matrix Q in (27), one has
(34)〈u-u~, Q(u-u~)〉≥(1-μ/2)∥u-u~∥H2, ∀u≠u~∈ℜn+p+m,
with
(35)μ=max{√(2β∥ATA∥/r), √(2β∥BTB∥/s)}∈(0,1).
Proof.
For any u≠u~, we have
(36)〈u-u~, Q(u-u~)〉=∥u-u~∥H2+〈λ-λ~, A(x-x~)〉+〈λ-λ~, B(y-y~)〉.
According to the Cauchy-Schwarz inequality, we get
(37)〈λ-λ~, A(x-x~)〉+〈λ-λ~, B(y-y~)〉=(1/2)(2〈λ-λ~, A(x-x~)〉+2〈λ-λ~, B(y-y~)〉)≥-(1/2){(2β/μ)∥A(x-x~)∥2+(μ/2β)∥λ-λ~∥2}-(1/2){(2β/μ)∥B(y-y~)∥2+(μ/2β)∥λ-λ~∥2}=-(1/2){(2β/μ)∥A(x-x~)∥2+(2β/μ)∥B(y-y~)∥2+(μ/β)∥λ-λ~∥2}.
With the μ defined in (35), we have
(38)(2β/μ)∥A(x-x~)∥2≤μr∥x-x~∥2, (2β/μ)∥B(y-y~)∥2≤μs∥y-y~∥2.
Substituting (38) into (37) and combining with (36), the assertion (34) is proved.
Lemma 11.
Suppose that u*=(x*,y*,λ*)∈Ω is a solution point of (9) and the new iterate uk+1 is generated with an undetermined step size α in place of (26); that is,
(39)uk+1=uk-αM(uk-u~k).
Then one has
(40)ϑk(α)≥φk(α),
where
(41)ϑk(α)=∥uk-u*∥H2-∥uk+1-u*∥H2,φk(α)=2α〈uk-u~k,Q(uk-u~k)〉-α2∥M(uk-u~k)∥H2.
Proof.
One can see that
(42)ϑk(α)=∥uk-u*∥H2-∥uk+1-u*∥H2=∥uk-u*∥H2-∥uk-αM(uk-u~k)-u*∥H2=2α〈uk-u*,HM(uk-u~k)〉-α2∥M(uk-u~k)∥H2.
On the other hand, since Q=HM, using the monotonicity of F and Lemma 9, we have
(43)〈uk-u*, HM(uk-u~k)〉=〈uk-u*, Q(uk-u~k)〉=〈uk-u~k, Q(uk-u~k)〉+〈u~k-u*, Q(uk-u~k)〉≥〈uk-u~k, Q(uk-u~k)〉+〈u~k-u*, F(u~k)〉≥〈uk-u~k, Q(uk-u~k)〉+〈u~k-u*, F(u*)〉≥〈uk-u~k, Q(uk-u~k)〉.
Combining (42)-(43) together, we have
(44)ϑk(α)=2α〈uk-u*,Q(uk-u~k)〉-α2∥M(uk-u~k)∥H2≥2α〈uk-u~k,Q(uk-u~k)〉-α2∥M(uk-u~k)∥H2=φk(α).
Thus, φk(α) is a lower bound of ϑk(α) for any α>0.
Remark 12.
Note that φk(α) is a quadratic function of α and it reaches its maximum at
(45)αk*=〈uk-u~k,Q(uk-u~k)〉∥M(uk-u~k)∥H2.
Hence, it is reasonable to use the step size strategy (26). The parameter γ in (26) plays the role of a relaxation or scaling parameter. We can easily see that γ∈(0,2) can ensure convergence.
Now, we prove the Fejér monotonicity of the iterative sequence {uk} generated by the algorithm.
Theorem 13.
Suppose that u*=(x*,y*,λ*)∈Ω is a solution point of (9) and the sequence {uk} is generated by the algorithm. Then
(46)∥uk+1-u*∥H2≤∥uk-u*∥H2-(1/2)γ(2-γ)(1-μ/2)∥uk-u~k∥H2.
Proof.
According to Lemma 11,
(47)∥uk+1-u*∥H2≤∥uk-u*∥H2-φk(αk)=∥uk-u*∥H2-(2αk〈uk-u~k, Q(uk-u~k)〉-αk2∥M(uk-u~k)∥H2)=∥uk-u*∥H2-γ(2-γ)αk*〈uk-u~k, Q(uk-u~k)〉≤∥uk-u*∥H2-γ(2-γ)αk*(1-μ/2)∥uk-u~k∥H2.
Moreover, it follows from (26) that the step size
(48)αk*=∥uk-u~k∥(Q+QT)/22/∥M(uk-u~k)∥H2=∥uk-u~k∥(Q+QT)/22/∥uk-u~k∥MTHM2=(∥uk-u~k∥MTHM2+r∥xk-x~k∥2+s∥yk-y~k∥2+(1/β)∥λk-λ~k∥2-(1/r)∥AT(λk-λ~k)∥2-(1/s)∥BT(λk-λ~k)∥2)/(2∥uk-u~k∥MTHM2)≥(∥uk-u~k∥MTHM2+r∥xk-x~k∥2+s∥yk-y~k∥2+(1/β-(1/r)∥ATA∥-(1/s)∥BTB∥)∥λk-λ~k∥2)/(2∥uk-u~k∥MTHM2).
Based on the condition (33), we have
(49)r∥xk-x~k∥2+s∥yk-y~k∥2+(1/β-(1/r)∥ATA∥-(1/s)∥BTB∥)∥λk-λ~k∥2≥0.
Hence,
(50)αk*≥(1/2)∥uk-u~k∥MTHM2/∥uk-u~k∥MTHM2=1/2.
Substituting (50) into (47), we have
(51)∥uk+1-u*∥H2≤∥uk-u*∥H2-(1/2)γ(2-γ)(1-μ/2)∥uk-u~k∥H2.
Thus, we obtain the assertion of this theorem.
Based on the earlier results, we are now ready to prove the global convergence of the algorithm.
Theorem 14.
The sequence {uk} generated by the proposed algorithm converges to a solution of SVI (Ω,F).
Proof.
We prove the convergence of the proposed algorithm by following the standard analytic framework of contraction-type methods. It follows from (46) that {uk} is bounded, and we have that
(52)limk→∞∥uk-u~k∥H=0.
Consequently,
(53)limk→∞∥xk-x~k∥=0, limk→∞∥yk-y~k∥=0, limk→∞∥Ax~k+By~k-b∥=limk→∞(1/β)∥λk-λ~k∥=0,
since (see (22) and (23))
(54)x~k=P𝒳[x~k-(1/r)(f(x~k)-ATλ~k)+(xk-x~k)+(1/r)AT(λk-λ~k)], y~k=P𝒴[y~k-(1/s)(g(y~k)-BTλ~k)+(yk-y~k)+(1/s)BT(λk-λ~k)], λ~k=λk-β(Ax~k+By~k-b).
It follows from (53) that
(55)limk→∞{x~k-P𝒳[x~k-(1/r)(f(x~k)-ATλ~k)]}=0, limk→∞{y~k-P𝒴[y~k-(1/s)(g(y~k)-BTλ~k)]}=0, limk→∞(Ax~k+By~k-b)=0.
Because {u~k} is also bounded, it has at least one cluster point. Let u∞ be a cluster point of {u~k}, and let {u~kj} be a subsequence converging to u∞. It follows from (55) that
(56)limj→∞{x~kj-P𝒳[x~kj-(1/r)(f(x~kj)-ATλ~kj)]}=0, limj→∞{y~kj-P𝒴[y~kj-(1/s)(g(y~kj)-BTλ~kj)]}=0, limj→∞(Ax~kj+By~kj-b)=0.
Consequently,
(57)x∞-P𝒳[x∞-(1/r)(f(x∞)-ATλ∞)]=0, y∞-P𝒴[y∞-(1/s)(g(y∞)-BTλ∞)]=0, Ax∞+By∞-b=0.
Using the continuity of F and the projection operator PΩ(·), we have that u∞ is a solution of SVI (Ω,F).
On the other hand, taking limits along the subsequence and using limj→∞u~kj=u∞, it follows from (46) that, for any k>kj,
(58)∥uk-u∞∥H≤∥ukj-u∞∥H.
Thus, the sequence {uk} converges to u∞, which is a solution of SVI (Ω,F).
4. Numerical Experiments
In this section, we present some numerical results to show the effectiveness of the proposed algorithm. The codes were run on a notebook computer with an Intel(R) Core(TM)2 CPU at 2.0 GHz and 2.00 GB of RAM under MATLAB R2009b.
We consider the following optimization problem:
(59)min x∈ℜn, y∈ℜp (1/2)xTPx+(1/2)yTQy s.t. Ax+By=b,
where P∈ℜn×n and Q∈ℜp×p are symmetric positive semidefinite matrices, A∈ℜm×n, B∈ℜm×p, and b∈ℜm.
Using the KKT conditions, problem (59) can be converted into the following variational inequality: find w*=(x*,y*,λ*)∈ℜn+p+m such that
(60)〈x′-x*, Px*-ATλ*〉+〈y′-y*, Qy*-BTλ*〉+〈λ′-λ*, Ax*+By*-b〉≥0, ∀w′∈ℜn+p+m,
In this example, we randomly created the input data of the tested collection in the following manner.
P and Q were generated randomly with eigenvalues in [5,10] according to the following MATLAB scripts:
P=rand(n); [Q1,R1]=qr(P);
S=5+5*rand(n,1); P=Q1*diag(S)*Q1';
Q=rand(p); [Q2,R2]=qr(Q);
S=5+5*rand(p,1); Q=Q2*diag(S)*Q2';
A and B were generated randomly with singular values in [0,3], with maximum singular value 3, according to the following MATLAB scripts:
A=rand(m,n); [U,S,V]=svd(A);
S=S/S(1,1)*3; A=U*S*V';
B=rand(m,p); [U,S,V]=svd(B);
S=S/S(1,1)*3; B=U*S*V';
b was generated randomly as b=rand(m,1)*10.
According to the data generation, we have ∥ATA∥=9 and ∥BTB∥=9.
To apply (22)–(25) to solve (59), instead of choosing the step length αk judiciously as in (26), we can simply take αk≡1 by setting γ=1/αk* (since αk*>1/2 when uk≠u~k, we have γ∈(0,2), which satisfies the requirement). We then obtain the following subproblems, all of which are easy enough to have closed-form solutions:
(61)x~k=(rI+P)-1(rxk+ATλk),y~k=(sI+Q)-1(syk+BTλk),λ~k=λk-β(Ax~k+By~k-b),xk+1=x~k-1rAT(λk-λ~k),yk+1=y~k-1sBT(λk-λ~k),λk+1=λ~k.
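The recursion (61) can be coded in a few lines. The following NumPy sketch runs it on a tiny fixed instance of (59) with a known closed-form solution (the data and parameters are illustrative assumptions, not the randomly generated data used in the experiments):

```python
import numpy as np

# Tiny instance of (59): P, Q diagonal, A = B = I, with closed-form
# solution x* = 0.6*b, y* = 0.4*b, lam* = 1.2*b.
n = p = m = 2
P, Q = 2.0 * np.eye(n), 3.0 * np.eye(p)
A, B, b = np.eye(m), np.eye(m), np.array([1.0, 2.0])
beta = 1.0
r = s = 3.0                        # satisfies r > 2*beta*||A'A|| = 2
x, y, lam = np.zeros(n), np.zeros(p), np.zeros(m)

for _ in range(500):
    # prediction (22)-(24), here in closed form as in (61)
    xt = np.linalg.solve(r * np.eye(n) + P, r * x + A.T @ lam)
    yt = np.linalg.solve(s * np.eye(p) + Q, s * y + B.T @ lam)
    lamt = lam - beta * (A @ xt + B @ yt - b)
    # correction (25) with unit step: u^{k+1} = u^k - M*(u^k - u~^k)
    x = xt - (1.0 / r) * (A.T @ (lam - lamt))
    y = yt - (1.0 / s) * (B.T @ (lam - lamt))
    lam = lamt
```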
For comparison, we also solve it by the parallel decomposition method (denoted by PDM) that has been studied extensively in the literature (e.g., [21, 22]). For PDM, the restrictions on the proximal parameters are the same as our algorithm. By applying PDM to (59), we obtain the following subproblems which are also easy enough to have closed-form solutions:
(62)xk+1=(rI+P)-1(rxk-βAT(Axk+Byk-b-1βλk)),yk+1=(sI+Q)-1(syk-βBT(Axk+Byk-b-1βλk)),λk+1=λk-β(Axk+1+Byk+1-b).
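For comparison, the PDM recursion (62) can be sketched on the same tiny instance (again an illustrative sketch with assumed data, not the randomly generated experiments):

```python
import numpy as np

# Same tiny instance of (59); in (62) both solves use the OLD iterate
# (x, y), so the two subproblems could run in parallel.
n = p = m = 2
P, Q = 2.0 * np.eye(n), 3.0 * np.eye(p)
A, B, b = np.eye(m), np.eye(m), np.array([1.0, 2.0])
beta, r, s = 1.0, 3.0, 3.0
x, y, lam = np.zeros(n), np.zeros(p), np.zeros(m)

for _ in range(500):
    res = A @ x + B @ y - b - lam / beta          # shared residual term
    x_new = np.linalg.solve(r * np.eye(n) + P, r * x - beta * (A.T @ res))
    y_new = np.linalg.solve(s * np.eye(p) + Q, s * y - beta * (B.T @ res))
    x, y = x_new, y_new
    lam = lam - beta * (A @ x + B @ y - b)        # multiplier update
```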
We report the numerical experiments by building performance profiles in terms of the number of iterations and the total computational time. Here, we take β=3+(n/10) and r=s=20β for both algorithms. We set the initial vector (x0,y0,λ0)=(0,0,0), and the stopping criterion is
(63)Tol=max{∥xk+1-xk∥∞, ∥yk+1-yk∥∞, ∥λk+1-λk∥∞}≤10-4.
The computational results are given in Table 1 for different choices of m, n, and p. We reported the number of iterations (Iter.) and the computing time in seconds (CPU(s)) when the mentioned stopping criterion is achieved.
Table 1: Numerical results for the example.

 m   |  n  |  p  | PDM Iter. | PDM CPU (s) | PDM Error   | New Iter. | New CPU (s) | New Error
 10  | 10  | 10  |   237     |   0.075     | 9.956×10-5  |   237     |   0.075     | 9.895×10-5
 10  | 15  | 15  |   250     |   0.143     | 9.758×10-5  |   250     |   0.086     | 9.815×10-5
 20  | 20  | 20  |   314     |   0.115     | 9.669×10-5  |   314     |   0.112     | 9.728×10-5
 20  | 30  | 30  |   372     |   0.175     | 9.586×10-5  |   372     |   0.178     | 9.597×10-5
 40  | 50  | 50  |   561     |   3.443     | 9.631×10-5  |   561     |   1.340     | 9.625×10-5
 50  | 80  | 80  |   714     |   3.534     | 9.990×10-5  |   715     |   1.963     | 9.892×10-5
 60  | 100 | 100 |   842     |   8.107     | 9.982×10-5  |   842     |   7.274     | 9.996×10-5
 100 | 120 | 120 |  1065     |   9.773     | 9.926×10-5  |  1065     |  11.786     | 9.938×10-5
 150 | 200 | 200 |  1661     |  24.451     | 9.942×10-5  |  1661     |  21.366     | 9.947×10-5
 200 | 250 | 250 |  2055     |  38.037     | 9.907×10-5  |  2055     |  35.020     | 9.911×10-5
 200 | 300 | 300 |  2445     |  66.520     | 9.964×10-5  |  2445     |  61.673     | 9.970×10-5
The data in Table 1 indicate that the proposed method is competitive with the classical PDM in [21, 22]: the iteration numbers of the two algorithms are almost identical, and their CPU times are comparable, with the new algorithm slightly faster on most of the larger instances.
5. Conclusions
In this paper, we proposed a new implementable algorithm for solving monotone variational inequalities with separable structure. At each iteration, the algorithm performs two easily implementable projections in parallel to produce a predictor and then makes a simple correction to generate the new iterate. Under mild conditions, we proved the global convergence of the new method. Numerical experiments show that the proposed method is applicable and effective.
References
[1] S. Dafermos, Traffic equilibrium and variational inequalities.
[2] D. P. Bertsekas and E. M. Gafni.
[3] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives.
[4] R. T. Rockafellar, Monotone operators and the proximal point algorithm.
[5] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming.
[6] B. He, L. Liao, and M. Qian, Alternating projection based prediction-correction methods for structured variational inequalities.
[7] P. Tseng, Alternating projection-proximal methods for convex programming and variational inequalities.
[8] B. He, L. Liao, D. Han, and H. Yang, A new inexact alternating directions method for monotone variational inequalities.
[9] B. He, S. Wang, and H. Yang, A modified variable-penalty alternating directions method for monotone variational inequalities.
[10] B. He, H. Yang, and S. Wang, Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities.
[11] B. He, L. Liao, and X. Wang, Proximal-like contraction methods for monotone variational inequalities in a unified framework II: general methods and numerical experiments.
[12] B. He, Z. Yang, and X. Yuan, An approximate proximal-extragradient type method for monotone variational inequalities.
[13] D. Han, A hybrid entropic proximal decomposition method with self-adaptive strategy for solving variational inequality problems.
[14] B. He, Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities.
[15] M. Xu, J. Jiang, B. Li, and B. Xu, An improved prediction-correction method for monotone variational inequalities with separable operators.
[16] M. Xu and T. Wu, A class of linearized proximal alternating direction methods.
[17] X. Yuan and M. Li, An LQP-based decomposition method for solving a class of variational inequalities.
[18] A. Bnouhachem, H. Benazza, and M. Khalfaoui, An inexact alternating direction method for solving a class of structured variational inequalities.
[19] B. He and X. Yuan, Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective.
[20] D. Han, A generalized proximal-point-based prediction-correction method for variational inequality problems.
[21] B. He and X. Yuan, The unified framework of some proximal-based decomposition methods for monotone variational inequalities with separable structure.
[22] G. Chen and M. Teboulle, A proximal-based decomposition method for convex minimization problems.