1. Introduction
According to [1], the history of algorithms for solving finite-dimensional variational inequalities is relatively short. One recent development is the analytic center method based on cutting planes, which combines features of the newly developed interior point methods with the classical cutting plane scheme to achieve polynomial complexity in theory and fast convergence in practice; more details can be found in [2, 3]. Specifically, Goffin et al. [4] developed a convergent framework for finding a solution x* of the variational inequality VIP(F,X) associated with a continuous mapping F from X to Rn and the polyhedron X={x∈Rn | Ax≤b}, under an assumption slightly stronger than pseudomonotonicity. Marcotte and Zhu [5] then extended this algorithm to quasimonotone variational inequalities satisfying a weak additional assumption. Such methods are effective in practice.

Note that, in optimization problems (see [6–8]), some functions from Rn to R are themselves defined through other minimization problems. For example, in Lagrangian relaxation (see [9–12]), the primal problem is
(1.1)max{q(ξ)∣ξ∈P,h(ξ)=0},
where P is a compact subset of Rm and q:Rm→R, h:Rm→Rn are two functions. Lagrangian relaxation in this problem leads to the problem min{f(x) | x∈Rn}, where
(1.2)f(x)=max_{ξ∈P}{q(ξ)+〈x,h(ξ)〉}
is the dual function. Solving problem (1.1) via its dual problem min{f(x) | x∈Rn} is difficult because every evaluation of the function value f(x) requires solving another optimization problem (1.2) exactly. As another example, consider the problem
(1.3)min{f(x)∣x∈C},
where f is convex (not necessarily differentiable) and C⊂Rn is a nonempty closed convex set. Let F denote the Moreau-Yosida regularization of f on C, that is,
(1.4)F(x)=min_{z∈C}{f(z)+(1/(2α))∥z-x∥²},
where α is a positive parameter. A point x∈C is a solution to (1.3) if and only if it is a solution to the problem:
(1.5)min_{x∈Rn}F(x).
Problem (1.5) is easier to deal with than (1.3); see [13]. In this case, however, computing the exact value of F at an arbitrary point x is difficult or even impossible, since F itself is defined through a minimization problem involving another function f. This motivates the approximate computation of F.
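To illustrate the approximate computation of F, the following sketch (an illustration, not part of the original development) evaluates the Moreau-Yosida regularization of f(z)=|z| on C=R by minimizing over a fine grid; the grid bounds and resolution are arbitrary choices. For this f with α=1, the exact regularization is the Huber function, so the quality of the approximation can be checked.

```python
import numpy as np

def moreau_yosida(f, x, alpha=1.0, interval=(-10.0, 10.0), n=200001):
    """Approximate F(x) = min_{z in C} { f(z) + (1/(2*alpha)) * (z - x)^2 }
    by minimizing over a fine grid on C = [interval[0], interval[1]] --
    an inexact oracle whose error is controlled by the grid resolution."""
    z = np.linspace(interval[0], interval[1], n)
    return float(np.min(f(z) + (z - x) ** 2 / (2.0 * alpha)))

# For f(z) = |z| and alpha = 1, the exact regularization is the Huber
# function: F(x) = x^2 / 2 if |x| <= 1, and |x| - 1/2 otherwise.
print(moreau_yosida(np.abs, 0.3))  # approximately 0.045
print(moreau_yosida(np.abs, 3.0))  # approximately 2.5
```

Since the grid is a subset of C, the computed value never underestimates F(x), matching the one-sided approximation used throughout this paper.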

The above-mentioned phenomenon also exists for mappings from X to Y, where X and Y are subspaces of two finite-dimensional spaces. Once a mapping, and more specifically a continuous mapping, is defined implicitly rather than explicitly, approximating the mapping becomes inevitable; see [14]. In this paper we solve VIP(F,X) under the assumption that the values of the mapping F from X to Rn can only be computed approximately. Under this assumption, we construct an algorithm for solving the approximate variational inequality problem AVIP(F,X), and we prove that the iterates generated by the proposed algorithm have a cluster point that solves the original problem VIP(F,X).

This paper is organized as follows. Some basic concepts and results are introduced in Section 2. In Section 3, a proximal analytic center cutting plane algorithm for solving the variational inequality problem is given. The convergence analysis of the proposed algorithm is addressed in Section 4. In the last section, we give some conclusions.

2. Basic Concepts and Results
Let X={x∈Rn∣Ax≤b} be a polyhedron and F a continuous mapping from X to Rn. A vector x* is a solution to the variational inequality VIP(F,X) if and only if it satisfies the system of nonlinear inequalities:
(2.1)〈F(x*),x-x*〉≥0, ∀x∈X.
The vector x* is a solution to the dual variational inequality VID(F,X) of VIP(F,X) if and only if it satisfies
(2.2)〈F(x),x-x*〉≥0, ∀x∈X.
We denote by XP* and XD* the solution sets of VIP(F,X) and VID(F,X), respectively. Whenever F is continuous, we have XD*⊆XP*, see [15]. If F is pseudomonotone on X, then XD*=XP*, see [15]. If F is quasimonotone at x*∈XP* and F(x*) is not normal to X at x*, then XD* is nonempty; see Proposition 1 in [5]. For the definitions of monotone, pseudomonotone, and quasimonotone mappings, see [5, 15].

Definition 2.1.
The gap functions gP(x) and gD(x) of VIP(F,X) and VID(F,X) are, respectively, defined by
(2.3)gP(x)=max_{y∈X}〈F(x),x-y〉,  gD(x)=max_{y∈X}〈F(y),x-y〉.
Note that gP(x)≥0, gD(x)≥0, and gP(x*)=0 if and only if x* is a solution to VIP(F,X), while gD(x*)=0 if and only if x* is a solution to VID(F,X). Thus, XP*={x∈X∣gP(x)=0}, XD*={x∈X∣gD(x)=0}.
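Since gP(x) maximizes a linear function of y over the polyhedron X, the maximum is attained at a vertex; for a small polytope one can therefore evaluate gP by enumerating vertices. The sketch below uses the hypothetical choices X=[0,1]² and F(x)=x (a monotone mapping whose unique solution is x*=(0,0)) to illustrate that the gap vanishes exactly at the solution.

```python
import numpy as np

def primal_gap(F, x, vertices):
    """gP(x) = max_{y in X} <F(x), x - y>: a linear program in y whose
    optimum over a polytope is attained at a vertex, so enumerate them."""
    Fx = F(x)
    return max(float(Fx @ (x - v)) for v in vertices)

# Hypothetical example: X = [0, 1]^2 and F(x) = x (monotone),
# for which x* = (0, 0) is the unique solution of VIP(F, X).
square = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
F = lambda x: x

print(primal_gap(F, np.zeros(2), square))            # 0.0 -> x* solves VIP(F, X)
print(primal_gap(F, np.array([1.0, 1.0]), square))   # 2.0 -> far from a solution
```

For a general polyhedron {x | Ax≤b} one would instead solve the linear program in y directly; vertex enumeration is only practical for small examples like this one.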

Definition 2.2.
A point x̃∈X is called an ε-solution to VIP(F,X) if gP(x̃)≤ε for given ε>0.

Definition 2.3.
For x,y∈X, we say F(x)⪯F(y) if and only if Fi(x)≤Fi(y), for i=1,2,…,n, where F(x)=(F1(x),F2(x),…,Fn(x))T.

Assumptions.
Throughout this paper, we make the following assumptions: for each x,y∈X and any ε̄=(ε,ε,…,ε), δ̄=(δ,δ,…,δ) with ε,δ∈(0,1), we can always find F̄x∈Rn and F̄y∈Rn such that
(2.4)
(a) F(x)⪯F̄x⪯F(x)+ε̄ and F(y)⪯F̄y⪯F(y)+δ̄, that is, we can compute approximate values of F to any precision;
(b) F̄y→F̄x as y→x, regardless of the relationship between ε̄ and δ̄;
(c) ∥F̄y-F̄x∥≤L∥y-x∥, where L>0 is a constant.
These assumptions are realistic in practice, see [16, 17]. Using the architectures given in [16, 17], we can approximate the mapping F arbitrarily well, since neural networks are capable of approximating any function from one finite-dimensional real vector space to another arbitrarily well, see [18]. Specifically, let us consider the case of a univariate function. If f is a min-type function of the form
(2.5)f(x)=inf{Nz(x)∣z∈Z},
where each Nz(x) is convex and Z is an infinite set, then it may be impossible to calculate f(x) exactly. However, we may still consider two cases. In the first case, that of controllable accuracy, for each ε>0 one can find an ε-minimizer of (2.5), that is, an element zx∈Z satisfying Nzx(x)≤f(x)+ε; in the second case, this may be possible only for some fixed (and possibly unknown) ε<∞. In both cases, we may set f̄x=Nzx(x)≤f(x)+ε. A special case of (2.5) arises from Lagrangian relaxation [15], where the problem max{f(x)∣x∈S} with S=R+n is the Lagrangian dual of the primal problem
(2.6)inf ψ0(x) s.t. ψj(x)≥0, j=1,2,…,n, x∈X,
with Nz(x)=ψ0(z)+〈x,ψ(z)〉 for ψ=(ψ1,ψ2,…,ψn). Then, for each multiplier x≥0, we need only find zx∈Z such that f̄x=Nzx(x)≤f(x)+ε, see [19].
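A minimal sketch of a controllable-accuracy oracle, under the illustrative assumptions Nz(x)=(x−z)² and Z=[0,1] (so that f(x) is the squared distance from x to [0,1], neither of which is from the original text): choosing a grid on Z whose spacing is tied to the requested ε yields a value f̄x with f(x)≤f̄x≤f(x)+ε.

```python
import numpy as np

def inexact_oracle(x, eps):
    """Controllable-accuracy oracle for f(x) = inf_{z in [0,1]} N_z(x)
    with the illustrative choice N_z(x) = (x - z)^2, returning a value
    fbar with f(x) <= fbar <= f(x) + eps."""
    # |d/dz N_z(x)| <= 2(|x| + 1) for z in [0, 1]; pick the grid spacing
    # h so that the discretization error stays below eps
    h = eps / (2.0 * (abs(x) + 1.0))
    z = np.linspace(0.0, 1.0, int(np.ceil(1.0 / h)) + 1)
    return float(np.min((x - z) ** 2))

print(inexact_oracle(2.0, 1e-3))  # f(2) = 1, so a value in [1, 1.001]
print(inexact_oracle(0.5, 1e-3))  # f(0.5) = 0, so a value in [0, 0.001]
```

Because the minimization runs over a subset of Z, the returned value can never fall below f(x), so only the one-sided error bound needs to be enforced by the grid choice.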

Under Assumption (2.4), we introduce an approximate problem AVIP(F,X) associated with VIP(F,X): find x*∈X such that
(2.7)〈F̄x*,x-x*〉≥0, ∀x∈X,
where F̄x* satisfies F(x*)⪯F̄x*⪯F(x*)+ε̄ for arbitrary ε̄≻0. Its dual problem AVID(F,X) is to find x*∈X such that
(2.8)〈F̄x,x-x*〉≥0, ∀x∈X,
where F̄x satisfies F(x)⪯F̄x⪯F(x)+ε̄ for arbitrary ε̄≻0.

Definition 2.5.
The gap function of AVIP(F,X) is defined by ḡP(x)=max_{y∈X}〈F̄x,x-y〉.

Definition 2.6.
A point x̃∈X is called an ε-solution to AVIP(F,X) if ḡP(x̃)≤ε for given ε>0.

The optimal solution sets of AVIP(F,X) and AVID(F,X) are denoted by AXP* and AXD*, respectively. The following proposition ensures that AXD* is nonempty.

Proposition 2.7.
If there exists a point x*∈AXP* such that
(2.9)〈F̄x*,y-x*〉>0⇒〈F̄y,y-x*〉≥0, ∀y∈X,
and F̄x* is not normal to X at x*, then AXD* is nonempty.

Proof.
Since F̄x* is not normal to X at x*, there exists a point x0∈X such that 〈F̄x*,x0-x*〉>0. For any x∈X, set xt=tx0+(1-t)x for t∈(0,1]; then 〈F̄x*,x-x*〉≥0 and 〈F̄x*,xt-x*〉>0. By condition (2.9), we obtain 〈F̄xt,xt-x*〉≥0. Letting t→0, it follows from condition (b) in (2.4) that 〈F̄x,x-x*〉≥0, that is, x*∈AXD*.

In the following part, we focus our attention on solving AVIP(F,X). Let Γ(y,x):Rn×Rn→Rn denote an auxiliary mapping, continuous in x and y, strongly monotone in y, that is,
(2.10)〈Γ(y,x)-Γ(z,x),y-z〉≥β∥y-z∥2, ∀y,z∈X,
for some β>0. We consider the auxiliary variational inequality associated with Γ, whose solution w(x) satisfies
(2.11)〈Γ(w(x),x)-Γ(x,x)+F̄x,y-w(x)〉≥0, ∀y∈X.
In view of the strong monotonicity of Γ(y,x) with respect to y, this auxiliary variational inequality has a unique solution w(x).
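A standard concrete choice, used here only for illustration, is Γ(y,x)=y, which is strongly monotone with β=1; then (2.11) reduces to 〈w−x+F̄x, y−w〉≥0 for all y∈X, the variational characterization of a projection, so w(x)=proj_X(x−F̄x). The sketch below further assumes X is a box and takes F̄x=F(x) (an exact oracle), both hypothetical simplifications.

```python
import numpy as np

def w_of_x(Fbar_x, x, lo, hi):
    """Auxiliary-VI solution for Gamma(y, x) = y: inequality (2.11) becomes
    <w - x + Fbar_x, y - w> >= 0 for all y in X, i.e. w(x) = proj_X(x - Fbar_x);
    for a box X = [lo, hi]^n the projection is a componentwise clip."""
    return np.clip(x - Fbar_x, lo, hi)

F = lambda x: x  # illustrative monotone mapping, used as an exact oracle

x_star = np.zeros(2)
print(w_of_x(F(x_star), x_star, 0.0, 1.0))  # equals x*: a fixed point, so x* solves AVIP
x = np.array([1.0, 1.0])
print(w_of_x(F(x), x, 0.0, 1.0))            # (0, 0) != x: x is not a solution
```

The fixed-point test here is exactly the characterization in Proposition 2.8: x solves AVIP(F,X) if and only if x=w(x).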

Proposition 2.8.
The mapping w:X→X is continuous on X. Furthermore, x is a solution to AVIP(F,X) if and only if x=w(x).

Proof.
The first part of the proposition follows from Theorem 5.4 in [1]. To prove the second part, we first suppose that x=w(x). Then (2.11) yields 〈F̄x,y-x〉≥0, ∀y∈X, that is, x solves AVIP(F,X). Conversely, suppose that x solves AVIP(F,X); then
(2.12)〈F̄x,w(x)-x〉≥0,
and from (2.11), we have
(2.13)〈Γ(w(x),x)-Γ(x,x)+F̄x,x-w(x)〉≥0.
Adding the two preceding inequalities, one obtains
(2.14)〈Γ(w(x),x)-Γ(x,x),x-w(x)〉≥0,
and we conclude, from the strong monotonicity of Γ with respect to y, that x=w(x).

Let ρ∈(0,1) and 0<α<β be given. Let l be the smallest nonnegative integer satisfying
(2.15)〈F̄x+ρ^l(w(x)-x),x-w(x)〉≥α∥w(x)-x∥²,
where F̄x+ρ^l(w(x)-x) satisfies F(x+ρ^l(w(x)-x))⪯F̄x+ρ^l(w(x)-x)⪯F(x+ρ^l(w(x)-x))+ε̄ for arbitrary ε̄≻0. The existence of a finite l will be proved in Proposition 2.9. The composite mapping G is defined, for every x∈X, by
(2.16)G(x)=Ḡx=F̄x+ρ^l(w(x)-x).
If x* is a solution to AVIP(F,X), then w(x*)=x*, l=0, and Ḡx*=F̄x*.
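The smallest l in (2.15) can be found by a finite Armijo-type loop. The sketch below uses the hypothetical data F(y)=y evaluated exactly as the oracle F̄, with Γ(y,x)=y (so β=1 and L=1); these choices are for illustration only.

```python
import numpy as np

def smallest_l(Fbar, x, w, alpha, rho, max_l=100):
    """Find the smallest nonnegative integer l satisfying (2.15):
    <Fbar(x + rho^l (w - x)), x - w> >= alpha * ||w - x||^2.
    Proposition 2.9 guarantees a finite l once rho^l <= (beta - alpha)/L."""
    d = w - x
    rhs = alpha * float(d @ d)
    for l in range(max_l):
        y = x + rho ** l * d
        if float(Fbar(y) @ (x - w)) >= rhs:
            return l
    raise RuntimeError("no admissible l found; check that alpha < beta")

# Illustrative data: exact oracle Fbar(y) = F(y) = y, x = (1, 1), w(x) = (0, 0)
x, w = np.array([1.0, 1.0]), np.zeros(2)
print(smallest_l(lambda y: y, x, w, alpha=0.5, rho=0.5))  # -> 1
```

Raising alpha toward beta forces rho^l to be smaller and therefore increases the accepted l, in line with the bound (2.17).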

Proposition 2.9.
The operator G is well defined for every x∈X. Moreover, we have
(2.17)l≤ln((β-α)/L)/ln ρ,
where L is the constant given in (2.4)-(c).

Proof.
From the definition of w(x), we have
(2.18)〈F̄x,x-w(x)〉≥〈Γ(w(x),x)-Γ(x,x),w(x)-x〉≥β∥x-w(x)∥².
Suppose that (2.15) fails for every finite integer l, that is,
(2.19)〈F̄x+ρ^l(w(x)-x),x-w(x)〉<α∥x-w(x)∥².
By assumption (2.4)-(b), we obtain
(2.20)F̄x+ρ^l(w(x)-x)→F̄x as l→+∞;
therefore,
(2.21)〈F̄x,x-w(x)〉≤α∥x-w(x)∥².
Since α<β, (2.21) contradicts (2.18). To prove the second part, we notice that
(2.22)〈F̄x+ρ^l(w(x)-x),x-w(x)〉=〈F̄x,x-w(x)〉+〈F̄x+ρ^l(w(x)-x)-F̄x,x-w(x)〉≥β∥w(x)-x∥²-Lρ^l∥w(x)-x∥²≥α∥w(x)-x∥²
whenever α≤β-Lρ^l, which proves the second conclusion of Proposition 2.9.

Proposition 2.10.
If x∉AXP*, then for all y*∈AXD*, we have
(2.23)〈Ḡx,x-y*〉>0.

Proof.
Let y(x)=x+ρ^l(w(x)-x); then Ḡx=F̄y(x) and, by (2.15),
(2.24)〈F̄y(x),w(x)-x〉≤-α∥w(x)-x∥².
Since x∉AXP*, we have w(x)≠x. Therefore,
(2.25)〈F̄y(x),y(x)-x〉=ρ^l〈F̄y(x),w(x)-x〉<0.
For all y*∈AXD*, there holds
(2.26)〈F̄y(x),y(x)-y*〉≥0.
Combining (2.25) with (2.26), we obtain 〈F̄y(x),x-y*〉>0, that is, 〈Ḡx,x-y*〉>0.

4. Convergence Analysis
In [20], the authors proposed a column generation scheme to generate the polytope Xk, and they proved that if k satisfies the inequality
(4.1)ε^{2m}≥((1/2+2m ln(1+(k+1)/(8m²)))/(2m+k+1))e^{-2α((k+1)/(2m+k+1))},
where ε<1/2 is a constant, then the scheme stops with a feasible solution; that is, one can find a vector ak+1 with ∥ak+1∥=1 such that {y∣ak+1^T y≤ak+1^T yk}⊃Γ, where Γ contains a full-dimensional closed ball of radius ε<1/2. In other words, there exists a smallest k(ε) such that Xk(ε) generated by the column generation scheme does not contain a ball of radius ε<1/2; this is known as the finite cut property. The result of Theorem 6.6 in [20] also holds, with little change, for our Algorithm 3.1 using approximate centers. That is, using the row generation scheme, there exists a smallest k(ρ) such that Xk(ρ) generated in Step 4 of Algorithm 3.1 does not contain any ball of radius ρ lying inside the polytope X. This result plays an important role in proving the convergence of Algorithm 3.1, described in Section 3.

Theorem 4.1.
Let the polyhedron X have nonempty interior, let AXD* be nonempty, and let Assumption (2.4) hold. Then either Algorithm 3.1 stops with a solution to AVIP(F,X) after a finite number of iterations, or there exists a subsequence of the infinite sequence {xk} that converges to a point in AXP*.

Proof.
Assume that xk∉AXP* for every iteration k, and let y*∈AXD*. From Proposition 2.10, we know that y*∈Xk and that y* never lies on Hk for any k. Let {ȳi}i∈N be an arbitrary sequence of points in the interior of X converging to y*, and let {δi} be a sequence of positive numbers such that lim_{i→+∞}δi=0 and the sequence of closed balls {B(ȳi,δi)}i∈N lies in the interior of X. Note that lim_{i→+∞}B(ȳi,δi)={y*}. From the finite cut property, we know that there exist a smallest index k(i) and a point ỹi∈B(ȳi,δi) such that
(4.2)〈Ḡxk(i),xk(i)-ỹi〉<0.
As 〈Ḡxk(i),xk(i)-y*〉>0, there exists a point ŷi on the segment [ỹi,y*] such that 〈Ḡxk(i),xk(i)-ŷi〉=0. Since X is compact, we can extract from {xk(i)}i∈N a convergent subsequence {xk(i)}i∈S. Denoting its limit point by x̂, we have
(4.3)〈Ḡxk(i),ŷi-xk(i)〉=0, ∀i∈S.
From Proposition 2.9, we know that lk(i) is bounded. Consequently, we can extract from {lk(i)}i∈S a constant subsequence with lk(i)=l*. Now, from the continuity of w(x) and the relations (2.15) and (4.3), taking the limit in (4.3) (and noting ŷi→y*), it follows that
(4.4)〈Ḡx̂,y*-x̂〉=0.
By Proposition 2.10, we conclude that x̂∈AXP*.

Theorem 4.2.
Under the conditions of Theorem 4.1, either Algorithm 3.1 stops with a solution to AVIP(F,X) after a finite number of iterations, or there exists a subsequence of the infinite sequence {xk} that converges to a point in XP*.

Proof.
Since ε∈(0,1), εk∈(0,1). At the end of Step 4 in Algorithm 3.1 we increase k by one, so we have εk+1<εk, εk→0 as k→∞. Moreover, ε-xk→0- as k→∞ in Algorithm 3.1, where 0- denotes the zero vector. This means F-x^=F(x^) as k→∞. Therefore, from the second result of Theorem 4.1, we know x^ is the solution to the problem VIP(F,X).