ITERATIVE RESOLVENT METHODS FOR GENERAL MIXED VARIATIONAL INEQUALITIES

General mixed variational inequality is a useful and important generalization of variational inequalities, with a wide range of applications in industry, economics, finance, network analysis, optimization, elasticity and structural engineering; see [1]–[21] and the references therein. There are several numerical methods for solving mixed variational inequalities, including the resolvent method and its variant forms, the auxiliary principle and operator splitting. The resolvent method and its variants are an important tool for finding approximate solutions. The main idea of this technique is to establish the equivalence between the mixed variational inequality and a fixed-point problem by using the concept of the resolvent. The novel feature of this technique for solving mixed variational inequalities is that the resolvent step involves the subdifferential of the proper, convex and lower semicontinuous function only, while the other part facilitates the problem decomposition.


Introduction
General mixed variational inequality is a useful and important generalization of variational inequalities, with a wide range of applications in industry, economics, finance, network analysis, optimization, elasticity and structural engineering; see [1]–[21] and the references therein. There are several numerical methods for solving mixed variational inequalities, including the resolvent method and its variant forms, the auxiliary principle and operator splitting. The resolvent method and its variants are an important tool for finding approximate solutions. The main idea of this technique is to establish the equivalence between the mixed variational inequality and a fixed-point problem by using the concept of the resolvent. The novel feature of this technique for solving mixed variational inequalities is that the resolvent step involves the subdifferential of the proper, convex and lower semicontinuous function only, while the other part facilitates the problem decomposition. This can lead to the development of very efficient methods, since one can treat each part of the original operator independently. In the context of variational inequalities, Noor [11,12,13,14,15,17] has used the resolvent operator technique in conjunction with the technique of updating the solution to suggest some two-step, three-step and four-step forward-backward resolvent-splitting methods for solving mixed variational inequalities. In this paper, we suggest and analyze a new class of self-adaptive algorithms for general mixed variational inequalities by modifying the associated fixed-point formulation. The new splitting methods are self-adaptive methods involving a line search strategy, where the step size depends upon the resolvent equation and the search direction is a combination of the resolvent residue and the modified extraresolvent direction. Our results include the previous results of Noor and Noor [8] and Noor [11,12,13,14,15,17] for solving different classes of variational inequalities as special cases. It is shown that the convergence of these new methods requires only pseudomonotonicity, which is a weaker condition than monotonicity. Our results can be viewed as a novel application of the technique of updating the solution for developing new iterative methods for solving mixed variational inequalities and related optimization problems.

Preliminaries
Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. For given nonlinear operators T, g : H → H and a proper, convex and lower semicontinuous function ϕ : H → R ∪ {+∞}, consider the problem of finding u ∈ H such that

⟨Tu, g(v) − g(u)⟩ + ϕ(g(v)) − ϕ(g(u)) ≥ 0, for all v ∈ H, (2.1)

which is known as the general mixed variational inequality; it can also be written as the problem of finding a zero of the sum of two (maximal) monotone operators and has been studied extensively in recent years. We remark that if g ≡ I, the identity operator, then problem (2.1) is equivalent to finding u ∈ H such that

⟨Tu, v − u⟩ + ϕ(v) − ϕ(u) ≥ 0, for all v ∈ H, (2.2)

which is called the mixed variational inequality. It has been shown that a wide class of linear and nonlinear problems arising in finance, economics, circuit and network analysis, elasticity, optimization and operations research can be studied via the mixed variational inequalities (2.1) and (2.2). For the applications, numerical methods and formulations, see [1,2,3,4,5,6,11,12,13,14,15,17,20]. We note that if ϕ is the indicator function of a closed convex set K in H, that is, ϕ(u) = 0 if u ∈ K and ϕ(u) = +∞ otherwise, then problem (2.1) is equivalent to finding u ∈ H, g(u) ∈ K such that

⟨Tu, g(v) − g(u)⟩ ≥ 0, for all g(v) ∈ K. (2.3)

Inequality of the type (2.3) is known as the general variational inequality, which was introduced and studied by Noor [7] in 1988. It turned out that the odd-order and nonsymmetric free, unilateral, obstacle and equilibrium problems can be studied via the general variational inequality (2.3). It has been shown in [16] that a class of quasivariational inequalities and nonconvex programming problems can be viewed as general variational inequality problems. For the applications and numerical methods, see [8,9,10,11,12,13,14,15,16,17,18,21] and the references therein.
From now onward, we assume that the operator g is onto K and that g⁻¹ exists, unless otherwise specified. If K* = {u ∈ H : ⟨u, v⟩ ≥ 0, for all v ∈ K} is the polar cone of a convex cone K in H, then problem (2.3) is equivalent to finding u ∈ H such that

g(u) ∈ K, Tu ∈ K* and ⟨Tu, g(u)⟩ = 0, (2.4)

which is known as the general complementarity problem, introduced and studied by Noor [7] in 1988. We note that if g(u) = u − m(u), where m is a point-to-point mapping, then problem (2.4) is called the quasi (implicit) complementarity problem. If the operators T and g are affine, then problem (2.4) is known as the vertical linear complementarity problem; see, for example, [21].
For g ≡ I, the identity operator, problem (2.3) collapses to finding u ∈ K such that

⟨Tu, v − u⟩ ≥ 0, for all v ∈ K, (2.5)

which is called the standard variational inequality, introduced and studied by Stampacchia [19] in 1964. For recent state-of-the-art research, see [1]–[20].
It is clear that problems (2.2)–(2.5) are special cases of the general mixed variational inequality (2.1). In brief, for suitable and appropriate choices of the operators T, g, ϕ and the space H, one can obtain a wide class of variational inequalities and complementarity problems. This clearly shows that problem (2.1) is quite a general and unifying one. Furthermore, problem (2.1) has important applications in various branches of pure and applied sciences; see [1]–[20].
We now recall some well-known concepts and results.

Definition 2.1: For all u, v ∈ H, an operator T : H → H is said to be:

(i) g-monotone, if ⟨Tu − Tv, g(u) − g(v)⟩ ≥ 0;

(ii) g-pseudomonotone, if ⟨Tu, g(v) − g(u)⟩ + ϕ(g(v)) − ϕ(g(u)) ≥ 0 implies ⟨Tv, g(v) − g(u)⟩ + ϕ(g(v)) − ϕ(g(u)) ≥ 0.

For g ≡ I, where I is the identity operator, Definition 2.1 reduces to the classical definitions of monotonicity and pseudomonotonicity. It is known that monotonicity implies pseudomonotonicity but that the converse is not true; see [2]. Thus pseudomonotonicity is a weaker condition than monotonicity.

Definition 2.2:
If A is a maximal monotone operator on H, then, for a constant ρ > 0, the resolvent operator associated with A is defined by

J_A(u) = (I + ρA)⁻¹(u), for all u ∈ H,

where I is the identity operator.
It is well known that the operator A is maximal monotone if and only if the resolvent operator J_A is defined everywhere on the space. The operator J_A is single-valued and nonexpansive.
Remark 2.1: It is well known that the subdifferential ∂ϕ of a proper, convex and lower semicontinuous function ϕ : H → R ∪ {+∞} is a maximal monotone operator; the resolvent operator J_ϕ ≡ J_∂ϕ associated with ∂ϕ is therefore single-valued and defined everywhere.
Lemma 2.1: u ∈ H is a solution of the general mixed variational inequality (2.1) if and only if u ∈ H satisfies

g(u) = J_ϕ[g(u) − ρTu],

where J_ϕ = (I + ρ∂ϕ)⁻¹ is the resolvent operator and ρ > 0 is a constant.
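Lemma 2.1 asserts the equivalence between (2.1) and a resolvent fixed-point equation. In its standard form the statement reads g(u) = J_ϕ[g(u) − ρTu], and the argument is a one-line unwinding of the definition J_ϕ = (I + ρ∂ϕ)⁻¹; the following LaTeX sketch is a reconstruction of the usual proof, using the ontoness of g assumed in this paper:

```latex
\begin{aligned}
g(u) = J_\varphi[\,g(u) - \rho T u\,]
  &\iff g(u) - \rho T u \in g(u) + \rho\,\partial\varphi(g(u)) \\
  &\iff -T u \in \partial\varphi(g(u)) \\
  &\iff \varphi(g(v)) \ge \varphi(g(u)) + \langle -T u,\, g(v) - g(u)\rangle
        \quad \text{for all } v \in H \\
  &\iff \langle T u,\, g(v) - g(u)\rangle + \varphi(g(v)) - \varphi(g(u)) \ge 0
        \quad \text{for all } v \in H.
\end{aligned}
```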
We remark that if the proper, convex and lower semicontinuous function ϕ is the indicator function of a closed convex set K in H, then J_ϕ ≡ P_K, the projection of H onto K. In this case Lemma 2.1 is equivalent to the well-known projection lemma; see [1].
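As a small numerical illustration of this remark (a hedged sketch with an illustrative box K, not taken from the paper): for ϕ the indicator function of K = [−1, 1]ⁿ, the resolvent J_ϕ = (I + ρ∂ϕ)⁻¹ is the componentwise projection P_K, independently of ρ > 0.

```python
import numpy as np

# Resolvent of the subdifferential of the indicator function of the box
# K = [lo, hi]^n: for this phi, J_phi coincides with the projection P_K
# (componentwise clipping), whatever the parameter rho > 0 is.
# The box bounds are illustrative assumptions.
def resolvent_indicator_box(u, lo=-1.0, hi=1.0):
    return np.clip(u, lo, hi)

u = np.array([2.0, -3.0, 0.5])
print(resolvent_indicator_box(u))  # components clipped into [-1, 1]
```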

Main Results
In this section, we use the resolvent operator technique to suggest a modified resolvent method for solving the general mixed variational inequality (2.1). For this purpose, we need the following result, which can be proved by using Lemma 2.1.

Lemma 3.1: u ∈ H is a solution of the general mixed variational inequality (2.1) if and only if u ∈ H satisfies the resolvent equation

g(u) = J_ϕ[g(u) − ρTu], (3.1)

where ρ > 0 is a constant. This fixed-point formulation suggests the following iterative method.
Algorithm 3.1: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

g(u_{n+1}) = J_ϕ[g(u_n) − ρTu_n], n = 0, 1, 2, . . .

It is well known (see [11]) that the convergence of Algorithm 3.1 requires the operator T to be both strongly monotone and Lipschitz continuous. These strict conditions rule out many applications of Algorithm 3.1, and this fact has motivated the development of other techniques. In recent years, a number of iterative resolvent methods have been suggested and analyzed by using the technique of updating the solution, performing an additional forward step and a resolvent step at each iteration; see [8,11,12,13,14,15,17,20]. If g⁻¹ exists, then we can rewrite equation (3.1) in the form

g(u) = J_ϕ[g(u) − ρT g⁻¹ J_ϕ[g(u) − ρTu]].

This fixed-point formulation allows us to suggest and analyze the following iterative method, which is known as the extraresolvent method.

Algorithm 3.2: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

Predictor step.
g(w_n) = J_ϕ[g(u_n) − ρ_n Tu_n],

where ρ_n satisfies an appropriate line-search condition.

Corrector step.
g(u_{n+1}) = J_ϕ[g(u_n) − ρ_n Tw_n], n = 0, 1, 2, . . .

We note that if the proper, convex and lower semicontinuous function ϕ(·) is the indicator function of a closed convex set K in H, then J_ϕ ≡ P_K, the projection operator from H onto K. Consequently, Algorithm 3.2 reduces to the improved version of the extragradient method for solving the general variational inequalities (2.3).
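The contrast between the two schemes can be made concrete by a hedged numerical sketch with g = I and K = R² (so the resolvent/projection is the identity), using an illustrative monotone operator that is not strongly monotone: the basic scheme of Algorithm 3.1 diverges on this problem, while the predictor-corrector (extraresolvent) scheme with a fixed step converges to the unique solution u* = 0. The operator and step size below are assumptions for illustration only.

```python
import numpy as np

# T(u) = S u with S a rotation generator is monotone but NOT strongly
# monotone: the basic resolvent-type step spirals outward, while the
# predictor-corrector (extraresolvent/extragradient) step contracts
# toward the solution u* = 0.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = lambda u: S @ u

rho = 0.5
u_plain = np.array([1.0, 0.0])
u_extra = np.array([1.0, 0.0])
for _ in range(100):
    u_plain = u_plain - rho * T(u_plain)   # basic step (Algorithm 3.1 style)
    w = u_extra - rho * T(u_extra)         # predictor
    u_extra = u_extra - rho * T(w)         # corrector

print(np.linalg.norm(u_plain) > 1.0)   # True: plain iteration diverges
print(np.linalg.norm(u_extra) < 1e-3)  # True: extraresolvent converges
```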
Noor [11] used the technique of updating the solution to rewrite equation (3.1) in the two-step form

g(w) = J_ϕ[g(u) − ρTu],   g(u) = J_ϕ[g(w) − ρTw].

This fixed-point formulation is used to suggest and analyze the following method for solving the general mixed variational inequality (2.1).
Algorithm 3.3: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

g(w_n) = J_ϕ[g(u_n) − ρTu_n],
g(u_{n+1}) = J_ϕ[g(w_n) − ρTw_n], n = 0, 1, 2, . . . ,

which is known as the two-step forward-backward splitting method. For the convergence analysis of Algorithm 3.3, see Noor [11]. For a positive constant α, one can rewrite equation (3.1) in the averaged form

g(u) = (1 − α)g(u) + αJ_ϕ[g(u) − ρTu].

This fixed-point formulation is used to suggest and analyze the following self-adaptive iterative method.

Algorithm 3.4: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

Predictor step.
We again use the technique of updating the solution to rewrite equation (3.1) in two further equivalent fixed-point forms, the second of which uses g⁻¹, the inverse of the operator g.
The above fixed-point formulations have been used in [17] to suggest and analyze the following iterative methods for solving general mixed variational inequalities (2.1).
Algorithm 3.5: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative schemes; this is known as the predictor-corrector method; see Noor [17]. Algorithm 3.5 can be considered as a generalization of the forward-backward splitting method of Noor [14]. If g⁻¹ exists, then Algorithm 3.5 can be written in the following equivalent form.

Algorithm 3.6: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, which is known as the four-step forward-backward splitting algorithm. Note that the order of T and J_ϕ has not been changed. This method can be considered as a generalization of the three-step forward-backward splitting algorithm of Glowinski and Le Tallec [5] and Noor [14]. For the convergence analysis of Algorithm 3.6, see Noor [17].

Algorithm 3.7: For a given u₀ ∈ H, compute u_{n+1} by the corresponding iterative scheme; this is again a four-step forward-backward splitting-type method considered and studied by Noor [17]. Algorithm 3.7 can be viewed as a generalization of the modified two-step forward-backward splitting method for maximal monotone mappings of Tseng [20]. Using essentially the technique of Tseng [20], one can discuss the applications of Algorithm 3.7 in optimization and mathematical programming.
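The decomposition idea behind these forward-backward splitting methods can be sketched numerically in the simplest setting g = I, with T the gradient of a smooth quadratic and ϕ = λ‖·‖₁, whose resolvent is the componentwise soft-thresholding map. The data A, b, λ and the step size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Forward-backward splitting with g = I: the mixed variational
# inequality (2.2) with T = grad f, f(u) = 0.5*||A u - b||^2, and
# phi = lam*||u||_1. The resolvent of rho*d(phi) is componentwise
# soft-thresholding, so each iteration is a forward step on T followed
# by a backward (resolvent) step that involves phi only.
def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 0.2])
lam = 0.5
T = lambda u: A.T @ (A @ u - b)   # gradient of the smooth part

u = np.zeros(2)
rho = 0.2                          # rho < 2 / ||A^T A|| ensures convergence
for _ in range(500):
    u = soft_threshold(u - rho * T(u), rho * lam)   # J_phi[u - rho*T(u)]

print(np.round(u, 4))  # converges to the minimizer [0.5, 0] of f + phi
```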
In this paper, we suggest a new method involving a line search strategy, which includes these splitting-type methods as special cases. For this purpose, we now define the resolvent residue vector R(u) in terms of g(w) and g(z), which are given by (3.5) and (3.4), respectively. From Lemma 3.1, it follows that u ∈ H is a solution of (2.1) if and only if u ∈ H satisfies the equation R(u) = 0. For a positive constant α, we rewrite equation (3.1), using (3.3), (3.4), (3.5) and (3.9), in a fixed-point form that enables us to suggest the following self-adaptive resolvent method for the general mixed variational inequality (2.1); this is the main motivation of this paper.
Algorithm 3.8: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the following iterative scheme.

Predictor step.
where ρ_n satisfies an appropriate line-search condition.

Corrector step.

Here α_n is the corrector step size. If the proper, convex and lower semicontinuous function ϕ is the indicator function of a closed convex set K in H, then J_ϕ ≡ P_K, the projection of H onto K. Consequently, Algorithm 3.8 collapses to the following.

Algorithm 3.9: For a given u₀ ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

Predictor step.
where ρ_n satisfies an appropriate line-search condition. Algorithm 3.9 appears to be new for the general variational inequalities (2.3). For g ≡ I, the identity operator, we obtain new improved versions of algorithms for (mixed) variational inequalities and related optimization problems. This clearly shows that Algorithm 3.8 is a unifying scheme that includes several known and new algorithms as special cases.
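The general mechanism of such a self-adaptive predictor step can be sketched numerically for g = I and ϕ the indicator function of a box: the step size ρ_n is reduced geometrically until an Armijo-type condition of the form ρ‖T(u) − T(w)‖ ≤ σ‖u − w‖ holds. The operator T, the box, σ and the shrink factor below are illustrative assumptions; the paper's actual line-search condition is not reproduced here.

```python
import numpy as np

# Self-adaptive (line-search) predictor step, sketched with g = I and
# J_phi = projection onto a box. rho is halved until the Armijo-type
# condition rho*||T(u) - T(w)|| <= sigma*||u - w|| is satisfied; for a
# locally Lipschitz T this takes finitely many reductions.
def T(u):
    return np.array([2.0 * u[0] - 1.0, np.exp(u[1]) - 1.0])  # monotone

def project_box(u, lo=-5.0, hi=5.0):
    return np.clip(u, lo, hi)

def predictor(u, rho=1.0, sigma=0.5, shrink=0.5):
    while True:
        w = project_box(u - rho * T(u))
        if rho * np.linalg.norm(T(u) - T(w)) <= sigma * np.linalg.norm(u - w) + 1e-12:
            return w, rho
        rho *= shrink

u0 = np.array([3.0, 2.0])
w, rho = predictor(u0)
print(0.0 < rho <= 1.0)  # True: a step size is accepted after finitely many reductions
```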
For the convergence analysis of Algorithm 3.8, we need the following results, which can be proved by using the technique of Noor [12]. We include their proofs for the sake of completeness and to convey the main idea.

Lemma 3.2: If ū ∈ H is a solution of (2.1) and T is g-pseudomonotone, then inequality (3.21) holds.
Proof: Let ū ∈ H be a solution of (2.1). Then (3.24) follows, where we have used the fact that the operator T is g-pseudomonotone. Adding (3.23) and (3.24), we obtain the required result. □

Theorem 3.1: Let g : H → H be invertible and let H be a finite-dimensional space. If u_{n+1} is the approximate solution obtained from Algorithm 3.8 and ū ∈ H is a solution of (2.1), then lim_{n→∞} u_n = ū.
Proof: Let ū ∈ H be a solution of (2.1). From (3.27) it follows that the sequence {‖g(ū) − g(u_n)‖} is nonincreasing, so {g(u_n)} is bounded. Under the assumptions on g, the sequence {u_n} is also bounded. Furthermore, it follows from the resulting inequality that the sequence {u_n} has exactly one cluster point û and that lim_{n→∞} g(u_n) = g(û). Since g is invertible, lim_{n→∞} u_n = û, which is the required result. □