Research Article: A Penalization-Gradient Algorithm for Variational Inequalities. International Journal of Mathematics and Mathematical Sciences, doi:10.1155/2011/305856

This paper is concerned with the study of a penalization-gradient algorithm for solving variational inequalities, namely, find x ∈ C such that ⟨Ax, y − x⟩ ≥ 0 for all y ∈ C, where A : H → H is a single-valued operator and C is a closed convex set of a real Hilbert space H. Given Ψ : H → ℝ ∪ {+∞}, which acts as a penalization function with respect to the constraint C, and a penalization parameter β_k, we consider an algorithm which alternates a proximal step with respect to ∂Ψ and a gradient step with respect to A and reads as x_k = (I + λ_k β_k ∂Ψ)^{-1}(x_{k−1} − λ_k A x_{k−1}). Under mild hypotheses, we obtain weak convergence for an inverse strongly monotone operator and strong convergence for a Lipschitz continuous and strongly monotone operator. Applications to hierarchical minimization and fixed-point problems are also given, and the multivalued case is reached by replacing the multivalued operator by its Yosida approximate, which is always Lipschitz continuous.


Introduction
Let H be a real Hilbert space, A : H → H a monotone operator, and C a closed convex set in H. We are interested in the study of a gradient-penalization algorithm for solving the problem of finding x ∈ C such that

⟨Ax, y − x⟩ ≥ 0 ∀y ∈ C, (1.1)

or equivalently

0 ∈ Ax + N_C(x), (1.2)

where N_C is the normal cone to the closed convex set C. The above problem is a variational inequality, a field initiated by Stampacchia [1]; it is now a well-known branch of pure and applied mathematics, and many important problems can be cast in this framework. In [2], Attouch et al., building on seminal work by Passty [3], solve this problem with a multivalued operator by using splitting proximal methods. A drawback is that the convergence is, in general, only ergodic. Motivated by [2, 4] and by [5], where penalty methods for variational inequalities with single-valued monotone maps are given, we will prove that our proposed forward-backward penalization-gradient method (1.9) enjoys good asymptotic convergence properties. We will provide some applications to hierarchical fixed-point and optimization problems and also propose an idea to reach monotone variational inclusions.
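The equivalence between the variational inequality (1.1) and the inclusion (1.2) is immediate from the definition of the normal cone; for completeness, a one-line derivation:

```latex
x \in C,\ \langle Ax, y - x\rangle \ge 0 \ \ \forall y \in C
\;\Longleftrightarrow\; \langle -Ax, y - x\rangle \le 0 \ \ \forall y \in C
\;\Longleftrightarrow\; -Ax \in N_C(x)
\;\Longleftrightarrow\; 0 \in Ax + N_C(x).
```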
To begin with (see, for instance, [6]), let us recall that an operator T with domain D(T) and range R(T) is said to be monotone if

⟨x − x′, y − y′⟩ ≥ 0 whenever y ∈ T(x), y′ ∈ T(x′).

It is said to be maximal monotone if, in addition, its graph, gph T := {(x, y) ∈ H × H : y ∈ T(x)}, is not properly contained in the graph of any other monotone operator. A sequence of operators (T_k) is said to be graph convergent to T if gph T_k converges to gph T in the Kuratowski-Painlevé sense, that is, lim sup_k gph T_k ⊂ gph T ⊂ lim inf_k gph T_k. It is well known that for each x ∈ H and λ > 0 there is a unique z ∈ H such that x ∈ (I + λT)z. The single-valued operator J_λ^T := (I + λT)^{-1} is called the resolvent of T of parameter λ. It is a nonexpansive mapping which is everywhere defined and is related to the Yosida approximate of T, namely T_λ(x) := (x − J_λ^T x)/λ, by the relation T_λ(x) ∈ T(J_λ^T x). The latter is 1/λ-Lipschitz continuous and satisfies (T_λ)_μ = T_{λ+μ}. Recall that the inverse T^{-1} of T is the operator defined by x ∈ T^{-1}(y) ⇔ y ∈ T(x) and that, for all x, y ∈ H, we have the following key inequality

‖J_λ^T x − J_λ^T y‖² ≤ ⟨x − y, J_λ^T x − J_λ^T y⟩. (1.4)
Observe that the relation (T_λ)_μ(x) = T_{λ+μ}(x) leads to

J_λ^{T_μ} = (μ/(λ + μ)) I + (λ/(λ + μ)) J_{λ+μ}^T. (1.5)

Now, given a proper lower semicontinuous convex function f : H → ℝ ∪ {+∞}, its Moreau-Yosida approximate f_λ and proximal mapping prox_λf are given, respectively, by

f_λ(x) := min_{y∈H} { f(y) + (1/2λ)‖x − y‖² }, prox_λf(x) := argmin_{y∈H} { f(y) + (1/2λ)‖x − y‖² }.

We have the following interesting relation: (∂f)_λ = ∇(f_λ). Finally, given a nonempty closed convex set C ⊂ H, its indicator function is defined as δ_C(x) = 0 if x ∈ C and +∞ otherwise. The projection onto C at a point u is P_C(u) := argmin_{c∈C} ‖u − c‖. The normal cone to C at x is N_C(x) := {p ∈ H : ⟨p, c − x⟩ ≤ 0 ∀c ∈ C} if x ∈ C and ∅ otherwise. Observe that ∂δ_C = N_C, prox_λf = J_λ^{∂f}, and J_λ^{N_C} = P_C. Given some x_{k−1} ∈ H, the current approximation to a solution of (1.2), we study the penalization-gradient iteration which will generate, for parameters λ_k > 0 and β_k → ∞, the iterate x_k as the solution of the regularized subproblem

0 ∈ λ_k (A x_{k−1} + β_k ∂Ψ(x_k)) + x_k − x_{k−1},

which can be rewritten as

x_k = (I + λ_k β_k ∂Ψ)^{-1}(x_{k−1} − λ_k A x_{k−1}). (1.9)

Having in view a large range of applications, we shall not assume any particular structure or regularity on the penalization function Ψ. Instead, we just suppose that Ψ is convex and lower semicontinuous and that C = argmin Ψ ≠ ∅. We will denote by VI(A, C) the solution set of (1.2).
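As a concrete illustration of iteration (1.9), the following sketch (our own toy setup, not taken from the paper) uses A(x) = x − b, the gradient of Φ(x) = (1/2)‖x − b‖², and the classical penalization Ψ(x) = (1/2)dist(x, C)² with C the nonnegative orthant; for this Ψ the proximal step has the closed form prox_{gΨ}(u) = (u + g P_C(u))/(1 + g), and the solution of the variational inequality is P_C(b):

```python
def proj_C(u):
    """Projection onto the nonnegative orthant (our illustrative choice of C)."""
    return [max(ui, 0.0) for ui in u]

def penalization_gradient(b, lam=1.0, iters=200):
    """Iterates x_k = prox_{lam*beta_k*Psi}(x_{k-1} - lam*A(x_{k-1}))."""
    x = [0.0] * len(b)
    for k in range(1, iters + 1):
        beta = float(k * k)  # beta_k -> infinity with sum 1/beta_k < infinity
        # gradient step with respect to A(x) = x - b
        y = [xi - lam * (xi - bi) for xi, bi in zip(x, b)]
        # proximal (penalization) step: prox of g*(1/2)*dist(., C)^2
        g = lam * beta
        p = proj_C(y)
        x = [(yi + g * pi) / (1.0 + g) for yi, pi in zip(y, p)]
    return x

x = penalization_gradient([1.0, -2.0])
# x is close to the solution P_C(b) = [1.0, 0.0]
```

The parameter choices λ_k ≡ 1 and β_k = k² are hypothetical but satisfy β_k → ∞ and Σ 1/β_k < ∞, the condition used later for the classical penalization.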
The following lemmas will be needed in our analysis; see, for example, [6, 7], respectively.

Lemma 1.1. Let T be a maximal monotone operator. Then β_k T graph converges to N_{T^{-1}(0)} as β_k → ∞, provided that T^{-1}(0) ≠ ∅.

Lemma 1.2. Assume that (α_k) and (δ_k) are two sequences of nonnegative real numbers such that α_{k+1} ≤ α_k + δ_k and Σ_k δ_k < ∞. Then the sequence (α_k) converges.

By the triangle inequality, we can write one estimate; on the other hand, by virtue of (1.4) and (2.1), we successively have

2.4
Hence, the latter implies, by Lemma 1.2 and the fact that (2.2) ensures Σ_k δ_k(x) < ∞, that the positive real sequence (‖x_k − x‖²)_{k∈ℕ} converges to some limit l(x), and also assures that lim_{k→∞} ‖x_k − x_{k−1}‖ = 0. Combining the two latter facts, we infer the asymptotic behavior of the iterates. Now, (1.9) can be written equivalently as

(x_{k−1} − x_k)/λ_k − A x_{k−1} ∈ β_k ∂Ψ(x_k). (2.9)

By virtue of Lemma 1.1, β_k ∂Ψ graph converges to N_{argmin Ψ} because (∂Ψ)^{-1}(0) = argmin Ψ ≠ ∅. Furthermore, the Lipschitz continuity of A (see, e.g., [8]) clearly ensures that the sequence A + β_k ∂Ψ graph converges in turn to A + N_{argmin Ψ}. Now, let x* be a weak cluster point of {x_k}. Passing to the limit in (2.9), on a subsequence still denoted by {x_k}, and taking into account the fact that the graph of a maximal monotone operator is weakly-strongly closed in H × H, we then conclude that 0 ∈ Ax* + N_C(x*), because A is Lipschitz continuous, (x_k) is asymptotically regular thanks to (2.8), and λ_k is bounded away from zero. It remains to prove that there is no more than one weak cluster point; our argument is classical and is presented here for completeness.
Let x̂ be another weak cluster point of {x_k}; we will show that x̂ = x*. This is a consequence of (2.6). Indeed, we see that the limit of ⟨x_k − x*, x* − x̂⟩ as k → ∞ must exist. This limit has to be zero because x* is a weak cluster point of {x_k}. Hence, at the limit along a subsequence converging weakly to x̂, we obtain ⟨x̂ − x*, x* − x̂⟩ = −‖x* − x̂‖² = 0. Reversing the roles of x̂ and x*, we also have ⟨x* − x̂, x̂ − x*⟩ = −‖x̂ − x*‖² = 0. That is, x̂ = x*, which completes the proof.

(i) Graph convergence of β_k ∂Ψ to ∂δ_C is equivalent to the pointwise convergence of the corresponding resolvents to J_{λ*}^{∂δ_C} and therefore ensures that lim_{k→∞} δ_k(x) = 0.

(ii) In the special case Ψ(x) = (1/2)dist(x, C)², condition (2.2) reduces to Σ_{k=0}^∞ 1/β_k < ∞; see Application (2) of Section 3. Suppose that λ_k β_k ≥ 1, which is the case for all k ≥ κ for some κ ∈ ℕ because λ_k is bounded and lim_{k→∞} β_k = ∞.
Then δ_k(x) = 0 for all k ≥ κ, and thus (2.2) is clearly satisfied. The particular case Ψ ≡ 0 corresponds to the unconstrained case, namely, C = H. In this context the resolvent associated with β_k ∂Ψ is the identity, and condition (2.2) is trivially satisfied.

Strong Convergence
Now, we would like to stress that strong convergence can be guaranteed by reinforcing the assumptions on A.

Proposition 2.3. Assume that A is strongly monotone with constant α > 0, that is,

⟨Ax − Ay, x − y⟩ ≥ α‖x − y‖² for all x, y ∈ H,

and Lipschitz continuous with constant L > 0, that is,

‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ H.

If λ_k ∈ [ε, 2α/L² − ε] (where ε > 0 is a small enough constant) and lim_{k→∞} λ_k = λ* > 0, then the sequence generated by (1.9) strongly converges to the unique solution of (1.2).
Proof. Indeed, by replacing inverse strong monotonicity of A with strong monotonicity and Lipschitz continuity, it is easy to see from the first part of the proof of Theorem 2.1 that the operator I − λ_k A satisfies

‖(I − λ_k A)x − (I − λ_k A)y‖ ≤ √(1 − 2λ_k α + λ_k² L²) ‖x − y‖. (2.20)
Following the arguments in the proof of Theorem 2.1, we obtain

‖x_k − x‖ ≤ √(1 − 2λ_k α + λ_k² L²) ‖x_{k−1} − x‖ + δ_k(x). (2.21)
Now, by setting Θ(λ) := √(1 − 2λα + λ²L²), we can check that 0 < Θ(λ) < 1 if and only if λ ∈ (0, 2α/L²), and a simple computation shows that 0 < Θ(λ_k) ≤ Θ* < 1 with Θ* := max{Θ(ε), Θ(2α/L² − ε)}. Hence, the result follows from Ortega and Rheinboldt [9, page 338] and the fact that lim_{k→∞} δ_k(x) = 0. The latter follows thanks to the equivalence between graph convergence of the sequence of operators β_k ∂Ψ to ∂δ_C and the pointwise convergence of their resolvent operators, combined with the fact that lim_{k→∞} λ_k = λ*.
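The contraction factor Θ(λ) can be checked numerically. The sketch below (our own example, not from the paper) takes the linear operator A(x₁, x₂) = (2x₁, 3x₂), which is strongly monotone with α = 2 and Lipschitz continuous with L = 3, and verifies that I − λA contracts by at least Θ(λ) for a step size inside (0, 2α/L²):

```python
import math
import random

# Numerical sanity check of the contraction bound used in Proposition 2.3:
#   ||(I - lam*A)x - (I - lam*A)y|| <= Theta(lam) * ||x - y||,
#   Theta(lam) = sqrt(1 - 2*lam*alpha + lam^2 * L^2),
# for A(x1, x2) = (2*x1, 3*x2): alpha = 2, L = 3, so 2*alpha/L^2 = 4/9.

ALPHA, L = 2.0, 3.0

def A(x):
    return (2.0 * x[0], 3.0 * x[1])

def theta(lam):
    return math.sqrt(1.0 - 2.0 * lam * ALPHA + lam * lam * L * L)

def step(x, lam):
    ax = A(x)
    return (x[0] - lam * ax[0], x[1] - lam * ax[1])

def norm(u):
    return math.sqrt(u[0] * u[0] + u[1] * u[1])

random.seed(0)
lam = 0.2                      # any step inside (0, 4/9)
assert 0.0 < theta(lam) < 1.0  # Theta(0.2) = sqrt(0.56), strictly below 1
for _ in range(100):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    sx, sy = step(x, lam), step(y, lam)
    d = (sx[0] - sy[0], sx[1] - sy[1])
    assert norm(d) <= theta(lam) * norm((x[0] - y[0], x[1] - y[1])) + 1e-12
```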

(1) Hierarchical Convex Minimization Problems
Having in mind the connection between monotone operators and convex functions, we may consider the special case A = ∇Φ, with Φ a proper lower semicontinuous differentiable convex function. Differentiability of Φ ensures that ∇Φ + N_{argmin Ψ} = ∂(Φ + δ_{argmin Ψ}), and (1.2) reads as

min { Φ(x) : x ∈ argmin Ψ }. (3.1)

Using the definition of the Moreau-Yosida approximate, algorithm (1.9) reads as

x_k = prox_{λ_k β_k Ψ}(x_{k−1} − λ_k ∇Φ(x_{k−1})). (3.2)

In this case, it is well known that the assumption (2.1) of inverse strong monotonicity of ∇Φ is equivalent to its L-Lipschitz continuity. If we further assume Σ_k δ_k(x) < ∞ for all x ∈ VI(∇Φ, C) and λ_k ∈ [ε, 2/L − ε], then by Theorem 2.1 we obtain weak convergence of algorithm (3.2) to a solution of (3.1). Strong convergence is obtained, thanks to Proposition 2.3, if in addition Φ is strongly convex, i.e., there is α > 0 such that, for all μ ∈ (0, 1) and all x₁, x₂ ∈ H,

Φ(μx₁ + (1 − μ)x₂) ≤ μΦ(x₁) + (1 − μ)Φ(x₂) − (α/2)μ(1 − μ)‖x₁ − x₂‖²,

and (λ_k) is a convergent sequence with λ_k ∈ [ε, 2α/L² − ε]. Note that strong convexity of Φ is equivalent to α-strong monotonicity of its gradient. A concrete example in signal recovery is the projected Landweber problem, namely,

min { (1/2)‖Lx − z‖² : x ∈ C },

with L a bounded linear operator and z a given observation; here A = ∇Φ = L*(L· − z), and A is therefore Lipschitz continuous with constant ‖L‖². Now, it is well known that the problem possesses exactly one solution if L is bounded below, that is, there exists ν > 0 such that ‖Lx‖ ≥ ν‖x‖ for all x ∈ H. In this case, A is strongly monotone. Indeed, it is easily seen that Φ is strongly convex: consider x, y ∈ H and μ ∈ (0, 1); one has

Φ(μx + (1 − μ)y) = μΦ(x) + (1 − μ)Φ(y) − (μ(1 − μ)/2)‖L(x − y)‖² ≤ μΦ(x) + (1 − μ)Φ(y) − (ν²/2)μ(1 − μ)‖x − y‖². (3.7)
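A minimal numerical sketch of scheme (3.2), with data of our own choosing rather than from the paper: Φ(x) = (1/2)‖x‖² and the penalty Ψ(x) = (1/2)(x₁ + x₂ − 1)², so that argmin Ψ is the line x₁ + x₂ = 1 and the hierarchical solution is (1/2, 1/2); for this quadratic Ψ the proximal step is affine and available in closed form:

```python
# Hierarchical scheme (3.2): minimize Phi(x) = (1/2)*||x||^2 over argmin Psi,
# with Psi(x) = (1/2)*(x1 + x2 - 1)^2, so argmin Psi = {x : x1 + x2 = 1}.
# For this quadratic penalty,
#   prox_{g*Psi}(u) = u - g*(u1 + u2 - 1)/(1 + 2*g) * (1, 1).

def prox_penalty(u, g):
    t = g * (u[0] + u[1] - 1.0) / (1.0 + 2.0 * g)
    return (u[0] - t, u[1] - t)

def hierarchical(iters=200, lam=1.0):
    x = (0.0, 0.0)
    for k in range(1, iters + 1):
        beta = float(k * k)                         # beta_k -> infinity
        y = (x[0] - lam * x[0], x[1] - lam * x[1])  # gradient step, grad Phi = x
        x = prox_penalty(y, lam * beta)             # penalization (prox) step
    return x

x = hierarchical()
# x approaches the hierarchical solution (0.5, 0.5)
```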

(2) Classical Penalization
In the special case where Ψ(x) = (1/2)dist(x, C)², we have ∇Ψ = I − P_C, which is nothing but the classical penalization operator; see [10]. In this context, taking into account (1.5), the fact that J_λ^{N_C} = P_C, and the fact that x solves (1.2), and thus x = P_C(x − λ_k Ax), we successively have

3.10
So the condition on the parameters reduces to Σ_{k=1}^∞ 1/β_k < ∞, and algorithm (1.9) is nothing but a relaxed projection-gradient method. Indeed, using (1.5) and the fact that J_λ^{N_C} = P_C, we obtain

x_k = [ (1/(1 + λ_k β_k)) I + (λ_k β_k/(1 + λ_k β_k)) P_C ] (I − λ_k A) x_{k−1}. (3.11)
An inspection of the proof of Theorem 2.1 shows that weak convergence is assured with λ_k ∈ [ε, 2/L − ε].
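The relaxed-projection form (3.11) rests on the closed-form proximal mapping of the classical penalty, prox_{gΨ}(u) = (u + g P_C(u))/(1 + g). The following brute-force check (our own sanity test, in one dimension with C = [0, +∞)) verifies that this formula indeed minimizes the prox objective:

```python
import random

# Check of the closed-form prox behind the relaxed projection in (3.11):
#   prox_{g*Psi}(u) = (u + g*P_C(u)) / (1 + g)  for  Psi = (1/2)*dist(., C)^2,
# here in one dimension with C = [0, +inf).

def dist_C(x):
    return max(-x, 0.0)      # distance from x to [0, +inf)

def proj_C(x):
    return max(x, 0.0)

def objective(x, u, g):
    """The function whose minimizer is prox_{g*Psi}(u)."""
    return 0.5 * g * dist_C(x) ** 2 + 0.5 * (x - u) ** 2

def prox_closed_form(u, g):
    return (u + g * proj_C(u)) / (1.0 + g)

random.seed(1)
for _ in range(200):
    u = random.uniform(-5.0, 5.0)
    g = random.uniform(0.1, 10.0)
    xstar = prox_closed_form(u, g)
    best = objective(xstar, u, g)
    for _ in range(50):
        h = random.uniform(-1.0, 1.0)
        # xstar should minimize the prox objective
        assert objective(xstar + h, u, g) >= best - 1e-9
```

For u inside C the prox is the identity (no penalization is needed), while for u outside C it pulls u toward P_C(u) with weight g/(1 + g), which is exactly the relaxation appearing in (3.11).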

(3) A Hierarchical Fixed-Point Problem
Having in mind the connection between inverse strongly monotone operators and nonexpansive mappings, we may consider the following fixed-point problem: find x ∈ C such that

⟨x − Px, y − x⟩ ≥ 0 ∀y ∈ C,

with P a nonexpansive mapping, namely, ‖Px − Py‖ ≤ ‖x − y‖. It is well known that A = I − P is inverse strongly monotone with L = 2, that is, with modulus 1/2. Indeed, by the nonexpansiveness of P, we have

‖Px − Py‖² ≤ ‖x − y‖². (3.13)

On the other hand,

‖Px − Py‖² = ‖x − y‖² + ‖Ax − Ay‖² − 2⟨x − y, Ax − Ay⟩. (3.14)

Combining (3.13) and (3.14) yields ⟨Ax − Ay, x − y⟩ ≥ (1/2)‖Ax − Ay‖².
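The inverse strong monotonicity of A = I − P can also be observed numerically. The sketch below (our own illustration) takes P to be a plane rotation, a simple nonexpansive (in fact isometric) map with an arbitrarily chosen angle, and checks ⟨Ax − Ay, x − y⟩ ≥ (1/2)‖Ax − Ay‖² on random pairs:

```python
import math
import random

# Check that A = I - P is inverse strongly monotone with modulus 1/2 when
# P is nonexpansive:  <Ax - Ay, x - y> >= (1/2) * ||Ax - Ay||^2.
# P here is a plane rotation; the angle 0.7 is an arbitrary choice.

T = 0.7

def P(x):
    c, s = math.cos(T), math.sin(T)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def A(x):
    px = P(x)
    return (x[0] - px[0], x[1] - px[1])

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(2)
for _ in range(500):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    d = (x[0] - y[0], x[1] - y[1])
    ax, ay = A(x), A(y)
    a = (ax[0] - ay[0], ax[1] - ay[1])
    assert inner(a, d) >= 0.5 * inner(a, a) - 1e-9
```

For an isometric P the inequality actually holds with equality, which makes the rotation a tight test case.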