International Journal of Mathematics and Mathematical Sciences, Hindawi Publishing Corporation, Volume 2009, Article ID 691952, doi:10.1155/2009/691952

Research Article

Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

Ram U. Verma

Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA

Received 30 July 2009; Accepted 9 November 2009

Academic Editor: Petru Jebelean

Copyright © 2009 Ram U. Verma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing Rockafellar's theorem (1976) on linear convergence of the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.

1. Introduction

Let $X$ be a real Hilbert space with the inner product $\langle \cdot,\cdot \rangle$ and the norm $\|\cdot\|$ on $X$. We consider the inclusion problem: find a solution to

$$0 \in M(x), \quad (1.1)$$ where $M: X \to 2^X$ is a set-valued mapping on $X$.

Rockafellar [1, Theorem 2] discussed the general convergence of the proximal point algorithm in the context of solving (1.1) by showing, for $M$ maximal monotone, that the sequence $\{x^k\}$ generated from an initial point $x^0$ by the proximal point algorithm

$$x^{k+1} \approx P_k(x^k) \quad (1.2)$$ converges strongly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where $P_k = (I + c_k M)^{-1}$ is the resolvent operator for a sequence $\{c_k\}$ of positive real numbers that is bounded away from zero. We observe from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem

$$0 \in M(x) + c_k^{-1}(x - x^k). \quad (1.3)$$
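To make the iteration concrete, here is a minimal numerical sketch (not from the paper) of the exact proximal point step (1.2) for the illustrative scalar choice $M(x) = x$ on $X = \mathbb{R}$, whose unique zero is $0$; the resolvent $P_k = (I + c_k M)^{-1}$ then reduces to division by $1 + c_k$.

```python
# Exact proximal point iteration x^{k+1} = P_k(x^k) for the hypothetical
# choice M(x) = x on the real line, whose unique zero is x* = 0.

def proximal_point(x0, c, iters):
    """Run x^{k+1} = x^k / (1 + c_k), the resolvent step for M(x) = x."""
    x = x0
    trajectory = [x]
    for k in range(iters):
        x = x / (1.0 + c[k])          # resolvent (I + c_k M)^{-1} applied to x^k
        trajectory.append(x)
    return trajectory

# c_k bounded away from zero, as the theorem requires.
traj = proximal_point(x0=1.0, c=[1.0] * 30, iters=30)
print(traj[-1])                        # approaches the zero of M
```

Each step halves the distance to the zero here, illustrating the linear convergence discussed below.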

Next, we state the theorem of Rockafellar [1, Theorem 2], in which the Lipschitz continuity of $M^{-1}$ at $0$ is used instead of the strong monotonicity of $M$; this approach turned out to be better suited to applications in convex programming. Moreover, it is well known that the resolvent operator $P_k = (I + c_k M)^{-1}$ is nonexpansive, so it does not seem possible to achieve linear convergence without a Lipschitz constant less than one in that setting. This could have been the motivation for assuming the Lipschitz continuity of $M^{-1}$ at zero, which yields the Lipschitz continuity of $P_k$ with a Lipschitz constant less than one.

Theorem 1.1.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal monotone. For an arbitrarily chosen initial point $x^0$, let the sequence $\{x^k\}$ be generated by the proximal point algorithm (1.2) such that $\|x^{k+1} - P_k(x^k)\| \le \epsilon_k$, where $P_k = (I + c_k M)^{-1}$, and the scalar sequences $\{\epsilon_k\}$ and $\{c_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \epsilon_k < \infty$ and $\{c_k\}$ is bounded away from zero.

We further suppose that sequence {xk} is generated by the proximal point algorithm (1.2) such that

$$\|x^{k+1} - P_k(x^k)\| \le \delta_k \|x^{k+1} - x^k\|,$$ where the scalar sequences $\{\delta_k\}$ and $\{c_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \delta_k < \infty$ and $c_k \to c \le \infty$.

Also, assume that $\{x^k\}$ is bounded, in the sense that the solution set of (1.1) is nonempty, and that $M^{-1}$ is $(a)$-Lipschitz continuous at $0$ for some $a > 0$. Let

$$\mu_k = \frac{a}{\sqrt{a^2 + c_k^2}} < 1.$$ Then the sequence $\{x^k\}$ converges strongly to $x^*$, a unique solution to (1.1), with

$$\|x^{k+1} - x^*\| \le \alpha_k \|x^k - x^*\| \quad \forall k \ge \bar{k},$$ where

$$0 \le \alpha_k = \frac{\mu_k + \delta_k}{1 - \delta_k} < 1 \quad \forall k \ge \bar{k}, \qquad \alpha_k \to 0 \ \text{as} \ c_k \to \infty.$$
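The linear rate of Theorem 1.1 can be observed numerically. The following sketch (an illustration, not from the paper) uses the hypothetical choice $M(x) = x/a$ on $\mathbb{R}$, for which $M^{-1}(y) = a y$ is $(a)$-Lipschitz continuous at $0$, and checks that each exact proximal step contracts by no more than $\mu_k = a/\sqrt{a^2 + c_k^2}$.

```python
import math

# For M(x) = x/a, the exact step x^{k+1} = (I + c M)^{-1}(x^k) contracts the
# distance to the zero x* = 0 by the factor a/(a + c), which is bounded above
# by mu = a / sqrt(a^2 + c^2) from Theorem 1.1.

a, c = 2.0, 3.0
mu = a / math.sqrt(a**2 + c**2)       # theoretical rate bound, < 1
x = 1.0
for _ in range(20):
    x_next = x / (1.0 + c / a)        # resolvent of M(x) = x/a at parameter c
    ratio = abs(x_next) / abs(x)      # observed per-step contraction factor
    assert ratio <= mu                # the bound of Theorem 1.1 holds
    x = x_next
print(mu, x)
```

Note that the observed factor $a/(a+c)$ is strictly smaller than $\mu_k$ since $(a+c)^2 > a^2 + c^2$.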

As most variational problems, including minimization or maximization of functions, variational inequality problems, quasivariational inequality problems, minimax problems, and problems arising in the decision, management, and engineering sciences, can be unified into form (1.1), the notion of general maximal monotonicity has played a crucially significant role by providing a powerful framework for developing and applying suitable proximal point algorithms in the study of convex programming and variational inequalities. Algorithms of this type are of particular interest because of their role in computational methods based on duality, for instance the Hestenes-Powell method of multipliers in nonlinear programming. For more details, we refer the reader to the references cited herein.

In this communication, we examine the approximation solvability of inclusion problem (1.1) by introducing the notion of relatively maximal (m)-relaxed monotone mappings, and we derive some auxiliary results involving relatively maximal (m)-relaxed monotone and cocoercive mappings. The notion of relatively maximal (m)-relaxed monotonicity is based on the notion of A-maximal (m)-relaxed monotonicity introduced and studied in [9, 10], but it seems more application-oriented. We note that our approach to the solvability of (1.1) differs significantly from that of Rockafellar [1] in the sense that $M$ is without the monotonicity assumption, there is no Lipschitz continuity assumption on $M^{-1}$, and the proof turns out to be simple and compact. Note that there exists a huge amount of research in the literature on new developments and applications of proximal point algorithms for approximating solutions of inclusion problems of the form (1.1) in different space settings, especially in Hilbert as well as in Banach space settings.

2. Preliminaries

In this section, we first introduce the notion of relatively maximal (m)-relaxed monotonicity, and then we derive some basic properties along with some auxiliary results for the problem at hand.

Let $X$ be a real Hilbert space with the norm $\|\cdot\|$ and the inner product $\langle \cdot, \cdot \rangle$.

Definition 2.1.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be a multivalued mapping, and let $A: X \to X$ be a single-valued mapping on $X$. The map $M$ is said to be:

(i) monotone if $\langle u^* - v^*, u - v \rangle \ge 0$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(ii) strictly monotone if $M$ is monotone and equality holds only if $u = v$;

(iii) $(r)$-strongly monotone if there exists a positive constant $r$ such that $\langle u^* - v^*, u - v \rangle \ge r\|u - v\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(iv) $(r)$-expanding if there exists a positive constant $r$ such that $\|u^* - v^*\| \ge r\|u - v\|$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(v) strongly monotone if $\langle u^* - v^*, u - v \rangle \ge \|u - v\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(vi) expanding if $\|u^* - v^*\| \ge \|u - v\|$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(vii) $(m)$-relaxed monotone if there exists a positive constant $m$ such that $\langle u^* - v^*, u - v \rangle \ge -m\|u - v\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(viii) $(c)$-cocoercive if there exists a positive constant $c$ such that $\langle u^* - v^*, u - v \rangle \ge c\|u^* - v^*\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(ix) monotone with respect to $A$ if $\langle u^* - v^*, A(u) - A(v) \rangle \ge 0$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(x) strictly monotone with respect to $A$ if $M$ is monotone with respect to $A$ and equality holds only if $u = v$;

(xi) $(r)$-strongly monotone with respect to $A$ if there exists a positive constant $r$ such that $\langle u^* - v^*, A(u) - A(v) \rangle \ge r\|u - v\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(xii) $(m)$-relaxed monotone with respect to $A$ if there exists a positive constant $m$ such that $\langle u^* - v^*, A(u) - A(v) \rangle \ge -m\|u - v\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(xiii) $(h)$-hybrid relaxed monotone with respect to $A$ if there exists a positive constant $h$ such that $\langle u^* - v^*, A(u) - A(v) \rangle \ge -h\|A(u) - A(v)\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(xiv) $(m)$-cocoercive with respect to $A$ if there exists a positive constant $m$ such that $\langle u^* - v^*, A(u) - A(v) \rangle \ge m\|u^* - v^*\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$.

Definition 2.2.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be a mapping on $X$, and let $A: X \to X$ be a single-valued mapping on $X$. The map $M$ is said to be:

(i) nonexpansive if $\|u^* - v^*\| \le \|u - v\|$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(ii) cocoercive if $\langle u^* - v^*, u - v \rangle \ge \|u^* - v^*\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$;

(iii) cocoercive with respect to $A$ if $\langle u^* - v^*, A(u) - A(v) \rangle \ge \|u^* - v^*\|^2$ for all $(u, u^*), (v, v^*) \in \operatorname{graph}(M)$.

Definition 2.3.

Let $X$ be a real Hilbert space, and let $A: X \to X$ be a single-valued mapping. The map $M: X \to 2^X$ is said to be relatively maximal $(m)$-relaxed monotone (with respect to $A$) if:

(i) $M$ is $(m)$-relaxed monotone with respect to $A$ for $m > 0$;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.

Definition 2.4.

Let $X$ be a real Hilbert space, and let $A: X \to X$ be a single-valued mapping. The map $M: X \to 2^X$ is said to be relatively maximal monotone (with respect to $A$) if:

(i) $M$ is monotone with respect to $A$;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.

Definition 2.5.

Let $X$ be a real Hilbert space, and let $A: X \to X$ be $(r)$-strongly monotone. Let $M: X \to 2^X$ be a relatively maximal $(m)$-relaxed monotone mapping. Then the resolvent operator $J^M_{\rho,m,A}: X \to X$ is defined by $$J^M_{\rho,m,A}(u) = (I + \rho M)^{-1}(u) \quad \text{for } r - \rho m > 0.$$

Proposition 2.6.

Let $X$ be a real Hilbert space. Let $A: X \to X$ be an $(r)$-strongly monotone mapping, and let $M: X \to 2^X$ be a relatively maximal $(m)$-relaxed monotone mapping. Then the resolvent operator $J^M_{\rho,m,A} = (I + \rho M)^{-1}$ is single-valued for $r - \rho m > 0$.

Proof.

For any $z \in X$, assume $x, y \in (I + \rho M)^{-1}(z)$. Then we have $-x + z \in \rho M(x)$ and $-y + z \in \rho M(y)$. Since $M$ is relatively maximal $(m)$-relaxed monotone and $A$ is $(r)$-strongly monotone, it follows that $$-\rho m \|x - y\|^2 \le \langle -(x - y), A(x) - A(y) \rangle \le -r\|x - y\|^2 \Longrightarrow (r - \rho m)\|x - y\|^2 \le 0 \Longrightarrow x = y \quad \text{for } r - \rho m > 0.$$

Definition 2.7.

Let $X$ be a real Hilbert space. A map $M: X \to 2^X$ is said to be maximal monotone if:

(i) $M$ is monotone;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.

Note that every relatively monotone mapping is relatively $(m)$-relaxed monotone for $m > 0$. We include one example of relative monotonicity and another of relative $(h)$-hybrid relaxed monotonicity, a notion new to the problem at hand.

Example 2.8.

Let $X = (-\infty, +\infty)$, $A(x) = -\frac{1}{2}x$, and $M(x) = -x$ for all $x \in X$. Then $M$ is relatively monotone but not monotone, while $M$ is relatively $(m)$-relaxed monotone for $m > 0$.
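Example 2.8 can be verified numerically. The sketch below (an illustration, not part of the paper) samples random pairs and checks both the relative monotonicity of $M$ with respect to $A$ and the failure of ordinary monotonicity.

```python
import random

# Example 2.8 on X = R: A(x) = -x/2 and M(x) = -x.
# <M(u)-M(v), A(u)-A(v)> = (1/2)(u-v)^2 >= 0  (relatively monotone),
# <M(u)-M(v), u-v>      = -(u-v)^2     <  0  (not monotone) for u != v.

A = lambda x: -0.5 * x
M = lambda x: -x

random.seed(0)
for _ in range(1000):
    u, v = random.uniform(-10, 10), random.uniform(-10, 10)
    assert (M(u) - M(v)) * (A(u) - A(v)) >= 0      # relative monotonicity
    if u != v:
        assert (M(u) - M(v)) * (u - v) < 0          # ordinary monotonicity fails
print("relative monotonicity verified")
```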

Example 2.9.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal $(m)$-relaxed monotone. Then we have the Yosida approximation $M_\rho = \rho^{-1}(I - J^M_\rho)$, where $J^M_\rho = (I + \rho M)^{-1}$ is the resolvent of $M$, which satisfies $$\langle M_\rho(u) - M_\rho(v), J^M_\rho(u) - J^M_\rho(v) \rangle \ge -m\|J^M_\rho(u) - J^M_\rho(v)\|^2,$$ that is, $M_\rho$ is relatively $(m)$-hybrid relaxed monotone (with respect to $J^M_\rho$).
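On $X = \mathbb{R}$, the Yosida construction of Example 2.9 can be checked explicitly. The sketch below uses the hypothetical $(m)$-relaxed monotone operator $M(x) = -mx$ (with $m = 0.2$, $\rho = 1$, so $1 - \rho m > 0$), for which the resolvent and Yosida approximation have closed forms; for this linear example the hybrid relaxed inequality holds with equality.

```python
import random

# For M(x) = -m*x: resolvent J(z) = z / (1 - rho*m), and the Yosida
# approximation M_rho = (I - J)/rho works out to M_rho(z) = -m*z / (1 - rho*m).

m, rho = 0.2, 1.0
J = lambda z: z / (1.0 - rho * m)           # (I + rho*M)^{-1}
M_rho = lambda z: (z - J(z)) / rho          # Yosida approximation of M

random.seed(1)
for _ in range(1000):
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = (M_rho(u) - M_rho(v)) * (J(u) - J(v))
    rhs = -m * (J(u) - J(v)) ** 2
    # hybrid relaxed monotonicity of M_rho with respect to J
    assert lhs >= rhs - 1e-12
print("Yosida hybrid relaxed inequality verified")
```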

3. Generalization to Rockafellar's Theorem

This section deals with a generalization of Rockafellar's theorem [1, Theorem 2] in light of the new framework of relative maximal (m)-relaxed monotonicity for solving (1.1).

Theorem 3.1.

Let X be a real Hilbert space, let A:XX be (r)-strongly monotone, and let M:X2X be relatively maximal (m)-relaxed monotone. Then the following statements are mutually equivalent:

(i) an element $u \in X$ is a solution to (1.1);

(ii) for $u \in X$, one has $$u = R^M_{\rho,m,A}(u), \quad \text{where } R^M_{\rho,m,A}(u) = (I + \rho M)^{-1}(u) \ \text{for } r - \rho m > 0.$$

Proof.

To show (i) $\Rightarrow$ (ii): if $u \in X$ is a solution to (1.1), then for $\rho > 0$ we have $$0 \in \rho M(u), \quad \text{or} \quad u \in (I + \rho M)(u), \quad \text{or} \quad R^M_{\rho,m,A}(u) = u.$$

Similarly, to show (ii) $\Rightarrow$ (i), we have

$$u = R^M_{\rho,m,A}(u) \Longrightarrow u \in (I + \rho M)(u) \Longrightarrow 0 \in \rho M(u) \Longrightarrow 0 \in M(u).$$
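The equivalence of Theorem 3.1 can be illustrated numerically on $X = \mathbb{R}$ with the hypothetical single-valued choice $M(x) = x - 2$ (unique zero $x^* = 2$): a zero of $M$ is a fixed point of the resolvent, and conversely.

```python
# Resolvent of M(x) = x - 2: solving x + rho*(x - 2) = z gives the
# closed form R(z) = (z + 2*rho) / (1 + rho).

rho = 0.7
R = lambda z: (z + 2.0 * rho) / (1.0 + rho)

x_star = 2.0
# (i) => (ii): a zero of M is a fixed point of the resolvent.
assert abs(R(x_star) - x_star) < 1e-12

# (ii) => (i): iterating the resolvent reaches a fixed point, which is a zero of M.
u = R(5.0)
while abs(R(u) - u) > 1e-12:
    u = R(u)
assert abs(u - 2.0) < 1e-6            # 0 = M(u) = u - 2 at the fixed point
print(u)
```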

Theorem 3.2.

Let $X$ be a real Hilbert space, let $A: X \to X$ be $(r)$-strongly monotone, and let $M: X \to 2^X$ be relatively maximal $(m)$-relaxed monotone. Furthermore, suppose that $M$ is relatively $(h)$-hybrid relaxed monotone (with respect to $A$) and $A \circ R^M_{\rho_k,m,A}$ is $(\gamma)$-cocoercive with respect to $R^M_{\rho_k,m,A}$.

(i) For an arbitrarily chosen initial point x0, suppose that the sequence {xk} is generated by the proximal point algorithm (1.2) such that

$$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \epsilon_k,$$ where $\sum_{k=0}^{\infty} \epsilon_k < \infty$, $r - \rho_k m > 0$, $R^M_{\rho_k,m,A} = (I + \rho_k M)^{-1}$, and the scalar sequence $\{\rho_k\}$ satisfies $\rho_k \to \rho$. Suppose that the sequence $\{x^k\}$ is bounded, in the sense that the solution set of (1.1) is nonempty.

(ii) In addition to assumptions in (i), we further suppose that, for an arbitrarily chosen initial point x0, the sequence {xk} is generated by the proximal point algorithm (1.2) such that

$$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \delta_k \|x^{k+1} - x^k\|,$$ where $\delta_k \to 0$, $R^M_{\rho_k,m,A} = (I + \rho_k M)^{-1}$, and the scalar sequences $\{\delta_k\}$ and $\{\rho_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \delta_k < \infty$ and $\rho_k \to \rho$. Then the following implications hold:

(iii) the sequence {xk} converges strongly to a solution of (1.1),

(iv) the rate of convergence satisfies

$$0 \le \lim_{k \to \infty} \frac{\delta_k + ((\gamma - h\rho_k) r)^{-1}}{1 - \delta_k} < 1,$$ where $1/((\gamma - h\rho_k) r) < 1$.

Proof.

Suppose that $x^*$ is a zero of $M$. We begin with the proof of the estimate $$\|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| \le \frac{1}{(\gamma - h\rho_k) r}\|x^k - x^*\|,$$ where $\gamma - h\rho_k > 0$. It follows from the definition of the generalized resolvent operator $R^M_{\rho_k,m,A}$, the relative $(h)$-hybrid relaxed monotonicity of $M$ with respect to $A$, and the $(\gamma)$-cocoercivity of $A \circ R^M_{\rho_k,m,A}$ with respect to $R^M_{\rho_k,m,A}$ that $$\langle x^k - x^* - (R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)),\ A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle \ge -h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2,$$ or $$\langle x^k - x^*,\ A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle \ge \langle R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*),\ A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle - h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2 \ge \gamma \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2 - h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2.$$ The claimed estimate now follows by applying the Cauchy-Schwarz inequality to the left-hand side together with the $(r)$-strong monotonicity of $A$.

Next, we move to estimate

$$\|x^{k+1} - x^*\| \le \|R^M_{\rho_k,m,A}(x^k) - x^*\| + \epsilon_k = \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| + \epsilon_k \le \frac{1}{(\gamma - h\rho_k) r}\|x^k - x^*\| + \epsilon_k.$$

For $(\gamma - h\rho_k) r > 1$, combining the previous inequalities for all $k$, we have

$$\|x^{k+1} - x^*\| \le \|x^0 - x^*\| + \sum_{i=0}^{k} \epsilon_i \le \|x^0 - x^*\| + \sum_{k=0}^{\infty} \epsilon_k.$$ Hence, $\{x^k\}$ is bounded.

Next we turn our attention to the convergence part of the proof. Since

$$\|x^{k+1} - x^*\| \le \|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| + \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\|,$$ $$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \delta_k \|x^{k+1} - x^k\| \le \delta_k [\|x^{k+1} - x^*\| + \|x^k - x^*\|],$$

we get

$$\|x^{k+1} - x^*\| \le \|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| + \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| \le \delta_k[\|x^{k+1} - x^*\| + \|x^k - x^*\|] + \frac{1}{(\gamma - h\rho_k) r}\|x^k - x^*\|,$$ where $1/((\gamma - h\rho_k) r) < 1$.

It follows that

$$\|x^{k+1} - x^*\| \le \frac{((\gamma - h\rho_k) r)^{-1} + \delta_k}{1 - \delta_k}\|x^k - x^*\|.$$ The linear convergence estimate holds since $1/((\gamma - h\rho_k) r) < 1$ and $\delta_k \to 0$.

Hence, the sequence {xk} converges strongly to x*.

To conclude the proof, we need to show the uniqueness of the solution to (1.1). Assume that $x^*$ is a zero of $M$. Then, using $\|x^k - x^*\| \le \|x^0 - x^*\| + \sum_{k=0}^{\infty} \epsilon_k$ for all $k$, we have that

$$a^* = \liminf_{k \to \infty} \|x^k - x^*\|$$ is nonnegative and finite and, as a result, $\|x^k - x^*\| \to a^*$. Let $x_1^*$ and $x_2^*$ be two limit points of $\{x^k\}$; then $\|x^k - x_1^*\| \to a_1$ and $\|x^k - x_2^*\| \to a_2$, and both limits exist and are finite. If we express $$\|x^k - x_2^*\|^2 = \|x^k - x_1^*\|^2 + 2\langle x^k - x_1^*, x_1^* - x_2^* \rangle + \|x_1^* - x_2^*\|^2,$$ then it follows that $$\lim_{k \to \infty} \langle x^k - x_1^*, x_1^* - x_2^* \rangle = \frac{1}{2}\left[a_2^2 - a_1^2 - \|x_1^* - x_2^*\|^2\right].$$ Since $x_1^*$ is a limit point of $\{x^k\}$, the left-hand side limit must be zero. Therefore, $a_1^2 = a_2^2 - \|x_1^* - x_2^*\|^2$. Similarly, we obtain $a_2^2 = a_1^2 - \|x_1^* - x_2^*\|^2$. This results in $x_1^* = x_2^*$.
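A minimal numerical sketch of the inexact algorithm of Theorem 3.2 on $X = \mathbb{R}$, assuming the illustrative choices $A(x) = x$ (so $r = 1$) and $M(x) = x$ (relatively maximal relaxed monotone with respect to $A$, with unique zero $x^* = 0$), and a summable error sequence $\epsilon_k$ perturbing each resolvent step.

```python
# Inexact proximal point iteration: each step applies the resolvent
# (I + rho*M)^{-1} and then adds an error bounded by eps_k, mimicking
# ||x^{k+1} - R(x^k)|| <= eps_k with sum(eps_k) < infinity.

def inexact_ppa(x0, rho, eps, iters):
    x = x0
    for k in range(iters):
        resolvent = x / (1.0 + rho)     # (I + rho*M)^{-1}(x) for M(x) = x
        x = resolvent + eps[k]          # inexact step within tolerance eps_k
    return x

eps = [0.5 ** k * 1e-3 for k in range(60)]   # summable error sequence
x_final = inexact_ppa(x0=4.0, rho=1.0, eps=eps, iters=60)
print(x_final)                               # near the unique zero x* = 0
```

Despite the perturbations, the iterates still converge to the zero, in line with the theorem's summability requirement on the errors.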

Remark 3.3.

When $M: X \to 2^{X^*}$ equals $\partial f$, the subdifferential of $f$, where $f: X \to (-\infty, +\infty]$ is a functional on a Hilbert space $X$, the proximal point algorithm can be applied to the minimization of $f$. The function $f$ is proper if $f \not\equiv +\infty$, and it is convex if $$f((1 - \lambda)x + \lambda y) \le (1 - \lambda) f(x) + \lambda f(y),$$ where $x, y \in X$ and $0 < \lambda < 1$. Furthermore, the function $f$ is lower semicontinuous on $X$ if the set $\{x \in X : f(x) \le \lambda\}$ is closed in $X$ for every $\lambda \in \mathbb{R}$.

The subdifferential f:X2X* of f at x is defined by

$$\partial f(x) = \{x^* \in X^* : f(y) - f(x) \ge \langle y - x, x^* \rangle \ \forall y \in X\}.$$ In an earlier work [7], Rockafellar has shown that if $f$ is a lower semicontinuous proper convex functional on $X$, then $\partial f: X \to 2^{X^*}$ is maximal monotone on $X$, where $X$ is any real Banach space. Several other special cases can be derived.

Suppose that $A: X \to X$ is strongly monotone and $(\gamma)$-cocoercive, and let $f: X \to \mathbb{R}$ be $(\tau)$-locally Lipschitz (for $\tau \ge 0$) such that $\partial f: X \to 2^{X^*}$ is $(m)$-relaxed monotone with respect to $A$, that is,

$$\langle u^* - v^*, A(u) - A(v) \rangle \ge -m\|u - v\|^2 \quad \forall u, v \in X,$$ where $u^* \in \partial f(u)$ and $v^* \in \partial f(v)$. Then $\partial f$ is relatively maximal $(m)$-relaxed monotone.
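As a concrete instance of Remark 3.3 (an illustration, not from the paper): for $f(x) = |x|$ on $\mathbb{R}$, the resolvent $(I + \rho\,\partial f)^{-1}$ is the classical soft-thresholding (proximal) operator, and the proximal point algorithm drives the iterates to the minimizer $x^* = 0$ of $f$.

```python
# Resolvent of the subdifferential of f(x) = |x|: soft thresholding,
# prox_{rho f}(z) = argmin_x |x| + (x - z)^2 / (2*rho) = sign(z)*max(|z| - rho, 0).

def prox_abs(z, rho):
    """Proximal map of f(x) = |x| with parameter rho (soft thresholding)."""
    return max(abs(z) - rho, 0.0) * (1.0 if z >= 0 else -1.0)

x = 10.0
for _ in range(25):
    x = prox_abs(x, rho=0.5)   # exact proximal point step with c_k = 0.5
print(x)                        # reaches the minimizer x* = 0 of f
```

Each step moves the iterate a fixed distance $\rho$ toward $0$ and then clamps at $0$, so the minimizer is reached in finitely many steps for this nonsmooth example.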

Acknowledgment

The author is greatly indebted to Professor Petru Jebelean and reviewers for their valuable comments and suggestions leading to the revised version.

References

[1] R. T. Rockafellar, "Monotone operators and the proximal point algorithm," SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.

[2] R. P. Agarwal and R. U. Verma, "The over-relaxed η-proximal point algorithm and nonlinear variational inclusion problems," Nonlinear Functional Analysis and Applications, in press.

[3] R. P. Agarwal and R. U. Verma, "Role of relative A-maximal monotonicity in overrelaxed proximal point algorithms with applications," Journal of Optimization Theory and Applications, vol. 143, no. 1, pp. 1–15, 2009.

[4] H.-Y. Lan, Y. J. Cho, and R. U. Verma, "Nonlinear relaxed cocoercive variational inclusions involving (A,η)-accretive mappings in Banach spaces," Computers & Mathematics with Applications, vol. 51, no. 9-10, pp. 1529–1538, 2006.

[5] A. Moudafi and M. Théra, "Finding a zero of the sum of two maximal monotone operators," Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 425–448, 1997.

[6] R. T. Rockafellar, "Augmented Lagrangians and applications of the proximal point algorithm in convex programming," Mathematics of Operations Research, vol. 1, no. 2, pp. 97–116, 1976.

[7] R. T. Rockafellar, "On the maximal monotonicity of subdifferential mappings," Pacific Journal of Mathematics, vol. 33, pp. 209–216, 1970.

[8] P. Tossings, "The perturbed proximal point algorithm and some of its applications," Applied Mathematics and Optimization, vol. 29, no. 2, pp. 125–159, 1994.

[9] R. U. Verma, "A-monotonicity and its role in nonlinear variational inclusions," Journal of Optimization Theory and Applications, vol. 129, no. 3, pp. 457–467, 2006.

[10] R. U. Verma, "A-monotone nonlinear relaxed cocoercive variational inclusions," Central European Journal of Mathematics, vol. 5, no. 2, pp. 386–396, 2007.

[11] R. U. Verma, "A generalization to variational convergence for operators," Advances in Nonlinear Variational Inequalities, vol. 11, no. 2, pp. 97–101, 2008.

[12] R. U. Verma, "Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A,η)-monotone mappings," Journal of Mathematical Analysis and Applications, vol. 337, no. 2, pp. 969–975, 2008.

[13] E. Zeidler, Nonlinear Functional Analysis and Its Applications. I: Fixed-Point Theorems, Springer, New York, NY, USA, 1986.

[14] E. Zeidler, Nonlinear Functional Analysis and Its Applications. II/B: Nonlinear Monotone Operators, Springer, New York, NY, USA, 1990.

[15] E. Zeidler, Nonlinear Functional Analysis and Its Applications. III: Variational Methods and Optimization, Springer, New York, NY, USA, 1985.