Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, while generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than that of the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.

1. Introduction

Let X be a real Hilbert space with the inner product 〈·,·〉 and the norm ∥·∥. We consider the inclusion problem: find a solution to

0 ∈ M(x), (1.1)
where M:X→2^X is a set-valued mapping on X.

Rockafellar [1, Theorem 2] discussed general convergence of the proximal point algorithm in the context of solving (1.1) by showing, for M maximal monotone, that the sequence {x_k} generated from an initial point x_0 by the proximal point algorithm

x_{k+1} ≈ P_k(x_k) (1.2)

converges strongly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where P_k = (I + c_k M)^{-1} is the resolvent operator for a sequence {c_k} of positive real numbers that is bounded away from zero. We observe from (1.2) that x_{k+1} is an approximate solution to the inclusion problem

0 ∈ M(x) + c_k^{-1}(x - x_k).
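To make the iteration (1.2) concrete, here is a minimal numerical sketch; the map M(x) = x on the real line and the constant step sequence are chosen purely for illustration and are not from the paper.

```python
def proximal_point(x0, c, steps):
    """Exact proximal point iteration x_{k+1} = (I + c_k M)^{-1}(x_k)
    for M(x) = x on the real line, whose resolvent is x / (1 + c_k)."""
    x = x0
    for k in range(steps):
        x = x / (1.0 + c(k))  # resolvent step for M(x) = x
    return x

# With c_k = 1 (bounded away from zero), x_k converges to 0, the zero of M.
x_final = proximal_point(8.0, lambda k: 1.0, steps=30)
```

Each exact step contracts the distance to the solution set, which is the mechanism the inexact algorithm (1.2) perturbs within a summable error budget.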

Next, we state the theorem of Rockafellar [1, Theorem 2], in which the Lipschitz continuity of M^{-1} at 0 is used instead of the strong monotonicity of M, an approach that turned out to be better suited to applications in convex programming. Moreover, it is well known that the resolvent operator P_k = (I + c_k M)^{-1} is nonexpansive, so it does not seem possible to achieve linear convergence without a Lipschitz constant less than one in that setting. This could have been the motivation for requiring the Lipschitz continuity of M^{-1} at zero, which instead yields the Lipschitz continuity of P_k with a Lipschitz constant less than one.

Theorem 1.1.

Let X be a real Hilbert space, and let M:X→2^X be maximal monotone. For an arbitrarily chosen initial point x_0, let the sequence {x_k} be generated by the proximal point algorithm (1.2) such that
∥x_{k+1} - P_k(x_k)∥ ≤ ϵ_k,
where P_k = (I + c_k M)^{-1}, and the scalar sequences {ϵ_k} and {c_k}, respectively, satisfy Σ_{k=0}^∞ ϵ_k < ∞ and {c_k} is bounded away from zero.

We further suppose that the sequence {x_k} is generated by the proximal point algorithm (1.2) such that

∥x_{k+1} - P_k(x_k)∥ ≤ δ_k ∥x_{k+1} - x_k∥,
where the scalar sequences {δ_k} and {c_k}, respectively, satisfy Σ_{k=0}^∞ δ_k < ∞ and c_k ↑ c ≤ ∞.

Also, assume that {x_k} is bounded in the sense that the solution set to (1.1) is nonempty, and that M^{-1} is (a)-Lipschitz continuous at 0 for a > 0. Let

μ_k = a/√(a² + c_k²) < 1.
Then the sequence {x_k} converges strongly to x*, the unique solution to (1.1), with

∥x_{k+1} - x*∥ ≤ α_k ∥x_k - x*∥ ∀ k ≥ k′,
where

0 ≤ α_k = (μ_k + δ_k)/(1 - δ_k) < 1 ∀ k ≥ k′, and α_k → 0 as c_k → ∞.
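The role of the bound μ_k can be checked numerically on a simple instance. In the sketch below (an illustration only; the map and constants are chosen here, not taken from the theorem), M(x) = x/a so that M^{-1}(y) = ay is (a)-Lipschitz, the resolvent P_k(x) = ax/(a + c_k) is explicit, and the observed per-step contraction factor never exceeds μ_k = a/√(a² + c_k²).

```python
import math

def contraction_check(a, c_values, x0=1.0):
    """For M(x) = x/a (so M^{-1}(y) = a*y is (a)-Lipschitz), run exact
    proximal steps P_k(x) = a*x/(a + c_k) and verify the per-step
    contraction factor never exceeds mu_k = a / sqrt(a**2 + c_k**2)."""
    x, ok = x0, True
    for c in c_values:
        x_next = a * x / (a + c)           # exact resolvent step
        mu = a / math.sqrt(a * a + c * c)  # the theorem's bound
        ok = ok and abs(x_next) <= mu * abs(x) + 1e-12
        x = x_next
    return ok, x
```

Since (a + c_k)² ≥ a² + c_k², the observed factor a/(a + c_k) always sits below μ_k on this instance.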

Most variational problems, including minimization or maximization of functions, variational inequality problems, quasivariational inequality problems, minimax problems, and problems arising in the decision, management, and engineering sciences, can be unified into form (1.1). Consequently, the notion of general maximal monotonicity has played a crucially significant role by providing a powerful framework for developing and applying suitable proximal point algorithms in the study of convex programming and variational inequalities. Algorithms of this type turned out to be of particular interest because of their role in certain computational methods based on duality, for instance the Hestenes-Powell method of multipliers in nonlinear programming. For more details, we refer the reader to [1–15].

In this communication, we examine the approximation solvability of inclusion problem (1.1) by introducing the notion of relatively maximal (m)-relaxed monotone mappings, and we derive some auxiliary results involving relatively maximal (m)-relaxed monotone and cocoercive mappings. The notion of relatively maximal (m)-relaxed monotonicity is based on the notion of A-maximal (m)-relaxed monotonicity introduced and studied in [9, 10], but it seems more application-oriented. We note that our approach to the solvability of (1.1) differs significantly from that of [1]: M carries no monotonicity assumption, no Lipschitz continuity is assumed on M^{-1}, and the proof turns out to be simple and compact. Note that there exists a huge body of research in the literature on new developments and applications of proximal point algorithms for approximating solutions of inclusion problems of the form (1.1) in different space settings, especially Hilbert and Banach spaces.

2. Preliminaries

In this section, first we introduce the notion of the relatively maximal (m)-relaxed monotonicity, and then we derive some basic properties along with some auxiliary results for the problem on hand.

Let X be a real Hilbert space with the norm ∥·∥ for X, and with the inner product 〈·,·〉.

Definition 2.1.

Let X be a real Hilbert space, and let M:X→2^X be a multivalued mapping and A:X→X a single-valued mapping on X. The map M is said to be the following.

Monotone if
〈u* - v*, u - v〉 ≥ 0 ∀ (u, u*), (v, v*) ∈ graph(M).

Strictly monotone if M is monotone and equality holds only if u = v.

(r)-strongly monotone if there exists a positive constant r such that
〈u* - v*, u - v〉 ≥ r∥u - v∥² ∀ (u, u*), (v, v*) ∈ graph(M).

(r)-expanding if there exists a positive constant r such that
∥u* - v*∥ ≥ r∥u - v∥ ∀ (u, u*), (v, v*) ∈ graph(M).

Strongly monotone if
〈u* - v*, u - v〉 ≥ ∥u - v∥² ∀ (u, u*), (v, v*) ∈ graph(M).

Expanding if
∥u* - v*∥ ≥ ∥u - v∥ ∀ (u, u*), (v, v*) ∈ graph(M).

(m)-relaxed monotone if there is a positive constant m such that
〈u* - v*, u - v〉 ≥ -m∥u - v∥² ∀ (u, u*), (v, v*) ∈ graph(M).

(c)-cocoercive if there exists a positive constant c such that
〈u* - v*, u - v〉 ≥ c∥u* - v*∥² ∀ (u, u*), (v, v*) ∈ graph(M).

Monotone with respect to A if
〈u* - v*, A(u) - A(v)〉 ≥ 0 ∀ (u, u*), (v, v*) ∈ graph(M).

Strictly monotone with respect to A if M is monotone with respect to A and equality holds only if u = v.

(r)-strongly monotone with respect to A if there exists a positive constant r such that
〈u* - v*, A(u) - A(v)〉 ≥ r∥u - v∥² ∀ (u, u*), (v, v*) ∈ graph(M).

(m)-relaxed monotone with respect to A if there exists a positive constant m such that
〈u* - v*, A(u) - A(v)〉 ≥ -m∥u - v∥² ∀ (u, u*), (v, v*) ∈ graph(M).

(h)-hybrid relaxed monotone with respect to A if there exists a positive constant h such that
〈u* - v*, A(u) - A(v)〉 ≥ -h∥A(u) - A(v)∥² ∀ (u, u*), (v, v*) ∈ graph(M).

(m)-cocoercive with respect to A if there exists a positive constant m such that
〈u* - v*, A(u) - A(v)〉 ≥ m∥u* - v*∥² ∀ (u, u*), (v, v*) ∈ graph(M).

Definition 2.2.

Let X be a real Hilbert space, and let M:X→2^X be a mapping on X. Furthermore, let A:X→X be a single-valued mapping on X. The map M is said to be the following.

Nonexpansive if
∥u* - v*∥ ≤ ∥u - v∥ ∀ (u, u*), (v, v*) ∈ graph(M).

Cocoercive if
〈u* - v*, u - v〉 ≥ ∥u* - v*∥² ∀ (u, u*), (v, v*) ∈ graph(M).

Cocoercive with respect to A if
〈u* - v*, A(u) - A(v)〉 ≥ ∥u* - v*∥² ∀ (u, u*), (v, v*) ∈ graph(M).

Definition 2.3.

Let X be a real Hilbert space. Let A:X→X be a single-valued mapping. The map M:X→2^X is said to be relatively maximal (m)-relaxed monotone (with respect to A) if

M is (m)-relaxed monotone with respect to A for m>0,

R(I+ρM)=X for ρ>0.

Definition 2.4.

Let X be a real Hilbert space. Let A:X→X be a single-valued mapping. The map M:X→2^X is said to be relatively maximal monotone (with respect to A) if

M is monotone with respect to A,

R(I+ρM)=X for ρ>0.

Definition 2.5.

Let X be a real Hilbert space, and let A:X→X be (r)-strongly monotone. Let M:X→2^X be a relatively maximal (m)-relaxed monotone mapping. Then the resolvent operator J_{ρ,m,A}^M : X→X is defined by
J_{ρ,m,A}^M(u) = (I + ρM)^{-1}(u) for r - ρm > 0.

Proposition 2.6.

Let X be a real Hilbert space. Let A:X→X be an (r)-strongly monotone mapping, and let M:X→2^X be a relatively maximal (m)-relaxed monotone mapping. Then the resolvent operator J_{ρ,m,A}^M = (I + ρM)^{-1} is single valued for r - ρm > 0.

Proof.

For any z∈X, assume x, y ∈ (I + ρM)^{-1}(z). Then we have
-x + z ∈ ρM(x), -y + z ∈ ρM(y).
Since M is relatively maximal (m)-relaxed monotone and A is (r)-strongly monotone, it follows that
-ρm∥x - y∥² ≤ -〈x - y, A(x) - A(y)〉 ≤ -r∥x - y∥² ⇒ (r - ρm)∥x - y∥² ≤ 0 ⇒ x = y for r - ρm > 0.
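A scalar instance (chosen here for illustration, not taken from the paper) shows how the condition r - ρm > 0 guarantees a single-valued resolvent even though M is not monotone: take A(x) = 2x, so r = 2, and M(x) = -x, which is (m = 2)-relaxed monotone with respect to A; then (I + ρM)(x) = (1 - ρ)x, and the sufficient condition r - ρm = 2 - 2ρ > 0 forces ρ < 1.

```python
def resolvent(z, rho):
    """Resolvent (I + rho*M)^{-1}(z) for M(x) = -x, computed while the
    sufficient condition r - rho*m > 0 holds (here r = 2, m = 2)."""
    r, m = 2.0, 2.0  # A(x) = 2x is (2)-strongly monotone; M is (2)-relaxed
    if r - rho * m <= 0:
        raise ValueError("requires r - rho*m > 0")
    return z / (1.0 - rho)  # inverse of x -> (1 - rho)*x

def apply_I_plus_rhoM(x, rho):
    """(I + rho*M)(x) for M(x) = -x."""
    return x - rho * x
```

Composing the two maps recovers the input whenever the condition holds, confirming the resolvent is a well-defined single-valued inverse on this instance.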

Definition 2.7.

Let X be a real Hilbert space. A map M:X→2^X is said to be maximal monotone if

M is monotone,

R(I+ρM)=X for ρ>0.

Note that all relatively monotone mappings are relatively (m)-relaxed monotone for m > 0. We include one example of relative monotonicity and another of relative (h)-hybrid relaxed monotonicity, a notion new to the problem at hand.

Example 2.8.

Let X = (-∞, +∞), A(x) = -(1/2)x, and M(x) = -x for all x∈X. Then M is relatively monotone but not monotone, while M is relatively (m)-relaxed monotone for m > 0.
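The claims of Example 2.8 can be confirmed numerically; the sketch below samples random pairs (u, v) and checks that (M(u) - M(v))(A(u) - A(v)) ≥ 0 while (M(u) - M(v))(u - v) < 0 for u ≠ v.

```python
import random

def check_example(trials=1000, seed=0):
    """Sample pairs (u, v) and test the two inequalities of Example 2.8
    for A(x) = -x/2 and M(x) = -x on the real line."""
    rng = random.Random(seed)
    A = lambda x: -0.5 * x
    M = lambda x: -x
    for _ in range(trials):
        u, v = rng.uniform(-10, 10), rng.uniform(-10, 10)
        if u == v:
            continue
        relative = (M(u) - M(v)) * (A(u) - A(v))  # relative monotonicity: >= 0
        plain = (M(u) - M(v)) * (u - v)           # plain monotonicity fails: < 0
        if relative < 0 or plain >= 0:
            return False
    return True
```

Here (M(u) - M(v))(A(u) - A(v)) = (1/2)(u - v)², which is the closed-form reason the relative inequality holds.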

Example 2.9.

Let X be a real Hilbert space, and let M:X→2^X be maximal (m)-relaxed monotone. Then we have the Yosida approximation M_ρ = ρ^{-1}(I - J_ρ^M), where J_ρ^M = (I + ρM)^{-1} is the resolvent of M, which satisfies
〈M_ρ(u) - M_ρ(v), J_ρ^M(u) - J_ρ^M(v)〉 ≥ -m∥J_ρ^M(u) - J_ρ^M(v)∥²,
that is, M_ρ is relatively (m)-hybrid relaxed monotone (with respect to J_ρ^M).
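A scalar sketch of this example (the map M(x) = -x and the values of ρ are chosen here for illustration): M is (m = 1)-relaxed monotone, its resolvent for 0 < ρ < 1 is J(x) = x/(1 - ρ), its Yosida approximation is M_ρ(x) = -x/(1 - ρ), and the hybrid relaxed inequality holds (in fact with equality) for m = 1.

```python
def yosida_check(rho, pairs, m=1.0):
    """Verify <M_rho(u)-M_rho(v), J(u)-J(v)> >= -m*|J(u)-J(v)|^2 for
    M(x) = -x, whose resolvent is J(x) = x/(1 - rho) when 0 < rho < 1."""
    J = lambda x: x / (1.0 - rho)
    M_rho = lambda x: (x - J(x)) / rho  # Yosida approximation of M
    for u, v in pairs:
        lhs = (M_rho(u) - M_rho(v)) * (J(u) - J(v))
        rhs = -m * (J(u) - J(v)) ** 2
        if lhs < rhs - 1e-9:
            return False
    return True
```

That the inequality is tight on this instance reflects that M itself saturates the (1)-relaxed monotonicity bound.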

3. Generalization to Rockafellar's Theorem

This section deals with a generalization to Rockafellar's theorem [1, Theorem 2] in light of the new framework of relative maximal (m)-relaxed monotonicity, while solving (1.1).

Theorem 3.1.

Let X be a real Hilbert space, let A:X→X be (r)-strongly monotone, and let M:X→2^X be relatively maximal (m)-relaxed monotone. Then the following statements are mutually equivalent:

an element u∈X is a solution to (1.1),

for u∈X, one has
u = R_{ρ,m,A}^M(u),
where
R_{ρ,m,A}^M(u) = (I + ρM)^{-1}(u) for r - ρm > 0.

Proof.

To show (i) ⇒ (ii), if u∈X is a solution to (1.1), then for ρ>0 we have
0 ∈ ρM(u), or u ∈ (I + ρM)(u), or R_{ρ,m,A}^M(u) = u.

Similarly, to show (ii) ⇒ (i), we have

u = R_{ρ,m,A}^M(u) ⇒ u ∈ (I + ρM)(u) ⇒ 0 ∈ ρM(u) ⇒ 0 ∈ M(u).
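The equivalence of Theorem 3.1 can be observed on a scalar instance chosen here for illustration: with M(x) = -x and 0 < ρ < 1, the resolvent is R(z) = z/(1 - ρ), and its only fixed point is the unique zero u = 0 of M.

```python
def resolvent_fixed_points(rho, candidates):
    """Return the candidates u satisfying u = R(u), where
    R(z) = z/(1 - rho) is the resolvent of M(x) = -x; by Theorem 3.1
    the fixed points of R are exactly the zeros of M."""
    R = lambda z: z / (1.0 - rho)
    return [u for u in candidates if abs(R(u) - u) < 1e-12]
```

For any nonzero u, R(u) - u = uρ/(1 - ρ) ≠ 0, so the fixed-point set and the zero set both reduce to {0}.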

Theorem 3.2.

Let X be a real Hilbert space, let A:X→X be (r)-strongly monotone, and let M:X→2^X be relatively maximal (m)-relaxed monotone. Furthermore, suppose that M:X→2^X is relatively (h)-hybrid relaxed monotone and that (A∘R_{ρ_k,m,A}^M) is (γ)-cocoercive with respect to R_{ρ_k,m,A}^M.

(i) For an arbitrarily chosen initial point x_0, suppose that the sequence {x_k} is generated by the proximal point algorithm (1.2) such that

∥x_{k+1} - R_{ρ_k,m,A}^M(x_k)∥ ≤ ϵ_k,
where Σ_{k=0}^∞ ϵ_k < ∞, r - ρ_k m > 0, R_{ρ_k,m,A}^M = (I + ρ_k M)^{-1}, and the scalar sequence {ρ_k} satisfies ρ_k ↑ ρ ≤ ∞. Suppose that the sequence {x_k} is bounded in the sense that the solution set of (1.1) is nonempty.

(ii) In addition to the assumptions in (i), we further suppose that, for an arbitrarily chosen initial point x_0, the sequence {x_k} is generated by the proximal point algorithm (1.2) such that

∥x_{k+1} - R_{ρ_k,m,A}^M(x_k)∥ ≤ δ_k ∥x_{k+1} - x_k∥,
where δ_k → 0, R_{ρ_k,m,A}^M = (I + ρ_k M)^{-1}, and the scalar sequences {δ_k} and {ρ_k}, respectively, satisfy Σ_{k=0}^∞ δ_k < ∞ and ρ_k ↑ ρ ≤ ∞. Then the following implications hold:

(iii) the sequence {x_k} converges strongly to a solution of (1.1),

(iv) the rate of convergence satisfies

0 ≤ lim_{k→∞} (δ_k + ((γ - hρ_k)r)^{-1})/(1 - δ_k) < 1,
where 1/((γ - hρ_k)r) < 1.

Proof.

Suppose that x* is a zero of M. We begin with the proof of the estimate
∥R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*)∥ ≤ (1/((γ - hρ_k)r)) ∥x_k - x*∥,
where γ - hρ_k > 0. It follows from the definition of the generalized resolvent operator R_{ρ_k,m,A}^M, the relative (h)-hybrid relaxed monotonicity of M with respect to A, and the (γ)-cocoercivity of (A∘R_{ρ_k,m,A}^M) with respect to R_{ρ_k,m,A}^M that
〈x_k - x* - (R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*)), A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))〉 ≥ -hρ_k ∥A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))∥²
or
〈x_k - x*, A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))〉 ≥ 〈R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*), A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))〉 - hρ_k ∥A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))∥² ≥ (γ - hρ_k) ∥A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))∥².

Applying the Cauchy-Schwarz inequality to the left-hand side gives ∥A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))∥ ≤ (1/(γ - hρ_k)) ∥x_k - x*∥, while the (r)-strong monotonicity of A yields r∥R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*)∥² ≤ 〈R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*), A(R_{ρ_k,m,A}^M(x_k)) - A(R_{ρ_k,m,A}^M(x*))〉; combining the two establishes the estimate above. Therefore,

∥x_{k+1} - x*∥ ≤ ∥x_{k+1} - R_{ρ_k,m,A}^M(x_k)∥ + ∥R_{ρ_k,m,A}^M(x_k) - R_{ρ_k,m,A}^M(x*)∥ ≤ δ_k [∥x_{k+1} - x*∥ + ∥x_k - x*∥] + (1/((γ - hρ_k)r)) ∥x_k - x*∥,
where 1/((γ - hρ_k)r) < 1.

It follows that

∥x_{k+1} - x*∥ ≤ ((((γ - hρ_k)r)^{-1} + δ_k)/(1 - δ_k)) ∥x_k - x*∥.
This estimate yields the linear convergence of {x_k}, since 1/((γ - hρ_k)r) < 1 and δ_k → 0.

Hence, the sequence {x_k} converges strongly to x*.

To conclude the proof, we need to show that the limit is unique. Assume that x* is a zero of M. Then, using ∥x_k - x*∥ ≤ ∥x_0 - x*∥ + Σ_{k=0}^∞ ϵ_k for all k, we have that

a* = lim inf_{k→∞} ∥x_k - x*∥
is nonnegative and finite, and as a result ∥x_k - x*∥ → a*. Let x_1* and x_2* be two limit points of {x_k}; then
lim_{k→∞} ∥x_k - x_1*∥ = a_1, lim_{k→∞} ∥x_k - x_2*∥ = a_2,
and both limits exist and are finite. If we expand
∥x_k - x_2*∥² = ∥x_k - x_1*∥² + 2〈x_k - x_1*, x_1* - x_2*〉 + ∥x_1* - x_2*∥²,
then it follows that
lim_{k→∞} 〈x_k - x_1*, x_1* - x_2*〉 = (1/2)[a_2² - a_1² - ∥x_1* - x_2*∥²].
Since x_1* is a limit point of {x_k}, the left-hand side tends to zero along a subsequence, and hence the full limit is zero. Therefore,
a_1² = a_2² - ∥x_1* - x_2*∥².
Similarly, we obtain
a_2² = a_1² - ∥x_1* - x_2*∥².
Adding these two equalities gives ∥x_1* - x_2*∥² = 0, that is, x_1* = x_2*.
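An end-to-end sketch of the inexact scheme of Theorem 3.2, run here on the simple monotone instance M(x) = x (chosen so the resolvent is explicit; the error and step sequences below are illustrative, not from the theorem): with summable errors ϵ_k and ρ_k increasing, the perturbed iterates still converge to the zero x* = 0.

```python
def inexact_ppa(x0, steps):
    """Inexact proximal point run for M(x) = x: each step applies the
    exact resolvent R_k(x) = x/(1 + rho_k) and then perturbs the result
    within a summable error bound eps_k."""
    x = x0
    for k in range(steps):
        rho_k = 1.0 + k              # increasing step sequence rho_k
        eps_k = 1e-3 * 2.0 ** (-k)   # summable error bound
        resolvent_step = x / (1.0 + rho_k)         # exact R_k(x)
        x = resolvent_step + ((-1) ** k) * eps_k   # perturbation within eps_k
    return x
```

Because each step contracts by 1/(1 + ρ_k) and the injected errors are summable, the residual shrinks far below the error budget itself.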

Remark 3.3.

When M:X→2^{X*} equals ∂f, the subdifferential of f, where f:X→(-∞,+∞] is a functional on a Hilbert space X, the proximal point algorithm can be applied to the minimization of f. The function f is proper if f ≢ +∞ and is convex if
f((1-λ)x+λy)≤(1-λ)f(x)+λf(y),
where x,y∈X and 0 < λ < 1. Furthermore, the function f is lower semicontinuous on X if the set
{x ∈ X: f(x) ≤ λ}
is closed in X for every λ ∈ R.

The subdifferential ∂f:X→2^{X*} of f at x is defined by

∂f(x) = {x* ∈ X*: f(y) - f(x) ≥ 〈y - x, x*〉 ∀ y ∈ X}.
In an earlier work [7], Rockafellar has shown that if f is a lower semicontinuous proper convex functional on X, then ∂f:X→2^{X*} is maximal monotone on X, where X is any real Banach space. Several other special cases can be derived.

Suppose that A:X→X is strongly monotone and (γ)-cocoercive, and let f:X→R be (τ)-locally Lipschitz (for τ ≥ 0) such that ∂f:X→2^{X*} is (m)-relaxed monotone with respect to A, that is,

〈u* - v*, A(u) - A(v)〉 ≥ -m∥u - v∥² ∀ u, v ∈ X,
where u* ∈ ∂f(u) and v* ∈ ∂f(v). Then ∂f is relatively maximal (m)-relaxed monotone.
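For a concrete instance of the resolvent of a subdifferential (chosen here for illustration): with f(x) = |x|, a proper, convex, lower semicontinuous function on the real line, the resolvent (I + ρ∂f)^{-1} is the classical soft-thresholding map, and proximal point iteration drives the iterates to the minimizer x* = 0.

```python
def prox_abs(z, rho):
    """Resolvent (I + rho * d|.|)^{-1}(z) for f(x) = |x|,
    i.e. the soft-thresholding operator."""
    if abs(z) <= rho:
        return 0.0
    return z - rho if z > 0 else z + rho

# Proximal point iteration on f(x) = |x| reaches the minimizer 0
# in finitely many steps for a fixed rho > 0.
x = 5.3
for _ in range(10):
    x = prox_abs(x, 1.0)
```

Each step moves the iterate a fixed distance ρ toward 0 and then clamps, which is why convergence here is finite rather than merely linear.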

Acknowledgment

The author is greatly indebted to Professor Petru Jebelean and reviewers for their valuable comments and suggestions leading to the revised version.

References

[1] R. T. Rockafellar, "Monotone operators and the proximal point algorithm."
[2] R. P. Agarwal and R. U. Verma, "The over-relaxed η-proximal point algorithm and nonlinear variational inclusion problems," Nonlinear Functional Analysis and Applications, in press.
[3] R. P. Agarwal and R. U. Verma, "Role of relative A-maximal monotonicity in overrelaxed proximal point algorithms with applications."
[4] H.-Y. Lan, Y. J. Cho, and R. U. Verma, "Nonlinear relaxed cocoercive variational inclusions involving (A,η)-accretive mappings in Banach spaces."
[5] A. Moudafi and M. Théra, "Finding a zero of the sum of two maximal monotone operators."
[6] R. T. Rockafellar, "Augmented Lagrangians and applications of the proximal point algorithm in convex programming."
[7] R. T. Rockafellar, "On the maximal monotonicity of subdifferential mappings."
[8] P. Tossings, "The perturbed proximal point algorithm and some of its applications."
[9] R. U. Verma, "A-monotonicity and its role in nonlinear variational inclusions."
[10] R. U. Verma, "A-monotone nonlinear relaxed cocoercive variational inclusions."
[11] R. U. Verma, "A generalization to variational convergence for operators."
[12] R. U. Verma, "Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A,η)-monotone mappings."
[13] E. Zeidler.
[14] E. Zeidler.
[15] E. Zeidler.