Let H be a real Hilbert space and C⊂H a closed convex subset. Let T:C→C be a nonexpansive mapping with a nonempty set of fixed points Fix(T). Kim and Xu (2005) introduced the modified Mann iteration x0=x∈C, yn=αnxn+(1−αn)Txn, xn+1=βnu+(1−βn)yn, where u∈C is an arbitrary (but fixed) element and {αn} and {βn} are two sequences in (0,1). In the case where 0∈C, the minimum-norm fixed point of T can be obtained by taking u=0. But in the case where 0∉C, this iteration process becomes invalid because xn may not belong to C. To overcome this weakness, we introduce a new modified Mann iteration by a boundary point method (see Section 3 for details) for finding the minimum-norm fixed point of T and prove its strong convergence under some assumptions. Since our algorithm does not involve the computation of the metric projection PC, which is often used to guarantee strong convergence, it is easily implementable. Our results improve and extend the results of Kim, Xu, and some others.
1. Introduction
Let C be a subset of a real Hilbert space H whose inner product and induced norm are denoted by 〈·,·〉 and ∥·∥, respectively. A mapping T:C→C is called nonexpansive if ∥Tx-Ty∥≤∥x-y∥ for all x,y∈C. A point x∈C is called a fixed point of T if Tx=x. Denote by Fix(T)={x∈C|Tx=x} the set of fixed points of T. Throughout this paper, Fix(T) is always assumed to be nonempty.
The iterative approximation of fixed points of nonexpansive mappings has been extensively investigated by many authors (see [1–12] and the references therein). A classical iterative scheme was introduced by Mann [13] and is defined as follows. Take an initial guess x0∈C arbitrarily and define {xn} recursively by
(1)xn+1=αnxn+(1-αn)Txn,n≥0,
where {αn} is a sequence in the interval [0,1]. It is well known that under certain conditions the sequence {xn} generated by (1) converges weakly to a fixed point of T, and the Mann iteration may fail to converge strongly even in the setting of infinite-dimensional Hilbert spaces [14].
Some attempts to modify the Mann iteration method (1) so that strong convergence is guaranteed have been made. Nakajo and Takahashi [1] proposed the following modification of the Mann iteration method (1):
(2)x0=x∈C,yn=αnxn+(1-αn)Txn,Cn={z∈C:∥yn-z∥≤∥xn-z∥},Qn={z∈C:〈xn-z,x0-xn〉≥0},xn+1=PCn∩Qn(x0),
where PK denotes the metric projection from H onto a closed convex subset K of H. They proved that if the sequence {αn} is bounded away from one, then {xn} defined by (2) converges strongly to PFix(T)(x0). However, at each iteration step an additional projection must be computed, which is not easy in general. To overcome this weakness, Kim and Xu [15] proposed a simpler modification of Mann's iteration scheme, which generates the iteration sequence {xn} via the following formula:
(3)x0=x∈C,yn=αnxn+(1-αn)Txn,xn+1=βnu+(1-βn)yn,
where u∈C is an arbitrary (but fixed) element in C, and {αn} and {βn} are two sequences in (0,1). In the setting of Banach spaces, Kim and Xu proved that the sequence {xn} generated by (3) converges strongly to the fixed point PFix(T)u of T under certain appropriate assumptions on the sequences {αn} and {βn}.
In many practical problems, such as optimization problems, finding the minimum norm fixed point of nonexpansive mappings is quite important. In the case where 0∈C, taking u=0 in (3), the sequence {xn} generated by (3) converges strongly to the minimum norm fixed point of T [15]. But, in the case where 0∉C, the iteration scheme (3) becomes invalid because xn may not belong to C.
To overcome this weakness, a natural way to modify algorithm (3) is to adopt the metric projection PC so that the iteration sequence remains in C; that is, one may consider the following scheme:
(4)x0=x∈C,yn=αnxn+(1-αn)Txn,xn+1=PC(βnu+(1-βn)yn).
However, since the computation of a projection onto a closed convex subset is generally difficult, algorithm (4) may not be a good choice.
The main purpose of this paper is to introduce a new modified Mann iteration for finding the minimum norm fixed point of T, which not only converges strongly under some assumptions but also involves no projection operators at all. At each iteration step, a point in ∂C (the boundary of C) is determined in a particular way, so our modification method is called the boundary point method (see Section 3 for details). Moreover, since our algorithm does not involve the computation of the metric projection, it is very easily implementable.
The rest of this paper is organized as follows. Some useful lemmas are listed in the next section. In the last section, we first define a function on C that is crucial for constructing our algorithm; then the algorithm is introduced and the strong convergence theorem is proved.
2. Preliminaries
Throughout this paper, we adopt the notations listed as follows:
xn→x:{xn} converges strongly to x;
xn⇀x:{xn} converges weakly to x;
ωw(xn) denotes the set of cluster points of {xn} (i.e., ωw(xn)={x:∃{xnk}⊂{xn} such that xnk⇀x});
∂C denotes the boundary of C.
We need some lemmas and facts listed as follows.
Lemma 1 (see [16]).
Let K be a closed convex subset of a real Hilbert space H and let PK be the (metric, or nearest-point) projection from H onto K (i.e., for x∈H, PKx is the unique point in K such that ∥x-PKx∥=inf{∥x-z∥:z∈K}). Given x∈H and z∈K, then z=PKx if and only if the following relation holds:
(5)〈x-z,y-z〉≤0,∀y∈K.
Since Fix(T) is a closed convex subset of the real Hilbert space H, the metric projection PFix(T) is well defined and thus there exists a unique element x† in Fix(T) such that ∥x†∥=infx∈Fix(T)∥x∥; that is, x†=PFix(T)0. x† is called the minimum norm fixed point of T.
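The variational characterization (5) can be checked numerically. The following sketch (an illustration, not from the paper) takes K to be a closed ball in ℝ², for which the projection has a simple closed form, computes the minimum-norm point z=PK(0), and verifies 〈x-z,y-z〉≤0 for a few sample points y∈K.

```python
# Numerical check of Lemma 1 (illustrative sketch): K is the closed ball
# B(c, r) in R^2, whose projection has the closed form
# P_K(x) = c + r*(x - c)/|x - c| for x outside K; the characterization
# <x - z, y - z> <= 0 must then hold for every y in K.

import math

def proj_ball(x, c, r):
    """Metric projection of x onto the closed ball B(c, r)."""
    d = math.hypot(x[0] - c[0], x[1] - c[1])
    if d <= r:
        return x
    t = r / d
    return (c[0] + t * (x[0] - c[0]), c[1] + t * (x[1] - c[1]))

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

c, r = (3.0, 3.0), 1.0
x = (0.0, 0.0)
z = proj_ball(x, c, r)           # the minimum-norm point of B(c, r)

# test the variational inequality <x - z, y - z> <= 0 on sample y in K
samples = [(3.0, 3.0), (2.5, 3.0), (3.0, 2.2), (3.7, 3.7), (2.3, 3.0)]
assert all(inner((x[0] - z[0], x[1] - z[1]),
                 (y[0] - z[0], y[1] - z[1])) <= 1e-12 for y in samples)
```

Here ∥z∥=∥c∥-r, as expected for the minimum-norm point of a ball not containing the origin.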
Lemma 2 (see [17]).
Let H be a real Hilbert space. Then the following well-known results hold:
∥x-y∥2=∥x∥2-2〈x,y〉+∥y∥2 for all x,y∈H;
∥x+y∥2≤∥x∥2+2〈y,x+y〉 for all x,y∈H.
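Both statements of Lemma 2 can be sanity-checked in a finite-dimensional model space; the vectors below are arbitrary illustrations.

```python
# Quick numerical check of the two Hilbert-space facts of Lemma 2 in R^3.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm2(u):
    return dot(u, u)

x = (1.0, -2.0, 0.5)
y = (0.3, 4.0, -1.0)

xmy = tuple(a - b for a, b in zip(x, y))
xpy = tuple(a + b for a, b in zip(x, y))

# (i) ||x - y||^2 = ||x||^2 - 2<x, y> + ||y||^2  (an exact identity)
assert abs(norm2(xmy) - (norm2(x) - 2 * dot(x, y) + norm2(y))) < 1e-12

# (ii) ||x + y||^2 <= ||x||^2 + 2<y, x + y>  (the subdifferential inequality;
# the slack is exactly ||y||^2, so it always holds)
assert norm2(xpy) <= norm2(x) + 2 * dot(y, xpy) + 1e-12
```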
We will give a definition in order to introduce the next lemma. A set C⊂H is weakly closed if for any sequence {xn}⊂C such that xn⇀x, there holds x∈C.
Lemma 3 (see [18, 19]).
If C⊂H is convex, then C is weakly closed if and only if C is closed.
Assume C⊂H is weakly closed; a function f:C→ℝ1 is called weakly lower semi-continuous at x0∈C if for any sequence {xn}⊂C such that xn⇀x0, there holds f(x0)≤liminfn→∞f(xn). Generally, f is called weakly lower semi-continuous over C if it is weakly lower semi-continuous at each point of C.
Lemma 4 (see [18, 19]).
Let C be a subset of a real Hilbert space H and let f:C→ℝ1 be a real function; then f is weakly lower semi-continuous over C if and only if the set {x∈C|f(x)≤a} is weakly closed subset of H, for any a∈ℝ1.
Lemma 5 (see [20]).
Let C be a closed convex subset of a real Hilbert space H and let T:C→C be a nonexpansive mapping such that Fix(T)≠∅. If a sequence {xn} in C is such that xn⇀z and ∥xn-Txn∥→0, then z=Tz.
The following is a sufficient condition for a real sequence to converge to zero.
Lemma 6 (see [21, 22]).
Let {αn} be a nonnegative real sequence satisfying
(6)αn+1≤(1-γn)αn+γnδn+σn,n=0,1,2….
If {γn}n=1∞⊂(0,1), {δn}n=1∞ and {σn}n=1∞ satisfy the conditions:
∑n=1∞γn=∞;
either limsupn→∞δn≤0 or ∑n=1∞|γnδn|<∞;
∑n=1∞|σn|<∞;
then limn→∞αn=0.
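Lemma 6 can be illustrated numerically with a concrete (made-up) choice of parameter sequences satisfying its hypotheses: γn=δn=1/(n+2)∈(0,1) with ∑γn=∞, δn→0, and σn=1/(n+2)² summable, so αn must tend to zero.

```python
# Numerical illustration of Lemma 6 (a sketch with illustrative sequences):
# gamma_n = delta_n = 1/(n+2), sigma_n = 1/(n+2)^2 satisfy the hypotheses,
# hence alpha_n -> 0 regardless of the starting value.

def run(alpha0, steps):
    a = alpha0
    for n in range(steps):
        g = 1.0 / (n + 2)          # gamma_n in (0, 1), sum diverges
        d = 1.0 / (n + 2)          # delta_n -> 0, so limsup delta_n <= 0
        s = 1.0 / (n + 2) ** 2     # sum |sigma_n| < infinity
        a = (1 - g) * a + g * d + s
    return a

alpha_final = run(10.0, 100_000)   # decays roughly like (log n)/n here
```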
3. Iterative Algorithm
Let C be a closed convex subset of a real Hilbert space H. In order to give our main results, we first introduce a function h:C→[0,1] by the following definition:
(7)h(x)=inf{λ∈[0,1]∣λx∈C},∀x∈C.
Since C is closed and convex, h is well defined. Obviously, h(x)=0 for all x∈C in the case where 0∈C. In the case where 0∉C, it is also easy to see that h(x)x∈∂C and h(x)>0 for every x∈C (if h(x)=0, the closedness of C would yield 0∈C, a contradiction).
An important property of h(x) is given as follows.
Lemma 7.
h(x) is weakly lower semi-continuous over C.
Proof.
If 0∈C, then h(x)=0 for all x∈C and the conclusion is clear. For the case 0∉C, using Lemma 4, in order to show that h(x) is weakly lower semi-continuous, it suffices to verify that
(8)Ca-={x∈C∣h(x)≤a}
is a weakly closed subset of H for every a∈ℝ1; that is, if {xn}⊂Ca- such that xn⇀x, then x∈Ca- (i.e., h(x)≤a). Without loss of generality, we assume that 0<a<1 (otherwise Ca-=C for a≥1 and Ca-=∅ for a≤0, and the conclusion holds trivially). Since each xn∈Ca- satisfies h(xn)≤a and C is convex, the definition of h gives λxn∈C for each λ∈[a,1] and all n≥1. Clearly, λxn⇀λx. Using Lemma 3, λx∈C. This implies that
(9)[a,1]⊂{λ∈(0,1]∣λx∈C}.
Consequently,
(10)h(x)=inf{λ∈(0,1]∣λx∈C}≤a
and this completes the proof.
Since the function h(x) will be important for constructing the algorithm of this paper below, it is necessary to explain how to calculate h(x) for any given x∈C in actual computing programs. In fact, in practical problems, C is often a level set of a convex function c; that is, C is of the form C={x∈H|c(x)≤r}, where r is a real constant. Without loss of generality, we assume that
(11)C={x∈H∣c(x)≤0}
and 0∉C. Then it is easy to see that, for a given x∈C, we have
(12)h(x)=inf{λ∈(0,1]∣c(λx)=0}.
Thus, in order to obtain the value h(x), we only need to solve an algebraic equation in a single variable λ, which can be done easily by many methods, for example, the bisection method on the interval [0,1]. In general, solving such an equation is much easier than calculating the metric projection PC. To illustrate this viewpoint, we give the following simple example.
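The bisection computation of h can be sketched as follows. The names c and h follow the paper; the concrete half-plane used to exercise the routine is a made-up illustration for which the exact value of h(x) is known in closed form.

```python
# A minimal sketch of computing h(x) by bisection when C is the level set
# C = {x : c(x) <= 0} with 0 not in C (so c(0) > 0).

def h(x, c, tol=1e-10):
    """Smallest lam in (0, 1] with c(lam * x) <= 0, found by bisection.

    Assumes x in C (so c(x) <= 0) and c(0) > 0; convexity of c makes
    {lam : c(lam * x) <= 0} an interval, so bisection converges to its
    left endpoint, which is h(x).
    """
    lo, hi = 0.0, 1.0               # invariant: c(lo*x) > 0, c(hi*x) <= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if c(tuple(mid * xi for xi in x)) <= 0:
            hi = mid
        else:
            lo = mid
    return hi

# Illustration: C = {x in R^2 : 2 - x1 - x2 <= 0}, a half-plane with 0 not in C.
c_fun = lambda x: 2.0 - x[0] - x[1]
x = (4.0, 1.0)                      # x in C since c_fun(x) = -3 <= 0
lam = h(x, c_fun)                   # exact value is 2/(x1 + x2) = 0.4
```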
Example 8.
Let A:H→H be a strongly positive linear bounded operator with coefficient r; that is, there is a constant r>0 with the property 〈Ax,x〉≥r∥x∥2, for all x∈H. Define a convex function φ:H→ℝ1 by
(13)φ(x)=〈Ax,x〉-3〈x,u〉+〈Ax*,x*〉,∀x∈H,
where u≠0 is a given point in H and x* is the unique solution of the equation Ax=u. (Notice that A is invertible.) Setting C={x∈H:φ(x)≤0}, it is easy to show that C is a nonempty closed convex subset of H such that 0∉C. (Note that φ(x*)=-〈Ax*,x*〉<0 and φ(0)=〈Ax*,x*〉>0.) For a given x∈C, we have φ(x)≤0. In order to get h(x), let φ(λx)=0, where λ∈(0,1] is an unknown number. Thus we obtain the algebraic equation
(14)〈Ax,x〉λ2-3〈x,u〉λ+〈Ax*,x*〉=0.
Consequently, we have
(15)λ=(3〈x,u〉-√(9〈x,u〉2-4〈Ax,x〉〈Ax*,x*〉))/(2〈Ax,x〉)=2〈Ax*,x*〉/(3〈x,u〉+√(9〈x,u〉2-4〈Ax,x〉〈Ax*,x*〉)),
that is,
(16)h(x)=2〈Ax*,x*〉/(3〈x,u〉+√(9〈x,u〉2-4〈Ax,x〉〈Ax*,x*〉)).
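Example 8 can be verified in a finite-dimensional model: the sketch below (with an arbitrarily chosen symmetric positive definite 2×2 matrix A and vector u, both made-up illustrations) evaluates the closed form (16) and checks that φ vanishes at λx.

```python
# Finite-dimensional check of Example 8: A a symmetric positive definite
# 2x2 matrix (hence strongly positive), u a fixed vector, x* = A^{-1} u.
# The closed-form h(x) of (16) must satisfy phi(lam * x) = 0.

import math

A = ((2.0, 0.5), (0.5, 1.0))               # symmetric positive definite
u = (1.0, 2.0)

def mat_vec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

# solve A x* = u by 2x2 Cramer's rule
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
x_star = ((u[0]*A[1][1] - A[0][1]*u[1]) / det,
          (A[0][0]*u[1] - u[0]*A[1][0]) / det)
k = dot(mat_vec(A, x_star), x_star)        # <A x*, x*> > 0

def phi(x):
    return dot(mat_vec(A, x), x) - 3.0 * dot(x, u) + k

# pick some x with phi(x) <= 0, e.g. x = x* (phi(x*) = -k < 0)
x = x_star
a, b = dot(mat_vec(A, x), x), 3.0 * dot(x, u)
lam = 2.0 * k / (b + math.sqrt(b*b - 4.0*a*k))   # formula (16)
```

With x=x* the two quadratic coefficients collapse to a=k and b=3k, so λ=(3-√5)/2≈0.382, which indeed lies in (0,1].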
Now we give a new modified Mann iteration by boundary point method.
Algorithm 9.
Define {xn} in the following way:
(17)x0=x∈C,yn=αnxn+(1-αn)Txn,xn+1=αnλnxn+(1-αn)yn,
where {αn}⊂(0,1), λ-1=0, and λn=max{λn-1,h(xn)}, n=0,1,2,….
Since C is closed and convex, the definition of h implies that, for any given x∈C, βx∈C holds for every β∈[h(x),1]; hence {xn}⊂C is guaranteed, where {xn} is generated by Algorithm 9. Obviously, λn=0 for all n≥0 if 0∈C. If 0∉C, calculating the value h(xn) amounts to determining h(xn)xn, a boundary point of C, and thus our algorithm is called the boundary point method.
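Algorithm 9 can be exercised end to end in ℝ². In the sketch below, every concrete choice (the half-plane C, the ball K, the step sizes αn) is a made-up illustration: C={x:x1+x2≥2} excludes the origin, T=PK is the projection onto a closed ball K⊂C (nonexpansive, with Fix(T)=K), so the minimum norm fixed point is x†=PK(0). The point of the run is that the iterates never leave C even though no projection onto C is ever computed.

```python
# R^2 sketch of Algorithm 9 (all concrete choices are illustrative):
# C = {x : x1 + x2 >= 2} (so 0 is outside C), T = P_K with
# K = B((3, 3), 1) contained in C; then Fix(T) = K and the minimum norm
# fixed point is x_dag = P_K(0).

import math

def proj_K(x, c=(3.0, 3.0), r=1.0):
    d = math.hypot(x[0] - c[0], x[1] - c[1])
    if d <= r:
        return x
    t = r / d
    return (c[0] + t * (x[0] - c[0]), c[1] + t * (x[1] - c[1]))

def h(x):
    # for C = {x : x1 + x2 >= 2}: lam * x in C iff lam >= 2/(x1 + x2)
    return 2.0 / (x[0] + x[1])

x = (4.0, 1.0)                       # x0 in C
lam = 0.0                            # lambda_{-1} = 0
for n in range(20000):
    alpha = 1.0 / (n + 2)            # alpha_n in (0,1); here lambda_n stays
                                     # below ~0.6, so (D1)-(D3) hold
    lam = max(lam, h(x))             # lambda_n = max{lambda_{n-1}, h(x_n)}
    Tx = proj_K(x)
    y = (alpha * x[0] + (1 - alpha) * Tx[0],
         alpha * x[1] + (1 - alpha) * Tx[1])
    x = (alpha * lam * x[0] + (1 - alpha) * y[0],
         alpha * lam * x[1] + (1 - alpha) * y[1])
    assert x[0] + x[1] >= 2.0 - 1e-9     # iterates never leave C

s = math.hypot(3.0, 3.0)
x_dag = (3.0 * (1 - 1.0 / s), 3.0 * (1 - 1.0 / s))   # P_K(0)
```

Note that xn+1 is a convex combination of λnxn∈C and yn∈C, which is exactly why no projection onto C is needed.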
Theorem 10.
Assume that {αn} and {λn} satisfy the following conditions:
(D1) αn/(1-λn)→0;
(D2) ∑n=1∞αn(1-λn)=∞;
(D3) ∑n=1∞|αn-αn-1|<∞.
Then {xn} generated by (17) converges strongly to x†=PFix(T)0.
Proof.
We first show that {xn} is bounded. Taking p∈Fix(T) arbitrarily, we have
(18)∥xn+1-p∥=∥αnλn(xn-p)+αn(1-αn)(xn-p)+(1-αn)2(Txn-p)-αn(1-λn)p∥≤(αnλn+αn-αn2)∥xn-p∥+(1-αn)2∥Txn-p∥+αn(1-λn)∥p∥≤[1-αn(1-λn)]∥xn-p∥+αn(1-λn)∥p∥≤max{∥xn-p∥,∥p∥}.
By induction,
(19)∥xn-p∥≤max{∥x0-p∥,∥p∥},n≥0.
Thus, {xn} is bounded and so are {Txn} and {yn}. As a result, since condition (D1) implies αn→0, we obtain
(20)∥xn+1-yn∥=∥αnλnxn+(1-αn)yn-yn∥≤αn(λn∥xn∥+∥yn∥)→0,∥yn-Txn∥=∥αnxn+(1-αn)Txn-Txn∥≤αn(∥xn∥+∥Txn∥)→0.
We next show that
(21)∥xn-Txn∥→0.
It suffices to show that
(22)∥xn+1-xn∥→0.
Using (17), it follows from direct calculating that
(23)∥xn+1-xn∥=∥αnλnxn+(1-αn)yn-[αn-1λn-1xn-1+(1-αn-1)yn-1]∥=∥(1-αn)(yn-yn-1)-(αn-αn-1)yn-1+αnλn(xn-xn-1)+λn-1(αn-αn-1)xn-1+αn(λn-λn-1)xn-1∥≤(1-αn)∥yn-yn-1∥+αnλn∥xn-xn-1∥+|αn-αn-1|(∥yn-1∥+λn-1∥xn-1∥)+αn|λn-λn-1|∥xn-1∥,
(24)∥yn-yn-1∥=∥αnxn+(1-αn)Txn-[αn-1xn-1+(1-αn-1)Txn-1]∥=∥(1-αn)(Txn-Txn-1)-(αn-αn-1)Txn-1+αn(xn-xn-1)+(αn-αn-1)xn-1∥≤(1-αn)∥xn-xn-1∥+|αn-αn-1|∥Txn-1∥+αn∥xn-xn-1∥+|αn-αn-1|∥xn-1∥=∥xn-xn-1∥+|αn-αn-1|(∥Txn-1∥+∥xn-1∥).
Substituting (24) into (23), we obtain
(25)∥xn+1-xn∥≤(1-αn(1-λn))∥xn-xn-1∥+|αn-αn-1|×{∥yn-1∥+λn-1∥xn-1∥+(1-αn)×(∥Txn-1∥+∥xn-1∥)}+αn|λn-λn-1|∥xn-1∥.
Note that ∑n=1∞|λn-λn-1|<∞ (since {λn}⊂[0,1] is monotone increasing); together with conditions (D1)–(D3), Lemma 6 yields ∥xn+1-xn∥→0. Combining this with (20), we obtain
(26)∥xn-Txn∥≤∥xn+1-xn∥+∥xn+1-yn∥+∥yn-Txn∥→0.
Using Lemma 5, we deduce that ωw(xn)⊂Fix(T).
Then we show that
(27)limsupn→∞〈-x†,xn+1-x†〉≤0.
Indeed, take a subsequence {xnk} of {xn} such that
(28)limsupn→∞〈-x†,xn-x†〉=limk→∞〈-x†,xnk-x†〉.
Without loss of generality, we may assume that xnk⇀x-. Noticing x†=PFix(T)0, we obtain from x-∈ωw(xn)⊂Fix(T) and Lemma 1 that
(29)limsupn→∞〈-x†,xn-x†〉=limk→∞〈-x†,xnk-x†〉=〈-x†,x--x†〉≤0. Since ∥xn+1-xn∥→0, (27) follows.
Finally, we show that ∥xn-x†∥→0. Using Lemma 2 and (17), it is easy to verify that
(30)∥xn+1-x†∥2=∥αnλnxn+(1-αn)yn-x†∥2=∥αn(λnxn-x†)+(1-αn)(yn-x†)∥2≤(1-αn)2∥yn-x†∥2+2αn〈λnxn-x†,xn+1-x†〉≤(1-αn)2∥αnxn+(1-αn)Txn-x†∥2+2αnλn〈xn-x†,xn+1-x†〉+2αn(1-λn)〈-x†,xn+1-x†〉≤(1-αn)2∥xn-x†∥2+2αnλn∥xn-x†∥×∥xn+1-x†∥+2αn(1-λn)〈-x†,xn+1-x†〉.
Hence,
(31)∥xn+1-x†∥2≤(1-γn)∥xn-x†∥2+γnσn,
where
(32)γn=αn[2(1-λn)-αn]/(1-αnλn),σn=[2(1-λn)/(2(1-λn)-αn)]〈-x†,xn+1-x†〉.
It is not hard to prove that γn→0 and ∑n=0∞γn=∞ by conditions (D1) and (D2), and that limsupn→∞σn≤0 by (29). By Lemma 6, we conclude that xn→x†, and the proof is finished.
Finally, we point out that a more general algorithm can be given for computing the fixed point PFix(T)u for any given u∈H. In fact, it suffices to modify the definition of the function h as follows:
(33)h(x)=inf{λ∈[0,1]∣(1-λ)u+λx∈C},∀x∈C.
Algorithm 11.
Define {xn} in the following way:
(34)x0=x∈C,yn=αnxn+(1-αn)Txn,xn+1=αn((1-λn)u+λnxn)+(1-αn)yn,
where {αn}⊂(0,1), λ-1=0, and λn=max{λn-1,h(xn)} (n=0,1,2,…), with h defined by (33).
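The generalized function h of (33) is computed the same way as before, now bisecting along the segment from the anchor u to x. The concrete one-dimensional set C and anchor u below are made-up illustrations for which h has the closed form 1/(x-1).

```python
# Sketch of the generalized boundary-point function h of (33): for
# C = {x in R : x >= 2} and anchor u = 1 (note u not in C), the set
# {lam : (1 - lam)*u + lam*x in C} gives h(x) = 1/(x - 1) in closed form,
# which the bisection below reproduces.

def h_general(x, u, in_C, tol=1e-10):
    """Smallest lam in [0, 1] with (1 - lam) * u + lam * x in C."""
    lo, hi = 0.0, 1.0        # (1-lo)*u + lo*x = u is not in C; x is in C
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_C((1 - mid) * u + mid * x):
            hi = mid
        else:
            lo = mid
    return hi

in_C = lambda t: t >= 2.0
val = h_general(4.0, 1.0, in_C)   # exact value is 1/(4 - 1) = 1/3
```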
By an argument similar to the proof of Theorem 10, it is easy to obtain the result below.
Theorem 12.
Assume that u∉C and that {αn} and {λn} satisfy the same conditions as in Theorem 10; then {xn} generated by (34) converges strongly to x*=PFix(T)u.
Acknowledgments
This work was supported in part by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.
References
[1] K. Nakajo and W. Takahashi, "Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups," Journal of Mathematical Analysis and Applications, vol. 279, no. 2, pp. 372–379, 2003.
[2] J. S. Jung, "Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces," Journal of Mathematical Analysis and Applications, vol. 302, no. 2, pp. 509–520, 2005.
[3] H. H. Bauschke, "The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space," Journal of Mathematical Analysis and Applications, vol. 202, no. 1, pp. 150–159, 1996.
[4] S. S. Chang, "Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces," Journal of Mathematical Analysis and Applications, vol. 323, no. 2, pp. 1402–1416, 2006.
[5] F. E. Browder and W. V. Petryshyn, "Construction of fixed points of nonlinear mappings in Hilbert space," Journal of Mathematical Analysis and Applications, vol. 20, pp. 197–228, 1967.
[6] G. Marino and H. K. Xu, "Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces," Journal of Mathematical Analysis and Applications, vol. 329, no. 1, pp. 336–346, 2007.
[7] G. Marino and H. K. Xu, "Convergence of generalized proximal point algorithms," Communications on Pure and Applied Analysis, vol. 3, no. 4, pp. 791–808, 2004.
[8] A. Moudafi, "Viscosity approximation methods for fixed-points problems," Journal of Mathematical Analysis and Applications, vol. 241, no. 1, pp. 46–55, 2000.
[9] B. Halpern, "Fixed points of nonexpanding maps," Bulletin of the American Mathematical Society, vol. 73, pp. 957–961, 1967.
[10] S. Ishikawa, "Fixed points by a new iteration method," Proceedings of the American Mathematical Society, vol. 44, pp. 147–150, 1974.
[11] S. Reich, "Weak convergence theorems for nonexpansive mappings in Banach spaces," Journal of Mathematical Analysis and Applications, vol. 67, no. 2, pp. 274–276, 1979.
[12] N. Shioji and W. Takahashi, "Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces," Proceedings of the American Mathematical Society, vol. 125, no. 12, pp. 3641–3645, 1997.
[13] W. R. Mann, "Mean value methods in iteration," Proceedings of the American Mathematical Society, vol. 4, pp. 506–510, 1953.
[14] A. Genel and J. Lindenstrauss, "An example concerning fixed points," Israel Journal of Mathematics, vol. 22, no. 1, pp. 81–86, 1975.
[15] T. H. Kim and H. K. Xu, "Strong convergence of modified Mann iterations," Nonlinear Analysis, vol. 61, no. 1-2, pp. 51–60, 2005.
[16] C. Martinez-Yanes and H. K. Xu, "Strong convergence of the CQ method for fixed point iteration processes," Nonlinear Analysis, vol. 64, no. 11, pp. 2400–2411, 2006.
[17] M. Li and Y. Yao, "Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces," vol. 181, pp. 219–228, 2010.
[18] B. Beauzamy, Introduction to Banach Spaces and Their Geometry, vol. 68 of North-Holland Mathematics Studies, North-Holland, Amsterdam, The Netherlands, 1982.
[19] J. Diestel, Geometry of Banach Spaces – Selected Topics, vol. 485 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1975.
[20] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.
[21] F. Wang and H. K. Xu, "Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem," Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 13 pages, 2010.
[22] L. S. Liu, "Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces," Journal of Mathematical Analysis and Applications, vol. 194, no. 1, pp. 114–125, 1995.