The projected-gradient method is a powerful tool for solving constrained convex optimization problems and has been extensively studied. In the present paper, a projected-gradient method is presented for solving a constrained minimization problem, and a strong convergence analysis of the suggested method is given.

1. Introduction

In the present paper, our main purpose is to solve the following minimization problem:
min_{x∈C} f(x), (1.1)
where C is a nonempty closed convex subset of a real Hilbert space H and f:H→R is a real-valued convex function.

It is well known that the projected-gradient method is a powerful tool for solving the above minimization problem and has been extensively studied; see, for instance, [1–8]. The classic projected-gradient method takes the form
xn+1 = PC(xn - γ∇f(xn)), n≥0, (1.2)
where γ>0 is a constant, PC is the nearest point projection from H onto C, and ∇f denotes the gradient of f.
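For illustration, iteration (1.2) can be sketched numerically in a finite-dimensional setting. The concrete choices below (H = R^2, C the box [0,1]^2, a particular quadratic f, and the function names) are illustrative assumptions, not part of the analysis:

```python
# Minimal sketch of the classic projected-gradient iteration (1.2)
# on H = R^2 with C = [0,1]^2 (so P_C is a coordinatewise clip) and
# the illustrative quadratic f(x) = x1^2 + 0.5*x2^2 - 2*x1 - 3*x2,
# whose gradient is L-Lipschitz with L = 2; we take gamma = 0.5 in (0, 2/L).

def project_box(x, lo=0.0, hi=1.0):
    """Nearest-point projection onto the box [lo, hi]^2."""
    return [min(max(xi, lo), hi) for xi in x]

def grad_f(x):
    """Gradient of f(x) = x1^2 + 0.5*x2^2 - 2*x1 - 3*x2."""
    return [2.0 * x[0] - 2.0, x[1] - 3.0]

def projected_gradient(x0, gamma=0.5, iters=100):
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x)
        x = project_box([x[i] - gamma * g[i] for i in range(2)])
    return x

# over the box, the minimizer is (1, 1)
x_star = projected_gradient([0.0, 0.0])
```

In finite dimensions the iterates converge to a minimizer; the weak-versus-strong convergence issue discussed next arises only in infinite-dimensional H.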

It is known [1] that if f has a Lipschitz continuous and strongly monotone gradient, then the sequence {xn} generated by (1.2) converges strongly to a minimizer of f in C. If the gradient of f is only assumed to be Lipschitz continuous, then {xn} in general converges only weakly when H is infinite dimensional. An interesting problem is how to modify the projected-gradient algorithm so as to obtain strong convergence. For this purpose, Xu [9] recently introduced the following algorithm:
xn+1 = θn h(xn) + (1-θn)PC(xn - γn∇f(xn)), n≥0. (1.3)
Under some additional assumptions, Xu [9] proved that the sequence {xn} converges strongly to a minimizer of (1.1). At the same time, Xu [9] also suggested a regularized method:
xn+1 = PC(I - γn(∇f + αn I))xn, n≥0. (1.4)
Subsequently, Yao et al. [10] proved the strong convergence of the regularized method (1.4) under weaker conditions.
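The regularized iteration (1.4) can likewise be sketched numerically. The concrete choices below (C the closed unit ball in R^2, f(x) = 0.5‖x-p‖^2, and the particular sequences αn, γn) are illustrative assumptions only; the precise parameter conditions are those of Xu [9] and Yao et al. [10]:

```python
import math

# Sketch of the regularized method (1.4):
#   x_{n+1} = P_C((I - g_n*(grad f + a_n*I)) x_n),
# with illustrative choices: C the closed unit ball in R^2,
# f(x) = 0.5*||x - p||^2 (so grad f(x) = x - p and L = 1),
# a_n = 1/(n+1), g_n = 0.5.  For ||p|| > 1 the minimizer of f
# over C is p/||p||.

def project_ball(x):
    """Nearest-point projection onto the closed unit ball."""
    r = math.hypot(x[0], x[1])
    return [xi / r for xi in x] if r > 1.0 else list(x)

p = [2.0, 0.0]                      # minimizer over the ball: (1, 0)
x = [0.0, 0.5]
for n in range(500):
    a_n, g_n = 1.0 / (n + 1), 0.5
    # gradient of the regularized objective f(x) + (a_n/2)*||x||^2
    g = [x[i] - p[i] + a_n * x[i] for i in range(2)]
    x = project_ball([x[i] - g_n * g[i] for i in range(2)])
```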

Motivated by the above works, in this paper we construct a new projected-gradient method for solving the minimization problem (1.1). It should be pointed out that our method also converges strongly under mild conditions.

2. Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. A bounded linear operator B on H is said to be strongly positive if there exists a constant α>0 such that
〈Bx,x〉 ≥ α‖x‖², ∀x∈H. (2.1)
A mapping T:C→C is called nonexpansive if
‖Tx-Ty‖ ≤ ‖x-y‖, ∀x,y∈C. (2.2)
A mapping T:C→C is said to be averaged if it can be written as an average of the identity I and a nonexpansive mapping; that is,
T = (1-α)I + αR, (2.3)
where α∈(0,1) is a constant and R:C→C is a nonexpansive mapping. In this case, we say that T is α-averaged.

A mapping T:C→C is said to be ν-inverse strongly monotone (ν-ism) if
〈x-y, Tx-Ty〉 ≥ ν‖Tx-Ty‖², ∀x,y∈C. (2.4)
The following well-known proposition will be useful in the next section.

Proposition 2.1 (See [<xref ref-type="bibr" rid="B9">9</xref>]).

(1) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {Ti}i=1N is averaged, then so is the composite T1⋯TN. In particular, if T1 is α1-averaged and T2 is α2-averaged, where α1,α2∈(0,1), then the composite T1T2 is α-averaged, where α = α1 + α2 - α1α2.

(2) If T is ν-ism, then for any γ>0, γT is (ν/γ)-ism.
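The ν-ism inequality (2.4) can be spot-checked numerically for a concrete mapping. The convex quadratic below is an illustrative assumption; its gradient is 2-Lipschitz and hence, by the Baillon–Haddad theorem, (1/2)-ism:

```python
# Spot-check of the nu-ism inequality (2.4) for the gradient of the
# illustrative convex quadratic f(x) = x1^2 + 0.5*x2^2, whose gradient
# T(x) = (2*x1, x2) is L-Lipschitz with L = 2 and hence (1/L)-ism (nu = 0.5).

def T(x):
    return [2.0 * x[0], x[1]]

def check_ism(x, y, nu=0.5):
    dT = [T(x)[i] - T(y)[i] for i in range(2)]
    dx = [x[i] - y[i] for i in range(2)]
    inner = sum(dx[i] * dT[i] for i in range(2))
    # <x - y, Tx - Ty> >= nu * ||Tx - Ty||^2
    return inner >= nu * sum(t * t for t in dT)

samples = [([1.0, 2.0], [-3.0, 0.5]),
           ([0.0, 0.0], [4.0, -4.0]),
           ([2.5, -1.0], [2.5, 3.0])]
ok = all(check_ism(x, y) for x, y in samples)
```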

Recall that the (nearest point or metric) projection from H onto C, denoted by PC, assigns, to each x∈H, the unique point PC(x)∈C with the property ‖x-PC(x)‖=inf{‖x-y‖:y∈C}.
We use S to denote the solution set of (1.1). Assume that (1.1) is consistent, that is, S≠∅. If f is Fréchet differentiable, then x*∈C solves (1.1) if and only if x* satisfies the following optimality condition:
〈∇f(x*), x-x*〉 ≥ 0, ∀x∈C, (2.6)
where ∇f denotes the gradient of f. Observe that (2.6) can be rewritten as the following variational inequality (VI):
〈x* - (x* - ∇f(x*)), x-x*〉 ≥ 0, ∀x∈C. (2.7)
(Note that VIs have been extensively studied in the literature; see, for instance, [11–25].) This shows that the minimization problem (1.1) is equivalent to the fixed point problem
PC(x* - γ∇f(x*)) = x*, (2.8)
where γ>0 is a constant. This relationship is very important for constructing our method.
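The fixed-point characterization (2.8) can be verified numerically on a concrete instance. The box C and quadratic f below are illustrative assumptions:

```python
# Numerical check of the fixed-point characterization (2.8): with
# C = [0,1]^2 and the illustrative f(x) = x1^2 + 0.5*x2^2 - 2*x1 - 3*x2,
# the minimizer of f over C is x* = (1, 1).  For any gamma > 0 it is a
# fixed point of x -> P_C(x - gamma*grad f(x)), while e.g. (0.5, 0.5) is not.

def project_box(x):
    return [min(max(xi, 0.0), 1.0) for xi in x]

def grad_f(x):
    return [2.0 * x[0] - 2.0, x[1] - 3.0]

def pg_map(x, gamma):
    g = grad_f(x)
    return project_box([x[i] - gamma * g[i] for i in range(2)])

x_star = [1.0, 1.0]
fixed = all(pg_map(x_star, gamma) == x_star for gamma in (0.1, 1.0, 10.0))
not_fixed = pg_map([0.5, 0.5], 0.1) != [0.5, 0.5]
```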

Next we adopt the following notation:

xn→x means that xn converges strongly to x;

xn⇀x means that xn converges weakly to x;

Fix(T):={x:Tx=x} is the fixed point set of T.

Lemma 2.2 (See [<xref ref-type="bibr" rid="B10">26</xref>]).

Let {xn} and {yn} be bounded sequences in a Banach space X and let {βn} be a sequence in [0,1] with
0<liminfn→∞βn≤limsupn→∞βn<1.
Suppose
xn+1=(1-βn)yn+βnxn
for all n≥0 and
limsupn→∞(‖yn+1-yn‖-‖xn+1-xn‖)≤0.
Then, limn→∞∥yn-xn∥=0.

Lemma 2.3 (See [<xref ref-type="bibr" rid="B11">27</xref>] (demiclosedness principle)).

Let C be a closed and convex subset of a Hilbert space H and let T:C→C be a nonexpansive mapping with Fix(T)≠∅. If {xn} is a sequence in C weakly converging to x and if {(I-T)xn} converges strongly to y, then
(I-T)x=y.
In particular, if y=0, then x∈Fix(T).

Lemma 2.4 (See [<xref ref-type="bibr" rid="B12">28</xref>]).

Assume {an} is a sequence of nonnegative real numbers such that
an+1≤(1-γn)an+δn,
where {γn} is a sequence in (0,1) and {δn} is a sequence such that

∑n=1∞γn=∞;

limsupn→∞δn/γn≤0 or ∑n=1∞|δn|<∞.

Then, limn→∞ an = 0.

3. Main Results

Let C be a closed convex subset of a real Hilbert space H. Let f:C→R be a real-valued Fréchet differentiable convex function with gradient ∇f. Let A:C→H be a ρ-contraction, and let B:H→H be a self-adjoint, strongly positive bounded linear operator with coefficient α>0. First, we present our algorithm for solving (1.1). Throughout, we assume S≠∅.

Algorithm 3.1.

For given x0∈C, compute the sequence {xn} iteratively by
xn+1 = PC(I + (σA-B)θn)PC(I - γ∇f)xn, n≥0, (3.1)
where σ>0 and γ>0 are constants and {θn}⊂[0,1] is a real sequence.
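For illustration, Algorithm 3.1 can be run on a concrete finite-dimensional instance. All concrete choices below (C, f, A, B, σ, γ, and θn) are illustrative assumptions satisfying the hypotheses σρ < α and γ ∈ (0, 2/L) of Theorem 3.3 below:

```python
# Sketch of Algorithm 3.1:
#   x_{n+1} = P_C(I + (sigma*A - B)*t_n) P_C(I - gamma*grad f) x_n,
# with illustrative choices: C = [-2,2]^2, f(x) = 0.5*||x - p||^2
# (L = 1, gamma = 1 in (0, 2/L)), A(x) = 0.5*x (a rho-contraction with
# rho = 0.5), B = I (alpha = 1), sigma = 1 (so sigma*rho < alpha),
# and t_n = 1/(n+2).

def project_box(x, lo=-2.0, hi=2.0):
    return [min(max(xi, lo), hi) for xi in x]

p = [3.0, 1.0]                       # minimizer of f over C: P_C(p) = (2, 1)
gamma, sigma = 1.0, 1.0
x = [0.0, 0.0]
for n in range(2000):
    t = 1.0 / (n + 2)
    # inner step: P_C(I - gamma*grad f) x, with grad f(x) = x - p
    y = project_box([x[i] - gamma * (x[i] - p[i]) for i in range(2)])
    # outer step: P_C((I + (sigma*A - B)*t) y) = P_C(y + t*(0.5*y - y))
    x = project_box([y[i] + t * (sigma * 0.5 * y[i] - y[i]) for i in range(2)])
```

Since the minimizer here is unique, the variational-inequality selection in Theorem 3.3 is trivially satisfied; the sketch only illustrates the iteration itself.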

Remark 3.2.

In (3.1), we use two projections. It is well known that the advantage of projections, which makes them successful in real-world applications, is computational: for many feasible sets C (boxes, balls, half-spaces), PC can be evaluated explicitly.

Next, we give the convergence analysis of Algorithm 3.1.

Theorem 3.3.

Assume that the gradient ∇f is L-Lipschitzian and σρ<α. Let {xn} be the sequence generated by (3.1), where γ∈(0,2/L) is a constant and the sequence {θn} satisfies the conditions (i) limn→∞ θn = 0 and (ii) ∑n=0∞ θn = ∞. Then {xn} converges strongly to a minimizer x̃ of (1.1) which solves the following variational inequality:
x̃∈S such that 〈σA(x̃) - B(x̃), x - x̃〉 ≤ 0, ∀x∈S. (3.2)

Since Algorithm 3.1 involves the metric projection, we will use the properties of PC in proving Theorem 3.3. For convenience, we list these properties as follows.

Proposition 3.4.

It is well known that the metric projection PC of H onto C has the following basic properties:

∥PC(x)-PC(y)∥≤∥x-y∥, for all x,y∈H;

〈x-y,PC(x)-PC(y)〉≥∥PC(x)-PC(y)∥2, for every x,y∈H;

〈x-PC(x),y-PC(x)〉≤0, for all x∈H, y∈C.
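Properties (1)–(3) hold for any closed convex C; they can be spot-checked numerically for a projection with a closed form. The box projection below is an illustrative choice:

```python
import itertools, random

# Spot-check of properties (1)-(3) of the metric projection for the
# coordinatewise box projection onto C = [0,1]^2 (an illustrative C
# with a closed-form P_C).

def P(x):
    return [min(max(xi, 0.0), 1.0) for xi in x]

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm2(u):
    return inner(u, u)

random.seed(0)
pts = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(20)]
ok1 = ok2 = ok3 = True
for x, y in itertools.combinations(pts, 2):
    dP = [P(x)[i] - P(y)[i] for i in range(2)]
    d = [x[i] - y[i] for i in range(2)]
    ok1 &= norm2(dP) <= norm2(d) + 1e-12       # (1) nonexpansive
    ok2 &= inner(d, dP) >= norm2(dP) - 1e-12   # (2) firmly nonexpansive
for x in pts:
    c = [0.5, 0.5]                             # an arbitrary point of C
    ok3 &= inner([x[i] - P(x)[i] for i in range(2)],
                 [c[i] - P(x)[i] for i in range(2)]) <= 1e-12  # (3)
```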

Proof of Theorem <xref ref-type="statement" rid="thm3.3">3.3</xref>

Let x*∈S. First, from (2.8), we note that PC(I - γ∇f)x* = x*. By (3.1), we have
‖xn+1 - x*‖ = ‖PC(I + (σA-B)θn)PC(I - γ∇f)xn - x*‖
≤ ‖PC(I + (σA-B)θn)PC(I - γ∇f)xn - PC(I + (σA-B)θn)PC(I - γ∇f)x*‖ + ‖PC(I + (σA-B)θn)x* - x*‖
≤ [1 - (α-σρ)θn]‖xn - x*‖ + θn‖σA(x*) - B(x*)‖
= [1 - (α-σρ)θn]‖xn - x*‖ + (α-σρ)θn(‖σA(x*) - B(x*)‖/(α-σρ))
≤ max{‖xn - x*‖, ‖σA(x*) - B(x*)‖/(α-σρ)}.
Thus, by induction, we obtain
‖xn - x*‖ ≤ max{‖x0 - x*‖, ‖σA(x*) - B(x*)‖/(α-σρ)}.

Note that, since f is convex with L-Lipschitz gradient, ∇f is (1/L)-inverse strongly monotone (ism), which in turn implies that γ∇f is (1/(γL))-ism. Hence I - γ∇f is (γL/2)-averaged. Since the projection PC is (1/2)-averaged, Proposition 2.1 shows that the composite PC(I - γ∇f) is ((2+γL)/4)-averaged. Hence we can write
PC = (1/2)I + (1/2)R,
PC(I - γ∇f) = ((2-γL)/4)I + ((2+γL)/4)T = (1-β)I + βT,
where R and T are nonexpansive and β = (2+γL)/4 ∈ (0,1). Then we can rewrite (3.1) as
xn+1 = ((1/2)I + (1/2)R)(I + (σA-B)θn)[(1-β)xn + βTxn]
= ((1-β)/2)xn + (β/2)Txn + ((θn/2)(σA-B) + (1/2)R(I + (σA-B)θn))[(1-β)xn + βTxn]
= ((1-β)/2)xn + ((1+β)/2)yn,
where
yn = (2/(1+β))((θn/2)(σA-B) + (1/2)R(I + (σA-B)θn))[(1-β)xn + βTxn] + (β/(1+β))Txn.
Set zn = (1-β)xn + βTxn for all n. Since {xn} is bounded, so are {Txn} and {zn}, and hence {A(zn)} and {B(zn)} are bounded. Thus there exists a constant M>0 such that
supn ‖(σA-B)zn‖ ≤ M.
Thus,
‖yn+1 - yn‖ ≤ (2/(1+β))‖(θn+1/2)(σA-B)zn+1 - (θn/2)(σA-B)zn‖ + (β/(1+β))‖Txn+1 - Txn‖ + (1/(1+β))‖R(I + (σA-B)θn+1)zn+1 - R(I + (σA-B)θn)zn‖
≤ (1/(1+β))(θn+1 + θn)M + (β/(1+β))‖xn+1 - xn‖ + (1/(1+β))‖zn+1 - zn‖ + (1/(1+β))‖θn+1(σA-B)zn+1 - θn(σA-B)zn‖
≤ (2/(1+β))(θn+1 + θn)M + ‖xn+1 - xn‖.
It follows that
limsupn→∞(‖yn+1-yn‖-‖xn+1-xn‖)≤0.
This together with Lemma 2.2 implies that
limn→∞‖yn-xn‖=0.
So,
limn→∞ ‖xn+1 - xn‖ = limn→∞ ((1+β)/2)‖yn - xn‖ = 0.
Since
‖xn - PC(I - γ∇f)xn‖ ≤ ‖xn - xn+1‖ + ‖xn+1 - PC(I - γ∇f)xn‖
= ‖xn - xn+1‖ + ‖PC(I + (σA-B)θn)PC(I - γ∇f)xn - PC(I - γ∇f)xn‖
≤ ‖xn - xn+1‖ + θn‖(σA-B)PC(I - γ∇f)xn‖,
we deduce
limn→∞ ‖xn - PC(I - γ∇f)xn‖ = 0. (3.14)
Next we prove
limsupn→∞ 〈σA(x*) - B(x*), xn - x*〉 ≤ 0,
where x* is the unique solution of VI (3.2).

Indeed, we can choose a subsequence {xni} of {xn} such that
limsupn→∞〈σA(x*)-B(x*),xn-x*〉=limi→∞〈σA(x*)-B(x*),xni-x*〉.
Since {xni} is bounded, there exists a subsequence of {xni} which converges weakly to a point x̃; without loss of generality, we may assume that {xni} itself converges weakly to x̃. Since γ∈(0,2/L), PC(I - γ∇f) is nonexpansive. Thus, from (3.14) and Lemma 2.3 (the demiclosedness principle), we have x̃ ∈ Fix(PC(I - γ∇f)) = S. Therefore,
limsupn→∞〈σA(x*)-B(x*),xn-x*〉=limi→∞〈σA(x*)-B(x*),xni-x*〉=〈σA(x*)-B(x*),x̃-x*〉≤0.
Finally, we show that xn→x̃. Using property (2) of the projection PC (Proposition 3.4), we have
‖xn+1 - x̃‖² = ‖PC(I + (σA-B)θn)PC(I - γ∇f)xn - PC(x̃)‖²
≤ 〈(I + (σA-B)θn)PC(I - γ∇f)xn - x̃, xn+1 - x̃〉
= 〈(I + (σA-B)θn)(PC(I - γ∇f)xn - x̃), xn+1 - x̃〉 + θn〈σA(x̃) - B(x̃), xn+1 - x̃〉
≤ ‖I + (σA-B)θn‖ ‖PC(I - γ∇f)xn - PC(I - γ∇f)x̃‖ ‖xn+1 - x̃‖ + θn〈σA(x̃) - B(x̃), xn+1 - x̃〉
≤ [1 - (α-σρ)θn]‖xn - x̃‖ ‖xn+1 - x̃‖ + θn〈σA(x̃) - B(x̃), xn+1 - x̃〉
≤ ((1 - (α-σρ)θn)/2)‖xn - x̃‖² + (1/2)‖xn+1 - x̃‖² + θn〈σA(x̃) - B(x̃), xn+1 - x̃〉.
It follows that
‖xn+1 - x̃‖² ≤ [1 - (α-σρ)θn]‖xn - x̃‖² + 2θn〈σA(x̃) - B(x̃), xn+1 - x̃〉
= [1 - (α-σρ)θn]‖xn - x̃‖² + (α-σρ)θn{(2/(α-σρ))〈σA(x̃) - B(x̃), xn+1 - x̃〉}.
Since limsupn→∞ (2/(α-σρ))〈σA(x̃) - B(x̃), xn+1 - x̃〉 ≤ 0, we can apply Lemma 2.4 (with γn = (α-σρ)θn) to the last inequality to conclude that xn→x̃. The proof is completed.

If we take A=0 and B=I in (3.1), then (3.1) reduces to the following algorithm.

Algorithm 3.5.

For given x0∈C, compute the sequence {xn} iteratively by
xn+1 = PC((1-θn)PC(I - γ∇f)xn), n≥0, (3.20)
where γ>0 is a constant and {θn}⊂[0,1] is a real sequence.
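For illustration, Algorithm 3.5 can be run on an instance whose solution set S is a nontrivial segment, so that the minimum-norm selection of Theorem 3.6 below becomes visible. The concrete f, C, and parameters are illustrative assumptions:

```python
# Sketch of Algorithm 3.5: x_{n+1} = P_C((1 - t_n) P_C(I - gamma*grad f) x_n),
# with illustrative choices: C = [-2,2]^2 and f(x) = (x1 - 1)^2, so that the
# solution set S = {(1, t) : t in [-2,2]} is a segment whose minimum norm
# element is (1, 0).  Here L = 2, gamma = 0.4 in (0, 2/L), t_n = 1/(n+2).

def project_box(x, lo=-2.0, hi=2.0):
    return [min(max(xi, lo), hi) for xi in x]

gamma = 0.4
x = [2.0, 2.0]
for n in range(4000):
    t = 1.0 / (n + 2)
    # inner step with grad f(x) = (2*(x1 - 1), 0)
    y = project_box([x[0] - gamma * 2.0 * (x[0] - 1.0), x[1]])
    x = project_box([(1.0 - t) * yi for yi in y])
```

The plain projected-gradient iteration (1.2) started at (2, 2) would stall at the non-minimum-norm solution (1, 2); the vanishing shrinkage factors (1-θn) are what drive the second coordinate to 0.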

From Theorem 3.3, we have the following result.

Theorem 3.6.

Assume that the gradient ∇f is L-Lipschitzian (with A=0 and B=I, the condition σρ<α holds automatically). Let {xn} be the sequence generated by (3.20), where γ∈(0,2/L) is a constant and the sequence {θn} satisfies the conditions (i) limn→∞ θn = 0 and (ii) ∑n=0∞ θn = ∞. Then {xn} converges strongly to a minimizer x̃ of (1.1) which is the minimum norm element in S.

Proof.

As a consequence of Theorem 3.3, we obtain that the sequence {xn} generated by (3.20) converges strongly to x̃ which satisfies
x̃∈S such that 〈-x̃, x - x̃〉 ≤ 0, ∀x∈S.
This implies
‖x̃‖2≤〈x,x̃〉≤‖x‖‖x̃‖,∀x∈S.
Thus,
‖x̃‖≤‖x‖,∀x∈S.
That is, x̃ is the minimum norm element in S. This completes the proof.

Acknowledgment

Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279 and NSFC 71161001-G0105.

References

[1] Levitin, E. S., Polyak, B. T., Constrained minimization problems.
[2] Gafni, E. M., Bertsekas, D. P., Two metric projection methods for constrained optimization.
[3] Calamai, P. H., More, J. J., Projected gradient methods for linearly constrained problems.
[4] Polyak, B. T.
[5] Wang, C., Xiu, N., Convergence of the gradient projection method for generalized convex minimization.
[6] Xiu, N. H., Wang, C. Y., Zhang, J. Z., Convergence properties of projection and contraction methods for variational inequality problems.
[7] Ruszczynski, A.
[8] Xiu, N. H., Wang, C. Y., Kong, L., A note on the gradient projection method with exact stepsize rule.
[9] Xu, H. K., Averaged mappings and the gradient-projection algorithm.
[10] Yao, Y., Kang, S. M., Wu, J., Yang, P. X., A regularized gradient projection method for the minimization problem.
[11] Yao, Y., Noor, M. A., Liou, Y. C., Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities.
[12] Yao, Y., Noor, M. A., Liou, Y. C., Kang, S. M., Iterative algorithms for general multi-valued variational inequalities.
[13] Yao, Y., Liou, Y. C., Kang, S. M., Two-step projection methods for a system of variational inequality problems in Banach spaces, Journal of Global Optimization, in press, doi:10.1007/s10898-011-9804-0.
[14] Noor, M. A., Some developments in general variational inequalities.
[15] Korpelevich, G. M., An extragradient method for finding saddle points and for other problems.
[16] Ceng, L. C., Yao, J. C., An extragradient-like approximation method for variational inequality problems and fixed point problems.
[17] Yao, Y., Yao, J. C., On modified iterative method for nonexpansive mappings and monotone mappings.
[18] He, B. S., Yang, Z. H., Yuan, X. M., An approximate proximal-extragradient type method for monotone variational inequalities.
[19] Yao, Y., Chen, R., Xu, H. K., Schemes for finding minimum-norm solutions of variational inequalities.
[20] Yao, Y., Noor, M. A., On viscosity iterative methods for variational inequalities.
[21] Yao, Y., Noor, M. A., On modified hybrid steepest-descent methods for general variational inequalities.
[22] Yao, Y., Noor, M. A., Noor, K. I., Liou, Y. C., Yaqoob, H., Modified extragradient methods for a system of variational inequalities in Banach spaces.
[23] Ceng, L. C., Hadjisavvas, N., Wong, N. C., Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems.
[24] Cianciaruso, F., Colao, V., Muglia, L., Xu, H. K., On an implicit hierarchical fixed point approach to variational inequalities.
[25] Lu, X., Xu, H. K., Yin, X., Hybrid methods for a class of monotone variational inequalities.
[26] Suzuki, T., Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces.
[27] Goebel, K., Kirk, W. A.
[28] Xu, H. K., Iterative algorithms for nonlinear operators.