A PROXIMAL POINT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION PROBLEMS IN BANACH SPACES

In this paper we show the weak convergence and stability of the proximal point method when applied to the constrained convex optimization problem in uniformly convex and uniformly smooth Banach spaces. In addition, we establish a nonasymptotic estimate of convergence rate of the sequence of functional values for the unconstrained case. This estimate depends on a geometric characteristic of the dual Banach space, namely its modulus of convexity. We apply a new technique which includes Banach space geometry, estimates of duality mappings, nonstandard Lyapunov functionals and generalized projection operators in Banach spaces.


Introduction
The proximal method, or more exactly "the proximal point algorithm", is one of the most important successive approximation methods for finding fixed points of nonexpansive mappings in Hilbert spaces. This method, which is therefore not new (see [12], [17], [18]), was earlier used for regularizing linear equations ([12], [13]), and seems to have been applied for the first time to convex minimization by Martinet (see [15], [16]). The first important results (such as approximate versions, linear and finite convergence) in the more general framework of maximal monotone operators in a Hilbert space are due to Rockafellar [20]. Nowadays it is still the object of intensive investigation (see [14] for a survey on the method).
The proximal method can be seen as a regularization scheme in which the regularization parameter need not approach zero, thus avoiding a possible ill behavior of the regularized problems.
We will state here some main properties of this algorithm and some applications to convex programming and maximal monotone inclusions.
Let H be a Hilbert space with inner product ⟨·, ·⟩ and Ω a closed and convex subset of H. Consider the problem min_{x∈Ω} f(x), (1.1) where f : H → IR is a convex functional. It is a familiar fact that in direct methods for solving (1.1), the tools for proving existence of solutions are the convexity properties of the functional. Based on the fact that x_0 ∈ argmin_{x∈H} f(x) if and only if 0 ∈ ∂f(x_0), we can pose the problem: find x_0 ∈ H such that 0 ∈ T(x_0), (1.2) where T : H → P(H) is a maximal monotone operator.
In the particular case in which T = ∂f, problem (1.2) is equivalent to (1.1). In this way, the theory of subdifferentials transforms our original problem (1.1) into the study of the range of the monotone operator ∂f : H → P(H). Namely, we want to determine whether 0 ∈ R(∂f), where R(T) stands for the range of an operator T.
Replacing ∂f by an arbitrary monotone operator T , we transform an optimization problem into a more general one, involving monotone operators.
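As a concrete finite-dimensional illustration of the optimality condition 0 ∈ ∂f(x_0) (a sketch of ours, not an example from the paper), consider the nonsmooth convex function f(x) = |x| on IR, whose subdifferential at 0 is the interval [−1, 1]:

```python
# The optimality condition 0 in ∂f(x0) for the nonsmooth convex f(x) = |x|:
# the subdifferential at 0 is the full interval [-1, 1], which contains 0,
# certifying that x0 = 0 is a minimizer although f is not differentiable there.
def subdiff_abs(x):
    """Return the subdifferential of |.| at x as an interval (lo, hi)."""
    if x > 0:
        return (1.0, 1.0)
    if x < 0:
        return (-1.0, -1.0)
    return (-1.0, 1.0)   # ∂|.|(0) = [-1, 1]

lo, hi = subdiff_abs(0.0)
print(lo <= 0.0 <= hi)  # True: 0 belongs to ∂f(0), so x0 = 0 minimizes |x|
```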
The proximal method for (1.1) in a Hilbert space H generates a sequence {x_k} ⊂ H in the following way: x_0 ∈ H, x_{k+1} = argmin_{x∈H} (f(x) + λ_k ‖x − x_k‖²), (1.3) and Martinet showed in [15] that if {x_k} is bounded, then it converges weakly to a minimizer of f. In [20] Rockafellar studied the convergence properties of this algorithm when applied to problem (1.2); in this case, the sequence {x_k} is defined by the analogous iteration for the operator T. From now on, unless explicit mention is made, the parameters λ_k will be taken as in (1.4). As we can easily see, in the case T = ∂f this procedure reduces to (1.3). It is shown in [20] that {x_k} converges weakly to a zero of T, provided that the set of zeroes is nonempty. Rockafellar also proves a linear convergence rate if either of the following conditions is satisfied: (a) T is strongly monotone; (b) T⁻¹ is Lipschitz continuous at 0 and {x_k} is bounded. Finally, he furnishes a criterion for convergence in a finite number of iterations, which requires nonemptiness of the interior of the set T⁻¹(0). As a particular case of the latter result, we have finite convergence if H is finite dimensional, T = ∂f, and f is polyhedral convex (i.e., the epigraph of f is a polyhedral convex set). In the optimization case, we observe directly from (1.3) that the objective function of each subproblem is coercive (recall that g : H → IR is coercive if and only if lim_{‖x‖→+∞} g(x)/‖x‖ = +∞). In particular, this property implies boundedness of the level sets of the objective, which ensures existence of solutions of each subproblem; uniqueness, on the other hand, is ensured by its strict convexity. So this algorithm is a true regularization and the sequence {x_k} is well defined.
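The iteration (1.3) can be sketched numerically in H = IR^n. The quadratic f and the solver below are our own illustrative choices; for this f the subproblem has a closed-form solution:

```python
# Sketch of the proximal point iteration (1.3) in H = R^n (a Hilbert space).
# The subproblem argmin_x f(x) + lam*||x - x_k||^2 is solved in closed form
# for the illustrative quadratic f(x) = 0.5*||A x - b||^2 (our own choice;
# the paper treats a general convex f).
import numpy as np

def prox_step(A, b, x_k, lam):
    # Optimality condition: A^T(Ax - b) + 2*lam*(x - x_k) = 0
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + 2 * lam * np.eye(n), A.T @ b + 2 * lam * x_k)

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, -1.0])
x = np.zeros(2)
for k in range(200):
    x = prox_step(A, b, x, lam=1.0)

x_star = np.linalg.solve(A, b)  # unique minimizer of f
print(np.allclose(x, x_star, atol=1e-6))
```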
The structure of the iteration in (1.3) suggests the possibility of adding other kinds of distances to the function f. For instance, a Bregman distance D_g, where g : H → IR is a strictly convex function satisfying some additional properties (see, e.g., [6]); D_g is defined in (1.5). The proximal point method with D_g(x, x_k) substituting for ‖x − x_k‖² in (1.3) has been analyzed in [7] for convex optimization in finite dimensional spaces and in [6] for the variational inequality problem in a Hilbert space, i.e., find x* ∈ Ω ⊂ H such that there exists y* ∈ T(x*) with ⟨y*, x − x*⟩ ≥ 0, (1.6) for all x ∈ Ω.
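The Bregman distance is standardly given by D_g(x, y) = g(x) − g(y) − ⟨∇g(y), x − y⟩, which we assume is the definition intended in (1.5); for g(x) = ‖x‖² it reduces to ‖x − y‖², recovering (1.3):

```python
# Standard Bregman distance of a differentiable strictly convex g:
# D_g(x, y) = g(x) - g(y) - <grad g(y), x - y>  (our reading of (1.5)).
# For g(x) = ||x||^2 it reduces to the squared distance used in (1.3).
import numpy as np

def bregman(g, grad_g, x, y):
    return g(x) - g(y) - grad_g(y) @ (x - y)

g = lambda x: x @ x            # g(x) = ||x||^2
grad_g = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
d = bregman(g, grad_g, x, y)
print(np.isclose(d, np.sum((x - y) ** 2)))  # here D_g(x, y) = ||x - y||^2
```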
All previous results apply to a Hilbert space. In a Banach space, the iteration (1.3) has the optimality condition (1.7), where J is the duality mapping defined in Section 3. If the Banach space is not hilbertian, J is not a linear operator, so that (1.7) is not equivalent to (1.8); the latter is more natural than (1.7) (e.g., with constant λ_k, the left hand side of (1.8) is the same in all iterations).
In this paper we study the proximal point method in Banach spaces, where x_{k+1} is the solution of (1.8). It is easy to check that this is equivalent to a proximal iteration in which D_g(x, x_k) replaces ‖x − x_k‖², where D_g is as in (1.5) with g(x) = ‖x‖². We emphasize that this is not the iteration given by (1.3), because, if the space is not a Hilbert one, then D_g(x, x_k) does not coincide with ‖x − x_k‖²; rather, the method considered here is a particular case of the proximal point method with Bregman distances studied in [7]. We give a full convergence analysis in an arbitrary uniformly convex and uniformly smooth Banach space. We also present stability and convergence rate results.

Previous results in a Hilbert space
Let H be a Hilbert space and f : H → IR ∪ {∞} a proper, closed and convex functional. We recall below known results due to Güler (see [10]) about the convergence properties of the proximal method applied to the minimization of f. From now on we will write f* := f(x*), where x* is any minimizer of f.

Theorem 2.1 ([10]). Consider the sequence defined by (1.3), where λ_k is taken as in (1.4). Define σ_k := Σ_{i=1}^{k} λ_i⁻¹, with the convention σ_0 = 0. Suppose that the set of minimizers of f, which we call X*, is nonempty, and take x* ∈ X*. Then {x_k} converges weakly to a minimizer of f, and the global convergence estimate of [10] holds.

The next lemma shows that the proximal point method can be defined in terms of the metric projection operator P_Ω in a Hilbert space, and therefore method (2.2) below shares all the well known properties of the proximal point method.
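The displayed estimate of Theorem 2.1 did not survive in this copy; classically it bounds f(x_k) − f* by a multiple of ‖x_0 − x*‖²/σ_k. The check below verifies that hedged form, with constant 1, on a one-dimensional quadratic (our own toy choice):

```python
# Numerical sanity check of the O(1/sigma_k) value convergence of (1.3).
# The displayed estimate of Theorem 2.1 was lost in this copy; classically
# it reads f(x_k) - f* <= C*||x_0 - x*||^2 / sigma_k for a modest constant C
# (an assumption here). We check the form with C = 1 on a 1-D quadratic.
f = lambda x: x * x          # minimizer x* = 0, f* = 0
x0 = 4.0
lam = 1.0                    # constant lambda_k
x, sigma = x0, 0.0
ok = True
for k in range(1, 50):
    # prox step: argmin_x x^2 + lam*(x - x_k)^2  =>  x = lam*x_k/(1 + lam)
    x = lam * x / (1.0 + lam)
    sigma += 1.0 / lam       # sigma_k = sum of lambda_i^{-1}
    ok = ok and (f(x) <= x0 * x0 / sigma)
print(ok)
```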
Lemma 2.2. Let f : H → IR be a closed and convex function, Ω ⊂ H a closed and convex set, and P_Ω the orthogonal projection onto Ω. For fixed a ∈ H and positive λ, consider the following two problems. Problem (1): find x ∈ Ω satisfying the proximal subproblem; Problem (2): its formulation in terms of P_Ω. The sets of solutions of Problem (1) and Problem (2) coincide.
The following theorem is a direct consequence of the lemma above and Theorem 2.1.

Theorem 2.3. Consider the sequence {x_k} given by: 1) Take x_0 ∈ Ω. 2) Given x_k ∈ Ω, find y ∈ H and x_{k+1} ∈ Ω satisfying the system (2.2). Then, if problem (1.1) has solutions, it holds that (i) the sequence {x_k} is well defined and bounded, ..., and (vii) the whole sequence converges weakly to a solution, i.e., there exists a unique weak accumulation point.
All these statements are easy consequences of Lemma 2.2 and the results in [10] and [20]. We will show in Section 4 (Theorem 4.1) that all these results are also valid in a Banach space, so that Theorem 2.3 is a particular case of Theorem 4.1. We present an estimate of the convergence rate in Section 4.3.

The Banach space concept
Let B be a uniformly convex and uniformly smooth Banach space (see [1] and [5]). The operator J : B → B* is the normalized duality mapping associated with B, determined by the equalities ⟨Jx, x⟩ = ‖Jx‖_{B*} ‖x‖ and ‖Jx‖_{B*} = ‖x‖, where ⟨·, ·⟩ stands for the usual dual product in B, ‖·‖ is the norm in the Banach space B and ‖·‖_{B*} is the norm in the dual space B*. For later use, we state the following lemma (Lemma 3.1), whose proof can be found in [1].
We recall next the analytical expressions of the duality mapping J(·) in the uniformly smooth and uniformly convex Banach spaces l_p, L_p and the Sobolev spaces W_m^p, p ∈ (1, ∞) (see [1]).
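For l_p the duality mapping has the well-known analytic form Jx = ‖x‖_p^{2−p} {|x_i|^{p−2} x_i} ∈ l_q, q = p/(p − 1) (our transcription of the formulas recalled from [1]); a finite-dimensional check of the defining identities:

```python
# Analytic expression of J in l_p, p in (1, inf):
# (Jx)_i = ||x||_p^{2-p} * |x_i|^{p-2} * x_i, an element of l_q, q = p/(p-1).
# We verify the defining identities <Jx, x> = ||x||^2 and ||Jx||_q = ||x||_p
# on a finite-dimensional truncation.
import numpy as np

def duality_map_lp(x, p):
    norm = np.sum(np.abs(x) ** p) ** (1.0 / p)
    return norm ** (2.0 - p) * np.abs(x) ** (p - 2.0) * x

p = 3.0
q = p / (p - 1.0)
x = np.array([1.0, -2.0, 0.5])
Jx = duality_map_lp(x, p)
norm_x = np.sum(np.abs(x) ** p) ** (1.0 / p)
norm_Jx = np.sum(np.abs(Jx) ** q) ** (1.0 / q)
print(np.isclose(Jx @ x, norm_x ** 2), np.isclose(norm_Jx, norm_x))
```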
For a convex and closed set Ω ⊂ B, we define the normality operator N_Ω : B → P(B*) in the following way: N_Ω(x) = {u ∈ B* : ⟨u, x − z⟩ ≥ 0 for all z ∈ Ω} if x ∈ Ω, and N_Ω(x) = ∅ otherwise. It is easy to check that N_Ω(·) is the subdifferential of the indicator function χ_Ω(x) associated with the set Ω, i.e., N_Ω = ∂χ_Ω.
The function χ Ω is convex and closed, hence N Ω (•) is a maximal monotone operator.
We follow in the sequel the theory developed in [1], where a generalized projection operator Π Ω (•) in a Banach space B is introduced.
Take the Lyapunov functional W : B × B → IR_+ given by W(x, z) = ‖x‖² − 2⟨Jx, z⟩ + ‖z‖². (3.3) It follows from this definition and that of J that ∇_z W(x, z) = 2(Jz − Jx). It also holds that W(x, z) ≥ 0. In the sequel we will need a property of W(x, z) established in [1], namely (‖x‖ − ‖z‖)² ≤ W(x, z) ≤ (‖x‖ + ‖z‖)². (3.4) The generalized projection operator has also been introduced in [1], where the following lemma is proved.
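Before stating the lemma, a quick numerical sanity check in the Hilbert case, where J is the identity: (3.3) then reduces to W(x, z) = ‖x − z‖², and the bounds of (3.4) — which we read as (‖x‖ − ‖z‖)² ≤ W(x, z) ≤ (‖x‖ + ‖z‖)² — are easy to verify:

```python
# Sanity check of the Lyapunov functional (3.3) in H = R^n, where J is the
# identity: W(x, z) = ||x||^2 - 2<x, z> + ||z||^2 = ||x - z||^2, and the
# bounds (||x|| - ||z||)^2 <= W(x, z) <= (||x|| + ||z||)^2 of (3.4) hold.
import numpy as np

def W(x, z):
    return x @ x - 2.0 * (x @ z) + z @ z   # J = identity in a Hilbert space

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    x, z = rng.normal(size=3), rng.normal(size=3)
    w = W(x, z)
    nx, nz = np.linalg.norm(x), np.linalg.norm(z)
    ok = ok and np.isclose(w, np.sum((x - z) ** 2))
    ok = ok and (nx - nz) ** 2 <= w + 1e-12 <= (nx + nz) ** 2 + 1e-9
print(ok)
```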

Lemma 3.2. (i) The operator Π_Ω(·) is well defined in any uniformly convex and uniformly smooth Banach space. (ii) Under the conditions of the definitions above, the following inequality holds for any fixed x ∈ B and any z ∈ Ω: ⟨Jx − JΠ_Ω(x), Π_Ω(x) − z⟩ ≥ 0.

In a Hilbert space, Π_Ω(·) coincides with the classical metric projection P_Ω(·), and inequality (ii) in Lemma 3.2 reduces to the Kolmogorov criterion which characterizes the metric projection P_Ω(x): ⟨x − P_Ω(x), P_Ω(x) − z⟩ ≥ 0 for all z ∈ Ω.
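The Kolmogorov criterion can be checked numerically for the metric projection onto a box in IR^n (an illustrative Ω of our choosing):

```python
# Numerical check of the Kolmogorov criterion for the metric projection onto
# a box Omega = [lo, hi]^n in R^n: <x - P(x), P(x) - z> >= 0 for all z in Omega.
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
lo, hi = -1.0, 1.0
ok = True
for _ in range(100):
    x = rng.normal(scale=3.0, size=4)
    px = project_box(x, lo, hi)
    z = rng.uniform(lo, hi, size=4)       # arbitrary point of Omega
    ok = ok and (x - px) @ (px - z) >= -1e-12
print(ok)
```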
Consider now the following two algorithms: 1) Take x_0 ∈ Ω. 2) Given z_k, define z_{k+1} by the system (3.5), respectively (3.6). In a similar way as we established the equivalence of the algorithms in Lemma 2.2, we will show next that the sequences defined by (3.5) and (3.6) coincide when they start from the same point. Observe that the solution of (3.5) exists and is unique by coerciveness and strict monotonicity of J and monotonicity of N_Ω (see [21], Corollary 32.35). We point out also that the existence of the iterates in (3.6) is not obvious at all. Nevertheless, as a by-product of the following lemma, we will show the existence of solutions of all the subproblems (3.6).

Lemma 3.4. In the algorithms (3.5) and (3.6), if z_0 = x_0, then z_k is well defined and z_k = x_k for any k ≥ 0.

Proof. By Lemma 3.2 and the definition of the normality operator, we obtain inclusion (3.7) for any z ∈ Ω. We proceed by induction. The result holds for k = 0 by hypothesis. Suppose that z_k is well defined and z_k = x_k; we must show that z_{k+1} exists and coincides with x_{k+1}, which is defined by the inclusion (3.8). We remark that x_{k+1} is uniquely determined by this inclusion because of the strict monotonicity of J. By (3.8) there exist u_{k+1} ∈ ∂f(x_{k+1}) and w_{k+1} ∈ N_Ω(x_{k+1}) for which the inclusion holds. Since J is onto, there exists y ∈ B such that (3.10) holds. Since x_{k+1} ∈ Ω, for proving (a) it will be enough to show that the required inequality holds for any z ∈ Ω; in fact, this follows from (3.10) and the properties of w_{k+1}. Now we proceed to prove (b): by (3.9), (3.10) and the induction hypothesis we obtain (3.11), which implies (b). Now, using (a) and (3.11), we conclude that the system (3.6) has a solution z_{k+1}, and this solution coincides with x_{k+1}. The proof of the lemma is complete.

Convex functionals in a Banach space
The following results deal with convergence of the sequence given by (3.6) in a uniformly convex and uniformly smooth Banach space B. Under such conditions, we get boundedness of the sequence and optimality of any weak accumulation point. Weak convergence of the whole sequence is established for a special kind of Banach spaces, namely those in which the duality mapping J is weak-to-weak continuous. Consider problem (4.1), where f : B → IR is a convex functional, and define the sequence {x_k} as follows: 1) Take x_0 ∈ Ω.
2) Given x_k, define x_{k+1} by the system (4.2), with λ_k positive.

Theorem 4.1. Consider the sequence {x_k} given by (4.2). Suppose that the set of minimizers of f, which we call X*, is nonempty, and fix x* ∈ X*. Then it holds that (i) the sequence {x_k} is well defined and bounded, ..., and (vii) if there exists a weak-to-weak continuous duality mapping, then the whole sequence converges weakly to a solution, i.e., there exists a unique weak accumulation point.
Proof. (i) As we mentioned before, in order to prove well-definedness of the iterates of algorithm (4.2), it is enough to observe that by Lemma 3.4 they coincide with the iterates of algorithm (3.5), which is well defined, as discussed before. In order to prove boundedness, we will show that the sequence W(x_k, x*) is decreasing, in which case the result will follow from boundedness of the level sets of the function W(·, x*). Recall that W is given by (3.3) and x* is any solution of problem (4.1). By the definition of W(x, ξ), (4.2) and the projection properties, we get (4.3), where u_{k+1} ∈ ∂f(x_{k+1}). As W(x_k, x_{k+1}) ≥ 0, the previous equation yields (4.4). At this point we use the fact that x* ∈ X* and the gradient inequality to obtain that W(x_{k+1}, x*) ≤ W(x_k, x*). Hence W(x_k, x*) is a decreasing sequence, which is also bounded below, and therefore it converges. Using (3.4), we get that ‖x_k‖ ≤ C_0 for all k, where C_0 is any real number larger than ‖x_0‖ + 2‖x*‖. This establishes the first statement.
(ii) This result is a consequence of (4.4) and the convergence of {W(x_k, x*)} established in (i). (iii) Take a subsequence {x_{k_j}} which converges weakly to x̄. We will show that x̄ is a minimizer of f. By (ii) and the assumption on λ_k we obtain that lim_j f(x_{k_j}) = f*. By weak lower semicontinuity of f, f(x̄) ≤ lim inf_j f(x_{k_j}) = f*. Therefore x̄ is also a minimizer of f. (iv) By (iii), it is enough to prove that f(x_k) is decreasing. In order to prove this we use the gradient inequality and (4.2), obtaining (4.5). Observe that the last two terms on the rightmost side of (4.5) are nonnegative: the first one because of the properties of the generalized projection and the fact that x_k ∈ Ω, and the second one because of the monotonicity of J. Hence we get f(x_k) ≥ f(x_{k+1}).
(v) By (iv), the sequence {f(x_k)} is convergent. Therefore the assumption on λ_k yields lim_{k→∞} W(x_k, x_{k+1}) = 0. On the other hand, W(x_k, x_{k+1}) splits into two terms, the second of which is nonnegative by the projection properties. For the first one, we use (3.1) with x = x_k and y = x_{k+1}, together with the fact that {x_k} is bounded by C_0. Now the properties of δ_B(·), namely the fact that it is an increasing and continuous function such that δ_B(0) = 0, imply that lim_{k→∞} ‖x_k − x_{k+1}‖ = 0. (vi) From (3.2), and by virtue of the fact that h_B(τ) tends to 0 as τ tends to 0, which holds in any uniformly smooth Banach space, we have lim_{k→∞} ‖Jx_k − Jx_{k+1}‖_{B*} = 0. Let now u_k := Jx_k − Jy, where x_k and y are taken as in (4.2). We know, by the definition of the algorithm, that u_k ∈ ∂f(x_{k+1}). In order to prove (vi) it is enough to bound the relevant quantity by a multiple of ‖Jx_{k+1} − Jx_k‖_{B*}, which follows from the gradient inequality, the definition of the algorithm, the projection properties and the Cauchy–Schwarz inequality. Now, using (4.6) and the boundedness of {x_k} in the previous chain of inequalities, we obtain the desired result.
(vii) Consider the duality mapping J in (4.2), assumed now weak-to-weak continuous. We will show that there exists only one weak accumulation point. Suppose there are two points z_1, z_2 which are weak limits of subsequences of {x_k}. By parts (i) and (iii), we know that there exist nonnegative numbers l_1 and l_2 such that (4.7) holds. Let l := lim_{k→∞} ⟨Jx_k, z_2 − z_1⟩. Let {x_{k_j}} and {x_{l_j}} be subsequences converging weakly to z_1 and z_2 respectively. Then, taking k = k_j in (4.7) and using the weak-to-weak continuity of J, we get l = ⟨Jz_1, z_2 − z_1⟩. Repeating the same argument with k = l_j in (4.7), we get l = ⟨Jz_2, z_2 − z_1⟩. Hence ⟨Jz_2 − Jz_1, z_2 − z_1⟩ = 0. According to Lemma 3.1 and the properties of the duality mapping, we conclude that z_1 = z_2, which establishes the uniqueness of the weak accumulation point. The proof of item (vii), and of the theorem, is now complete.
We have proved that existence of solutions of problem (4.1) is sufficient to guarantee convergence of the sequence generated by method (4.2). Theorem 4.2 shows that it is also a necessary condition; more precisely, the sequence {x_k} is unbounded when X* is empty.

Proof. Theorem 4.1 provides the proof of the "only if" part. We proceed to prove the "if" part. Suppose that {x_k} is bounded. Then its weak closure is bounded, and there exists a closed, convex and bounded set D such that the weak closure of {x_k} is contained in D° (inclusion (4.8)), where D° is the interior of D. It follows that any weak accumulation point of {x_k} belongs to D°. Now we apply method (4.2) to the function f̃ = f + χ_D, where, as before, χ_D denotes the indicator function of the set D. If {x̃_k} is the sequence generated by the algorithm for f̃, then x̃_{k+1} is uniquely determined by the system (4.9) in the unknowns x̃_{k+1}, ỹ (see Lemma 3.4).
We will first prove by induction that the sequence {x̃_k} coincides with the sequence {x_k} resulting from applying method (4.2) to f rather than f̃, when x_0 = x̃_0. Suppose that x̃_k = x_k. The iterate x_{k+1} is uniquely determined by the system (4.10) in the unknowns x_{k+1}, y. Since x_{k+1} belongs to D° by (4.8), we have that N_D(Π_Ω(y)) = N_D(x_{k+1}) = {0}, which, together with (4.10) and the induction hypothesis, shows that Π_Ω(y) solves (4.9). Therefore, Π_Ω(y) is the unique solution of (4.9), and, as a consequence, x̃_{k+1} = Π_Ω(y) = x_{k+1}. The induction step is complete.
Consider now the problem (4.11). Since D is closed, convex and bounded, the operator ∂f + N_D has bounded domain, and so this problem is known to have solutions (see, e.g., [21], Corollary 32.35); we can then apply Theorem 4.1 to conclude that the sequence {x̃_k} is weakly convergent to a solution of (4.11). Since {x̃_k} and {x_k} coincide, as proved above, we conclude that {x_k} is weakly convergent to a solution x̄ of (4.11). We prove next that x̄ belongs to X*, i.e., that it is a solution of min f(x) subject to x ∈ Ω. Indeed, x̄ belongs to the weak closure of {x_k}, and therefore to D°, so that N_D(x̄) = {0}. In view of this fact and (4.11), we obtain the inclusion (4.12), which implies that x̄ belongs to X*; hence X* is nonempty.

We consider next an inexact version of the method, defined by the system (4.13), where ∂_ε f(x) is the ε-subdifferential of f at the point x (see, for instance, [11]), and the parameters {ε_k} and {λ_k} are chosen in a suitable way. We will show that for this choice of the parameters the method (4.2) is stable. We point out that the system given by (4.13) always has a solution, as a straightforward consequence of the solvability of (4.2).

Theorem 4.3. Take {x_k} as in (4.13). Under the conditions of Theorem 4.1, the algorithm given by (4.2) is stable, i.e., (i) w-lim_{k→∞} x_k ∈ X*, where w-lim stands for weak limit.
Proof. Take x* ∈ X*. We consider, as always, the sequence {W(x_k, x*)}; let W_k := W(x_k, x*). Then, by the definition of W(·, ·), the projection properties, and the fact that u_{k+1} ∈ ∂_{ε_k} f(x_{k+1}) satisfies the inclusion (4.13), we obtain an estimate for W_{k+1}. Applying the definition of the ε-subdifferential in this inequality, we obtain (4.14). Now (4.14) and our assumptions on the parameters {ε_k} and {λ_k} imply that the sequence {W_k} is convergent and that lim_{k→∞} f(x_k) = f*; then, with the same argument as in Theorem 4.1, we obtain weak convergence of {x_k} to a minimizer. The theorem is proved.

Remark 4.4. In the particular case in which Ω = B, we get stability results for the unconstrained problem.
We recall now the definition of the Hausdorff distance H(A_1, A_2) between sets A_1 and A_2: H(A_1, A_2) = max{sup_{a∈A_1} inf_{b∈A_2} ‖a − b‖, sup_{b∈A_2} inf_{a∈A_1} ‖a − b‖}. Note that if A_1 and A_2 are singletons, then H reduces to the usual distance.

Theorem 4.5. Consider the sequence {x̃_k} given by the following algorithm: 1) Take x̃_0 ∈ Ω.
2) Given x̃_k, define x̃_{k+1} by the system (4.15), where ε_k and λ_k are taken as in (4.13). Suppose that the operator T_ε : B → P(B*) satisfies condition (4.16) for all x ∈ Ω and for some finite nondecreasing positive function θ(·).
If problem (4.15) is solvable and the sequence {x̃_k} is bounded by C_0, then the whole sequence {x̃_k} converges weakly to a minimizer, and the functional values converge to f*.
Proof. In analogy with Theorem 4.3, it follows from (4.15) that ũ_{k+1} ∈ T_{ε_k}(x̃_{k+1}) satisfies inequality (4.16). The previous inequality can be rewritten in the form (4.17). By the gradient inequality, we obtain an estimate which, together with (4.17), leads to a bound on W_{k+1}. Using now the Cauchy–Schwarz inequality, (4.16) and the boundedness of the sequences {x̃_k} and {λ_k}, the last expression is bounded with a constant C majorizing 2‖x* − x̃_k‖. Now the assertion is obtained by the same argument as in Theorem 4.3.
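For finite point sets, the Hausdorff distance appearing in condition (4.16) can be computed directly from its definition as the larger of the two one-sided sup-inf deviations:

```python
# Hausdorff distance between finite point sets:
# H(A, B) = max( sup_a inf_b ||a - b||, sup_b inf_a ||a - b|| ).
import numpy as np

def hausdorff(A, B):
    # A, B: arrays of shape (m, d) and (n, d)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0], [1.0]])
B = np.array([[0.0], [3.0]])
print(hausdorff(A, B))  # one-sided deviations are 1.0 and 2.0, so H = 2.0
```

For singletons the two one-sided terms coincide and H reduces to the usual distance, as noted above.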
We remark that when the set Ω is bounded, the sequence {x̃_k} is guaranteed to be bounded as well. We study now the unconstrained problem of minimizing a convex functional f : B → IR over the whole space B, and propose the following method for it. Consider the sequence {x_k} given by: 1) Take x_0 ∈ B.
2) Given x_k, find x_{k+1} ∈ B satisfying the system (4.18), where the parameters λ_k are chosen such that 0 < λ_k ≤ λ̄.
The method (4.18) is the classical proximal method in a Banach space, and also a particular case of (4.2) for Ω = B. In this section we provide a convergence rate estimate for the proximal point method in a Banach space. The lemmas we prove next are essential in the proofs of Theorems 4.9 and 5.1 below.
Lemma 4.6. Assume that {α_k} is a sequence of nonnegative real numbers satisfying the implicit recursive inequality (4.19), where ψ : IR_+ → IR_+ is a continuous and increasing function such that ψ(0) = 0, and µ is a positive constant. Then the estimate (4.20) holds for all k > k̄, with the constants specified below.

Proof. Consider two alternatives, H_1 and H_2, for any fixed k ∈ N, and let N_1 denote the set of indices for which the first alternative holds. The proof of (4.20) will be performed in three steps. 1) We claim that the set N_1 is unbounded. Indeed, if this were not true, there would exist N such that for all k > N hypothesis H_2 is satisfied. By (4.19), if N_1 were bounded, then taking limits as k → ∞ in the resulting chain of inequalities we would obtain a contradiction: the rightmost term goes to −∞, while the leftmost one is nonnegative. Thus N_1 must be unbounded.
2) Let {k_j} denote the ordered elements of N_1. In the second step we will prove the estimate (4.21), where C_1 is as in the lemma. If k_{j+1} = k_j + 1, then (4.21) is trivially satisfied.
Otherwise, take k ∈ [k_j + 1, k_{j+1} + 1]. We emphasize that indices k of the form k_j + 1 and those not of this form fall under the two different alternatives above. Then, for all k such that k_j + 2 ≤ k ≤ k_{j+1}, applying (4.19) iteratively, we obtain a chain of inequalities; therefore, denoting S_p := Σ_{i=1}^{p} i⁻¹, we get (4.22), where the second inequality holds because {α_k} is a nonnegative sequence, the third one by the definition of N_1, and the last one because ψ is an increasing function and, consequently, ψ⁻¹ is also increasing. In order to estimate the leftmost expression in (4.22), we use a result from [9]: S_p = ln p + E + η(p), where η(p) ∈ (0, 1/2) and E ≈ 0.577 is the Euler constant. Applying this equality in (4.22), we conclude that (4.21) is valid.
3) We show now that the estimate (4.20) holds: we use the fact that the sequence {α_k} is decreasing in the first inequality, the definition of N_1 in the second one, the assumption on k and the properties of ψ in the third one, and step (2) in the last one. Finally, the estimate (4.20) holds at least beginning from some k̄ with 0 ≤ k̄ ≤ C_2; using again [9], we obtain that all the results above are valid for k_1 ≤ C_2. The proof of the lemma is now complete.
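The mechanism of Lemma 4.6 can be illustrated numerically for ψ(t) = t² (the case of Lemma 4.8 below): iterating the tight version of (4.19) produces O(1/k) decay. The constant C = max(α_1, 2/µ) used here is our own choice for the check, not necessarily the C_1 of the lemma:

```python
# Numerical illustration of the mechanism behind Lemma 4.6 with psi(t) = t^2:
# the tight version of (4.19), a_{k+1} + mu*a_{k+1}^2 = a_k, decays like
# O(1/k). The constant C = max(a_1, 2/mu) is our own choice for the check.
import math

mu = 0.5
a = 10.0                       # a_1
C = max(a, 2.0 / mu)
ok = True
for k in range(1, 2000):
    ok = ok and a <= C / k + 1e-12
    # solve a_next + mu*a_next^2 = a for the positive root
    a = (-1.0 + math.sqrt(1.0 + 4.0 * mu * a)) / (2.0 * mu)
print(ok)
```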
Next we consider the particular case of inequality (4.19) with ψ(t) = t², which allows us to improve upon our previous estimate (Lemma 4.8).

Proof. We proceed by induction. The result holds for k = 1 by definition of C_1. Assume that it holds for k. Then the recursive inequality and the induction hypothesis imply (4.24), and it suffices to prove that the rightmost expression in (4.24) is less than or equal to C_1/(k + 1), which, after some algebra, reduces to an inequality guaranteed by the choice of C_1.

As we mentioned before, the convergence rate estimate that we present next is an application of Lemma 4.6.

Theorem 4.9. Let {x_k} be the sequence given by (4.18) with 0 < λ ≤ λ_k ≤ λ̄. Suppose that the set of minimizers of f, which we call X*, is nonempty, and take x* ∈ X*. Then results (i)-(vii) of Theorem 4.1 hold. Moreover, the estimate (4.26) holds for all k > k̄, where, if C_0 is a bound for {x_k} as in Theorem 4.1, the constants are given explicitly in terms of C_0.

Proof. Statements (i)-(vi) hold because of our choice of λ_k and the fact that this algorithm is a particular case of (4.2) for Ω = B.
For establishing (4.26) we show first the inequality (4.27), which will allow us to apply Lemma 4.6 to the sequence u_k := f(x_k) − f*. By the definition of the method and the gradient inequality, there exists w_{k+1} ∈ ∂f(x_{k+1}) such that (4.28) holds. Using Lemma 3.1 and the fact that the sequence is bounded by C_0, the previous inequality becomes (4.29). On the other hand, a further chain of inequalities, together with (4.29) and the fact that δ_{B*}(·) is an increasing function, gives (4.27). Now all the assertions of the theorem follow from Lemma 4.6 and the fact that ψ⁻¹(z) = R₂⁻¹ δ⁻¹_{B*}(z), and the proof is complete.

We recall that the spaces l_p, L_p and the Sobolev spaces W_m^p are uniformly convex and uniformly smooth for all p ∈ (1, ∞), and, denoting any of these spaces by B, the moduli admit the explicit estimates (4.30). In these examples, as well as in other spaces considered by Pisier in [19], it holds that δ_B(ε) ≥ Cε^γ, where γ ≥ 2 and C is a constant. Under such an assumption, we get from (4.29) and (4.30) an explicit recursion for u_k, and it follows from Lemma 4.6 that the corresponding estimate holds for all k larger than an explicitly bounded index k̄. For such spaces, we do not need a positive lower bound for λ_k.

Remark 4.10. Recalling that u_0 = f(x_0) − f*, (4.23) gives a precise relation between k_1, the first element of N_1, and the initial data. As a corollary, we conclude that if the functional value at the initial point is close to the minimum value f* of f, then k_1 cannot be too large.
In Theorem 4.9 we obtained the estimate (4.26) for an arbitrary uniformly convex and uniformly smooth Banach space B. Using Lemma 4.8 we can improve the mentioned estimate for a very wide family of Banach spaces, namely those satisfying δ_{B*}(ε) ≥ Cε², C = const. This family includes the Hilbert spaces, because there δ_{B*}(ε) = 1 − (1 − ε²/4)^{1/2} ≥ ε²/8.

Theorem 4.11. Suppose that all the conditions of Theorem 4.9 hold with 0 < λ_k ≤ λ̄. Then the improved estimate (4.31) holds.

Proof. Since δ_{B*}(ε) ≥ Cε², we obtain from (4.29) a recursion of the type covered by Lemma 4.8. Now (4.31) follows from Lemma 4.8 and the fact that u_1 ≤ u_0.

Corollary 4.12. Consider the spaces l_p, L_p and W_m^p, and suppose that 2 ≤ p < ∞. If 0 < λ_k ≤ λ̄, then, under the hypotheses of Theorem 4.9, the corresponding estimate holds for all k ≥ 1, where q = p/(p − 1).
An estimate similar to (4.31) was obtained in [10] for the particular case of a Hilbert space. Lemma 4.8 allows us to obtain a much simpler proof for this case. In a Hilbert space, J is the identity operator, and the method (4.18) takes the form of the classical iteration (1.3). Using (4.30), we obtain the corresponding recursion for u_k with ψ(t) = t², and Lemma 4.8 immediately gives the resulting estimate.
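In the Hilbert case the method (4.18) is the classical iteration (1.3), and for polyhedral f it even converges finitely, as recalled in the Introduction. A sketch with f(x) = |x| on IR, whose prox step is soft-thresholding:

```python
# In a Hilbert space (here R) the method (4.18) reduces to the classical
# proximal iteration (1.3). For the nonsmooth polyhedral f(x) = |x| the prox
# step has the closed form of soft-thresholding, and the iteration reaches
# the minimizer exactly after finitely many steps, consistent with
# Rockafellar's finite-convergence result quoted in the Introduction.
def prox_abs(a, lam):
    # argmin_x |x| + lam*(x - a)^2, threshold 1/(2*lam)
    t = 1.0 / (2.0 * lam)
    return max(abs(a) - t, 0.0) * (1.0 if a > 0 else -1.0)

x, lam = 5.0, 1.0
hits_zero = None
for k in range(1, 100):
    x = prox_abs(x, lam)
    if x == 0.0 and hits_zero is None:
        hits_zero = k
print(hits_zero)  # reaches the minimizer x* = 0 exactly after 10 steps
```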

Convergence analysis for uniformly convex functionals
We will now establish strong convergence in the case of uniformly convex functionals. In this section, a uniformly convex functional is understood as a functional f for which there exists a continuous and increasing function Ψ, with Ψ(0) = 0, such that (5.1) holds for all x ∈ B and all u ∈ ∂f(x), where x* is the minimizer of f on Ω.

Theorem 5.1. Suppose that f is a uniformly convex functional and take Ψ(·) as in (5.1). Then the sequence generated by (4.2) with 0 < λ_k ≤ λ̄ converges strongly to x*, and there exists k̄ ∈ [0, exp(2⁻¹λ̄D_0 + 1)] such that the estimate (5.2) holds.

Proof. By (4.3), the gradient inequality and the definition of Ψ(·), we get (5.3). Then the sequence {W(x_k, x*)} is nonnegative and decreasing, hence convergent. We show next that the properties of Ψ(·) and the previous inequality imply the strong convergence of {x_k}. Indeed, the following property of W(·, ·) can be found in [1], Theorem 7.5: the two-sided estimate (5.4) holds for any x, y ∈ B such that ‖x‖ ≤ C and ‖y‖ ≤ C. Applying the leftmost inequality in (5.4) to (5.3), and using the assumption on C_0, we obtain an inequality which can be rewritten in terms of a function Ψ̄ built from Ψ and ρ_B⁻¹. Observe that Ψ̄ is continuous and Ψ̄(0) = 0, because ρ_B⁻¹(0) = 0. Then we are in the conditions of Lemma 4.6 with α_k = W_k, ψ(t) = Ψ̄(t) and µ = 2/λ̄. This lemma implies that the resulting estimate holds for all k ≥ k̄, where k̄ ≤ exp(2⁻¹λ̄W_0 + 1). Combining now (5.4) with the previous inequality, we get a bound involving 8C_0² δ_B(·), which gives the strong convergence rate. The constant D_0 is obtained by applying formula (3.4) to the bound for k̄, and this establishes (5.2).

Remark 5.2. In the same way as in Section 4, the estimate (5.2) can be improved in the special case Ψ(t) = t². It is also obvious that if Ψ(t) = t, then W_k converges to 0 with a linear convergence rate.

Remark 5.3. In [2], explicit versions of gradient-type methods were considered for {λ_k} and {ε_k} such that lim_{k→∞} λ_k⁻¹ = 0 and 0 ≤ ε_k ≤ Cλ_k⁻¹ for some constant C.
The authors proved in that reference all the results of Theorem 4.1 for Banach spaces in which δ_B(ε) ≥ Cε². The same assumption on the parameters {λ_k} is still necessary for uniformly convex functionals in an arbitrary uniformly convex and uniformly smooth Banach space (see [3]).
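The linear rate for W_k mentioned in Remark 5.2 can be seen in the simplest Hilbert setting (a toy sketch of ours, not the Banach-space setting of the theorem): for the strongly convex f(x) = x² with minimizer x* = 0, the prox step contracts W_k = |x_k − x*|² by a fixed factor:

```python
# Toy Hilbert-space illustration of the linear rate noted in Remark 5.2:
# for f(x) = x^2 the prox step x_{k+1} = argmin x^2 + lam*(x - x_k)^2 has
# the closed form x_{k+1} = lam*x_k/(1 + lam), so W_k = |x_k - 0|^2 shrinks
# by the fixed factor (lam/(1 + lam))^2 at every iteration.
lam = 1.0
x = 3.0
ratio = (lam / (1.0 + lam)) ** 2     # per-step decrease factor of W_k
W_prev = x * x
linear = True
for _ in range(30):
    x = lam * x / (1.0 + lam)        # closed-form prox step
    W = x * x
    linear = linear and abs(W - ratio * W_prev) < 1e-12
    W_prev = W
print(linear)
```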


4.1. Constrained minimization problem: convergence analysis.

Let Ω be a closed and convex subset of B. Consider the problem min_{x∈Ω} f(x). (4.1)

Theorem 4.2. Under the hypotheses of Theorem 4.1, X* is nonempty if and only if the sequence {x_k} is bounded.

4.3. Unconstrained minimization problem: convergence rate estimate.

We study now the problem min_{x∈B} f(x).