Parallel synchronous algorithm for nonlinear fixed point problems

We give in this paper a convergence result concerning parallel synchronous algorithms for nonlinear fixed point problems with respect to the Euclidean norm in R^n. We then apply this result to some problems related to convex analysis, such as the minimization of functionals, the calculation of saddle points, and convex programming.


1 Introduction.
This study is motivated by the paper of Bahi [3], where a convergence result is given concerning parallel synchronous algorithms for linear fixed point problems using nonexpansive linear mappings with respect to a weighted maximum norm. Our goal is to extend this result to nonlinear fixed point problems, with respect to the Euclidean norm, where F : R^n → R^n is a nonlinear operator. Section 2 is devoted to a brief description of asynchronous parallel algorithms.
In section 3 we prove the main result concerning the convergence of the general algorithm, in the synchronous case, to a fixed point of a nonlinear operator from R^n to R^n. A particular case of this algorithm (the Jacobi algorithm) is applied in section 4 to the operator F = (I + T)^{-1}, called the proximal mapping associated with the maximal monotone operator T (see Rockafellar [9]).
2 Description of asynchronous algorithms.
Asynchronous algorithms are used in the parallel treatment of problems, taking into consideration the interaction of several processors. Write R^n as the product of α subspaces R^{n_i}, with n = n_1 + ... + n_α. All vectors x ∈ R^n considered in this study are split in the form x = (x_1, ..., x_α) where x_i ∈ R^{n_i}. Let R^{n_i} be equipped with the inner product ⟨., .⟩_i and the associated norm |.|_i = ⟨., .⟩_i^{1/2}. R^n will be equipped with the inner product ⟨x, y⟩ = Σ_{i=1}^α ⟨x_i, y_i⟩_i, for x, y ∈ R^n, and the associated Euclidean norm |x| = ⟨x, x⟩^{1/2}. It will be equipped also with the maximum norm defined by ||x||_∞ = max_{1≤i≤α} |x_i|_i. Let J = (J(p))_{p∈N} be a sequence of nonempty subsets of {1, ..., α} and let S = (s(p))_{p∈N}, with s(p) = (s_1(p), ..., s_α(p)) ∈ N^α, satisfy:
• ∀i ∈ {1, ..., α}, ∀p ∈ N, s_i(p) ≤ p and lim_{p→∞} s_i(p) = +∞;
• ∀i ∈ {1, ..., α}, the subset {p ∈ N, i ∈ J(p)} is infinite.
Consider an operator F = (F_1, ..., F_α) : R^n → R^n and define the asynchronous algorithm associated with F by (see Bahi et al. [1], El Tarazi [4]):

x_i^{p+1} = F_i(x_1^{s_1(p)}, ..., x_α^{s_α(p)}) if i ∈ J(p),
x_i^{p+1} = x_i^p if i ∉ J(p),     (2)
x^0 ∈ R^n given.

It will be denoted by (F, x^0, J, S). This algorithm describes the behaviour of an iterative process executed asynchronously on a parallel computer with α processors. At each iteration p + 1, the i-th processor computes x_i^{p+1} by using (2) (Bahi [2]). J(p) is the subset of the indexes of the components updated at the p-th step, and p − s_i(p) is the delay of the i-th processor when it computes the i-th block at the p-th iteration.

If we take s_i(p) = p, ∀p ∈ N, ∀i ∈ {1, ..., α}, then (2) describes a synchronous algorithm (without delay). During each iteration, every processor executes a number of computations that depend on the results of the computations of the other processors in the previous iteration. Within an iteration, each processor does not interact with the other processors; all interactions take place at the end of the iterations (Bahi [3]).

If we take s_i(p) = p, ∀p ∈ N, ∀i ∈ {1, ..., α} and J(p) = {1, ..., α}, ∀p ∈ N, then (2) describes the Jacobi algorithm. If we take s_i(p) = p, ∀p ∈ N, ∀i ∈ {1, ..., α} and J(p) = {p + 1 (mod α)}, ∀p ∈ N, then (2) describes the Gauss-Seidel algorithm. For more details about asynchronous algorithms see [1], [2], [3] and [4].

In the following theorem, Bahi [3] has shown the convergence of the sequence {x^p} defined by (2) in the synchronous linear case, i.e. F is a linear operator and s_i(p) = p, ∀p ∈ N.
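As an illustration, the synchronous case of scheme (2) can be sketched as follows. The operator F(x) = x/2 + b (a contraction with unique fixed point 2b) and the block sizes n_i = 1 are illustrative assumptions, not taken from the text; the two runs use the Jacobi and Gauss-Seidel choices of J(p) described above.

```python
import numpy as np

# Illustrative operator: F(x) = x/2 + b, a contraction whose unique
# fixed point is 2b.  Each block is a single component (n_i = 1).
b = np.array([1.0, -2.0, 3.0])
alpha = len(b)
F = lambda x: 0.5 * x + b

def synchronous_iterate(F, x0, J, n_iter):
    """Synchronous case of (2): s_i(p) = p, so every processor reads
    the same iterate x^p; only the blocks i in J(p) are updated."""
    x = x0.copy()
    for p in range(n_iter):
        y = F(x)                # candidate update F_i(x^p) for every block
        for i in J(p):
            x[i] = y[i]         # blocks outside J(p) keep their old value
    return x

x0 = np.zeros(alpha)
x_jacobi = synchronous_iterate(F, x0, lambda p: range(alpha), 100)  # J(p) = {1,...,alpha}
x_gs = synchronous_iterate(F, x0, lambda p: [p % alpha], 300)       # J(p) = {p+1 (mod alpha)}
# both runs converge to the fixed point 2b
```

Both index choices satisfy the condition that every block is updated infinitely often, and both runs approach the same fixed point.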

Remark 1
The hypothesis (h_0) means that the processors are synchronized and that all the components are updated infinitely often at the same iterations. This subsequence can be chosen by the programmer.
3 Convergence of the general algorithm.
We establish in this section the convergence of the general parallel synchronous algorithm to a fixed point of a nonlinear operator F : R^n → R^n with respect to the Euclidean norm defined in section 2. We recall that an operator F from R^n to R^n is said to be nonexpansive with respect to a norm ||·|| if ||F(x) − F(y)|| ≤ ||x − y|| for all x, y ∈ R^n. Then any parallel synchronous algorithm defined by (2) associated with the operator F converges to a fixed point x* of F.

Proof.
(i) We prove first that the sequence {x^p}_{p∈N} is bounded. Let u be a fixed point of F. Using the hypotheses on F, we have, ∀i ∈ {1, ..., α}, |x_i^{p+1} − u_i|_i ≤ ||x^p − u||_∞, hence ||x^{p+1} − u||_∞ ≤ ||x^p − u||_∞. This proves that the sequence {x^p}_{p∈N} is bounded with respect to the maximum norm, and then it is bounded with respect to the Euclidean norm.
(ii) The bounded sequence {x^p}_{p∈N} contains a subsequence, noted also {x^{p_k}}_{k∈N}, which converges in R^n to some x*. We show that x* is a fixed point of F. For this, we consider the sequence {y^p = x^p − F(x^p)}_{p∈N} and prove that lim_{k→∞} y^{p_k} = 0.
(iii) In (i) we have shown in particular that the sequence {||x^p − u||_∞}_{p∈N} is decreasing (and positive); it is therefore convergent. Taking u = x*, the subsequence {||x^{p_k} − x*||_∞}_{k∈N} converges to 0, hence the whole sequence {||x^p − x*||_∞}_{p∈N} converges to 0. This proves that x^p → x* with respect to the maximum norm ||·||_∞.
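The monotone decrease of ||x^p − u||_∞ used in step (i) can be checked numerically. The map below (cos applied componentwise, which is nonexpansive on R since |cos'| = |sin| ≤ 1, with the Dottie number as its unique fixed point) is an illustrative stand-in for F, not an operator from the text.

```python
import numpy as np

F = np.cos                       # nonexpansive on R, applied componentwise
u = 0.7390851332151607           # unique fixed point of cos (Dottie number)

x = np.array([2.0, -1.0, 0.3])
dists = []
for p in range(80):
    dists.append(np.max(np.abs(x - u)))   # ||x^p - u||_inf
    x = F(x)                              # synchronous Jacobi step x^{p+1} = F(x^p)
# dists is nonincreasing, and x converges to the fixed point u
```

The recorded distances decrease at every step, exactly the monotonicity exploited in the proof.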

Remark 2
We have used the hypothesis (h_2) to prove that the sequence {x^p}_{p∈N} is bounded. In the case of the parallel Jacobi algorithm, where J(p) = {1, ..., α}, ∀p ∈ N, we do not need this hypothesis, since in this case x^{p+1} = F(x^p), ∀p ∈ N, and (h_3) suffices to obtain the boundedness; hence the corollary:
Corollary 3 Any parallel Jacobi algorithm defined by (2) and associated with an operator F satisfying (h_3) and admitting a fixed point converges to a fixed point x* of F.

4 Solutions of maximal monotone operators.
In this section, we apply the parallel Jacobi algorithm to the proximal mapping F = (I + T)^{-1} associated with the maximal monotone operator T. We give first a general result concerning maximal monotone operators. Such operators have been studied extensively because of their role in convex analysis (minimization of functionals, min-max problems, convex programming, ...) and in certain partial differential equations (Rockafellar [9]). Let T be a multivalued maximal monotone operator defined from R^n to R^n. A fundamental problem is to determine an x* in R^n satisfying 0 ∈ Tx*, which will be called a solution of the operator T.
Theorem 4 Let T be a multivalued maximal monotone operator such that T^{-1}(0) ≠ ∅. Then every parallel Jacobi algorithm associated with the single-valued mapping F = (I + T)^{-1} converges in R^n to a solution x* of the problem 0 ∈ Tx.
Proof.
For any x ∈ R^n we have 0 ∈ Tx ⟺ x ∈ (I + T)x ⟺ x = (I + T)^{-1}x = F(x). Thus, the solutions of T are the fixed points of F, so the condition T^{-1}(0) ≠ ∅ implies the existence of a fixed point u of F in R^n. It remains to show that F verifies the condition (h_3) and to apply Corollary 3. Consider x_i ∈ R^n (i = 1, 2) and put y_i = F(x_i), so that x_i − y_i ∈ T(y_i). The monotonicity of T gives ⟨(x_1 − y_1) − (x_2 − y_2), y_1 − y_2⟩ ≥ 0, that is |F(x_1) − F(x_2)|^2 ≤ ⟨x_1 − x_2, F(x_1) − F(x_2)⟩.
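A concrete instance of Theorem 4, with an illustrative choice of T not taken from the text: for T = ∂f with f(x) = Σ_i |x_i|, T is maximal monotone, T^{-1}(0) = {0}, and the resolvent F = (I + T)^{-1} is the componentwise soft-thresholding map.

```python
import numpy as np

# Resolvent of T = ∂f with f(x) = sum_i |x_i|: solving x' + ∂f(x') ∋ x
# componentwise gives soft-thresholding with threshold 1.
def F(x):
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

x = np.array([5.0, -3.5, 0.2])
for p in range(10):        # parallel Jacobi iteration x^{p+1} = F(x^p)
    x = F(x)
# the iterates reach x* = 0, the unique solution of 0 ∈ Tx
```

Here each component shrinks toward 0 by one unit per step, so the solution of 0 ∈ Tx is reached after finitely many iterations.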

Minimization of functionals.
Corollary 5 Let f : R^n → R ∪ {+∞} be a lower semicontinuous convex function which is proper (i.e. not identically +∞). Suppose that the minimization problem min_{x∈R^n} f(x) has a solution. Then any parallel Jacobi algorithm associated with the single-valued mapping F = (I + ∂f)^{-1} converges to a minimizer of f in R^n.
Proof. In this case the subdifferential ∂f is maximal monotone, and the minimizers of f are exactly the solutions of ∂f, i.e. the points x with 0 ∈ ∂f(x). We then apply Theorem 4 to T = ∂f.
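A minimal sketch of Corollary 5 with an illustrative choice of f: for f(x) = ½|x − a|², we have ∂f(x) = {x − a}, and the resolvent solves x' + (x' − a) = x, i.e. F(x) = (x + a)/2.

```python
import numpy as np

a = np.array([1.0, 2.0, -3.0])        # minimizer of f(x) = 0.5*|x - a|^2
F = lambda x: (x + a) / 2.0           # F = (I + ∂f)^{-1} in closed form

x = np.zeros(3)
for p in range(60):                   # parallel Jacobi iteration x^{p+1} = F(x^p)
    x = F(x)
# x converges to the minimizer a of f
```

The distance to the minimizer is halved at every step, a geometric convergence specific to this quadratic choice of f.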

Saddle point.
In this paragraph, we apply Theorem 4 to the calculation of a saddle point of a functional L : R^n × R^p → [−∞, +∞]. Recall that a saddle point of L is an element (x*, y*) of R^n × R^p satisfying
L(x*, y) ≤ L(x*, y*) ≤ L(x, y*), ∀(x, y) ∈ R^n × R^p,
which is equivalent to
L(x*, y*) = min_{x∈R^n} max_{y∈R^p} L(x, y) = max_{y∈R^p} min_{x∈R^n} L(x, y).
Suppose that L(x, y) is convex lower semicontinuous in x ∈ R^n and concave upper semicontinuous in y ∈ R^p. Such functionals are called saddle functions in the terminology of Rockafellar [6]. Let T_L be the multifunction defined in R^n × R^p by
T_L(x, y) = {(u, v) ∈ R^n × R^p : (u, −v) ∈ ∂L(x, y)}.
If L is proper and closed in a certain general sense, then T_L is maximal monotone; see Rockafellar [6, 7]. In this case the global saddle points of L (with respect to minimizing in x and maximizing in y) are the elements (x, y) solutions of the problem (0, 0) ∈ T_L(x, y). That is,
(0, 0) ∈ T_L(x*, y*) ⟺ (x*, y*) = arg min_{x∈R^n} max_{y∈R^p} L(x, y).
We can then apply Theorem 4 to the operator T_L, so:
Corollary 6 Let L be a proper saddle function from R^n × R^p into [−∞, +∞] having a saddle point. Then any parallel Jacobi algorithm associated with the single-valued mapping F = (I + T_L)^{-1} from R^n × R^p into R^n × R^p converges to a saddle point of L.
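For an illustrative saddle function (not from the text), take L(x, y) = xy on R × R, whose unique saddle point is (0, 0). Then T_L(x, y) = (y, −x), and solving (I + T_L)(x', y') = (x, y) gives the resolvent in closed form.

```python
import numpy as np

def F(z):
    """Resolvent of T_L for L(x, y) = x*y:
    x' + y' = x and y' - x' = y  =>  x' = (x - y)/2, y' = (x + y)/2."""
    x, y = z
    return np.array([(x - y) / 2.0, (x + y) / 2.0])

z = np.array([4.0, -7.0])
for p in range(120):                  # parallel Jacobi iteration z^{p+1} = F(z^p)
    z = F(z)
# z converges to the saddle point (0, 0) of L
```

Note that plain gradient-descent-ascent diverges on this bilinear L, while the proximal iteration contracts (the resolvent has spectral radius 1/√2 here).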

Convex programming.
We consider now the convex programming problem
(P): minimize f_0(x) subject to f_i(x) ≤ 0, i = 1, ..., m,     (5)
where the functions f_i : R^n → R, i = 0, 1, ..., m, are convex and continuous. This problem can be reduced to an unconstrained one by means of the Lagrangian
L(x, y) = f_0(x) + Σ_{i=1}^m y_i f_i(x),     (6)
where x ∈ R^n and y ∈ (R_+)^m. We observe that L is a saddle function in the sense of [6, p. 363], due to the assumptions of convexity and continuity. The dual problem associated with (P) is
(D): maximize inf_{x∈R^n} L(x, y) over y ∈ (R_+)^m.
If (x*, y*) is a saddle point of the Lagrangian L, then x* is an optimal solution of the primal problem (P) and y* is an optimal solution of the dual problem (D). Let ∂L(x, y) be the subdifferential of L at (x, y) ∈ R^n × (R_+)^m, defined as the set of vectors (u, v) ∈ R^n × R^m such that u is a subgradient of the convex function L(., y) at x and v is a supergradient of the concave function L(x, .) at y (see Luque [5] and Rockafellar [6]). Then the operator T_L : (x, y) → {(u, v) : (u, −v) ∈ ∂L(x, y)} is maximal monotone (Rockafellar [6, Cor. 37.5.2]), so we can apply Theorem 4 to T_L.
Corollary 7 Suppose that the convex programming problem (P) defined by (5) has a solution. Then any parallel Jacobi algorithm associated with the single-valued mapping F = (I + T_L)^{-1} from R^n × R^m to R^n × R^m converges to a saddle point (x*, y*) of L, and so x* is a solution of the primal problem (P) and y* a solution of the dual problem (D).
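A worked instance of Corollary 7 under illustrative data: the program min x² subject to 1 − x ≤ 0 has Lagrangian L(x, y) = x² + y(1 − x) on R × R_+ and saddle point (x*, y*) = (1, 2). The resolvent (I + T_L)^{-1} is obtained by solving 3x' − y' = x together with y ∈ y' + (x' − 1) + N_{R_+}(y'), which splits into two branches according to the sign of the multiplier y'.

```python
import numpy as np

def F(z):
    """Resolvent of T_L for L(x, y) = x^2 + y*(1 - x), y >= 0."""
    x, y = z
    xp = (x + y + 1.0) / 4.0     # branch y' > 0 (multiplier active)
    yp = y + 1.0 - xp
    if yp <= 0.0:                # branch y' = 0 (multiplier inactive)
        xp, yp = x / 3.0, 0.0
    return np.array([xp, yp])

z = np.array([0.0, 0.0])
for p in range(200):             # parallel Jacobi iteration z^{p+1} = F(z^p)
    z = F(z)
# z converges to (1, 2): x* = 1 solves (P) and y* = 2 solves (D)
```

The primal limit x* = 1 sits on the constraint boundary, and the dual limit y* = 2 satisfies the stationarity condition 2x* − y* = 0.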

Variational inequality.
Let C be a nonempty closed convex subset of R^n. A simple formulation of the variational inequality problem is to find an x* ∈ C satisfying
⟨Ax*, z − x*⟩ ≥ 0, ∀z ∈ C,     (7)
where A : R^n → R^n is a single-valued monotone operator. This is equivalent to finding an x* ∈ C such that −Ax* ∈ N(x*), where N(x) is the normal cone to C at x, defined by (see Rockafellar [6, 9])
N(x) = {y ∈ R^n : ⟨y, x − z⟩ ≥ 0 ∀z ∈ C}.
Rockafellar [9] has considered the multifunction T defined in R^n by
Tx = Ax + N(x)     (8)
and shown in [8] that T is maximal monotone. The relation 0 ∈ Tx* thus reduces to −Ax* ∈ N(x*), i.e. ⟨−Ax*, x* − z⟩ ≥ 0, ∀z ∈ C, which is the variational inequality (7). Therefore the solutions of the operator T defined by (8) are exactly the solutions of the variational inequality (7). By using Theorem 4 we can write:
Corollary 8 Let A : R^n → R^n be a single-valued monotone and hemicontinuous operator such that the problem (7) has a solution. Then any parallel Jacobi algorithm associated with the single-valued mapping F = (I + T)^{-1}, where T is defined by (8), converges to a solution x* of the problem (7).
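A one-dimensional sketch of Corollary 8 under illustrative assumptions (a variational inequality over a closed convex set, here C = [0, 1] with A(x) = x − 2, neither of which comes from the text): the solution is x* = 1, and solving x' + (x' − 2) + n = x with n in the normal cone to C at x' gives the resolvent of T = A + N in closed form.

```python
import numpy as np

def F(x):
    """Resolvent of T = A + N_C with A(x) = x - 2 and C = [0, 1]:
    the unconstrained solution (x + 2)/2 is clipped to C, and the
    clipping residual is exactly the normal-cone term n."""
    return float(np.clip((x + 2.0) / 2.0, 0.0, 1.0))

x = -5.0
for p in range(50):              # parallel Jacobi iteration x^{p+1} = F(x^p)
    x = F(x)
# x converges to x* = 1, the solution of the variational inequality:
# (x* - 2)(z - x*) >= 0 holds for all z in [0, 1]
```

At the limit x* = 1 the residual n = −A(x*) = 1 lies in N(1) = [0, +∞), confirming the normal-cone characterization.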