A Modified Scaled Spectral-Conjugate Gradient-Based Algorithm for Solving Monotone Operator Equations

Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano, Kano, Nigeria
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Ga-Rankuwa, Pretoria 0204, South Africa
Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381, Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand
KMUTT Fixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
Department of Mathematics, Ekiti State University, Ado Ekiti 360001, Nigeria
Department of Mathematics, Faculty of Science, Usmanu Danfodiyo University Sokoto, Sokoto, Nigeria


Introduction
In this work, we propose an algorithm for solving the constrained monotone operator equation: find x ∈ C such that F(x) = 0, (1) where F: R^n ⟶ R^n is monotone and Lipschitz continuous and C ⊆ R^n is nonempty, closed, and convex. Problems of form (1) have attracted growing interest in recent years due to their appearance in many areas of science, engineering, and economics, for example, in forecasting of financial markets [1], constrained neural networks [2], economic and chemical equilibrium problems [3, 4], signal and image processing [5, 6], phase retrieval [7, 8], power flow equations [9], nonnegative matrix factorisation [10, 11], and many more. Some notable methods for finding a solution to (1) are Newton's method, the quasi-Newton method, the Gauss-Newton method, the Levenberg-Marquardt method, and their variants [12-15]. These methods are prominent due to their fast convergence. However, their convergence is only local, and they require computing and storing the Jacobian matrix at each iteration. In addition, a linear system must be solved at each iteration. These and other reasons make them unattractive, especially for large-scale problems. To avoid these drawbacks, methods that are globally convergent and do not require computing and storing the Jacobian matrix were introduced. Examples of such methods are the spectral gradient (SG) and conjugate gradient (CG) methods. However, SG and CG methods for solving (1) are usually combined with the projection method proposed in [16]. For instance, Zhang and Zhou [17] extended the spectral gradient method of Birgin and Martínez [18] for unconstrained optimization problems by combining it with the projection method and proposed a spectral gradient projection-based algorithm for solving (1). Dai et al. [19] extended the modified Perry CG method [20] for solving unconstrained optimization problems to solve (1) by combining it with the projection method.
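To make the setting concrete, the snippet below (an illustrative sketch, not code from the paper) evaluates the monotonicity gap ⟨F(x) − F(y), x − y⟩ for the componentwise operator F(x) = e^x − 1, which is monotone since each component is a nondecreasing function of the corresponding variable. The operator and sample count are illustrative choices.

```python
import numpy as np

def monotone_gap(F, x, y):
    """Left-hand side of the monotonicity inequality <F(x) - F(y), x - y>."""
    return np.dot(F(x) - F(y), x - y)

# F(x) = exp(x) - 1 componentwise is monotone: every term
# (exp(x_i) - exp(y_i)) * (x_i - y_i) is a product of two factors
# of the same sign, hence nonnegative.
F = lambda x: np.exp(x) - 1.0

rng = np.random.default_rng(0)
gaps = [monotone_gap(F, rng.normal(size=4), rng.normal(size=4))
        for _ in range(100)]
all_nonnegative = all(g >= 0.0 for g in gaps)  # True for a monotone F
```

A sampled check like this can only falsify monotonicity, never prove it; the inequality for this particular F holds analytically.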
Liu and Li [21] incorporated the Dai-Yuan (DY) [22] CG method into the projection method and proposed a spectral Dai-Yuan (SDY) projection method for solving nonlinear monotone equations. The method was shown to be globally convergent under appropriate assumptions. Furthermore, to popularize and boost the efficiency of the DY CG method, Liu and Feng [23] proposed a spectral DY-type CG projection method (PDY), where the spectral parameter is derived so that the direction is a descent direction. It is worth mentioning that all the methods above require the operator in (1) to be monotone. Recently, Li and Zheng [24] proposed scaled three-term derivative-free methods for solving (1), extending the method proposed by Bojari and Eslahchi [25]. However, to establish the convergence of their method, Li and Zheng assumed that the operator is uniformly monotone, which is a stronger condition. Some other related spectral gradient-type and spectral conjugate gradient-type methods for finding a solution to (1) were studied in [26-41] and the references therein.
In this work, motivated by the strong condition imposed on the operator by Li and Zheng [24], we seek to relax the condition on the operator from uniformly monotone to monotone. This is achieved by modifying the two search directions defined by Li and Zheng. In addition, the global convergence is established under the assumption that the operator is monotone and Lipschitz continuous. Numerical examples supporting the theoretical results are also given.
Notations: unless otherwise stated, the symbol ‖·‖ stands for the Euclidean norm on R^n, and F(x_k) is abbreviated as F_k. Furthermore, for a nonempty, closed, and convex set C ⊆ R^n, P_C[·] denotes the projection mapping from R^n onto C, given by P_C[x] = arg min{‖x − y‖ : y ∈ C}.
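The projection P_C has no closed form for a general convex set, but for simple sets it does. The sketch below (a minimal illustration, not code from the paper) computes P_C when C is a box, a common choice of constraint set in the numerical literature on problem (1).

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection P_C[x] onto the box C = {y : lower <= y <= upper}.

    For a general closed convex C the projection solves
    min_{y in C} ||x - y||; for a box it reduces to a
    componentwise clip of x onto the interval [lower, upper].
    """
    return np.minimum(np.maximum(x, lower), upper)

x = np.array([-2.0, 0.5, 3.0])
p = project_box(x, lower=np.zeros(3), upper=np.ones(3))
# p is the closest point of [0, 1]^3 to x
```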

Motivation and Algorithm
In this section, we begin by recalling a three-term spectral-conjugate gradient method for solving (1). Given an initial point x_0, the method generates a sequence {x_k} via the formula (2), where x_{k+1} and x_k are the current and previous points, respectively, α_k is the stepsize obtained via a line search, and d_k is the search direction defined in (3), where θ_k, β_k, and γ_k are parameters. Based on the three-term direction above, we propose modified scaled three-term derivative-free algorithms for solving (1). The algorithms are a modification of the two algorithms proposed by Li and Zheng [24]. The aim of the modification is to relax the uniformly monotone assumption on the operator: the search directions defined in [24] were shown to be bounded under the uniformly monotone assumption, and our main interest is to modify those directions and prove their boundedness without requiring it. The directions in [24] are defined as follows: STDF1 in (4) and STDF2 in (5). To obtain a lower bound for the term d_{k−1}^T y_{k−1}, Li and Zheng used the uniformly monotone assumption. So, in order to relax this condition, we replace the term d_{k−1}^T y_{k−1} in the directions defined by (4) and (5) with d_{k−1}^T w_{k−1}; in addition, the scaling parameters in (4) and (5) are adjusted accordingly, and s_{k−1} in (5) is replaced with d_{k−1}. Hence, we define the new directions PSTDF1 and PSTDF2 as in (7) and (8), respectively, where w_{k−1} is given by (10).
Remark 1.
From (10), a lower bound for the term d_{k−1}^T w_{k−1} is obtained without any assumption on the operator F.
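The generic three-term update recalled above combines the current residual with the previous direction and gradient difference, in a form commonly written d_k = −θ_k F_k + β_k d_{k−1} + γ_k y_{k−1}. The specific parameter formulas (elided in the display equations above) distinguish STDF from PSTDF; the sketch below therefore takes θ_k, β_k, γ_k as plain numbers with illustrative values, and is an assumed generic form rather than the paper's exact direction.

```python
import numpy as np

def three_term_direction(F_k, d_prev, y_prev, theta, beta, gamma):
    """Generic three-term spectral-CG direction
        d_k = -theta * F_k + beta * d_prev + gamma * y_prev.
    The parameter formulas are the ones defined in the paper's
    equations (3)-(8); here they are passed in as plain numbers.
    For k = 0 one takes d_0 = -F_0.
    """
    return -theta * F_k + beta * d_prev + gamma * y_prev

F_k = np.array([1.0, -2.0])
d_prev = np.array([0.5, 0.5])
y_prev = np.array([0.1, 0.1])
d_k = three_term_direction(F_k, d_prev, y_prev, theta=1.0, beta=0.2, gamma=0.3)
```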
Let Sol(C, F) be the solution set of (1) and assume that the following holds.

Assumption 1.
The constraint set C is nonempty, closed, and convex.

Assumption 2.
The operator F is monotone; that is, ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ R^n.

Assumption 3.
The operator F is Lipschitz continuous; that is, there exists a constant L > 0 such that ‖F(x) − F(y)‖ ≤ L‖x − y‖ for all x, y ∈ R^n.

In the following algorithm, we generate approximate solutions to problem (1) under Assumptions 1-3.

Algorithm 1.
Step 3. Compute z_k via (13). Step 4. If z_k ∈ C and ‖F(z_k)‖ ≤ tol, then stop. Else, compute x_{k+1} via the projection step (15). Step 5. Set k = k + 1 and repeat from Step 1.
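Steps 3 and 4 follow the projection framework of Solodov and Svaiter [16]: the trial point z_k = x_k + α_k d_k defines a hyperplane {x : ⟨F(z_k), x − z_k⟩ = 0} separating x_k from the solution set, and x_k is projected past it and back onto C. Since the exact update formula (15) is elided above, the sketch below uses the standard form of this step; it is an assumption about the shape of the update, not the paper's exact formula.

```python
import numpy as np

def hyperplane_projection_step(x_k, z_k, F_zk, project_C):
    """Standard Solodov-Svaiter-type update:
        x_{k+1} = P_C[ x_k - lambda_k * F(z_k) ],
    where lambda_k = <F(z_k), x_k - z_k> / ||F(z_k)||^2 projects x_k
    onto the separating hyperplane through z_k before projecting
    onto C.  project_C is any projection mapping onto C.
    """
    lam = np.dot(F_zk, x_k - z_k) / np.dot(F_zk, F_zk)
    return project_C(x_k - lam * F_zk)

# toy data with C = R^n, so P_C is the identity
x_k = np.array([1.0, 1.0])
z_k = np.array([0.5, 0.5])
F_zk = np.array([1.0, 0.0])
x_next = hyperplane_projection_step(x_k, z_k, F_zk, project_C=lambda v: v)
```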

Theoretical Results
In this section, we establish the convergence analysis of the proposed algorithm. We first require the following important lemmas; the one below shows that the proposed directions are descent directions.

Lemma 1.
The search directions defined by (7) and (8) satisfy the sufficient descent condition.
Proof. Multiplying both sides of (7) by F_k^T yields the descent inequality for PSTDF1, and multiplying both sides of (8) by F_k^T yields the one for PSTDF2. Hence, for all k, the directions defined by (7) and (8) satisfy the sufficient descent condition. □

The lemma below shows that the line search (14) is well-defined and that the stepsize is bounded away from zero.

Lemma 2 (see [5]). Suppose Assumptions 1-3 are satisfied. If {d_k}, {z_k}, and {x_k} are the sequences defined by (7), (13), and (15), respectively, then (i) for all k ≥ 0, there is α_k = ϑρ^i satisfying (14) for some i ∈ N ∪ {0}; (ii) the stepsize α_k obtained via (14) is bounded below by a positive constant.

Lemma 3 (see [5]). Suppose Assumptions 1-3 are fulfilled; then the sequences {z_k} and {x_k} defined by (13) and (15) are bounded. Furthermore, lim_{k⟶∞} ‖x_k − z_k‖ = 0.

Lemma 4 (see [5]). From Lemma 3, we have lim_{k⟶∞} α_k‖d_k‖ = 0.

Remark 2. Since {x_k} is bounded by Lemma 3 and F is continuous by Assumption 3, {F_k} is also bounded; that is, there exist c_1, c_2 > 0 such that c_1 ≤ ‖F_k‖ ≤ c_2 for all k. All is now set to establish the convergence of the proposed algorithm.
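The line search (14) itself is elided above. A form commonly used with this class of derivative-free projection methods, and consistent with the parameters ϑ, ρ, and t reported in the numerical section, is to take the largest α = ϑρ^i satisfying −F(x_k + αd_k)^T d_k ≥ tα‖d_k‖². The sketch below implements that assumed form; it is not guaranteed to be the paper's exact condition (14).

```python
import numpy as np

def derivative_free_linesearch(F, x_k, d_k, theta=1.0, rho=0.8, t=1e-4,
                               max_i=60):
    """Backtracking search: return the largest alpha = theta * rho**i,
    i = 0, 1, 2, ..., with
        -F(x_k + alpha * d_k)^T d_k >= t * alpha * ||d_k||**2.
    Defaults mirror the experiment parameters theta = 1, rho = 0.8,
    t = 1e-4 reported for the PSTDF algorithm.
    """
    dnorm2 = np.dot(d_k, d_k)
    alpha = theta
    for _ in range(max_i + 1):
        if -np.dot(F(x_k + alpha * d_k), d_k) >= t * alpha * dnorm2:
            return alpha
        alpha *= rho
    return alpha  # fall back to the smallest trial stepsize

F = lambda x: np.exp(x) - 1.0   # monotone test operator
x_k = np.array([1.0, -1.0])
d_k = -F(x_k)                   # initial direction d_0 = -F_0
alpha_k = derivative_free_linesearch(F, x_k, d_k)
```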

Theorem 1. Suppose Assumptions 1-3 are satisfied. If {x_k} is the sequence defined by (15), then lim inf_{k⟶∞} ‖F_k‖ = 0. Furthermore, the sequence {x_k} converges to a solution of problem (1).

Proof. Suppose, on the contrary, that lim inf_{k⟶∞} ‖F_k‖ ≠ 0. Then there is a constant ν > 0 such that ‖F_k‖ ≥ ν for all k ≥ 0. By (17), (18), and the Cauchy-Schwarz inequality, we obtain, for all k ≥ 0, a positive lower bound on ‖d_k‖. To complete the proof of the theorem, we need to show that the search directions d_k defined by (7) and (8) are bounded.

Numerical Examples on Monotone Operator Equations
This section demonstrates the computational efficiency of the PSTDF algorithm relative to the STDF algorithm [24]. For the PSTDF algorithm, PSTDF1 corresponds to the direction defined by (7) and PSTDF2 to the one defined by (8). Similarly, for the STDF algorithm, STDF1 and STDF2 correspond to (4) and (5), respectively. The parameters chosen for the implementation of the PSTDF algorithm are ϑ = 1, μ_1 = 1.9, μ_2 = 0.8, ρ = 0.8, and t = 10^{-4}. The parameters for the STDF algorithm are chosen as reported in [24]. The metrics considered are the number of iterations (NOI), the number of function evaluations (NFE), and the CPU time (TIME). We used eight test problems with dimensions n = 1000, 5000, 10000, 50000, and 100000 and five initial points, the last of which is (. . ., 2)^T. The algorithms were coded in MATLAB R2019a and run on a PC with an Intel(R) Core(TM) i3-7100U processor, 8 GB RAM, and a 2.40 GHz CPU. The iteration process is stopped whenever ‖F(x_k)‖ ≤ 10^{-5}; failure is declared if this condition is not satisfied after 1000 iterations. Table 1 lists the test problems considered, where F(x) = (f_1(x), f_2(x), . . ., f_n(x))^T and x = (x_1, x_2, . . ., x_n)^T. The results of the experiments in tabular form can be found at https://documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:77a9a900-2156-4344-a9d9-b42e3a3dc8e5. It can be observed from the results that the algorithms successfully solved all the problems considered without a single failure. However, to better illustrate the performance of each algorithm, we employ the Dolan and Moré [47] performance profiles and plot Figures 1-3, which show the performance of the algorithms based on NOI, NFE, and TIME, respectively. In terms of NOI (Figure 1), the best performing algorithm is PSTDF2 with 70% success, followed by PSTDF1 with 51% success. STDF1 and STDF2 each record less than 10% success.
Based on NFE (Figure 2), the best performing algorithm is PSTDF1 with around 42% success, followed by PSTDF2 with almost 40% success. STDF1 and STDF2 record 20% and around 15% success, respectively. Lastly, in terms of TIME (Figure 3), PSTDF2 performs better with around 50% success, followed by PSTDF1 with more than 30% success. STDF1 and STDF2 record around 20% and 5% success, respectively.
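The Dolan-Moré profile used in Figures 1-3 plots, for each solver, the fraction of problems it solves within a factor τ of the best solver's cost; the "success" percentages quoted above are the values at τ = 1. A minimal sketch of the construction, with hypothetical toy data rather than the paper's results:

```python
import numpy as np

def performance_profile(T):
    """Dolan-More performance profile from a (solvers x problems)
    matrix T of positive costs (e.g. NOI, NFE, or CPU time).
    Returns the ratio matrix r and a function rho(s, tau) giving
    the fraction of problems solver s solves within factor tau
    of the best solver on each problem.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=0)       # best cost per problem
    r = T / best               # performance ratios r[s, p] >= 1
    def rho(s, tau):
        return float(np.mean(r[s] <= tau))
    return r, rho

# hypothetical costs: 2 solvers on 4 problems
T = [[10, 20, 30, 40],
     [12, 18, 90, 40]]
r, rho = performance_profile(T)
# rho(s, 1) is the fraction of problems on which solver s is (tied) best
```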

Journal of Mathematics
Overall, we can conclude that PSTDF1 and PSTDF2 outperform STDF1 and STDF2 based on the metrics considered.

Conclusions
In this paper, a modified scaled algorithm based on the spectral-conjugate gradient method for solving nonlinear monotone operator equations was proposed. The algorithm replaces the stronger uniform monotonicity assumption on the operator in the work of Li and Zheng (2020) with plain monotonicity, which is weaker. Interestingly, the search directions were shown to be descent directions independently of the line search and without any monotonicity assumption (unlike in the work of Li and Zheng). Furthermore, the convergence results were established under monotonicity and Lipschitz continuity assumptions on the operator. Numerical experiments on some benchmark problems were conducted to illustrate the good performance of the proposed algorithm.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.