Latest Inversion-Free Iterative Scheme for Solving a Pair of Nonlinear Matrix Equations

In this work, the following system of nonlinear matrix equations is considered: X_1 + A∗X_1^{-1}A + B∗X_2^{-1}B = I and X_2 + C∗X_2^{-1}C + D∗X_1^{-1}D = I, where A, B, C, and D are arbitrary n × n matrices and I is the identity matrix of order n. Conditions for the existence of a positive-definite solution are given, together with a convergence analysis of the newly developed algorithm for finding the maximal positive-definite solution and its rate of convergence. Four examples are also provided to support our results.


Introduction and Preliminaries
Consider the system of matrix equations of the form (1) and (2), where Q is an n × n Hermitian positive-definite matrix (HPD, for short), A, B, C, and D are complex matrices of order n × n, Ω_1(X), Ω_2(X), P_1(X), P_2(X), Q_1(X), and Q_2(X) are mappings from the set of positive-definite matrices into itself, and A∗ is the conjugate transpose of A.
We can see that the above equations incorporate several linear as well as nonlinear matrix equations (NMEs, for short). Over the last few decades, many researchers have studied systems (1) and (2) with different choices of P_i, Q_i, and Ω_i, i = 1, 2.
For a matrix D ∈ H(n), s_1(D) ≥ s_2(D) ≥ · · · ≥ s_n(D) denote its singular values, and |‖D‖| denotes their sum, i.e., the trace norm of D. The Frobenius norm will be denoted by ‖D‖_F = (Σ_{i,j=1}^{n} |D_{ij}|²)^{1/2}. For P, Q ∈ H(n), P ≥ Q (resp. P > Q) indicates that P − Q is positive semidefinite (resp. positive definite). O and I stand for the zero and the identity matrix in H(n), respectively.
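The quantities above are all directly computable. The following sketch (ours, not part of the paper) implements the trace norm, the Frobenius norm, and the Loewner order P ≥ Q via the smallest eigenvalue of P − Q:

```python
import numpy as np

def trace_norm(D):
    """|||D|||: the sum of the singular values of D."""
    return np.linalg.svd(D, compute_uv=False).sum()

def frobenius_norm(D):
    """||D||_F = (sum_{i,j} |D_ij|^2)^(1/2)."""
    return np.sqrt((np.abs(D) ** 2).sum())

def loewner_geq(P, Q, tol=1e-12):
    """P >= Q in the Loewner order iff P - Q is positive semidefinite
    (P, Q Hermitian)."""
    return np.linalg.eigvalsh(P - Q).min() >= -tol

D = np.array([[3.0, 0.0], [0.0, 4.0]])
print(trace_norm(D))       # singular values are 4 and 3, so 7.0
print(frobenius_norm(D))   # sqrt(9 + 16) = 5.0
print(loewner_geq(D, np.eye(2)))  # True: D - I has eigenvalues 2 and 3
```

For diagonal matrices the singular values are the absolute diagonal entries, which makes the example easy to check by hand.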

Conditions in Support of the Existence of a Positive Definite Solution
We begin with two useful lemmas.
Now, let us consider the system of NMEs of the form (3). Taking Y = X_1^{-1} and Z = X_2^{-1}, the system of nonlinear matrix equations (3) is equivalent to the system (4). Then, Lemma 2 gives a bound which, in turn, implies that I ≤ Y and I ≤ Z. □ Lemma 3 (see Theorem 2.1 in [16]). Let U and V be positive operators acting on a Hilbert space H in such a way that the stated inequalities hold for any s ≥ 1.
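The displayed systems (3) and (4) were lost in extraction; they can be reconstructed from the equations in the abstract together with the substitution Y = X_1^{-1}, Z = X_2^{-1} (our reconstruction, not the original display):

```latex
X_1 + A^{*} X_1^{-1} A + B^{*} X_2^{-1} B = I, \qquad
X_2 + C^{*} X_2^{-1} C + D^{*} X_1^{-1} D = I \tag{3}
```

and, substituting $X_1 = Y^{-1}$ and $X_2 = Z^{-1}$,

```latex
Y^{-1} + A^{*} Y A + B^{*} Z B = I, \qquad
Z^{-1} + C^{*} Z C + D^{*} Y D = I \tag{4}
```

Note that a positive-definite solution (Y, Z) of (4) corresponds exactly to the inverse pair (X_1^{-1}, X_2^{-1}) of a positive-definite solution of (3).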
To prove (iii), suppose A is nonsingular. Then, from AA∗ < Y^{-1}, we obtain (11) and (12). Since s_n²(A)I ≤ AA∗ ≤ s_1²(A)I, where s_1(A) and s_n(A) are the maximal and the minimal singular values of the matrix A, respectively, (11), (12), and Remark 1 give the desired bound. For nonsingular matrices D, C, and B, the corresponding bounds are obtained similarly.
where the matrices W and U are nonsingular and the pair constructed from their columns is a solution of (4).

Journal of Mathematics
Example 1. We can find W, U, K_1, K_2, L_1, and L_2 (see Theorem 7.2.7, page 440, in [17]) for the following numerical experiment. It is also easy to verify that this data meets all of the requirements of Theorem 2.

Construction of Iteration Schemes
This section contains a new iteration scheme for the NMEs (4) in the context of [2, 11].
By pre- and postmultiplying the first and the second equations of (4) by Y and Z, respectively, we obtain (22). After some simple calculation, we have (23). Thus, to attain an HPDS of (4), we need to solve (23). From (23), the iterative scheme is as follows.

Convergence Analysis
This section contains the proof that the sequence (Y_n, Z_n) generated by Algorithm 1, with the initial conditions Y_0 = I and Z_0 = I, converges to the minimal positive-definite solution of equation (4).

Theorem 4. Suppose that the system of NMEs (4) has a PDS; then the sequence (Y_n, Z_n) generated by Algorithm 1 converges to the minimal PDS of (4).
Proof. Let (Y, Z) be any PDS of the system of NMEs (4). Using this condition in Lemma 1 together with the NMEs (4), we obtain that Y_{k+2} ≤ Y. Then, by this condition and Lemma 2, and using the result in Lemma 1 together with the NMEs (22) and the fact that Z_{i+1} > O, we obtain the corresponding bound. Also, from the NMEs (24), together with Lemma 1 and the fact that Y_k > O, the analogous inequality follows; this fact, together with the NMEs (24), Lemma 1, and the fact that Z_k > O, yields the remaining bound. Thus, by the principle of induction, the relations O < Y_n ≤ Y_{n+1} ≤ Y and O < Z_n ≤ Z_{n+1} ≤ Z hold for all n ∈ N∗. So, the sequence (Y_n, Z_n) is well defined, increasing, and bounded above, and hence convergent. Therefore, from Theorem 4, the sequence (Y_n, Z_n) generated by Algorithm 1 with the initialization Y_0 = I and Z_0 = I converges to the inverse of the maximal PDS of the NMEs (3).
Step 1: input A, B, C, D ∈ C^{n×n}. Take Y_0 = I and Z_0 = I as initialization and a tolerance ϵ ≥ 0. Set n := 0.

Rate of Convergence
Lemma 4 (see [18]). If 0 < θ ≤ 1 and U, V ∈ P(n) with U, V ≥ cI > O, then the stated inequality holds for every unitarily invariant norm. If (Y, Z) is a PDS of the matrix equation (4) obtained under Algorithm 1 and ϵ > 0 is arbitrary, then statements (A) and (B) below hold for all n large enough.
In the same way, we can prove (B).

Numerical Examples
Two examples are presented in this section in support of Algorithm 1. Take Residue = ‖Y_{n+1} − Y_n‖ + ‖Z_{n+1} − Z_n‖, tolerance = 1e−10, and the Frobenius norm throughout. For comparative analysis, a basic fixed-point algorithm (BFP, for short) has been used. These examples, together with tables and graphs showing the input data, solutions, iteration counts of the different schemes, errors, CPU time, average computational time, etc., are illustrated here. For better understanding, we have used line graphs, bar graphs, pie charts, and surface plots.
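The displayed formula for the BFP comparison scheme was lost in extraction. The following sketch is our reconstruction of what such a basic fixed-point iteration looks like, derived directly from the original system X_1 + A∗X_1^{-1}A + B∗X_2^{-1}B = I, X_2 + C∗X_2^{-1}C + D∗X_1^{-1}D = I, using the Residue and tolerance defined above; it is not the paper's Algorithm 1, which is inversion-free:

```python
import numpy as np

def bfp(A, B, C, D, tol=1e-10, max_iter=500):
    """Basic fixed-point iteration (a sketch):
       X1 <- I - A* X1^{-1} A - B* X2^{-1} B,
       X2 <- I - C* X2^{-1} C - D* X1^{-1} D,
    started from X1 = X2 = I, stopped when the Frobenius-norm
    Residue of successive iterates drops below tol."""
    n = A.shape[0]
    I = np.eye(n)
    X1, X2 = I.copy(), I.copy()
    for k in range(max_iter):
        X1_new = I - A.conj().T @ np.linalg.inv(X1) @ A \
                   - B.conj().T @ np.linalg.inv(X2) @ B
        X2_new = I - C.conj().T @ np.linalg.inv(X2) @ C \
                   - D.conj().T @ np.linalg.inv(X1) @ D
        residue = (np.linalg.norm(X1_new - X1, 'fro')
                   + np.linalg.norm(X2_new - X2, 'fro'))
        X1, X2 = X1_new, X2_new
        if residue <= tol:
            break
    return X1, X2, k + 1

# Coefficients with small norm so the maximal solution exists near I
# (this test data is ours, not the paper's).
A = B = C = D = 0.1 * np.eye(2)
X1, X2, iters = bfp(A, B, C, D)
```

With these diagonal coefficients the matrix recursion reduces to the scalar iteration x ← 1 − 0.02/x, which contracts quickly toward the maximal root of x² − x + 0.02 = 0.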

Example 2. Take
After applying Algorithm 1 with the initial conditions Y_0 = I and Z_0 = I, we obtain the solution pair (X_1, X_2). Example 3. In the current example, a new, randomly chosen set of coefficients A, B, C, D ∈ C^{n×n} is used; here, "r" and "fro" stand for the dimension and the Frobenius norm of the matrix, respectively. A different set of initial conditions has been chosen here. The result of this experiment using Algorithm 1 is shown in Table 2.
Example 4. Here, we consider some randomly generated matrices, where r = dimension of the matrices, rand(r) = random matrices of order r, and tol = 1e−10. After applying Algorithm 1 (Algo1) and the basic fixed-point method under the initial conditions Y_0 = I and Z_0 = I, we get Table 3, which shows the various outputs for different r. The 5th column of Table 3 and Figure 26 represent the average CPU time, through bar graphs, for different dimensions. Remark 3. After evaluating these examples with respect to different sets of parameters, we can infer from the above discussion that our algorithm is less expensive in terms of computation.
Remark 4. We can infer from the above discussions that the solutions are positive definite. [Figures: number of iterations for dim = 12; number of experiments for dim = 20.]

Conclusion
In this study, a new iterative algorithm has been developed. All numerical tests agree with the theoretical findings of this research. Based on the numerical results, we conclude that the new iterative approach is powerful and efficient in finding numerical solutions for a wide range of nonlinear matrix equations, including complex ones. It also produces very accurate results with fewer iterations and lower computational costs compared with the basic fixed-point approach.

Data Availability
No data were used to support the findings of the study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.