ESTIMATES OF DETERMINANTS AND OF THE ELEMENTS OF THE INVERSE MATRIX UNDER CONDITIONS OF OSTROVSKII THEOREM

In 1951 A. M. Ostrovskii published a remarkable theorem which is a far-reaching generalization of the well-known theorem of J. Hadamard concerning matrices with diagonal predominance [3]. The mentioned result of Ostrovskii is the following. Let A = (a_ij) be a square matrix of order n. Let, for a certain value of the parameter α, 0 < α < 1, the following conditions be satisfied:

$$ |a_{ii}| > p_i^{1-\alpha} q_i^{\alpha}, \quad i = 1, 2, \ldots, n, \eqno(1) $$

where

$$ p_i = \sum_{j \ne i} |a_{ij}|, \qquad q_i = \sum_{j \ne i} |a_{ji}|, \quad i = 1, 2, \ldots, n. \eqno(2) $$

Then the matrix A is regular. If we formally put α = 0 or α = 1 in (1), we arrive at the Hadamard theorem for matrices with predominance with respect to the rows in the first case and with respect to the columns in the second (see [2]). Thus, by introducing the parameter α, Ostrovskii succeeded in happily combining conditions of different types and, what is perhaps more important, he opened the way to the use of the Hölder inequality in investigations of tests of matrix regularity.
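As a quick numerical illustration (the matrix and the value of α below are our own example, not taken from the paper), the following Python sketch computes the off-diagonal row and column sums p_i and q_i and checks condition (1):

```python
# Check the Ostrovskii condition |a_ii| > p_i^(1-alpha) * q_i^alpha
# for a sample 3x3 matrix (an illustrative example, not from the paper).
A = [[4, 1, 0],
     [1, 5, 1],
     [0, 1, 6]]
alpha = 0.5
n = len(A)

def row_col_sums(A):
    """Return p_i (off-diagonal row sums) and q_i (off-diagonal column sums)."""
    n = len(A)
    p = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
    q = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]
    return p, q

p, q = row_col_sums(A)
ok = all(abs(A[i][i]) > p[i]**(1 - alpha) * q[i]**alpha for i in range(n))
print(ok)  # True: the sample matrix satisfies condition (1)
```

For this symmetric example p = q = [1, 2, 1], so condition (1) reduces to ordinary diagonal dominance.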
(Recall that a matrix is called regular if it has an inverse matrix. The notation of the sums in (2) means that the summation is taken over all indices j = 1, 2, ..., n, except j = i.) In the following presentation the magnitudes

$$ \sigma_i = \frac{p_i^{1-\alpha} q_i^{\alpha}}{|a_{ii}|}, \quad i = 1, 2, \ldots, n, \eqno(3) $$

are significant. In the accepted notation, the Ostrovskii conditions (1) can now be written shortly as

$$ \sigma_i < 1, \quad i = 1, 2, \ldots, n. \eqno(4) $$

Below we frequently use the Hölder inequality, which it is convenient to take in the form

$$ \Bigl| \sum_i x_i y_i \Bigr| \le \Bigl( \sum_i |x_i|^{1/\alpha} \Bigr)^{\alpha} \Bigl( \sum_i |y_i|^{1/\beta} \Bigr)^{\beta}, \eqno(5) $$

where α, β > 0 and α + β = 1. The equality sign in this inequality holds if and only if arg(x_i y_i) = const and the sequences |x_i|^{1/α} and |y_i|^{1/β} are proportional. In this paper we adopt the following order of presentation of the material: first we give a proof of the Ostrovskii theorem using a contraction condition (Theorem 1); then we give estimates (upper and lower) of the absolute value of the determinant of a matrix satisfying the Ostrovskii conditions and clarify when the obtained estimates become equalities (Theorem 2); and then we prove the central result, containing estimates of the absolute values of the elements of the inverse matrix, where for the diagonal elements it has been possible to obtain both upper and lower estimates (Theorem 3).
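A small numerical sanity check of the Hölder inequality in the form (5), with exponents 1/α and 1/β, α + β = 1 (the vectors are arbitrary illustrative data of our own choosing):

```python
# Verify |sum x_i y_i| <= (sum |x_i|^(1/alpha))^alpha * (sum |y_i|^(1/beta))^beta
# for sample data; alpha + beta = 1 as in (5).
alpha, beta = 0.5, 0.5
x = [1.0, -2.0, 3.0]
y = [0.5, 1.5, -1.0]

lhs = abs(sum(xi * yi for xi, yi in zip(x, y)))
rhs = (sum(abs(xi)**(1 / alpha) for xi in x))**alpha * \
      (sum(abs(yi)**(1 / beta) for yi in y))**beta
print(lhs <= rhs)  # True
```

With α = β = 1/2 this is just the Cauchy-Schwarz case: here lhs = 5.5 and rhs = sqrt(14 · 3.5) = 7.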
We write A = B + C, where B is the diagonal matrix with elements a_11, a_22, ..., a_nn. From (1) it follows that the diagonal elements of the matrix A are not equal to zero and, consequently, the matrix B is regular. So we can write A = B(I + S), where I is the identity matrix and S = B^{-1}C. Theorem 1 clarifies the geometric meaning of the Ostrovskii conditions. Before formulating it, we suppose that all the q_i are positive and introduce the norm

$$ \|x\| = \Bigl( \sum_{i=1}^{n} q_i |x_i|^{1/\alpha} \Bigr)^{\alpha}. \eqno(6) $$

Theorem 1. Let the Ostrovskii conditions (4) be satisfied. Then the matrix S is a contraction:

$$ \|S\| \le \sigma = \max_{1 \le i \le n} \sigma_i < 1, \eqno(7) $$

where σ_i is defined by (3) and the norm of the matrix S is induced by the norm (6).
Proof. Let y = Sx. Then

$$ y_i = \frac{1}{a_{ii}} \sum_{j \ne i} a_{ij} x_j, \quad i = 1, 2, \ldots, n. \eqno(8) $$

We estimate the sum on the right-hand side using the Hölder inequality:

$$ |y_i| \le \frac{1}{|a_{ii}|} \sum_{j \ne i} |a_{ij}|^{\beta} \bigl( |a_{ij}|^{\alpha} |x_j| \bigr) \le \frac{p_i^{\beta}}{|a_{ii}|} \Bigl( \sum_{j \ne i} |a_{ij}| |x_j|^{1/\alpha} \Bigr)^{\alpha}. \eqno(9) $$

Using (3), we obtain from (9) that

$$ q_i |y_i|^{1/\alpha} \le \sigma_i^{1/\alpha} \sum_{j \ne i} |a_{ij}| |x_j|^{1/\alpha}, \eqno(10) $$

from where, according to (7), we find

$$ q_i |y_i|^{1/\alpha} \le \sigma^{1/\alpha} \sum_{j \ne i} |a_{ij}| |x_j|^{1/\alpha}. \eqno(11) $$

The estimate (11) is true for every i = 1, 2, ..., n. Adding all these estimates, taking into account that Σ_{i≠j} |a_ij| = q_j, and raising both sides of the obtained inequality to the power α, we get

$$ \Bigl( \sum_{i=1}^{n} q_i |y_i|^{1/\alpha} \Bigr)^{\alpha} \le \sigma \Bigl( \sum_{j=1}^{n} q_j |x_j|^{1/\alpha} \Bigr)^{\alpha}, \eqno(12) $$

from where, by the definition (6) of the norm of a vector, the estimate ||Sx|| ≤ σ||x|| follows. Since the relation (12) is true for every vector x, we conclude that ||S|| ≤ σ, and the theorem is proved.
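The contraction property of Theorem 1 is easy to check numerically. The sketch below (the matrix, α, and the test vector are our own illustrative choices) computes σ = max σ_i and verifies ||Sx|| ≤ σ||x|| in the weighted norm (6):

```python
# Verify ||Sx|| <= sigma * ||x|| in the weighted norm (6),
# where S = B^{-1}C and sigma = max_i sigma_i (Theorem 1).
A = [[4, 1, 0],
     [1, 5, 1],
     [0, 1, 6]]
alpha = 0.5
n = len(A)
p = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
q = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]
sigma_i = [p[i]**(1 - alpha) * q[i]**alpha / abs(A[i][i]) for i in range(n)]
sigma = max(sigma_i)

def norm(x):
    """The norm (6): (sum_i q_i |x_i|^(1/alpha))^alpha."""
    return sum(qi * abs(xi)**(1 / alpha) for qi, xi in zip(q, x))**alpha

def apply_S(x):
    """y = Sx with S = B^{-1}C, i.e. y_i = (1/a_ii) * sum_{j != i} a_ij x_j."""
    return [sum(A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
            for i in range(n)]

x = [1.0, -2.0, 3.0]
y = apply_S(x)
print(norm(y) <= sigma * norm(x))  # True: the contraction bound holds here
```

For this matrix σ = 0.4, so S shrinks every vector by at least a factor 0.4 in the norm (6).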
From Theorem 1 it follows that the matrix I + S has an inverse and that the estimate ||(I + S)^{-1}|| ≤ 1/(1 − σ) holds (cf. [2, page 205]). Therefore the matrix A is regular, and from the obvious relation A^{-1} = (I + S)^{-1} B^{-1} the estimate

$$ \|A^{-1}\| \le \frac{\|B^{-1}\|}{1 - \sigma} \eqno(13) $$

is fulfilled.
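The bound ||(I + S)^{-1}|| ≤ 1/(1 − σ) can be illustrated by solving (I + S)z = y with the fixed-point iteration z ← y − Sz, which converges precisely because ||S|| ≤ σ < 1, and then comparing norms (the matrix and the right-hand side are our own example data):

```python
# Illustrate ||(I+S)^{-1} y|| <= ||y|| / (1 - sigma) via the iteration
# z_{k+1} = y - S z_k, convergent because ||S|| <= sigma < 1.
A = [[4, 1, 0],
     [1, 5, 1],
     [0, 1, 6]]
alpha = 0.5
n = len(A)
p = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
q = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]
sigma = max(p[i]**(1 - alpha) * q[i]**alpha / abs(A[i][i]) for i in range(n))

def norm(x):
    """The norm (6)."""
    return sum(qi * abs(xi)**(1 / alpha) for qi, xi in zip(q, x))**alpha

def apply_S(x):
    """y = Sx with S = B^{-1}C."""
    return [sum(A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
            for i in range(n)]

y = [1.0, 1.0, 1.0]
z = y[:]
for _ in range(200):                      # fixed-point iteration for (I+S)z = y
    Sz = apply_S(z)
    z = [yi - szi for yi, szi in zip(y, Sz)]

print(norm(z) <= norm(y) / (1 - sigma))   # True for this example
```

With σ = 0.4 the error contracts by a factor 0.4 per step, so 200 iterations are far more than enough for the residual to vanish to machine precision.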
Theorem 2. Let the Ostrovskii conditions (4) be satisfied. Then the estimates

$$ \prod_{i=1}^{n} |a_{ii}| (1 - \sigma_i) \le |\det A| \le \prod_{i=1}^{n} |a_{ii}| (1 + \sigma_i) \eqno(14) $$

are fulfilled. The equality sign, in both the upper and the lower estimate, is valid if and only if the conditions

$$ p_i^{1-\alpha} q_i^{\alpha} = 0, \quad i = 1, 2, \ldots, n, \eqno(15) $$

are satisfied. (We emphasize that the satisfaction of conditions (15) does not mean that the matrix A is diagonal.)

Proof. The proof is based on a known idea of [1, page 47]. Let the matrix D = (d_ij) be defined as follows:

$$ d_{ij} = \frac{a_{ij}}{a_{ii}(1 \pm \sigma_i)}, \quad i, j = 1, 2, \ldots, n, \eqno(16) $$

so that

$$ \det A = \prod_{i=1}^{n} a_{ii} (1 \pm \sigma_i) \det D, \eqno(17) $$

and our problem is reduced to estimating the absolute value of the determinant of the matrix D from above or from below, depending on the choice of the sign in (16). Let λ_1, λ_2, ..., λ_n be the complete system of eigenvalues of the matrix D. For estimating the determinant, we use the following well-known relation:

$$ \det D = \lambda_1 \lambda_2 \cdots \lambda_n, \eqno(18) $$

stating that the determinant of a matrix is the product of all its eigenvalues. Let λ be any eigenvalue of the matrix D, and let x be the corresponding eigenvector, so that Dx = λx. According to (16) we obtain

$$ \sum_{j \ne i} a_{ij} x_j + a_{ii} \bigl[ 1 - \lambda (1 \pm \sigma_i) \bigr] x_i = 0, \quad i = 1, 2, \ldots, n, \eqno(19) $$

from where it follows that

$$ \tilde{A} x = 0. \eqno(20) $$

The matrix Ã of the system of homogeneous linear equations (20) coincides with the matrix A, except for the diagonal elements, which are changed to

$$ \tilde{a}_{ii} = a_{ii} \bigl[ 1 - \lambda (1 \pm \sigma_i) \bigr], \quad i = 1, 2, \ldots, n. \eqno(21) $$

We show that no eigenvalue of the matrix D taken with the plus sign exceeds 1 in absolute value. Suppose the contrary: let |λ| > 1. Then, according to (21), we obtain

$$ |\tilde{a}_{ii}| \ge |a_{ii}| \bigl( |\lambda| (1 + \sigma_i) - 1 \bigr) > |a_{ii}| \sigma_i = p_i^{1-\alpha} q_i^{\alpha}, \eqno(22) $$

that is, for the matrix Ã the Ostrovskii conditions are fulfilled. Therefore the system (20) has only the zero solution, which contradicts the nontriviality of the eigenvector. Thus, by what has been proved, |λ_i| ≤ 1 for i = 1, 2, ..., n, from where, according to (18), the estimate |det D| ≤ 1 follows, and the relation (17) shows that the upper estimate in (14) holds.

We now show that no eigenvalue of the matrix D taken with the minus sign is less than 1 in absolute value. Suppose the contrary: let |λ| < 1. Then, according to (1) and (21), we find that

$$ |\tilde{a}_{ii}| \ge |a_{ii}| \bigl( 1 - |\lambda| (1 - \sigma_i) \bigr) > |a_{ii}| \sigma_i = p_i^{1-\alpha} q_i^{\alpha}, $$

that is, for the matrix Ã the Ostrovskii conditions are again fulfilled. Therefore the system (20) has only the zero solution, which contradicts the nontriviality of the eigenvector. Thus |λ_i| ≥ 1 for i = 1, 2, ..., n, from where, according to (18), the estimate |det D| ≥ 1 follows, and the relation (17) shows that the lower estimate in (14) has been established too.

We show now that the equality signs in (14) are possible if and only if conditions (15) are satisfied. Since the sufficiency of the mentioned conditions follows immediately from (14), we prove only their necessity, where, without loss of generality, we suppose in addition that all diagonal elements of A are positive.
If in (14) the equality sign is achieved in the upper (lower) estimate, then |det D| = 1, where the matrix D is taken with the plus (minus) sign. As we have just seen, in the case under consideration all eigenvalues of the matrix D do not exceed (are not less than) 1 in absolute value, and hence the equality

$$ |\det D| = 1 \eqno(23) $$

is possible if and only if all eigenvalues have absolute value equal to 1. Now we show that in fact the equality |det D| = 1 is possible only in the case when all eigenvalues of the matrix D are equal to 1:

$$ \lambda_1 = \lambda_2 = \cdots = \lambda_n = 1. \eqno(24) $$

Indeed, suppose that for some eigenvalue λ ≠ 1 we have |λ| = 1, and let u and v be, correspondingly, the real and imaginary parts of λ (so that u < 1). Then, according to (21), we find that

$$ |\tilde{a}_{ii}|^2 = a_{ii}^2 \bigl[ 1 - 2u(1 \pm \sigma_i) + (1 \pm \sigma_i)^2 \bigr] = a_{ii}^2 \bigl[ 2 (1 \pm \sigma_i)(1 - u) + \sigma_i^2 \bigr] > a_{ii}^2 \sigma_i^2, $$

that is, for the matrix Ã the Ostrovskii conditions are fulfilled. Therefore the system (20) has only the zero solution, in spite of the nontriviality of the eigenvector. Thus all eigenvalues of the matrix D are equal to 1. Since the sum of the eigenvalues of a matrix equals its trace, we obtain from (16)

$$ \sum_{i=1}^{n} \frac{1}{1 \pm \sigma_i} = n, \eqno(25) $$

where the upper (lower) sign corresponds to the matrix D taken with the plus (minus). It is not difficult to see that, due to the nonnegativity of the σ_i, each of the relations (25) (for the plus and the minus signs) is possible if and only if conditions (15) are satisfied. The theorem is thus proved.
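A numerical check of the two-sided determinant estimate (14), using an illustrative matrix of our own and a hand-coded 3x3 determinant:

```python
# Verify prod |a_ii|(1 - sigma_i) <= |det A| <= prod |a_ii|(1 + sigma_i)
# (estimate (14)) for a sample matrix.
A = [[4, 1, 0],
     [1, 5, 1],
     [0, 1, 6]]
alpha = 0.5
n = len(A)
p = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
q = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]
sig = [p[i]**(1 - alpha) * q[i]**alpha / abs(A[i][i]) for i in range(n)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

lower = 1.0
upper = 1.0
for i in range(n):
    lower *= abs(A[i][i]) * (1 - sig[i])
    upper *= abs(A[i][i]) * (1 + sig[i])

d = abs(det3(A))
print(lower <= d <= upper)   # True: 45 <= 110 <= 245
```

Here σ = (1/4, 2/5, 1/6), so the bounds 45 and 245 bracket the true determinant 110 comfortably.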

Theorem 3. Let the Ostrovskii conditions (4) be satisfied. Then for the elements of the inverse matrix the estimates (26) (two-sided, for the diagonal elements) and (27) (for the off-diagonal elements) are fulfilled, where the quantities Δ_i entering them are defined by (28) (the last notation means that the maximum is taken over all j = 1, 2, ..., n, except j = i).
Proof. We fix the number j. The components x_1, x_2, ..., x_n of the jth column of the matrix A^{-1} form a solution of the system

$$ \sum_{s \ne i} a_{is} x_s + a_{ii} x_i = 0, \quad i \ne j, \eqno(29) $$

$$ \sum_{s \ne j} a_{js} x_s + a_{jj} x_j = 1. \eqno(30) $$

From (29) we obtain, using the Hölder inequality (5), the estimate (31), from which (32) follows. We multiply both sides of (32) by q_i^α, divide by |a_ii| and use (3); further, we raise the left- and the right-hand sides of the obtained inequality to the power 1/α. Then we find that

$$ q_i |x_i|^{1/\alpha} \le \sigma_i^{1/\alpha} \sum_{s \ne i} |a_{is}| |x_s|^{1/\alpha}, \quad i \ne j. \eqno(33) $$

Similarly, from (30) we find that

$$ \frac{q_j}{|a_{jj}|^{1/\alpha}} \, |1 - a_{jj} x_j|^{1/\alpha} \le \sigma_j^{1/\alpha} \sum_{s \ne j} |a_{js}| |x_s|^{1/\alpha}. \eqno(34) $$

The main inequalities, (33) and (34), are now obtained and we can go further. First of all, we add all the inequalities (33); according to (28), we obtain (35). We then write out in detail the left- and the right-hand sides of this inequality, emphasizing on the left those terms which enter the estimate (34) and on the right the term containing x_j; this gives (36). Since, according to (4), σ_j < 1, from (36) we find (37). The numerator of the fraction on the right-hand side of (37) is clearly nonnegative; therefore (38) holds. Estimating the right-hand side of (34) by means of (38), we obtain (39). Cancelling q_j, dividing by |x_j|^{1/α} and raising both the left- and the right-hand sides to the power α, we come to the important formula (40). (If q_j = 0, then the system (29)-(30) has the evident solution x_i = 0 for i ≠ j and x_j = 1/a_jj, so the estimate (40) is obviously true because σ_j = 0; the value of |x_j| is positive, because in the opposite case it would follow from (39) that x_j = 1/a_jj, and we would have an explicit contradiction.) From the estimate (40) both the upper and the lower estimates in (26) follow immediately.
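The mechanism behind (40) can be illustrated numerically. For the sample matrix below (our own example, computed in exact rational arithmetic) we take the diagonal entries x_j = (A^{-1})_{jj} and form t_j = 1 − 1/(a_jj x_j); for this matrix |t_j| stays below σ_j, which yields two-sided bounds on |x_j| of the kind asserted in (26). (The paper's precise bound Δ_j from (28) is not reproduced here; σ_j serves only as an illustrative bound for this example.)

```python
# Illustrate the quantity t_j = 1 - 1/(a_jj * x_j) from the proof of
# Theorem 3, where x_j = (A^{-1})_{jj}. For this sample matrix
# |t_j| <= sigma_j, giving two-sided bounds on |x_j|.
from fractions import Fraction as F

A = [[F(4), F(1), F(0)],
     [F(1), F(5), F(1)],
     [F(0), F(1), F(6)]]
alpha = 0.5
n = len(A)
p = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
q = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]
sig = [float(p[i])**(1 - alpha) * float(q[i])**alpha / abs(A[i][i])
       for i in range(n)]

def inverse_diag(M):
    """Diagonal of the inverse of a 3x3 matrix via cofactors / determinant."""
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    c = [M[1][1] * M[2][2] - M[1][2] * M[2][1],
         M[0][0] * M[2][2] - M[0][2] * M[2][0],
         M[0][0] * M[1][1] - M[0][1] * M[1][0]]
    return [ci / det for ci in c]

xdiag = inverse_diag(A)                       # [29/110, 24/110, 19/110]
t = [1 - 1 / (A[j][j] * xdiag[j]) for j in range(n)]
print(all(abs(t[j]) <= sig[j] for j in range(n)))  # True for this matrix
```

Because t_j is small whenever σ_j is small, a_jj x_j stays close to 1, which is exactly what the two-sided diagonal estimates (26) express.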
In order to obtain estimates for the off-diagonal elements of the inverse matrix, we add inequality (34) to all the inequalities (33) (changing beforehand the index i in them to s), except the ith one (here we fix the number i as well). We obtain (41). We write out in detail the left- and the right-hand sides of (41), emphasizing the ith and the jth variables and the expression used in the estimate (33); this gives (42). Using again condition (4), from (42) we obtain (43). Since the numerator of the fraction on the right-hand side of (43) is nonnegative, we come to the estimate (44). This formula cannot yet be used for estimating x_i, because on its left-hand side the term |a_ij||x_j|^{1/α} is absent (see formula (33)). We coarsen inequality (44) by replacing σ_i^{1/α} in the first term on its right-hand side by 1. Then we can write (45). From (33) and (45) we obtain (46), from which the following important relation (47) follows. Since the derivative of the function (1 − u^{1/α})^α/(1 − u), considered on the half-interval [0, 1), is positive, this function is strictly increasing. Therefore, putting 1/(a_jj x_j) = 1 − t (note that, according to (40), |t| ≤ Δ_j), we can write (48). Now from (47) the intermediate estimate (49) follows, which we put, remembering the definition (3) of σ_i, into the form (50). The transition from (47) to (50) was made under the supposition that q_i > 0. In the case q_i = 0 (there are no grounds to exclude it), instead of inequality (33) we use the inequality (51) generating (33). Instead of the estimate (47), we now obtain (52), from which we come, as above, to the estimate (50) (note that in the case under consideration Δ_i = 0). As is easily seen, the transpose matrix A′ satisfies the Ostrovskii conditions too, with the index α changed to the conjugate index β = 1 − α. In this case the quantities σ_i′ for the matrix A′ coincide with the quantities σ_i for the matrix A.
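The closing observation, that the transpose satisfies the Ostrovskii conditions with the conjugate index β = 1 − α and with the same quantities σ_i, is easy to confirm numerically (the non-symmetric sample matrix and the value of α are our own choices):

```python
# Check that the transpose A' satisfies the Ostrovskii conditions with the
# conjugate index beta = 1 - alpha, and that its sigma_i coincide with
# those of A (sample non-symmetric matrix, our own example).
A = [[4, 2, 0],
     [1, 5, 1],
     [0, 1, 6]]
alpha = 0.3
beta = 1 - alpha
n = len(A)

def sigmas(M, a):
    """sigma_i = p_i^(1-a) * q_i^a / |m_ii| for the parameter a."""
    p = [sum(abs(M[i][j]) for j in range(n) if j != i) for i in range(n)]
    q = [sum(abs(M[j][i]) for j in range(n) if j != i) for i in range(n)]
    return [p[i]**(1 - a) * q[i]**a / abs(M[i][i]) for i in range(n)]

At = [[A[j][i] for j in range(n)] for i in range(n)]   # transpose of A
s_A = sigmas(A, alpha)
s_At = sigmas(At, beta)

print(all(abs(sa - st) < 1e-12 for sa, st in zip(s_A, s_At)))  # True
print(all(s < 1 for s in s_At))  # A' satisfies the Ostrovskii conditions
```

The coincidence is just the identity p_i(A′) = q_i(A), q_i(A′) = p_i(A), so that p_i(A′)^{1−β} q_i(A′)^β = p_i(A)^{1−α} q_i(A)^α.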
Because (A′)^{-1} = (A^{-1})′, we obtain from the estimate (50), applied to the transpose matrix, the estimate (53). We see that the quantity we are interested in admits two different estimates, (50) and (53). Therefore the estimate containing the minimum of the two expressions encountered is also true. We do not write out this estimate, in view of its awkwardness, but instead take up closely the stated problem of obtaining the announced estimate (27). If Δ_i = Δ_j, then each of the estimates (50) and (53) turns into (27). If Δ_i ≠ Δ_j, then we reason as follows. Since the derivative of the function φ_γ(u) = (1 − u)/(1 − u^{1/γ}), where 0 < γ < 1, considered on the half-interval [0, 1), is negative, this function is strictly decreasing. Therefore if, for example, Δ_i < Δ_j, then, putting γ = β, we obtain φ_β(Δ_i) > φ_β(Δ_j), from which the required estimate (27) follows.

Anatolij I. Perov