The indefinite inner product defined by

Nowadays, iterative methods are used extensively for solving general large sparse linear systems in many areas of scientific computing because they are easier to implement efficiently on high-performance computers than direct methods.

Projection methods for solving systems of linear equations have been known for some time. The initial development was done by A. de la Garza [

One process by which an approximate solution

In this paper, we introduce three iterative methods in a space with a hyperbolic inner product: the indefinite Arnoldi method, the indefinite Lanczos method (ILM), and the indefinite full orthogonalization method (IFOM), and we define new algorithms to run these hyperbolic versions. Through numerical examples, we compare these indefinite algorithms with their usual definite counterparts in terms of the number of iterations and the time required to run the algorithms.

This paper is organized as follows: in Section

Many applications require a nonstandard scalar product, which is usually defined by

Let

If

Let

Then, it follows that
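The indefinite inner product discussed above can be illustrated with a short sketch. Here we assume that the weight matrix `J` is a signature matrix diag(±1), which is the typical hyperbolic case; the function name `indefinite_inner` is ours, introduced only for illustration:

```python
import numpy as np

def indefinite_inner(x, y, J):
    """Indefinite (hyperbolic) inner product [x, y]_J = y^H J x.

    J is assumed to be a signature matrix, e.g. diag(1, ..., 1, -1, ..., -1).
    The product is nondegenerate but not positive definite, so [x, x]_J
    may be negative or even zero for a nonzero x (a J-neutral vector).
    """
    return np.conj(y) @ (J @ x)

# Example: in R^2 with J = diag(1, -1) (the Minkowski plane),
# the nonzero vector (1, 1) is J-neutral.
J = np.diag([1.0, -1.0])
x = np.array([1.0, 1.0])
print(indefinite_inner(x, x, J))  # 0.0
```

The existence of such J-neutral vectors is exactly what distinguishes the indefinite setting: normalization and orthogonalization can break down, which the algorithms below have to account for.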

In this section, we construct the indefinite Arnoldi method and then turn it into a practical algorithm.

Let a matrix

It is well known that the construction of a basis with Arnoldi’s method for the Krylov subspace

Choose a vector

Define

For

For

Compute

Compute

EndDo

EndDo
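The loop structure above can be sketched in Python roughly as follows. This is an illustrative reconstruction rather than the paper's exact algorithm: we assume the inner product [x, y]_J = y^H J x with a signature matrix J, normalize each basis vector so that [v_j, v_j]_J = ±1, and use a naive tolerance test for breakdown:

```python
import numpy as np

def indefinite_arnoldi(A, J, v0, m):
    """Sketch of an indefinite (J-orthogonal) Arnoldi process.

    Builds vectors v_1, ..., v_m spanning the Krylov subspace K_m(A, v0),
    J-orthogonalized with [x, y]_J = y^H J x and normalized so that
    [v_j, v_j]_J = +/- 1. Stops early if a (numerically) J-neutral
    vector is produced.
    """
    n = len(v0)
    V = np.zeros((n, m), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)

    s = np.conj(v0) @ (J @ v0)
    if abs(s) < 1e-14:
        raise ValueError("breakdown: J-neutral starting vector")
    V[:, 0] = v0 / np.sqrt(abs(s))

    for j in range(m):
        w = A @ V[:, j]
        # J-orthogonalize w against the previous basis vectors
        # (modified Gram-Schmidt with the indefinite product)
        for i in range(j + 1):
            sigma = np.conj(V[:, i]) @ (J @ V[:, i])     # = +/- 1
            H[i, j] = (np.conj(V[:, i]) @ (J @ w)) / sigma
            w = w - H[i, j] * V[:, i]
        s = np.conj(w) @ (J @ w)
        if abs(s) < 1e-12:
            break               # breakdown or invariant subspace found
        H[j + 1, j] = np.sqrt(abs(s))
        if j + 1 < m:
            V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

By construction, the columns of V are mutually J-orthogonal with [v_i, v_i]_J = ±1, and the Arnoldi-type relation A v_j = Σ_i h_ij v_i + h_{j+1,j} v_{j+1} holds exactly, just as in the standard Euclidean case.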

Assume that the indefinite Arnoldi algorithm does not stop before the

By considering the following expression, the proof is straightforward:

Define

Then, the following relations are valid:

In particular, if

Indeed, in general, (

Now, to see (

On the other hand, given that the vectors

According to the definition of

We have

In other words,

Therefore, relation (

Particularly, if

It is noteworthy that these concepts are used in [

The purpose of this section is to build an algorithm to solve the linear system:

This leads to

Now, if

Thus, relation (

Our explanations are summarized in Algorithm

Compute

Define the

For

Compute

For

EndDo

Compute

EndDo

Compute
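The IFOM steps above can be sketched as a self-contained routine. This is an illustrative reconstruction under our earlier assumption that J is a signature matrix: a J-orthogonal Krylov basis of K_m(A, r0) is built exactly as in the indefinite Arnoldi process, and the approximation x_m = x0 + V_m y_m is obtained from the small system H_m y_m = β e_1, mirroring standard FOM with the Euclidean product replaced by [x, y]_J = y^H J x:

```python
import numpy as np

def ifom(A, b, J, x0, m):
    """Sketch of an indefinite FOM (IFOM) iteration.

    Builds a J-orthogonal basis V of K_m(A, r0) with the hyperbolic
    product [x, y]_J = y^H J x (J a signature matrix), then solves the
    projected system H_m y = beta * e_1 and returns x0 + V y.
    """
    n = len(b)
    r0 = b - A @ x0
    V = np.zeros((n, m), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)

    beta = np.sqrt(abs(np.conj(r0) @ (J @ r0)))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            sigma = np.conj(V[:, i]) @ (J @ V[:, i])     # = +/- 1
            H[i, j] = (np.conj(V[:, i]) @ (J @ w)) / sigma
            w = w - H[i, j] * V[:, i]
        s = np.conj(w) @ (J @ w)
        if abs(s) < 1e-12:
            m = j + 1            # breakdown / invariant subspace: shrink
            break
        H[j + 1, j] = np.sqrt(abs(s))
        if j + 1 < m:
            V[:, j + 1] = w / H[j + 1, j]

    e1 = np.zeros(m)
    e1[0] = beta
    y = np.linalg.solve(H[:m, :m], e1)
    return x0 + V[:, :m] @ y
```

One can check that the J-Galerkin condition [b - A x_m, v_i]_J = 0 reduces to H_m y = β e_1 exactly as in the definite case, because the sign σ_1 of the first basis vector cancels.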

The residual vector of the approximate solution

By using relations (

We expect FOM to perform better than IFOM in terms of the number of iterations and the time required to run, because in the IFOM method, the product of entries of

Let

The IFOM algorithm (Algorithm

The FOM algorithm does the same with 149 iterations and at

Despite the superiority of FOM over IFOM, the IFOM algorithm has an important property, which is shown in the next section.

This section is devoted to the indefinite Lanczos method. As can be seen in the following, this method arises as a special case of the indefinite Arnoldi method in complex space when the matrix

A matrix

Assume that the indefinite Arnoldi method is applied to a

From the indefinite Arnoldi method, we have relation (

Thus,

Therefore, the resulting matrix

In fact, we have the following:

Thus,

By applying the hyperbolic inner product to both sides in

This implies that

With this explanation, the hyperbolic version of the Hermitian Lanczos algorithm can be formulated as given in Algorithm

Now, consider the linear system

Thus, using the above algorithm, Algorithm

Similar to what has already been proven for the IFOM algorithm, here also it can be seen that

The advantage of ILM (the indefinite Lanczos method) is that it solves some classes of linear systems with different coefficient matrices, for different choices of matrix

Choose an initial vector

Set

For

EndDo

Compute

Set

For

EndDo

Set

Compute
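The three-term recurrence at the heart of the ILM steps above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: it assumes A is J-Hermitian (J A = A^H J) so that, in the product [x, y]_J = y^H J x, each new Krylov vector only needs orthogonalizing against the two most recent basis vectors, and the Hessenberg matrix collapses to a tridiagonal T. Note that the signs σ_j = [v_j, v_j]_J = ±1 enter the recurrence, so T is tridiagonal but not symmetric in general:

```python
import numpy as np

def indefinite_lanczos(A, J, v0, m):
    """Sketch of an indefinite Lanczos (three-term) recurrence for a
    J-Hermitian matrix A, i.e. J A = A^H J."""
    V, alphas, betas, sigmas = [], [], [], []
    v = v0 / np.sqrt(abs(np.conj(v0) @ (J @ v0)))
    v_prev, beta_prev, sig_prev = np.zeros(len(v0)), 0.0, 1.0
    for j in range(m):
        sig = np.real(np.conj(v) @ (J @ v))              # = +/- 1
        V.append(v)
        sigmas.append(sig)
        Av = A @ v
        alpha = (np.conj(v) @ (J @ Av)) / sig
        alphas.append(alpha)
        # three-term recurrence; the sign ratio replaces the plain
        # beta_{j-1} of the standard Hermitian Lanczos process
        w = Av - alpha * v - (beta_prev * sig / sig_prev) * v_prev
        s = np.conj(w) @ (J @ w)
        if abs(s) < 1e-12:
            break                                        # breakdown
        beta = np.sqrt(abs(s))
        betas.append(beta)
        v_prev, sig_prev, beta_prev, v = v, sig, beta, w / beta
    k = len(alphas)
    T = np.zeros((k, k), dtype=complex)
    for j in range(k):
        T[j, j] = alphas[j]
        if j + 1 < k:
            T[j + 1, j] = betas[j]
            T[j, j + 1] = betas[j] * sigmas[j + 1] / sigmas[j]
    return np.column_stack(V), T
```

The short recurrence is what makes ILM cheap per iteration: each step touches only two previous vectors, instead of the full basis as in IFOM.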

Consider the linear system

The IFOM algorithm brings the linear system

The FOM does the same with 118 iterations and within

The ILM algorithm does the same with 130 iterations and within

This shows that ILM is more efficient than IFOM and FOM: although it needs more iterations, it requires less time.

Consider the assumptions of the previous example, except that

By IFOM: the number of iterations is 140, within

By FOM: the number of iterations is 136, within

By ILM: the number of iterations is 154, within

Consider the linear system

Then, to achieve the condition

By IFOM: the number of iterations is 175, within

By FOM: the number of iterations is 167, within

By ILM: the number of iterations is 190, within

As can be seen, the ILM method is superior to the FOM and IFOM methods. This is due to the short recurrence in its algorithm (three terms) (see Figure

For two

Using the above point, the number of multiplication operations required to perform steps (3)–(12) of the IFOM algorithm is equal to

However, the number of required multiplication operations to do steps (3)–(11) of the ILM algorithm is

Comparison of (

In the aforementioned algorithms, the inverses of the upper Hessenberg matrix

What was said above shows that the run speed of the


All data used to support the findings of this study are accessible and these data are cited at the relevant places within the text. The only exception is MATLAB codes for drawing the figures of the paper, which are also available from the corresponding author upon request.

The authors declare that they have no conflicts of interest.