On a Discrete Inverse Problem for Two Spectra



Introduction
Jacobi matrices (tridiagonal symmetric matrices) appear in a variety of applications. A distinguishing feature of Jacobi matrices is that they are related to three-term recursion equations (second-order linear difference equations). Therefore, these matrices can be viewed as the discrete analogue of Sturm-Liouville operators, and their investigation has many similarities with Sturm-Liouville theory [1].
An N × N real Jacobi matrix J is a matrix of the form

\[
J=\begin{pmatrix}
b_0 & a_0 & 0 & \cdots & 0 & 0\\
a_0 & b_1 & a_1 & \cdots & 0 & 0\\
0 & a_1 & b_2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & b_{N-2} & a_{N-2}\\
0 & 0 & 0 & \cdots & a_{N-2} & b_{N-1}
\end{pmatrix},
\tag{1.1}
\]

where for each n the entries a_n and b_n are arbitrary real numbers such that a_n is different from zero:

\[
a_n\neq 0,\qquad b_n\in\mathbb{R}.
\tag{1.2}
\]

Let J_1 be the truncated matrix obtained by deleting from J the last row and last column:

\[
J_1=\begin{pmatrix}
b_0 & a_0 & \cdots & 0\\
a_0 & b_1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & b_{N-2}
\end{pmatrix}.
\tag{1.3}
\]

Denote the eigenvalues of the matrices J and J_1 by \lambda_1, \ldots, \lambda_N and \mu_1, \ldots, \mu_{N-1}, respectively. The finite sequences \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} are called the two spectra of the matrix J. The subject of the present paper is the solution of the inverse problem consisting of the following parts.
(i) To see whether the matrix J is determined uniquely by its two spectra;

(ii) To indicate an algorithm for the construction of the matrix J from its two spectra;

(iii) To find necessary and sufficient conditions for two given sequences of real numbers \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} to be the two spectra of some matrix of the form (1.1) with entries from the class (1.2).

This problem was solved earlier in [2, 3]. In the present paper we offer another and, as it seems to us, more effective method of solution for this problem.
Other versions of the inverse problem for two spectra are investigated in [1, 4-9].
The paper consists, besides this introductory section, of two sections. Section 2 is auxiliary and briefly presents the solution of the inverse problem for finite Jacobi matrices in terms of the eigenvalues and normalizing numbers; a solution of this problem is given in [1, Section 4.6] and [10]. In Section 3, we solve the main problem formulated above. At the basis of this solution are the formulae

\[
\beta_k=\frac{1}{a\displaystyle\prod_{j=1}^{N-1}(\lambda_k-\mu_j)\prod_{\substack{j=1\\ j\neq k}}^{N}(\lambda_k-\lambda_j)},\qquad k=1,\ldots,N,
\tag{1.4}
\]

where

\[
a=\sum_{m=1}^{N}\Biggl[\prod_{j=1}^{N-1}(\lambda_m-\mu_j)\prod_{\substack{j=1\\ j\neq m}}^{N}(\lambda_m-\lambda_j)\Biggr]^{-1}.
\tag{1.5}
\]

These formulae express the normalizing numbers \beta_k of a finite Jacobi matrix in terms of two of its spectra. The formulae (1.4) and (1.5) give a conditional solution of the inverse problem in terms of two spectra (that is, a solution under the assumption that there exists a matrix of the form (1.1) which has the sequences \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} as two of its spectra), because once we know the numbers \{\lambda_k\}_{k=1}^{N} and \{\beta_k\}_{k=1}^{N}, we can form the matrix J by the prescription given in Section 2. Next, we give necessary and sufficient conditions for two sequences of real numbers \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} to be two spectra of a Jacobi matrix of the form (1.1) with entries in the class (1.2); that is, we solve the main problem of this paper. The conditions reduce to the following single and simple condition:

\[
\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\lambda_{N-1}<\mu_{N-1}<\lambda_N,
\tag{1.6}
\]

that is, the numbers \lambda_k and \mu_k interlace.
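For readers who want to experiment, the two spectra are easy to produce numerically. The following sketch (not from the paper; the matrix entries are arbitrary illustrative choices) builds a small Jacobi matrix with numpy, truncates it as in (1.3), and checks the interlacing property (1.6).

```python
import numpy as np

# Illustrative entries (arbitrary choices): a_n != 0, b_n real.
a = [1.0, 2.0, 0.5]          # off-diagonal entries a_0, a_1, a_2
b = [0.0, 1.0, -1.0, 2.0]    # diagonal entries b_0, ..., b_3

J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
J1 = J[:-1, :-1]             # delete the last row and last column, as in (1.3)

lam = np.linalg.eigvalsh(J)   # spectrum {lambda_k}, in ascending order
mu = np.linalg.eigvalsh(J1)   # spectrum {mu_k}, in ascending order

# Interlacing (1.6): lambda_1 < mu_1 < lambda_2 < ... < mu_{N-1} < lambda_N
print(all(lam[k] < mu[k] < lam[k + 1] for k in range(len(mu))))
```

Because a_n ≠ 0, the interlacing is strict, so the check prints True for any admissible choice of entries.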

Preliminaries on the Inverse Spectral Problem
In this section, we follow the author's paper [10]. Given a Jacobi matrix J of the form (1.1) with the entries (1.2), consider the eigenvalue problem Jy = \lambda y for a column vector y = \{y_n\}_{n=0}^{N-1}; it is equivalent to the second-order linear difference equation

\[
a_{n-1}y_{n-1}+b_ny_n+a_ny_{n+1}=\lambda y_n,\qquad n\in\{0,1,\ldots,N-1\},\quad a_{-1}=a_{N-1}:=1,
\tag{2.1}
\]

for \{y_n\}_{n=-1}^{N}, with the boundary conditions

\[
y_{-1}=0,\qquad y_N=0.
\tag{2.2}
\]

Denote by \{P_n(\lambda)\}_{n=-1}^{N} and \{Q_n(\lambda)\}_{n=-1}^{N} the solutions of (2.1) satisfying the initial conditions

\[
P_{-1}(\lambda)=0,\qquad P_0(\lambda)=1,
\tag{2.3}
\]
\[
Q_{-1}(\lambda)=1,\qquad Q_0(\lambda)=0.
\tag{2.4}
\]

For each n \ge 0, P_n(\lambda) is a polynomial of degree n and is called a polynomial of the first kind, and Q_n(\lambda) is a polynomial of degree n - 1 and is known as a polynomial of the second kind. The equality

\[
\det(J-\lambda I)=(-1)^{N}a_0a_1\cdots a_{N-2}P_N(\lambda)
\tag{2.5}
\]

holds, so that the eigenvalues of the matrix J coincide with the zeros of the polynomial P_N(\lambda). If P_N(\lambda_0)=0, then \{P_n(\lambda_0)\}_{n=0}^{N-1} is an eigenvector of J corresponding to the eigenvalue \lambda_0, and any eigenvector of J corresponding to \lambda_0 is a constant multiple of \{P_n(\lambda_0)\}_{n=0}^{N-1}. As shown in [10, Section 8], the equations

\[
P_N(\lambda)Q_{N-1}(\lambda)-P_{N-1}(\lambda)Q_N(\lambda)=1,
\tag{2.6}
\]
\[
P_N'(\lambda)P_{N-1}(\lambda)-P_N(\lambda)P_{N-1}'(\lambda)=\sum_{n=0}^{N-1}P_n^2(\lambda)
\tag{2.7}
\]

hold, where the prime denotes the derivative with respect to \lambda.
Since the real Jacobi matrix J of the form (1.1), (1.2) is self-adjoint, its eigenvalues are real. Let \lambda_0 be a zero of the polynomial P_N(\lambda). The zero \lambda_0 is an eigenvalue of the matrix J by (2.5), and hence it is real. Putting \lambda = \lambda_0 in (2.7) and using P_N(\lambda_0)=0, we get

\[
P_N'(\lambda_0)P_{N-1}(\lambda_0)=\sum_{n=0}^{N-1}P_n^2(\lambda_0).
\tag{2.8}
\]

The right-hand side of (2.8) is different from zero, because the polynomials P_n(\lambda) have real coefficients and hence are real for real values of \lambda, and besides P_0(\lambda)=1. Therefore P_N'(\lambda_0)\neq 0; that is, the zero \lambda_0 of the polynomial P_N(\lambda) is simple. Hence P_N(\lambda), being a polynomial of degree N, has N distinct zeros. Thus, any real Jacobi matrix J of the form (1.1), (1.2) has precisely N real and distinct eigenvalues.
Let R(\lambda)=(J-\lambda I)^{-1} be the resolvent of the matrix J (by I we denote the identity matrix of the needed dimension), and let e_0 be the N-dimensional column vector with the components 1, 0, \ldots, 0. The rational function

\[
w(\lambda)=-(R(\lambda)e_0,e_0)=\bigl((\lambda I-J)^{-1}e_0,e_0\bigr)
\tag{2.9}
\]

we call the resolvent function of the matrix J, where (\cdot,\cdot) stands for the standard inner product in \mathbb{C}^N. This function is also known as the Weyl-Titchmarsh function of J. In [10, Section 5] it is shown that the entries R_{nm}(\lambda) of the resolvent matrix R(\lambda)=(J-\lambda I)^{-1} are of the form

2.10
Therefore, according to (2.9) and using the initial conditions (2.3) and (2.4), we get

\[
w(\lambda)=-\frac{Q_N(\lambda)}{P_N(\lambda)}.
\tag{2.12}
\]

We will often use the following well-known simple lemma; we state it here for easy reference.

Lemma 2.1. Let A(\lambda) and B(\lambda) be polynomials with complex coefficients, \deg A(\lambda)\le N-1, and

\[
B(\lambda)=b\prod_{j=1}^{N}(\lambda-z_j),
\]

where z_1,\ldots,z_N are distinct complex numbers and b is a nonzero complex number. Then there exist uniquely determined complex numbers a_1,\ldots,a_N such that

\[
\frac{A(\lambda)}{B(\lambda)}=\sum_{k=1}^{N}\frac{a_k}{\lambda-z_k}
\tag{2.13}
\]

for all values of \lambda different from z_1,\ldots,z_N. The numbers a_k are given by the equation

\[
a_k=\frac{A(z_k)}{B'(z_k)}.
\tag{2.14}
\]

Proof. Define the polynomial

\[
F(\lambda)=A(\lambda)-\sum_{k=1}^{N}a_k\frac{B(\lambda)}{\lambda-z_k},
\]

where a_k is defined by (2.14). Obviously F(\lambda) is a polynomial and \deg F(\lambda)\le N-1. Since B(\lambda)/(\lambda-z_k) vanishes at z_j for k\neq j and equals B'(z_j) at z_j for k=j, we have F(z_j)=A(z_j)-a_jB'(z_j)=0 for all j\in\{1,\ldots,N\}. Thus, the polynomial F(\lambda) of degree \le N-1 has N distinct zeros z_1,\ldots,z_N. Then F(\lambda)\equiv 0 and we get

\[
A(\lambda)=\sum_{k=1}^{N}a_k\frac{B(\lambda)}{\lambda-z_k}.
\]

This proves (2.13). Note that the decomposition (2.13) is unique, since for the a_k in this decomposition (2.14) necessarily holds.
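Lemma 2.1 is the standard partial-fraction decomposition, and formula (2.14) is easy to check numerically. In the sketch below the polynomials and roots are arbitrary illustrative choices (with b = 1); numpy's `poly1d` handles evaluation and differentiation.

```python
import numpy as np

# Distinct roots z_1, ..., z_N and a polynomial A with deg A <= N - 1.
z_roots = np.array([1.0, 2.0, 4.0])
A = np.poly1d([3.0, -1.0, 2.0])          # A(z) = 3z^2 - z + 2
B = np.poly1d(np.poly(z_roots))          # monic B(z) with the given roots (b = 1)

a = A(z_roots) / B.deriv()(z_roots)      # coefficients a_k from (2.14)

lam = 0.37                               # any point away from the roots
lhs = A(lam) / B(lam)
rhs = np.sum(a / (lam - z_roots))        # partial-fraction expansion (2.13)
print(np.isclose(lhs, rhs))
```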
Denote by \lambda_1,\ldots,\lambda_N all the zeros of the polynomial P_N(\lambda) (which coincide, by (2.5), with the eigenvalues of the matrix J and are therefore real and distinct):

\[
P_N(\lambda)=c\prod_{k=1}^{N}(\lambda-\lambda_k),
\]

where c is a nonzero constant. Therefore, applying Lemma 2.1 to (2.12), we get for the resolvent function w(\lambda) the following decomposition:

\[
w(\lambda)=\sum_{k=1}^{N}\frac{\beta_k}{\lambda-\lambda_k},
\tag{2.21}
\]

where

\[
\beta_k=-\frac{Q_N(\lambda_k)}{P_N'(\lambda_k)}.
\tag{2.22}
\]

Further, putting \lambda=\lambda_k in (2.6) and (2.7) and taking into account that P_N(\lambda_k)=0, we get

\[
P_{N-1}(\lambda_k)Q_N(\lambda_k)=-1,
\tag{2.23}
\]
\[
P_N'(\lambda_k)P_{N-1}(\lambda_k)=\sum_{n=0}^{N-1}P_n^2(\lambda_k),
\tag{2.24}
\]

respectively. It follows from (2.23) that Q_N(\lambda_k)\neq 0 and therefore \beta_k\neq 0. Comparing (2.22), (2.23), and (2.24), we find that

\[
\beta_k=\frac{1}{\sum_{n=0}^{N-1}P_n^2(\lambda_k)},
\tag{2.25}
\]

whence we obtain, in particular, that \beta_k>0.
Since \{P_n(\lambda_k)\}_{n=0}^{N-1} is an eigenvector of the matrix J corresponding to the eigenvalue \lambda_k, it is natural, in view of formula (2.25), to call \beta_k the normalizing number of the matrix J corresponding to the eigenvalue \lambda_k.
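Formula (2.25) can be checked numerically: the normalizing numbers are simultaneously the squared first components of the orthonormal eigenvectors and the reciprocals of \sum_n P_n^2(\lambda_k). A sketch with an arbitrary illustrative matrix; the recursion below encodes (2.1) with the convention a_{-1} = 1.

```python
import numpy as np

# Illustrative Jacobi matrix (arbitrary entries with a_n != 0).
a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, -1.0, 2.0]
N = len(b)
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam, V = np.linalg.eigh(J)

# beta_k as squared first components of the orthonormal eigenvectors ...
beta = V[0, :] ** 2

# ... and, equivalently (2.25), beta_k = 1 / sum_n P_n(lambda_k)^2, where the
# first-kind polynomials obey P_{-1} = 0, P_0 = 1 and
# a_n P_{n+1} = (lambda - b_n) P_n - a_{n-1} P_{n-1}, with a_{-1} := 1.
def beta_from_polynomials(x):
    p_prev, p = 0.0, 1.0
    total = p * p
    for n in range(N - 1):
        a_prev = a[n - 1] if n > 0 else 1.0
        p_prev, p = p, ((x - b[n]) * p - a_prev * p_prev) / a[n]
        total += p * p
    return 1.0 / total

print(np.allclose(beta, [beta_from_polynomials(x) for x in lam]))
print(np.isclose(beta.sum(), 1.0))
```

The second check anticipates (2.28): the normalizing numbers always sum to 1.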
The collection of the eigenvalues and normalizing numbers

\[
\{\lambda_k,\ \beta_k\ (k=1,\ldots,N)\}
\tag{2.26}
\]

of the matrix J of the form (1.1), (1.2) is called the spectral data of this matrix. Determination of the spectral data of a given Jacobi matrix is called the direct spectral problem for this matrix.
Thus, the spectral data consist of the eigenvalues and the associated normalizing numbers obtained by decomposing the resolvent (Weyl-Titchmarsh) function into partial fractions over the eigenvalues. The resolvent function w(\lambda) of the matrix J can be computed by using (2.12). Another convenient formula for computing the resolvent function is (see [10, Section 5])

\[
w(\lambda)=\frac{\det(\lambda I-J^{(1)})}{\det(\lambda I-J)},
\tag{2.27}
\]

where J^{(1)} is the matrix obtained from J by deleting the first row and first column of J.
It follows from (2.27) that \lambda w(\lambda) tends to 1 as \lambda\to\infty. Therefore, multiplying (2.21) by \lambda and then passing to the limit as \lambda\to\infty, we find

\[
\sum_{k=1}^{N}\beta_k=1.
\tag{2.28}
\]
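Both descriptions of the resolvent function, the determinant ratio (2.27) and the partial-fraction expansion (2.21), can be compared at a sample point; a sketch with arbitrary illustrative entries:

```python
import numpy as np

a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, -1.0, 2.0]
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam, V = np.linalg.eigh(J)
beta = V[0, :] ** 2

z = 5.3  # any point away from the spectrum
# w(z) as a ratio of characteristic determinants, cf. (2.27), where the
# numerator uses J with its first row and column deleted ...
w_det = np.linalg.det(z * np.eye(3) - J[1:, 1:]) / np.linalg.det(z * np.eye(4) - J)
# ... and as the partial-fraction expansion (2.21) over the spectral data.
w_pf = np.sum(beta / (z - lam))
print(np.isclose(w_det, w_pf))
```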
The inverse spectral problem is stated as follows.
(i) To see whether it is possible to reconstruct the matrix J from its spectral data (2.26) and, if it is possible, to describe the reconstruction procedure;

(ii) To find necessary and sufficient conditions for a given collection (2.26) to be the spectral data of some matrix J of the form (1.1) with entries belonging to the class (1.2).
The solution of this problem is well known (see [1, Section 4.6] and [10]); let us state the final result.
Given a collection (2.26), where \lambda_1,\ldots,\lambda_N are real and distinct and \beta_1,\ldots,\beta_N are positive, define the numbers

\[
s_l=\sum_{k=1}^{N}\beta_k\lambda_k^{\,l},\qquad l=0,1,2,\ldots,
\tag{2.29}
\]

and using these numbers introduce the determinants

\[
D_n=\begin{vmatrix}
s_0 & s_1 & \cdots & s_n\\
s_1 & s_2 & \cdots & s_{n+1}\\
\vdots & \vdots & \ddots & \vdots\\
s_n & s_{n+1} & \cdots & s_{2n}
\end{vmatrix},\qquad n=0,1,2,\ldots.
\tag{2.30}
\]
The determinants D_n satisfy D_n>0 for n\in\{0,1,\ldots,N-1\} and D_n=0 for n\ge N (Lemma 2.2 below). To prove that D_n=0 for n\ge N, let us define the linear functional \Omega on the linear space of all polynomials in \lambda with complex coefficients as follows: if G(\lambda) is a polynomial, then the value \Omega(G) of the functional \Omega on the element (polynomial) G is

\[
\Omega(G)=\sum_{k=1}^{N}\beta_kG(\lambda_k).
\tag{2.35}
\]

Note that \Omega(\lambda^l)=s_l for l=0,1,2,\ldots by (2.29).
Let m\ge 0 be a fixed integer and set

\[
T(\lambda)=\prod_{k=1}^{N}(\lambda-\lambda_k)=t_0+t_1\lambda+\cdots+t_{N-1}\lambda^{N-1}+\lambda^{N}.
\]

Since T(\lambda_k)=0 for k=1,\ldots,N, we have \Omega(\lambda^{m}T(\lambda))=0, that is,

\[
t_0s_m+t_1s_{m+1}+\cdots+t_{N-1}s_{m+N-1}+s_{m+N}=0.
\tag{2.39}
\]

Therefore, (0,\ldots,0,t_0,t_1,\ldots,t_{N-1},1), with m zeros at the beginning, is a nontrivial solution of the homogeneous system of linear algebraic equations

\[
\sum_{j=0}^{m+N}s_{i+j}x_j=0,\qquad i=0,1,\ldots,m+N,
\]

with the unknowns x_0,x_1,\ldots,x_m,x_{m+1},\ldots,x_{m+N-1},x_{m+N} (row i of the system is (2.39) applied with m replaced by m+i). Therefore, the determinant of this system, which coincides with D_{m+N}, must be equal to zero.
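The dichotomy of Lemma 2.2 (D_n > 0 for n \le N-1 and D_n = 0 for n \ge N) can be observed numerically with arbitrary illustrative spectral data:

```python
import numpy as np

# Illustrative spectral data (arbitrary): distinct real lambda_k and positive
# beta_k summing to 1, here with N = 3.
lam = np.array([-1.0, 0.5, 2.0])
beta = np.array([0.2, 0.5, 0.3])
N = len(lam)

# Moments (2.29) and Hankel determinants (2.30).
s = np.array([np.sum(beta * lam ** l) for l in range(2 * (N + 2))])
Ds = [np.linalg.det(np.array([[s[i + j] for j in range(n + 1)]
                              for i in range(n + 1)])) for n in range(N + 2)]

print([d > 1e-8 for d in Ds[:N]])       # D_0, D_1, D_2 are positive
print([abs(d) < 1e-8 for d in Ds[N:]])  # D_3, D_4 vanish (up to roundoff)
```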
Theorem 2.3. Let an arbitrary collection (2.26) of numbers be given. In order for this collection to be the spectral data of a Jacobi matrix J of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following two conditions be satisfied:

(i) the numbers \lambda_1,\ldots,\lambda_N are real and distinct;

(ii) the numbers \beta_1,\ldots,\beta_N are positive and such that \sum_{k=1}^{N}\beta_k=1.

Under the conditions (i) and (ii), we have D_n>0 for n\in\{0,1,\ldots,N-1\}, and the entries a_n and b_n of the matrix J for which the collection (2.26) is the spectral data are recovered by the formulae

\[
a_n=\pm\frac{\sqrt{D_{n-1}D_{n+1}}}{D_n},\qquad n\in\{0,1,\ldots,N-2\},\quad D_{-1}:=1,
\tag{2.41}
\]
\[
b_n=\frac{\Delta_n}{D_n}-\frac{\Delta_{n-1}}{D_{n-1}},\qquad n\in\{0,1,\ldots,N-1\},\quad \Delta_{-1}:=0,
\tag{2.42}
\]

where D_n is defined by (2.30) and (2.29), and \Delta_n is the determinant obtained from the determinant D_n by replacing its last column by the column with the components s_{n+1},s_{n+2},\ldots,s_{2n+1}.
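The recovery formulae (2.41) and (2.42) can be exercised end to end: compute the spectral data of an illustrative matrix, form the moments and determinants, and recover the entries, choosing the + sign in (2.41) to match a matrix with a_n > 0. A sketch:

```python
import numpy as np

# Spectral data of an illustrative matrix (a_n > 0 branch, arbitrary entries).
a = np.array([1.0, 2.0, 0.5])
b = np.array([0.0, 1.0, -1.0, 2.0])
N = len(b)
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam, V = np.linalg.eigh(J)
beta = V[0, :] ** 2              # normalizing numbers, cf. (2.25)

# Moments (2.29), determinants D_n (2.30), and Delta_n (last column shifted).
s = np.array([np.sum(beta * lam ** l) for l in range(2 * N)])

def hankel(n):
    return np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])

def D(n):
    return 1.0 if n == -1 else np.linalg.det(hankel(n))

def Delta(n):
    if n == -1:
        return 0.0
    M = hankel(n)
    M[:, n] = s[n + 1:2 * n + 2]   # replace last column by s_{n+1}, ..., s_{2n+1}
    return np.linalg.det(M)

# (2.41) with the + sign, and (2.42).
a_rec = np.array([np.sqrt(D(n - 1) * D(n + 1)) / D(n) for n in range(N - 1)])
b_rec = np.array([Delta(n) / D(n) - Delta(n - 1) / D(n - 1) for n in range(N)])
print(np.allclose(a_rec, a), np.allclose(b_rec, b))
```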
It follows from the above solution of the inverse problem that the matrix (1.1) is not restored uniquely from the spectral data. This is linked to the fact that the a_n are determined from (2.41) only up to a sign. To make the inverse problem uniquely solvable, we have to specify additionally a sequence of signs + and -. Namely, let \{\sigma_0,\sigma_1,\ldots,\sigma_{N-2}\} be a given finite sequence, where for each n\in\{0,1,\ldots,N-2\} the \sigma_n is + or -. There are 2^{N-1} such distinct sequences. To determine a_n uniquely from (2.41) for n\in\{0,1,\ldots,N-2\}, we choose the sign \sigma_n when extracting the square root. In this way, we get precisely 2^{N-1} distinct Jacobi matrices possessing the same spectral data. The inverse problem is solved uniquely from the data consisting of the spectral data together with a sequence \{\sigma_0,\sigma_1,\ldots,\sigma_{N-2}\} of signs + and -. Thus, we can say that the inverse problem with respect to the spectral data is solved uniquely up to the signs of the off-diagonal elements of the recovered Jacobi matrix. In particular, the inverse problem is uniquely solvable in the class of entries a_n>0, b_n\in\mathbb{R}.
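The sign ambiguity is easy to observe directly: flipping the sign of an off-diagonal entry leaves the spectral data unchanged, since the two matrices are similar via a diagonal matrix with entries ±1 whose first entry is +1. An illustrative check:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5])
b = np.array([0.0, 1.0, -1.0, 2.0])
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

def spectral_data(M):
    lam, V = np.linalg.eigh(M)
    return lam, V[0, :] ** 2

# Flip the sign of a_1: the spectral data are unchanged, because the two
# matrices are related by conjugation with diag(1, 1, -1, -1), which fixes e_0.
a_flipped = a * np.array([1.0, -1.0, 1.0])
J_flipped = np.diag(b) + np.diag(a_flipped, 1) + np.diag(a_flipped, -1)

lam1, beta1 = spectral_data(J)
lam2, beta2 = spectral_data(J_flipped)
print(np.allclose(lam1, lam2), np.allclose(beta1, beta2))
```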

Inverse Problem for Two Spectra
Let J be an N\times N Jacobi matrix of the form (1.1) with entries satisfying (1.2). Define J_1 to be the truncated Jacobi matrix given by (1.3). We denote the eigenvalues of the matrices J and J_1 by \lambda_1,\ldots,\lambda_N and \mu_1,\ldots,\mu_{N-1}, respectively. We call the collections \{\lambda_k\ (k=1,\ldots,N)\} and \{\mu_k\ (k=1,\ldots,N-1)\} the two spectra of the matrix J.
The inverse problem for two spectra consists in the reconstruction of the matrix J from its two spectra.
We will reduce the inverse problem for two spectra to the inverse problem for eigenvalues and normalizing numbers solved above in Section 2.
First, let us study some necessary properties of the two spectra of the Jacobi matrix J.
Let P_n(\lambda) and Q_n(\lambda) be the polynomials of the first and second kind for the matrix J. By (2.5) we have

\[
\det(J-\lambda I)=(-1)^{N}a_0a_1\cdots a_{N-2}P_N(\lambda),
\tag{3.1}
\]
\[
\det(J_1-\lambda I)=(-1)^{N-1}a_0a_1\cdots a_{N-2}P_{N-1}(\lambda).
\tag{3.2}
\]

Note that we have used the fact that a_{N-1}=1. Therefore, the eigenvalues \lambda_1,\ldots,\lambda_N and \mu_1,\ldots,\mu_{N-1} of the matrices J and J_1 coincide with the zeros of the polynomials P_N(\lambda) and P_{N-1}(\lambda), respectively.
Dividing both sides of (2.6) by P_{N-1}(\lambda)P_N(\lambda) gives

\[
\frac{Q_{N-1}(\lambda)}{P_{N-1}(\lambda)}-\frac{Q_{N}(\lambda)}{P_{N}(\lambda)}=\frac{1}{P_{N-1}(\lambda)P_{N}(\lambda)}.
\tag{3.3}
\]

Therefore, by formula (2.12) for the resolvent function w(\lambda), we obtain

\[
w(\lambda)=-\frac{Q_{N-1}(\lambda)}{P_{N-1}(\lambda)}+\frac{1}{P_{N-1}(\lambda)P_{N}(\lambda)}.
\tag{3.4}
\]

Lemma 3.1. The matrices J and J_1 have no common eigenvalues; that is, \lambda_k\neq\mu_j for all values of k and j.
Proof. Suppose that \lambda is an eigenvalue of both J and J_1. Then by (3.1) and (3.2) we have P_N(\lambda)=P_{N-1}(\lambda)=0. But this is impossible by (2.6).

Lemma 3.2. The equality (trace formula)

\[
\sum_{k=1}^{N}\lambda_k-\sum_{k=1}^{N-1}\mu_k=b_{N-1}
\tag{3.5}
\]

holds.

Proof. For any matrix A=(a_{jk})_{j,k=1}^{N}, the spectral trace of A coincides with the matrix trace of A:

\[
\sum_{k=1}^{N}\lambda_k(A)=\sum_{j=1}^{N}a_{jj}.
\tag{3.6}
\]

Therefore, we can write

\[
\sum_{k=1}^{N}\lambda_k=\operatorname{tr}J=\sum_{n=0}^{N-1}b_n,\qquad
\sum_{k=1}^{N-1}\mu_k=\operatorname{tr}J_1=\sum_{n=0}^{N-2}b_n.
\tag{3.7}
\]

Subtracting the last two equalities side by side, we arrive at (3.5).
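A quick numerical check of the trace formula (3.5) on an illustrative matrix:

```python
import numpy as np

a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, -1.0, 2.0]
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam = np.linalg.eigvalsh(J)
mu = np.linalg.eigvalsh(J[:-1, :-1])

# Sum of eigenvalues of J minus sum of eigenvalues of J_1 equals b_{N-1}.
print(np.isclose(lam.sum() - mu.sum(), b[-1]))
```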
Lemma 3.3. The eigenvalues of J and J_1 interlace:

\[
\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\lambda_{N-1}<\mu_{N-1}<\lambda_N.
\tag{3.8}
\]

Proof. Let us set

\[
\psi(\lambda)=\frac{P_{N-1}(\lambda)}{P_{N}(\lambda)},
\tag{3.9}
\]

so that \psi(\lambda) is a rational function whose poles coincide with the eigenvalues of J and whose zeros coincide with the eigenvalues of J_1. Applying Lemma 2.1 to the rational function \psi(\lambda), we can write

\[
\psi(\lambda)=\sum_{k=1}^{N}\frac{\gamma_k}{\lambda-\lambda_k},
\tag{3.10}
\]

where

\[
\gamma_k=\frac{P_{N-1}(\lambda_k)}{P_{N}'(\lambda_k)}.
\tag{3.11}
\]

Next, (2.24) shows that P_{N-1}(\lambda_k)P_N'(\lambda_k)>0; that is, P_{N-1}(\lambda_k) and P_N'(\lambda_k) have the same sign. Then (3.11) implies that \gamma_k>0 (k=1,\ldots,N). Differentiating (3.10), we get

\[
\psi'(\lambda)=-\sum_{k=1}^{N}\frac{\gamma_k}{(\lambda-\lambda_k)^2}<0.
\tag{3.13}
\]

Consequently, \psi(\lambda) is strictly decreasing between consecutive poles; it has no zero in the intervals (-\infty,\lambda_1) and (\lambda_N,\infty), and it has exactly one zero in each of the intervals (\lambda_1,\lambda_2),\ldots,(\lambda_{N-1},\lambda_N). Since the zeros of the function \psi(\lambda) coincide with the eigenvalues of J_1 by (3.9), the proof is complete.
The following lemma gives a formula for calculating the normalizing numbers \beta_1,\ldots,\beta_N in terms of the two spectra.

Lemma 3.4. The formulae

\[
\beta_k=\frac{1}{a\displaystyle\prod_{j=1}^{N-1}(\lambda_k-\mu_j)\prod_{\substack{j=1\\ j\neq k}}^{N}(\lambda_k-\lambda_j)},\qquad k=1,\ldots,N,
\tag{3.14}
\]

hold, where

\[
a=\sum_{m=1}^{N}\Biggl[\prod_{j=1}^{N-1}(\lambda_m-\mu_j)\prod_{\substack{j=1\\ j\neq m}}^{N}(\lambda_m-\lambda_j)\Biggr]^{-1}.
\tag{3.15}
\]

Proof. By (2.21) and (3.4),

\[
\sum_{k=1}^{N}\frac{\beta_k}{\lambda-\lambda_k}=-\frac{Q_{N-1}(\lambda)}{P_{N-1}(\lambda)}+\frac{1}{P_{N-1}(\lambda)P_{N}(\lambda)}.
\tag{3.16}
\]

Multiply both sides of the last equality by \lambda-\lambda_k and then pass to the limit as \lambda\to\lambda_k. Taking into account that P_N(\lambda_k)=0, P_N'(\lambda_k)\neq 0, and P_{N-1}(\lambda_k)\neq 0 (see (2.23) and (2.24)), we get

\[
\beta_k=\frac{1}{P_{N-1}(\lambda_k)P_{N}'(\lambda_k)}.
\tag{3.17}
\]

Next, by (3.1) and (3.2) we have

\[
P_N(\lambda)=\frac{1}{a_0a_1\cdots a_{N-2}}\prod_{j=1}^{N}(\lambda-\lambda_j),\qquad
P_{N-1}(\lambda)=\frac{1}{a_0a_1\cdots a_{N-2}}\prod_{j=1}^{N-1}(\lambda-\mu_j).
\tag{3.18}
\]

Substituting these in the right-hand side of (3.17), we obtain

\[
\beta_k=\frac{1}{a\displaystyle\prod_{j=1}^{N-1}(\lambda_k-\mu_j)\prod_{\substack{j=1\\ j\neq k}}^{N}(\lambda_k-\lambda_j)},
\tag{3.19}
\]

where

\[
a=\frac{1}{(a_0a_1\cdots a_{N-2})^{2}}.
\tag{3.20}
\]

Replacing k by m in (3.19), then summing the resulting equation over m=1,\ldots,N and taking into account (2.28), we get (3.15). The lemma is proved.

Proof of Theorem 3.5. Given the two spectra \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} of the matrix J, we determine uniquely the normalizing numbers \beta_k (k=1,\ldots,N) of the matrix J by (3.14) and (3.15). Since the collection \{\lambda_k,\beta_k\ (k=1,\ldots,N)\} of the eigenvalues and normalizing numbers of the matrix J determines J uniquely in the class (3.21), the proof is complete.
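Lemma 3.4 can be verified numerically by comparing the normalizing numbers computed directly from eigenvectors with those given by (3.14) and (3.15); normalizing by the sum makes the constant a drop out. A sketch with illustrative entries:

```python
import numpy as np

# Illustrative matrix (arbitrary entries): direct beta_k vs (3.14)-(3.15).
a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, -1.0, 2.0]
N = len(b)
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam, V = np.linalg.eigh(J)
beta_direct = V[0, :] ** 2
mu = np.linalg.eigvalsh(J[:-1, :-1])     # spectrum of the truncation (1.3)

# t_k = [prod_j (lambda_k - mu_j) * prod_{j != k} (lambda_k - lambda_j)]^{-1};
# dividing by the sum of the t_m implements (3.14) with a from (3.15).
t = np.array([1.0 / (np.prod(lam[k] - mu) *
                     np.prod(np.delete(lam[k] - lam, k)))
              for k in range(N)])
beta_two_spectra = t / t.sum()
print(np.allclose(beta_two_spectra, beta_direct))
```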
The following theorem solves the inverse problem in terms of the two spectra. Its proof, given below, contains an effective procedure for the construction of the Jacobi matrix from its two spectra.

Theorem 3.6. In order for two given collections of real numbers \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} to be the spectra of two matrices J and J_1, respectively, of the forms (1.1) and (1.3) with entries in the class (1.2), it is necessary and sufficient that the following inequalities be satisfied:

\[
\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\lambda_{N-1}<\mu_{N-1}<\lambda_N.
\tag{3.22}
\]

Proof. The necessity of the condition (3.22) has been proved above in Lemma 3.3. To prove the sufficiency, suppose that two collections of real numbers \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} are given which satisfy the inequalities (3.22). We construct \beta_k (k=1,\ldots,N) from these data by (3.14) and (3.15). It follows from (3.22) that

\[
\prod_{j=1}^{N-1}(\lambda_k-\mu_j)\prod_{\substack{j=1\\ j\neq k}}^{N}(\lambda_k-\lambda_j)>0,\qquad k=1,\ldots,N.
\tag{3.23}
\]
Therefore, the expression on the right-hand side of (3.14) is positive, and hence \beta_k>0 (k=1,\ldots,N). Next, it follows directly from (3.14) and (3.15) that \sum_{k=1}^{N}\beta_k=1. Consequently, the collection \{\lambda_k,\beta_k\ (k=1,\ldots,N)\} satisfies the conditions of Theorem 2.3, and hence there exists a Jacobi matrix J of the form (1.1) with entries from the class (1.2) such that the \lambda_k (k=1,\ldots,N) are the eigenvalues and the \beta_k (k=1,\ldots,N) are the corresponding normalizing numbers of J. Having the matrix J, we construct the matrix J_1 by (1.3). It remains to show that \{\mu_k\}_{k=1}^{N-1} is the spectrum of the constructed matrix J_1. Denote the eigenvalues of J_1 by \tilde{\mu}_1<\cdots<\tilde{\mu}_{N-1}. By Lemma 3.3,

\[
\lambda_1<\tilde{\mu}_1<\lambda_2<\cdots<\lambda_{N-1}<\tilde{\mu}_{N-1}<\lambda_N.
\tag{3.24}
\]

We have to show that \tilde{\mu}_k=\mu_k (k=1,\ldots,N-1).

Theorem 3.5 (uniqueness result). The two spectra \{\lambda_k\}_{k=1}^{N} and \{\mu_k\}_{k=1}^{N-1} of the Jacobi matrix J of the form (1.1) in the class

\[
a_n>0,\qquad b_n\in\mathbb{R},
\tag{3.21}
\]

uniquely determine the matrix J.
Applying Lemma 3.4 to the constructed matrix J and its truncation J_1, we get

\[
\beta_m=\frac{1}{\tilde{a}\displaystyle\prod_{\substack{j=1\\ j\neq m}}^{N}(\lambda_j-\lambda_m)\prod_{j=1}^{N-1}(\tilde{\mu}_j-\lambda_m)},
\tag{3.25}
\]

where

\[
\tilde{a}=\sum_{m=1}^{N}\Biggl[\prod_{\substack{j=1\\ j\neq m}}^{N}(\lambda_j-\lambda_m)\prod_{j=1}^{N-1}(\tilde{\mu}_j-\lambda_m)\Biggr]^{-1}.
\tag{3.26}
\]

On the other hand, by our construction of \beta_k, we have (3.14) and (3.15). Equating the right-hand sides of (3.25) and (3.14), we obtain

\[
\tilde{a}\prod_{j=1}^{N-1}(\lambda_m-\tilde{\mu}_j)=a\prod_{j=1}^{N-1}(\lambda_m-\mu_j),\qquad m=1,\ldots,N.
\]

Hence the polynomials \tilde{a}\prod_{j=1}^{N-1}(\lambda-\tilde{\mu}_j) and a\prod_{j=1}^{N-1}(\lambda-\mu_j), both of degree N-1, coincide at the N distinct points \lambda_1,\ldots,\lambda_N and therefore coincide identically. Comparing leading coefficients gives \tilde{a}=a, and then \tilde{\mu}_k=\mu_k (k=1,\ldots,N-1). The proof is complete.

Lemma 2.2. For the determinants D_n defined by (2.30) and (2.29), we have D_n>0 for n\in\{0,1,\ldots,N-1\} and D_n=0 for n\ge N.

Proof. Denote by A the (n+1)\times(n+1) matrix corresponding to the determinant D_n given by (2.30). Then for an arbitrary real column vector x=(x_0,x_1,\ldots,x_n)^{T} we have

\[
(Ax,x)=\sum_{j,k=0}^{n}s_{j+k}x_jx_k=\Omega\bigl(G^{2}(\lambda)\bigr)=\sum_{i=1}^{N}\beta_iG^{2}(\lambda_i)\ge 0,
\]

where G(\lambda)=\sum_{j=0}^{n}x_j\lambda^{j}. For n\le N-1, this expression is strictly positive unless x=0, because a nonzero polynomial G(\lambda) of degree at most N-1 cannot vanish at the N distinct points \lambda_1,\ldots,\lambda_N. Hence A is positive definite and D_n>0 for n\in\{0,1,\ldots,N-1\}.
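Finally, the whole solution procedure of Section 3 can be sketched in a few lines: starting from two hypothetical interlacing sequences (arbitrary choices satisfying (3.22)), build the normalizing numbers by (3.14) and (3.15), recover the unique matrix with a_n > 0 via the moments (2.29), the determinants (2.30), and the formulae (2.41) and (2.42), and confirm that its two spectra are the prescribed ones.

```python
import numpy as np

# Two hypothetical interlacing sequences (arbitrary choices satisfying (3.22)).
lam = np.array([-2.0, 0.0, 1.5, 3.0])
mu = np.array([-1.0, 1.0, 2.5])
N = len(lam)

# Normalizing numbers from the two spectra, (3.14)-(3.15).
t = np.array([1.0 / (np.prod(lam[k] - mu) *
                     np.prod(np.delete(lam[k] - lam, k)))
              for k in range(N)])
beta = t / t.sum()

# Moments, determinants, and the entries via (2.41) (+ sign) and (2.42).
s = np.array([np.sum(beta * lam ** l) for l in range(2 * N)])

def hankel(n):
    return np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])

def D(n):
    return 1.0 if n == -1 else np.linalg.det(hankel(n))

def Delta(n):
    if n == -1:
        return 0.0
    M = hankel(n)
    M[:, n] = s[n + 1:2 * n + 2]
    return np.linalg.det(M)

a_rec = np.array([np.sqrt(D(n - 1) * D(n + 1)) / D(n) for n in range(N - 1)])
b_rec = np.array([Delta(n) / D(n) - Delta(n - 1) / D(n - 1) for n in range(N)])
J = np.diag(b_rec) + np.diag(a_rec, 1) + np.diag(a_rec, -1)

print(np.allclose(np.linalg.eigvalsh(J), lam),
      np.allclose(np.linalg.eigvalsh(J[:-1, :-1]), mu))
```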