A Test Matrix for an Inverse Eigenvalue Problem

We present a real symmetric tridiagonal matrix of order $n$ whose eigenvalues are $\{2k \}_{k=0}^{n-1}$ and which also satisfies the additional condition that its leading principal submatrix has a uniformly interlaced spectrum, $\{2l + 1 \}_{l=0}^{n-2}$. The matrix entries are explicit functions of the size $n$, so the matrix can be used as a test matrix for eigenproblems, both forward and inverse. An explicit solution of a spring-mass inverse problem incorporating the test matrix is provided.


Introduction
We are motivated by the following inverse eigenvalue problem, first studied by Hochstadt in 1967 [11]. Given two strictly interlaced sequences of real values,

λ_1 < μ_1 < λ_2 < μ_2 < · · · < μ_{n−1} < λ_n,   (1.1)

construct a real, symmetric, tridiagonal matrix B such that {λ_i}_{i=1}^n are the eigenvalues of B and {μ_i}_{i=1}^{n−1} are the eigenvalues of the leading principal submatrix of B, that is, of the matrix B_o obtained from B by deleting the last row and column. The condition on the data-set (1.1) is both necessary and sufficient for the existence of a unique Jacobian matrix solution to the problem (see [6] §4.3 or [17] §1.2 for a history of the problem, and §3 of this paper for additional background theory).
A number of different constructive procedures to produce the exact solution of this inverse problem have been developed [1,3,7,8,10,12], but none provides an explicit characterization of the entries of the solution matrix, B, in terms of the data-set (1.1). Computer implementation of these procedures introduces floating-point error and associated numerical stability issues. Loss of significant figures due to accumulation of round-off error makes some of the known solution procedures undesirable. Determining the extent of round-off in a numerical solution computed from a given data-set requires a priori knowledge of the exact solution B. In the absence of this knowledge, an additional numerical computation of the forward problem to find the spectra λ(B) and λ(B_o) allows comparison to the original data.
Test matrices, with known entries and known spectra, are therefore helpful in comparing the efficacy of the various solution algorithms with regard to stability. It is particularly helpful when such test matrices can be produced at arbitrary size. However, some extant test matrices given as a function of matrix size n suffer from the following trait: when ordered by size, the minimum spacing between consecutive eigenvalues is a decreasing function of n. This trait is potentially undesirable since the reciprocal of this minimum separation between eigenvalues can be thought of as a condition number on the sensitivity of the eigenvectors (invariant subspaces) to perturbation (see [9], Theorem 8.1.12). Some of the algorithms for the inverse problem seem to suffer from this form of ill-conditioning. To avoid confounding the numerical stability issue with potential increased ill-conditioning of the data-set as a function of n, the authors developed a test matrix which has equally spaced, and uniformly interlaced, simple eigenvalues.
In §2 we provide the explicit entries of such a matrix, A(n). We claim that its eigenvalues are equally spaced as λ(A(n)) = {0, 2, 4, . . . , 2n − 2}, while its leading principal submatrix A_o(n) has eigenvalues uniformly interlaced with those of A(n), namely λ(A_o(n)) = {1, 3, . . . , 2n − 3}. A short proof verifies the claims. In §3 we present some background theory concerning Jacobian matrices, and in §4 apply our test matrix to a model of a physical spring-mass system, an application which leads naturally to Jacobian matrices.

Main Result
Let A(n) be a real, symmetric, tridiagonal, n × n matrix with entries

a_{i,i} = n − 1,  i = 1, . . . , n,
a_{i,i+1} = a_{i+1,i} = (1/2)√(i(2n − 1 − i)),  i = 1, . . . , n − 2,
a_{n−1,n} = a_{n,n−1} = √(n(n − 1)/2),

and let A_o(n) be the leading principal submatrix of A(n), that is, the (n − 1) × (n − 1) matrix obtained from A(n) by deleting the last row and column. Theorem 2.1 asserts that λ(A(n)) = {0, 2, 4, . . . , 2n − 2} and λ(A_o(n)) = {1, 3, . . . , 2n − 3}.
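The claimed spectra can be checked in exact arithmetic: only the squares of the off-diagonal entries enter the three-term recurrence for the leading principal minors of A(n) − xI, so the characteristic polynomial can be evaluated without floating point. The following sketch is our own verification code (not from the paper), confirming the stated spectra for n = 2, . . . , 6.

```python
from fractions import Fraction as F

def charpoly_at(diag, offsq, x):
    """det(T - x I) for a symmetric tridiagonal T, given its diagonal and
    the squares of its off-diagonal entries, via the standard three-term
    recurrence for leading principal minors."""
    p_prev, p = F(1), diag[0] - x
    for i in range(1, len(diag)):
        p_prev, p = p, (diag[i] - x) * p - offsq[i - 1] * p_prev
    return p

def entries(n):
    """Diagonal and squared off-diagonal entries of A(n):
    constant diagonal n - 1; squared off-diagonals i(2n - 1 - i)/4 for
    i = 1, ..., n - 2, and n(n - 1)/2 for the last one."""
    diag = [F(n - 1)] * n
    offsq = [F(i * (2 * n - 1 - i), 4) for i in range(1, n - 1)]
    offsq.append(F(n * (n - 1), 2))
    return diag, offsq

for n in range(2, 7):
    diag, offsq = entries(n)
    # lambda(A(n)) = {0, 2, ..., 2n - 2}: each is an exact root.
    assert all(charpoly_at(diag, offsq, F(2 * k)) == 0 for k in range(n))
    # lambda(A_o(n)) = {1, 3, ..., 2n - 3}: roots of the leading minor.
    assert all(charpoly_at(diag[:-1], offsq[:-1], F(2 * l + 1)) == 0
               for l in range(n - 1))
```

Because the arithmetic is rational, a passing run certifies the eigenvalues exactly, not merely to round-off.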
The proof is by induction on n. Assuming the result for A(n), we factor C = 2nI − A(n + 1) as C = LLᵀ, where L is lower bidiagonal. We find that the last diagonal entry of L vanishes, so L is singular. Therefore C has eigenvalue 0, and thus A(n + 1) has eigenvalue 2n.
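This factorization step can be replayed numerically. Working with squared entries keeps the arithmetic rational: for a symmetric tridiagonal C = LLᵀ with L lower bidiagonal, the squared diagonal entries of L obey l_1² = c_{1,1}, m_i² = c_{i,i+1}²/l_i², l_{i+1}² = c_{i+1,i+1} − m_i². The sketch below (our notation; the paper's exact L is not reproduced here) exhibits the final pivot vanishing exactly for C = 2nI − A(n + 1):

```python
from fractions import Fraction as F

def cholesky_pivots(diag, offsq):
    """Squared diagonal entries l_i^2 of the lower-bidiagonal factor L in
    C = L L^T, for symmetric tridiagonal C with the given diagonal and
    squared off-diagonal entries (exact rational arithmetic)."""
    lsq = [diag[0]]
    for i in range(1, len(diag)):
        msq = offsq[i - 1] / lsq[-1]   # m_i^2 = c_{i,i+1}^2 / l_i^2
        lsq.append(diag[i] - msq)      # l_{i+1}^2 = c_{i+1,i+1} - m_i^2
    return lsq

n = 5
m = n + 1                              # order of A(n + 1)
# C = 2n I - A(n+1) has constant diagonal 2n - (m - 1) = n; squaring the
# off-diagonal entries of A(n + 1) removes any sign convention.
diag = [F(n)] * m
offsq = [F(i * (2 * m - 1 - i), 4) for i in range(1, m - 1)] + [F(m * (m - 1), 2)]
pivots = cholesky_pivots(diag, offsq)
assert all(p > 0 for p in pivots[:-1])   # C is positive semidefinite...
assert pivots[-1] == 0                   # ...and singular: A(n+1) has eigenvalue 2n
```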

Discussion
A real, symmetric n × n tridiagonal matrix B is called a Jacobian matrix when its off-diagonal elements are non-zero ([6], p. 46). We write

b_{i,i} = a_i, i = 1, . . . , n;  b_{i,i+1} = b_{i+1,i} = −b_i, i = 1, . . . , n − 1.   (3.1)

Conjugating B by D_m = diag(1, . . . , 1, −1, . . . , −1), whose sign change begins in position m + 1, yields a matrix identical to B except for a switched sign on the m-th off-diagonal element. In regards to the spectrum of the matrix, there is therefore no loss of generality in accepting the convention that a Jacobian matrix be expressed with negative off-diagonal elements, that is, b_i > 0, ∀ i = 1, . . . , n − 1 in (3.1). While Cauchy's interlace theorem [13] guarantees that the eigenvalues of any square, real, symmetric (or even Hermitian) matrix will interlace those of its leading (or trailing) principal submatrix, the interlacing cannot be strict, in general [5]. However, specializing to the case of Jacobian matrices restricts the interlacing to strict inequalities. That is, Jacobian matrices possess distinct eigenvalues, and the eigenvalues of the leading (or trailing) principal submatrix are also distinct, and strictly interlace those of the original matrix (see [6], Theorems 3.1.3 and 3.1.4; see also [9], exercise P8.4.1, p. 475: when a tridiagonal matrix has algebraically multiple eigenvalues, the matrix fails to be Jacobian). The inverse problem is also well-posed: there is a unique (up to the signs of the off-diagonal elements) Jacobian matrix B having given spectra specified as per (1.1) (see [6], Theorem 4.2.1, noting that the interlaced spectrum of n − 1 eigenvalues, (λ_o)_1^{n−1}, can be used to calculate the last components of each of the n orthonormalized eigenvectors of B via equation 4.3.31). Therefore, the matrix A(n) in Theorem 2.1 is the unique Jacobian matrix with eigenvalues equally spaced by two, starting with smallest eigenvalue zero, whose leading principal submatrix has eigenvalues also equally spaced by two, starting with smallest eigenvalue one.
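The uniqueness statement can be illustrated constructively. Given data as in (1.1), the squared last components of the orthonormalized eigenvectors of B are w_j = Π_k(λ_j − μ_k) / Π_{k≠j}(λ_j − λ_k), and the Stieltjes (discrete Lanczos) recurrence for the measure with weights w_j at nodes λ_j generates the entries of B from the trailing end. The sketch below is a generic implementation of this classical procedure, not the paper's own code; applied to λ = {0, 2, . . . , 2n − 2}, μ = {1, 3, . . . , 2n − 3}, it recovers exactly the entries stated in §2.

```python
from fractions import Fraction as F

def jacobi_from_spectra(lams, mus):
    """The unique Jacobian matrix (diagonal, squared off-diagonals) whose
    spectrum is `lams` and whose leading principal submatrix has spectrum
    `mus`; exact rational arithmetic throughout."""
    n = len(lams)
    # Squared last components of the orthonormalized eigenvectors of B.
    w = []
    for j, lj in enumerate(lams):
        num, den = F(1), F(1)
        for mk in mus:
            num *= lj - mk
        for k, lk in enumerate(lams):
            if k != j:
                den *= lj - lk
        w.append(num / den)
    assert sum(w) == 1
    # Stieltjes recurrence; coefficients emerge from the trailing end.
    p_prev, p = [F(0)] * n, [F(1)] * n
    pi_prev, diag, offsq = None, [], []
    for _ in range(n):
        pi = sum(wj * pj * pj for wj, pj in zip(w, p))
        a = sum(wj * lj * pj * pj for wj, lj, pj in zip(w, lams, p)) / pi
        diag.append(a)
        b2 = F(0) if pi_prev is None else pi / pi_prev
        if pi_prev is not None:
            offsq.append(b2)
        p, p_prev = [(lj - a) * pj - b2 * qj
                     for lj, pj, qj in zip(lams, p, p_prev)], p
        pi_prev = pi
    return diag[::-1], offsq[::-1]

n = 5
lams = [F(2 * k) for k in range(n)]
mus = [F(2 * l + 1) for l in range(n - 1)]
diag, offsq = jacobi_from_spectra(lams, mus)
assert diag == [F(n - 1)] * n
assert offsq == [F(i * (2 * n - 1 - i), 4) for i in range(1, n - 1)] + [F(n * (n - 1), 2)]
```

The strict interlacing in (1.1) guarantees every weight w_j is positive, which is what keeps the recurrence well defined.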
As a consequence of the theorem, we now have a whole family of Jacobian matrices with explicitly known, equally spaced, uniformly interlaced spectra. For a specified smallest eigenvalue a_o and gap c > 0 between eigenvalues, define

W_n = a_o I + (c/2) A(n).   (3.2)

Then the eigenvalues of W_n form the arithmetic sequence

λ(W_n) = {a_o + ck}_{k=0}^{n−1},   (3.3)

while the eigenvalues of its leading principal submatrix, W_n^o, form the uniformly interlaced sequence

λ(W_n^o) = {a_o + c/2 + cl}_{l=0}^{n−2}.   (3.4)

The form and properties of W_n were first hypothesised by the third author while programming Fortran algorithms to reconstruct band matrices from spectral data [17]. Initial attempts to prove the spectral properties of W_n by both him and his graduate supervisor (the first author) failed. Later, the first author produced the short induction argument of Theorem 2.1, in July 1996. Alas, the fax on which the argument was communicated to the third author was lost in a cross-border academic move, and so the matter languished until recently. In the summer of 2013, the second and third authors, looking for a tractable problem for a summer undergraduate research project, assigned the problem of this paper: "hypothesize, and then verify, if possible, the explicit entries of an n × n symmetric, tridiagonal matrix with eigenvalues (3.3), such that the eigenvalues of its principal submatrix are (3.4)." Meanwhile, a building renovation in the summer of 2013 required the third author to clean out his office, including boxes of old papers and notes. During this process the misplaced fax from the first author was found. However, to our delight, the student, Mr. Alexander De Serre-Rothney, was also able to independently complete both parts of the problem. His proof is now found in [15]. Though longer than the one presented here, his proof utilizes the spectral properties of another tridiagonal (non-symmetric) matrix, the so-called Kac–Sylvester matrix, K_n, of size (n + 1) × (n + 1) [2,4,14,16]: K_n has zero main diagonal, superdiagonal entries 1, 2, . . . , n, and subdiagonal entries n, n − 1, . . . , 1, and its eigenvalues are λ(K_n) = {2k − n}_{k=0}^n.
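The spectrum of the Kac–Sylvester matrix is easy to verify for small n: for a (not necessarily symmetric) tridiagonal matrix, only the products of opposite off-diagonal pairs enter the minor recurrence, so the characteristic polynomial is again computable in exact arithmetic. A small check of our own, taking K_n with zero main diagonal, superdiagonal 1, 2, . . . , n, and subdiagonal n, n − 1, . . . , 1 (one common convention for this matrix; the transpose has the same spectrum):

```python
from fractions import Fraction as F

def tridiag_charpoly_at(diag, sup, sub, x):
    """det(T - x I) for tridiagonal T with superdiagonal `sup` and
    subdiagonal `sub`; only the products sup[i] * sub[i] matter."""
    p_prev, p = F(1), diag[0] - x
    for i in range(1, len(diag)):
        p_prev, p = p, (diag[i] - x) * p - sup[i - 1] * sub[i - 1] * p_prev
    return p

for n in range(1, 8):
    diag = [F(0)] * (n + 1)
    sup = [F(i) for i in range(1, n + 1)]   # 1, 2, ..., n
    sub = [F(n - i) for i in range(n)]      # n, n-1, ..., 1
    # lambda(K_n) = {2k - n : k = 0, ..., n}
    assert all(tridiag_charpoly_at(diag, sup, sub, F(2 * k - n)) == 0
               for k in range(n + 1))
```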

A Spring-Mass Model problem
Consider the spring-mass systems shown in Figure 4.1. We will consider the system with natural frequencies {1, 3, . . . , 2n − 1} for system (a), and {2, 4, . . . , 2n − 2} for system (b). We can use the matrix A(n) to help solve this system for the spring stiffnesses and the masses.
This yields a system of equations for (m_i)_{i=1}^n and k_1. The bottom, n-th, equation determines m_n only up to scale, and there we choose m_n^{1/2} = α. We will thus be able to express each m_i^{1/2} in terms of the scaling parameter α.
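The bottom-up elimination can be sketched in code. We assume the standard fixed-free chain relations J = M^{−1/2} K M^{−1/2}, with J_{i,i} = (k_i + k_{i+1})/m_i (where k_{n+1} = 0) and J_{i,i+1} = −k_{i+1}/√(m_i m_{i+1}); this is our reading of the model, since Figure 4.1 and equations (4.4)–(4.5) are not reproduced here. Taking J = A(n) + I, with eigenvalues {1, 3, . . . , 2n − 1} as for system (a), the masses and stiffnesses follow from the last row upward once m_n = α² is fixed:

```python
import math

def spring_mass_from_jacobian(diag, off, alpha=1.0):
    """Recover masses m_i and stiffnesses k_i of a fixed-free in-line
    spring-mass chain from its Jacobian matrix J = M^(-1/2) K M^(-1/2),
    given diag = J_ii and off = J_{i,i+1} < 0, working up from the
    bottom row with the scaling choice m_n = alpha**2.  (Assumed
    configuration: spring k_1 ties mass 1 to the wall; mass n is free.)"""
    n = len(diag)
    m = [0.0] * n
    k = [0.0] * n
    m[n - 1] = alpha ** 2
    k[n - 1] = diag[n - 1] * m[n - 1]          # J_nn = k_n / m_n
    for i in range(n - 2, -1, -1):
        # J_{i,i+1} = -k_{i+1} / sqrt(m_i m_{i+1})
        m[i] = (k[i + 1] / off[i]) ** 2 / m[i + 1]
        # J_ii = (k_i + k_{i+1}) / m_i
        k[i] = diag[i] * m[i] - k[i + 1]
    return m, k

# Jacobian matrix of system (a): J = A(n) + I, eigenvalues {1, 3, ..., 2n-1},
# written with the negative off-diagonal convention of (3.1).
n = 6
diag = [float(n)] * n                           # (n - 1) + 1
off = [-0.5 * math.sqrt(i * (2 * n - 1 - i)) for i in range(1, n - 1)]
off.append(-math.sqrt(n * (n - 1) / 2))
masses, stiffs = spring_mass_from_jacobian(diag, off)
assert all(v > 0 for v in masses + stiffs)      # a physically realizable chain
assert all(masses[i] > masses[i + 1] for i in range(n - 1))
```

The recovered masses grow rapidly toward the fixed end (for n = 6 they run from m_6 = 1 up to m_1 = 504 with α = 1), consistent with the super-exponential growth noted in the Conclusion.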

Conclusion
A family of n × n symmetric tridiagonal matrices, W_n, whose eigenvalues are simple and uniformly spaced, and whose leading principal submatrix has uniformly interlaced, simple eigenvalues, has been presented (3.2). Members of the family are characterized by a specified smallest eigenvalue, a_o, and gap size, c, between eigenvalues. The matrices are termed Jacobian, since the off-diagonal entries are all non-zero. The matrix entries are explicit functions of the size n, a_o, and c, so the matrices can be used as test matrices for eigenproblems, both forward and inverse. The matrix W_n for specified smallest eigenvalue a_o and gap c is unique up to the signs of the off-diagonal elements. In §4, the form of W_n was used as an explicit solution of a spring-mass vibration model (Figure 4.1), and the inverse problem to determine the lumped masses and spring stiffnesses was solved explicitly. Both the lumped masses, m_{n−i}, given by equation (4.4), and spring stiffnesses, k_{n−i}, from equation (4.5), show super-exponential growth. Consequently, m_n/m_1 and k_n/k_1 become vanishingly small as n → ∞. As a result, the spring-mass system of Figure 4.1 cannot be used as a discretized model for a physical beam in longitudinal vibration, as the model becomes unrealistic in the limit n → ∞.