Fine Spectra of Tridiagonal Symmetric Matrices

The fine spectra of upper and lower triangular banded matrices have been examined by several authors. Here we determine the fine spectra of tridiagonal symmetric infinite matrices and also give the explicit form of the resolvent operator over the sequence spaces c₀, c, ℓ₁, and ℓ∞.


Introduction
The spectrum of an operator is a generalization of the notion of eigenvalues of matrices. The spectrum of an operator over a Banach space is partitioned into three parts: the point spectrum, the continuous spectrum, and the residual spectrum. The computation of these three parts of the spectrum of an operator is called the determination of the fine spectrum of the operator.
The spectrum and fine spectrum of linear operators defined by some particular limitation matrices over some sequence spaces have been studied by several authors. We summarize the knowledge in the existing literature concerning the spectrum and the fine spectrum. Wenger [1] examined the fine spectrum of the integer power of the Cesàro operator over c, and Rhoades [2] generalized this result to weighted mean methods. Reade [3] worked on the spectrum of the Cesàro operator over the sequence space c₀. Gonzáles [4] studied the fine spectrum of the Cesàro operator over the sequence space ℓp. Okutoyi [5] computed the spectrum of the Cesàro operator over the sequence space bv. Recently, Rhoades and Yildirim [6] examined the fine spectrum of factorable matrices over c₀ and c. Coşkun [7] studied the spectrum and fine spectrum of the p-Cesàro operator acting on the space c₀. Akhmedov and Başar [8, 9] determined the fine spectrum of the Cesàro operator over the sequence spaces c₀, ℓ∞, and ℓp. In a recent paper, Furkan et al. [10] determined the fine spectrum of B(r, s, t) over the sequence spaces c₀ and c, where B(r, s, t) is a lower triangular triple-band matrix. Later, Altun and Karakaya [11] computed the fine spectra of Lacunary matrices over c₀ and c.

International Journal of Mathematics and Mathematical Sciences
In this work, our purpose is to determine the fine spectra of the operator whose corresponding matrix is a tridiagonal symmetric matrix, over the sequence spaces c₀, c, ℓ₁, and ℓ∞. We also give the explicit form of the resolvent of this operator and compute the norm of the resolvent operator when it exists and is continuous.
Let X and Y be Banach spaces, and let T : X → Y be a bounded linear operator. By R(T) we denote the range of T, that is,

R(T) = {y ∈ Y : y = Tx for some x ∈ X}.

By B(X) we denote the set of all bounded linear operators from X into itself. If X is any Banach space and T ∈ B(X), then the adjoint T* of T is a bounded linear operator on the dual X* of X defined by (T*φ)(x) = φ(Tx) for all φ ∈ X* and x ∈ X. Let X ≠ {θ} be a complex normed space, and let T : D(T) → X be a linear operator with domain D(T) ⊂ X. With T, we associate the operator

T_λ = T − λI,

where λ is a complex number and I is the identity operator on D(T). If T_λ has an inverse which is linear, we denote it by T_λ⁻¹, that is,

T_λ⁻¹ = (T − λI)⁻¹,

and call it the resolvent operator of T_λ. If λ = 0, we simply write T⁻¹. Many properties of T_λ and T_λ⁻¹ depend on λ, and spectral theory is concerned with those properties. For instance, we are interested in the set of all λ in the complex plane such that T_λ⁻¹ exists. Boundedness of T_λ⁻¹ is another property that will be essential. We shall also ask for which λ the domain of T_λ⁻¹ is dense in X. For our investigation of T, T_λ, and T_λ⁻¹, we need some basic concepts of spectral theory, which are given as follows (see [12, pages 370-371]).
Let X ≠ {θ} be a complex normed space, and let T : D(T) → X be a linear operator with domain D(T) ⊂ X. A regular value λ of T is a complex number such that

(R1) T_λ⁻¹ exists,
(R2) T_λ⁻¹ is bounded, and
(R3) T_λ⁻¹ is defined on a set which is dense in X.

The resolvent set ρ(T) of T is the set of all regular values λ of T. Its complement σ(T) = C \ ρ(T) in the complex plane C is called the spectrum of T. Furthermore, the spectrum σ(T) is partitioned into three disjoint sets as follows. The point spectrum σp(T) is the set of λ such that T_λ⁻¹ does not exist; a λ ∈ σp(T) is called an eigenvalue of T. The continuous spectrum σc(T) is the set of λ such that T_λ⁻¹ exists and satisfies (R3) but not (R2). The residual spectrum σr(T) is the set of λ such that T_λ⁻¹ exists but does not satisfy (R3).
A triangle is a lower triangular matrix with all of the principal diagonal elements nonzero. We write ℓ∞, c, and c₀ for the spaces of all bounded, convergent, and null sequences, respectively, and by ℓp we denote the space of all p-absolutely summable sequences, where 1 ≤ p < ∞. Let μ and γ be two sequence spaces, and let A = (a_nk) be an infinite matrix of real or complex numbers a_nk, where n, k ∈ N. Then we say that A defines a matrix mapping from μ into γ, and we denote it by writing A : μ → γ, if for every sequence x = (x_k) ∈ μ the sequence Ax = {(Ax)_n}, the A-transform of x, is in γ, where

(Ax)_n = Σ_k a_nk x_k.    (1.4)

By (μ : γ) we denote the class of all matrices A such that A : μ → γ. Thus A ∈ (μ : γ) if and only if the series on the right side of (1.4) converges for each n ∈ N and every x ∈ μ, and we have Ax = {(Ax)_n}_{n∈N} ∈ γ for all x ∈ μ.
A tridiagonal symmetric infinite matrix is of the form

S(q, r) =
⎡ q  r  0  0  ⋯ ⎤
⎢ r  q  r  0  ⋯ ⎥
⎢ 0  r  q  r  ⋯ ⎥
⎢ 0  0  r  q  ⋯ ⎥
⎣ ⋮  ⋮  ⋮  ⋮  ⋱ ⎦

where q, r ∈ C. The spectral results are clear when r = 0, so in the sequel we assume r ≠ 0.
Theorem 1.1 (cf. [13]). Let T be an operator with the associated matrix A = (a_nk).

(i) T ∈ B(c) if and only if

sup_n Σ_k |a_nk| < ∞,    (1.6)

lim_n a_nk = a_k exists for each k,    (1.7)

lim_n Σ_k a_nk exists.    (1.8)

(ii) T ∈ B(c₀) if and only if (1.6) and (1.7) hold with a_k = 0 for each k.

(iii) T ∈ B(ℓ∞) if and only if (1.6) holds.
In these cases, the operator norm of T is

‖T‖_(c:c) = ‖T‖_(c₀:c₀) = ‖T‖_(ℓ∞:ℓ∞) = sup_n Σ_k |a_nk|.

(iv) T ∈ B(ℓ₁) if and only if sup_k Σ_n |a_nk| < ∞. In this case, the operator norm of T is ‖T‖_(ℓ₁:ℓ₁) = sup_k Σ_n |a_nk|.

Corollary 1.2. Let μ ∈ {c₀, c, ℓ₁, ℓ∞}. Then S(q, r) : μ → μ is a bounded linear operator and ‖S(q, r)‖_(μ:μ) = |q| + 2|r|.
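As a quick numerical illustration of Corollary 1.2 (this sketch is ours, not part of the paper; the function name and the sample values of q and r are our own), the row-sum norm of a finite truncation of S(q, r) already equals |q| + 2|r|:

```python
import numpy as np

def tridiag_symmetric(q, r, n):
    """n x n truncation of the infinite tridiagonal symmetric matrix S(q, r)."""
    S = np.diag(np.full(n, q, dtype=complex))
    off = np.full(n - 1, r, dtype=complex)
    return S + np.diag(off, 1) + np.diag(off, -1)

q, r = 2 + 1j, 3 - 1j
S = tridiag_symmetric(q, r, 6)

# By Theorem 1.1 the norm on c0, c, and l_infinity is the supremum of the
# l1 norms of the rows; every interior row contributes |r| + |q| + |r|.
row_norms = np.abs(S).sum(axis=1)
print(np.isclose(row_norms.max(), abs(q) + 2 * abs(r)))   # True
```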
Theorem 2.1. σp(S, μ) = ∅ for μ ∈ {ℓ₁, c₀, c}.

Proof. Since ℓ₁ ⊂ c₀ ⊂ c, it is enough to show that σp(S, c) = ∅. Let λ be an eigenvalue of the operator S. An eigenvector x = (x₀, x₁, ...) ∈ c corresponding to this eigenvalue satisfies the linear system of equations

q x₀ + r x₁ = λ x₀,
r x₀ + q x₁ + r x₂ = λ x₁,    (2.1)
r x₁ + q x₂ + r x₃ = λ x₂,
⋮
If x₀ = 0, then x_k = 0 for all k ∈ N; hence x₀ ≠ 0. Without loss of generality we can suppose x₀ = 1. Then x₁ = (λ − q)/r, and the system of equations turns into the linear homogeneous recurrence relation

x_{k+1} + p x_k + x_{k−1} = 0  (k ≥ 1),

where p = (q − λ)/r. The characteristic polynomial of the recurrence relation is

P(x) = x² + px + 1.

There are three cases here.
Case 1 (p = −2). Then the characteristic polynomial has only one root, α = 1. Hence, the solution of the recurrence relation is of the form

x_n = (A + Bn)α^n = A + Bn,

where A and B are constants which can be determined by the first two terms x₀ and x₁. From 1 = x₀ = A + B·0 we have A = 1, and from −p = x₁ = A + B·1 we have B = 1. Then x_n = n + 1. This means x ∉ c. So, we conclude that there is no eigenvalue in this case.
Case 2 (p = 2). Then the characteristic polynomial has only one root, α = −1. The solution of the recurrence relation, found as in Case 1, is x_n = (n + 1)(−1)^n. So, there is no eigenvalue in this case.
Case 3 (p ≠ ±2). Then the characteristic polynomial has two distinct roots, α₁ ≠ ±1 and α₂ = 1/α₁. The solution of the recurrence relation is of the form

x_n = A α₁^n + B α₂^n.

Since α₁α₂ = 1, either |α₁| = |α₂| = 1 with α₁ ≠ ±1, in which case x_n converges only if A = B = 0, contradicting x₀ = 1; or one of the roots has modulus greater than 1, and then the initial conditions x₀ = 1 and x₁ = −p force a nonzero coefficient on that root, so x is unbounded. In either case there is no eigenvector in c, which completes the proof.
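The case analysis above can be replayed numerically. The following sketch (ours, not from the paper) iterates the recurrence x_{k+1} = −p x_k − x_{k−1} with x₀ = 1, x₁ = −p and recovers the unbounded eigenvector candidates of Cases 1 and 2:

```python
def recurrence_solution(p, n_terms):
    """First n_terms of the solution of x_{k+1} + p*x_k + x_{k-1} = 0
    with x_0 = 1 and x_1 = -p (the eigenvector candidate from the proof)."""
    x = [1.0, -p]
    for _ in range(n_terms - 2):
        x.append(-p * x[-1] - x[-2])
    return x

# Case 1 (p = -2): the candidate x_n = n + 1 grows linearly, so it is not in c.
print(recurrence_solution(-2.0, 6))   # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
# Case 2 (p = 2): alternating growth (n + 1) * (-1)^n.
print(recurrence_solution(2.0, 5))    # [1.0, -2.0, 3.0, -4.0, 5.0]
```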
Repeating all the steps in the proof of this theorem for ℓ∞, we arrive at the following.

Theorem 2.3. Let α₁ and α₂ be the roots of the polynomial x² + px + 1, p = (q − λ)/r, with |α₁| < 1 < |α₂|. Then the resolvent operator of S_λ over c₀ exists, and its matrix (s_nk) is given by

s_nk = (α₁^(|n−k|+1) − α₁^(n+k+3)) / (r(α₁² − 1)).    (2.8)

Moreover, this operator is continuous and the domain of the operator is the whole space c₀.
Proof. Let α₁ and α₂ be as stated in the theorem. From (1/r)S_λ x = y we get the system of equations

p x₀ + x₁ = y₀,
x₀ + p x₁ + x₂ = y₁,    (2.9)
x₁ + p x₂ + x₃ = y₂,
⋮
This is a nonhomogeneous linear recurrence relation. Using the fact that (x_n), (y_n) ∈ c₀, we can reach a solution of (2.9) with generating functions. This solution can be given by

x_n = (1/(α₂(1 − α₁²))) Σ_{k=0}^{∞} t_nk y_k,

where

t_nk = α₁^(n+k+2) − α₁^(|n−k|).    (2.11)
Let T = (t_nk). Using Theorem 1.1, we can see that T ∈ B(c₀). So (1/(α₂(1 − α₁²)))T is the resolvent operator of (1/r)S_λ, and it is continuous.
If T : μ → μ (where μ is ℓ₁ or c₀) is a bounded linear operator represented by the matrix A, then it is known that the adjoint operator T* : μ* → μ* is defined by the transpose Aᵗ of the matrix A. It should be noted that the dual space c₀* of c₀ is isometrically isomorphic to the Banach space ℓ₁, and the dual space ℓ₁* of ℓ₁ is isometrically isomorphic to the Banach space ℓ∞.
And by Cartlidge [14], if a matrix operator A is bounded on c, then σ(A, c) = σ(A, ℓ∞). Hence we have σ(S, c₀) = σ(S, ℓ₁) = σ(S, ℓ∞) = σ(S, c). What remains is to show that σ(S, c₀) ⊂ [q − 2r, q + 2r]. By Theorem 2.3, there exists a resolvent operator of S_λ which is continuous and has the whole space c₀ as its domain whenever the roots of the polynomial P(x) = x² + px + 1 satisfy

|α₁| < 1 < |α₂|.    (2.12)

So, if λ ∈ σ(S, c₀), then (2.12) is not satisfied. Since α₁α₂ = 1, the failure of (2.12) means that the roots can only be of the form

α₁ = e^(iθ),  α₂ = e^(−iθ)

for some θ ∈ [0, 2π). Then (q − λ)/r = p = −(α₁ + α₂) = −(e^(iθ) + e^(−iθ)) = −2 cos θ. Hence λ = q + 2r cos θ, which means λ can only be on the line segment [q − 2r, q + 2r].

Theorem 2.5. σ(S, μ) = [q − 2r, q + 2r] for μ ∈ {ℓ₁, c₀, c, ℓ∞}.
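Theorem 2.5 can be made plausible numerically: the n × n truncations of S(q, r) are tridiagonal Toeplitz matrices whose eigenvalues are classically q + 2r cos(kπ/(n + 1)), so they fill the segment [q − 2r, q + 2r] as n grows. A small check (ours, with sample real q and r):

```python
import numpy as np

def truncation_eigenvalues(q, r, n):
    """Eigenvalues of the n x n truncation of S(q, r), for real q and r."""
    S = q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.sort(np.linalg.eigvalsh(S))

q, r, n = 1.0, 0.5, 50
eigs = truncation_eigenvalues(q, r, n)
# Closed-form eigenvalues of a tridiagonal Toeplitz matrix.
expected = np.sort(q + 2 * r * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))

print(np.allclose(eigs, expected))                       # True
print(q - 2 * r <= eigs.min(), eigs.max() <= q + 2 * r)  # True True
```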

The Continuous Spectra and Residual Spectra
Proof. Similar to the proof of the previous theorem.

If T : c → c is a bounded matrix operator represented by the matrix A, then T* : c* → c*, acting on C ⊕ ℓ₁, has a matrix representation of the form

⎡ χ   0  ⎤
⎣ b   Aᵗ ⎦,

where χ is the limit of the sequence of row sums of A minus the sum of the limits of the columns of A, and b is the column vector whose kth entry is the limit of the kth column of A for each k ∈ N. For S_λ : c → c, each column of S_λ tends to 0 and the row sums tend to q + 2r − λ, so the matrix S_λ* is of the form

⎡ q + 2r − λ   0    ⎤
⎣ 0            S_λᵗ ⎦.

Theorem 3.5. σr(S, c) = {q + 2r}.
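The entry χ in the adjoint representation above can be read off numerically as well. In the sketch below (ours, with sample values), the row sums of a truncation of S_λ stabilize at q + 2r − λ, while every column is eventually zero, so χ = q + 2r − λ and b = θ:

```python
import numpy as np

q, r, lam, n = 1.0, 0.5, 0.25, 30
S_lam = (q - lam) * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))

# chi = (limit of the row sums of S_lam) - (sum of the limits of its columns).
# Each column has only finitely many nonzero entries, so the column limits vanish.
row_sums = S_lam.sum(axis=1)
print(row_sums[n // 2], q + 2 * r - lam)   # 1.75 1.75
```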

The Resolvent Operator
The following theorem is a generalization of Theorem 2.3.

Theorem 4.1. Let μ ∈ {c₀, c, ℓ₁, ℓ∞}. The resolvent operator S⁻¹ over μ exists and is continuous, and the domain of S⁻¹ is the whole space μ, if and only if 0 ∉ [q − 2r, q + 2r]. In this case, S⁻¹ has a matrix representation (s_nk) defined by

s_nk = (α₁^(|n−k|+1) − α₁^(n+k+3)) / (r(α₁² − 1)),

where α₁ is the root of the polynomial P(x) = rx² + qx + r with |α₁| < 1.
Proof. Let μ be one of the sequence spaces in {c₀, c, ℓ₁, ℓ∞}. Suppose S has a continuous resolvent operator whose domain is the whole space μ. Then λ = 0 is not in σ(S, μ) = [q − 2r, q + 2r]. Conversely, if 0 ∉ [q − 2r, q + 2r], then S has a continuous resolvent operator, and since S is bounded, by Lemma 7.2-7.3 of [12] the domain of this resolvent operator is the whole space μ. Now, suppose 0 ∉ [q − 2r, q + 2r]. Let α₁ and α₂ be the roots of the polynomial P(x) = rx² + qx + r with |α₁| < 1. By Theorem 2.3 applied with λ = 0, S⁻¹ has the matrix representation (s_nk) stated above when μ = c₀. The matrix S⁻¹ is already a left inverse of the matrix S. Observe that S⁻¹ also satisfies the corresponding conditions of Theorem 1.1, which means S⁻¹ ∈ (μ : μ) for μ ∈ {c, ℓ₁, ℓ∞}. So, the matrix S⁻¹ is the representation of the resolvent operator also for the spaces in {c, ℓ₁, ℓ∞}.
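The matrix (s_nk) of Theorem 4.1 can be checked on a finite truncation (this verification is ours; q = 4, r = 1 are sample values with 0 outside [q − 2r, q + 2r] = [2, 6]):

```python
import numpy as np

q, r, n = 4.0, 1.0, 40
roots = np.roots([r, q, r])
a1 = roots[np.argmin(np.abs(roots))]      # root of r*x^2 + q*x + r with |a1| < 1

S = q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))

# s_nk = (a1^(|n-k|+1) - a1^(n+k+3)) / (r * (a1^2 - 1)), truncated to n x n.
i, k = np.indices((n, n))
Sinv = (a1 ** (np.abs(i - k) + 1) - a1 ** (i + k + 3)) / (r * (a1 ** 2 - 1))

# Every row of S @ Sinv except the last (which feels the truncation) matches
# the corresponding row of the identity matrix.
P = S @ Sinv
print(np.allclose(P[: n - 1], np.eye(n)[: n - 1]))   # True
```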
Remark 4.2. If a matrix A is a triangle, we can see that the resolvent, when it exists, is the unique lower triangular left inverse of A. In our case, S is far from being a triangle, and the matrix S⁻¹ of this theorem is not the unique left inverse of the matrix S for 0 ∉ [q − 2r, q + 2r]. For example, one can construct another left inverse T = (t_nk) of S; then λS⁻¹ + (1 − λ)T is also a left inverse of S for any λ ∈ C, which means that there exist infinitely many left inverses of S.
Theorem 4.3. Let 0 ∉ [q − 2r, q + 2r], and let α₁ be the root of P(x) = rx² + qx + r with |α₁| < 1. Then the norm ‖S⁻¹‖_(μ:μ) is the same for each μ ∈ {c₀, c, ℓ₁, ℓ∞}.

Proof. Since S⁻¹ is a symmetric matrix, the supremum of the ℓ₁ norms of the rows is equal to the supremum of the ℓ₁ norms of the columns. So, according to Theorem 1.1, what we need is to calculate the supremum of the ℓ₁ norms of the rows of S⁻¹. Denote the nth row of S⁻¹ by S⁻¹_n for n = 0, 1, .... Now, let us fix the row n and calculate the ℓ₁ norm of this row. Let ρ = 1/|r(1 − α₁²)|. By using Theorem 4.1, we have

‖S⁻¹_n‖₁ = ρ Σ_{k=0}^{∞} |α₁^(|n−k|+1) − α₁^(n+k+3)|.    (4.6)
On the other hand