Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials

Via a generalization of the pseudospectral method for numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification of pseudospectral matrix representations of linear differential operators proposed in the paper, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family $\{p_\nu(x)\}_{\nu=0}^\infty$ orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations $\mathcal{A} p_\nu(x) =q_\nu(x) p_\nu(x)$, where $\mathcal{A}$ is a linear differential operator and each $q_\nu(x)$ is a polynomial of degree at most $n_0 \in \mathbb{N}$; $n_0$ does not depend on $\nu$. The proposed identities generalize known identities for classical and Krall orthogonal polynomials, to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator $\mathcal{A}$ for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials, are presented. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.


Summary of results
In this paper we identify a class of algebraic relations satisfied by the zeros of a wide class of orthogonal polynomials. To prove the identities, we generalize the notion of pseudospectral matrix representations of linear differential operators by allowing these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for all polynomials $\{p_\nu(x)\}_{\nu=0}^\infty$ orthogonal with respect to a measure satisfying some standard assumptions, as long as they satisfy the differential equations
$$A\, p_\nu(x) = q_\nu(x)\, p_\nu(x), \tag{1}$$
where $A$ is a linear differential operator and each $q_\nu(x)$ is a polynomial of degree at most $n_0 \in \mathbb{N}$; $n_0$ does not depend on $\nu$. This class includes the classical orthogonal polynomials [1,2], polynomials in the Askey scheme [3] and Krall polynomials [4], as well as additional classes of nonclassical orthogonal polynomials. If applied to classical or Krall orthogonal polynomials, the proposed result reduces to known identities [5,6,7], see Subsection 1.4. The motivation for this study stems from the understanding that zeros of orthogonal polynomials play an important role in mathematical physics, numerical analysis and related areas. For example, zeros of some orthogonal polynomials are equilibria of important N-body problems [8,9,10,11]. They arise as building blocks of remarkable isospectral matrices [12,13,14,15] and play an important role in the construction of highly accurate approximation schemes for numerical integration [16,17,18,19].
To prove algebraic identities satisfied by the zeros of the polynomials $\{p_\nu(x)\}_{\nu=0}^\infty$, we generalize and relate the notions of spectral and pseudospectral matrix representations of linear differential operators used in the corresponding numerical methods for solving differential equations. The standard pseudospectral matrix representations of linear differential operators are based on Lagrange collocation on the real line. These representations were proposed by Calogero in the context of the numerical solution of eigenvalue and boundary value problems for linear ODEs [20,21], see also [22,23,24,8,25]. Mitropolsky, Prykarpatsky and Samoylenko set up a general algebraic-projection framework based on Calogero's method and furthered its applications to the solution of evolution equations in mathematical physics [26]. The convergence of Calogero's method was analyzed in [27].
The standard pseudospectral method was utilized in [7] to prove new properties of the zeros of Krall polynomials. While Krall polynomials are eigenfunctions of linear differential operators, the polynomial families considered in this paper satisfy differential equations (1) with $q_n(x)$ being polynomials (as opposed to eigenvalues) of degree at most $n_0$, where $n_0$ does not depend on $n$. To prove new properties of the zeros of the latter polynomial families, we propose a generalization of the standard pseudospectral method.
Because, in general, the differential operator $A$ in (1) raises the degree of a polynomial by up to $n_0$, the standard pseudospectral method does not allow an exact discretization of the differential equations (1). The main idea of the proposed generalization is to construct Lagrange collocation type matrices that exactly represent linear differential operators acting between spaces of polynomials of different degrees. In this paper, this goal is achieved by allowing the matrix representations to depend on two sets of interpolation nodes rather than one. We thus find exact discretizations of the differential equations (1) satisfied by the polynomials $\{p_\nu(x)\}_{\nu=0}^\infty$; the discretizations are constructed using the zeros of these polynomials as the nodes. By comparing the generalized pseudospectral and the generalized spectral matrix representations of the differential operator $A$, we derive a family of algebraic identities satisfied by the zeros of the polynomials $\{p_\nu(x)\}_{\nu=0}^\infty$. The proposed generalization of the pseudospectral matrix representations of linear differential operators has applications beyond those outlined in this paper; for example, it simplifies the incorporation of initial or boundary conditions into linear systems that discretize certain ODEs, see the discussion in Section 4.
We illustrate the general result of the main Theorem 1.1 by applying it to the case of the Sonin-Markov polynomials, also known as generalized Hermite polynomials, see [28,29,16] and references therein. The zeros of the Sonin-Markov polynomials play an important role in the computation of integrals of singular or oscillatory functions [17,18] as well as in extended Lagrange interpolation on the real line [19].
To prove the identities satisfied by the zeros of the Sonin-Markov polynomials stated in Theorem 2.1, we compute the generalized pseudospectral matrix representations of the differential operators $\frac{d}{dx}$ and $\frac{d^2}{dx^2}$ as well as of the differential operator $D$ associated with the Sonin-Markov family, see (20b) and Subsection 2.2, assuming that the interpolation nodes are zeros of the Sonin-Markov polynomials. The formulas for these matrix representations as well as the identities of Theorem 2.1 have been verified in the MATLAB programming environment for several particular values of the relevant parameters (the degree $N$ and the parameter $\beta$, see Theorem 2.1 and definitions (16), (17)).

The orthogonal polynomial family $\{p_\nu(x)\}_{\nu=0}^\infty$
Let $\{p_\nu(x)\}_{\nu=0}^\infty$ be a sequence of polynomials orthogonal with respect to a measure $\omega$ and the corresponding inner product $\langle f, g \rangle = \int f g \, d\omega$. We denote the norm associated with this inner product by $\|\cdot\|$, that is, $\|f\|^2 = \int f^2 \, d\omega$. Assume that $\omega$ is a Borel measure with support on the real line satisfying the following three conditions: (a) $\omega$ is positive; (b) all its moments $\int x^\nu \, d\omega$ exist and are finite; (c) $\omega$ has infinitely many points in its support $I = \operatorname{supp} \omega$.
Under the above assumptions on the measure $\omega$, the zeros of each polynomial $p_\nu$, $\nu \geq 1$, are real, simple and belong to the convex hull of the support of $\omega$, see for example [16].

Notation 1. Here and throughout the rest of the paper, $N$ denotes a fixed integer strictly larger than 1, while $n_0$ is a fixed nonnegative integer. The small Greek letter $\rho$ denotes an index that may take the values $N$ or $N+n_0$. The small Greek letter $\nu$ denotes an integer index that usually takes the values $0, 1, 2, \ldots$, unless otherwise indicated. The small Latin letters $n, m, j, k$ etc. denote integer indices that usually run from 1 to $N$ or from 1 to $N+n_0$, see (2) and (1); we indicate the range of the indices each time they are used. We reserve the letter $\ell$ to denote polynomials in Lagrange interpolation bases.
Let $\mathcal{P}_\nu$ denote the space of all algebraic polynomials with real coefficients of degree at most $\nu$. Assume that for each $\nu$, the polynomials $\{p_j(x)\}_{j=0}^\nu$ form a basis of $\mathcal{P}_\nu$. Let $A$ be a linear differential operator acting on functions of one variable. Assume that $A$ has the property
$$A\, \mathcal{P}_\nu \subseteq \mathcal{P}_{\nu+n_0} \tag{2}$$
for all $\nu$. Recall that $n_0$ is a fixed nonnegative integer; it does not depend on $\nu$.
For example, the differential operator $D = a_0 + a_1(x)\frac{d}{dx} + \ldots + a_q(x)\frac{d^q}{dx^q}$ with $q \in \mathbb{N}$ and $a_j(x) \in \mathcal{P}_j$ for all $j = 1, 2, \ldots, q$ has property (2) with $n_0 = 0$, while the operator $x^2 D$ has property (2) with $n_0 = 2$.
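As a quick sanity check of property (2) for these two examples, the degrees can be verified symbolically. The following sketch (ours, not from the paper) represents polynomials as coefficient lists and tests the particular instance $D = 1 + x\,\frac{d}{dx}$ (so $a_1(x) = x \in \mathcal{P}_1$) and the operator $x^2 D$ on monomials.

```python
# Illustrative sketch (not from the paper): a polynomial is a coefficient
# list [c0, c1, ...] representing c0 + c1*x + ...; we check property (2).

def deriv(p):
    """Coefficient list of p'(x)."""
    return [j * p[j] for j in range(1, len(p))] or [0]

def shift(p, k):
    """Coefficient list of x^k * p(x)."""
    return [0] * k + p

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p)); q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def degree(p):
    d = -1
    for j, c in enumerate(p):
        if c != 0:
            d = j
    return d

# Operator D = 1 + x*(d/dx): here a_1(x) = x lies in P_1, so n0 = 0.
def D(p):
    return add(p, shift(deriv(p), 1))

# Operator x^2 * D raises the degree by at most n0 = 2.
def x2D(p):
    return shift(D(p), 2)

for nu in range(1, 6):
    monomial = [0] * nu + [1]               # x^nu
    assert degree(D(monomial)) <= nu        # D P_nu ⊆ P_nu
    assert degree(x2D(monomial)) <= nu + 2  # (x^2 D) P_nu ⊆ P_{nu+2}
```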

Generalized pseudospectral and spectral matrix representations of linear differential operators
In this Subsection we introduce the notions of a generalized pseudospectral and a generalized spectral matrix representation of the linear differential operator $A$ introduced in the previous Subsection. Note that, in general, the definitions of these generalized matrix representations hold for any linear differential operator $D$, which may or may not satisfy property (2), and for $n_0$ being a (positive or negative) integer such that $N + n_0 \geq 1$.

The definition of the standard pseudospectral $N \times N$ matrix representation $\tilde{A}^c$ of $A$, see [30,31,7,25], is motivated by a search for an exact discretization of a differential equation
$$A\, u(x) = f(x) \tag{3}$$
under the assumption that it possesses a polynomial solution $u \in \mathcal{P}_{N-1}$. More precisely, choose a vector of distinct real nodes $x^{(N)} = \big(x^{(N)}_1, \ldots, x^{(N)}_N\big)$ and define the isomorphism $\pi_N : \mathcal{P}_{N-1} \to \mathbb{R}^N$ by
$$\pi_N\, g = \Big(g\big(x^{(N)}_1\big), \ldots, g\big(x^{(N)}_N\big)\Big)^T. \tag{4}$$
The inverse of $\pi_N$ is of course given in terms of the standard Lagrange interpolation basis $\{\ell_{N-1,j}(x)\}_{j=1}^N$ of $\mathcal{P}_{N-1}$ constructed using the nodes $x^{(N)}$: for every vector $g^{(N)} = (g_1, \ldots, g_N)^T \in \mathbb{R}^N$, $(\pi_N)^{-1} g^{(N)} = \sum_{j=1}^{N} g_j\, \ell_{N-1,j}(x)$. The standard pseudospectral $N \times N$ matrix representation $\tilde{A}^c$ of the differential operator $A$ is defined as the unique $N \times N$ matrix that satisfies the condition $\pi_N A g = \tilde{A}^c \pi_N g$ for all $g \in \mathcal{P}_{N-1}$. Note that the superscript `c' in the notation of the matrix $\tilde{A}^c$ stands for "collocation" in the spectral collocation method for the numerical solution of differential equations, also known as the pseudospectral method [25]. It is not difficult to conclude that the matrix $\tilde{A}^c$ is given componentwise by $\tilde{A}^c_{kj} = \big(A \ell_{N-1,j}\big)\big(x^{(N)}_k\big)$, $1 \leq k, j \leq N$ [20,21,26,27,31,30,7]. The last definition implies that a vector $u^{(N)} \in \mathbb{R}^N$ solves the linear system $\tilde{A}^c u^{(N)} = \pi_N f$ if and only if the polynomial $u = (\pi_N)^{-1} u^{(N)}$ satisfies ODE (3) at the nodes $x^{(N)}$. Let us now generalize this notion of pseudospectral matrix representation of the differential operator $A$ to take advantage of its property (2).
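For readers who prefer code, the standard construction can be sketched in a few lines (an illustrative Python fragment of ours, not part of the paper's method): for $A = \frac{d}{dx}$ the matrix $\tilde{A}^c$ has entries $\ell'_{N-1,j}\big(x^{(N)}_k\big)$, computed below via barycentric weights, and it maps the nodal values of any $u \in \mathcal{P}_{N-1}$ to the nodal values of $u'$ exactly.

```python
def diff_matrix(nodes):
    """Standard pseudospectral matrix of A = d/dx on distinct nodes:
    entry (k, j) equals l'_j(x_k), l_j the Lagrange basis polynomials."""
    n = len(nodes)
    # barycentric weights w_j = 1 / prod_{m != j} (x_j - x_m)
    w = [1.0] * n
    for j in range(n):
        for m in range(n):
            if m != j:
                w[j] /= (nodes[j] - nodes[m])
    A = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(n):
            if j != k:
                A[k][j] = (w[j] / w[k]) / (nodes[k] - nodes[j])
        # row sums vanish because the derivative of a constant is zero
        A[k][k] = -sum(A[k][j] for j in range(n) if j != k)
    return A

# Exactness check: u(x) = x^3 on N = 4 nodes, so u lies in P_{N-1}.
nodes = [0.0, 1.0, 2.0, 3.0]
A = diff_matrix(nodes)
u = [x**3 for x in nodes]               # pi_N u
du = [sum(A[k][j] * u[j] for j in range(4)) for k in range(4)]
assert all(abs(du[k] - 3 * nodes[k]**2) < 1e-9 for k in range(4))
```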
In addition to the interpolation nodes $x^{(N)}$, consider another vector of distinct real nodes $x^{(N+n_0)} = \big(x^{(N+n_0)}_1, \ldots, x^{(N+n_0)}_{N+n_0}\big)$. In short, we will work with two vectors of nodes $x^{(\rho)}$, where $\rho = N$ or $\rho = N+n_0$, see Notation 1, and the respective Lagrange interpolation bases $\{\ell_{\rho-1,j}(x)\}_{j=1}^{\rho}$. Recall that for each $j \in \{1, \ldots, \rho\}$,
$$\ell_{\rho-1,j}(x) = \frac{\psi_\rho(x)}{\psi'_\rho\big(x^{(\rho)}_j\big)\,\big(x - x^{(\rho)}_j\big)}, \tag{6}$$
where
$$\psi_\rho(x) = \prod_{k=1}^{\rho} \big(x - x^{(\rho)}_k\big)$$
is the node polynomial.
We define the generalized pseudospectral $(N+n_0) \times N$ matrix representation $A^c\big(x^{(N)}, x^{(N+n_0)}\big) \equiv A^c$ of the differential operator $A$ componentwise by
$$A^c_{kj} = \big(A\, \ell_{N-1,j}\big)\big(x^{(N+n_0)}_k\big), \quad 1 \leq k \leq N+n_0, \quad 1 \leq j \leq N, \tag{7}$$
where, as before, the superscript `c' stands for "collocation". This definition is motivated by the following relation:
$$\pi_{N+n_0}\, A g = A^c\, \pi_N\, g \quad \text{for all } g \in \mathcal{P}_{N-1},$$
where the isomorphisms $\pi_\rho : \mathcal{P}_{\rho-1} \to \mathbb{R}^\rho$ are defined by (4) with $N$ replaced by $\rho$, $\rho \in \{N, N+n_0\}$. In other words, if ODE (3) has a polynomial solution $u \in \mathcal{P}_{N-1}$, then the vector $u^{(N)} \in \mathbb{R}^N$ solves the system of linear equations $A^c u^{(N)} = \pi_{N+n_0} f$. Using analogous motivation, we define the generalized spectral $(N+n_0) \times N$ matrix representation $A^\tau$ of the linear differential operator $A$ componentwise by
$$A\, p_{n-1}(x) = \sum_{m=1}^{N+n_0} A^\tau_{mn}\, p_{m-1}(x), \quad 1 \leq n \leq N. \tag{8}$$
Here, the superscript `$\tau$' indicates that the $\tau$-variant of the spectral method is used [25]. We prove that the $(N+n_0) \times N$ matrices $A^\tau$ and $A^c$ satisfy the following property:
$$L^{(N+n_0-1)}\, A^c = A^\tau\, L^{(N-1)}, \tag{9}$$
where each of the two $\rho \times \rho$ matrices $L^{(\rho-1)}$ with $\rho = N$ or $\rho = N+n_0$ is the transition matrix from the orthogonal polynomial basis $\{p_j(x)\}_{j=0}^{\rho-1}$ to the Lagrange interpolation basis $\{\ell_{\rho-1,j}\}_{j=1}^{\rho}$, see Theorem 3.1 in Section 3.
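The rectangular definition (7) can likewise be illustrated numerically. The following sketch (ours; the toy operator $A g = x^2 g$, which raises the degree by $n_0 = 2$, is chosen only for simplicity and is not the paper's differential operator) builds the $(N+n_0) \times N$ matrix $A^c$ from two node sets and checks the relation $\pi_{N+n_0} A g = A^c \pi_N g$ on a test polynomial $g \in \mathcal{P}_{N-1}$.

```python
def lagrange_basis(nodes, j, t):
    """Evaluate the Lagrange basis polynomial l_{N-1,j} on `nodes` at t."""
    val = 1.0
    for m, xm in enumerate(nodes):
        if m != j:
            val *= (t - xm) / (nodes[j] - xm)
    return val

def generalized_rep(op, x_in, x_out):
    """Definition (7): entry (k, j) is (A l_{N-1,j})(x_out_k); the operator
    is applied by evaluating `op` on the j-th basis polynomial at x_out_k."""
    return [[op(lambda t, j=j: lagrange_basis(x_in, j, t), xk)
             for j in range(len(x_in))]
            for xk in x_out]

# Toy operator A g = x^2 * g, which maps P_nu into P_{nu+2} (n0 = 2).
def A(g, x):
    return x * x * g(x)

N = 3
x_in = [0.0, 1.0, 2.0]                  # N nodes
x_out = [-2.0, -1.0, 0.0, 1.0, 2.0]     # N + n0 nodes
Ac = generalized_rep(A, x_in, x_out)    # (N+2) x N matrix

# Exactness: for g(x) = x^2 - 1 in P_{N-1}, Ac (pi_N g) = pi_{N+2} (A g).
g_in = [x * x - 1 for x in x_in]
lhs = [sum(Ac[k][j] * g_in[j] for j in range(N)) for k in range(N + 2)]
rhs = [x * x * (x * x - 1) for x in x_out]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```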

Main result: algebraic identities satisfied by the zeros of the polynomials $\{p_\nu(x)\}_{\nu=0}^\infty$
Let us now assume that $x^{(N)}$ are the zeros of the polynomial $p_N(x)$ and that $x^{(N+n_0)}$ are the zeros of the polynomial $p_{N+n_0}(x)$. We therefore use these two sets of zeros as the nodes in the definition of the pseudospectral matrix representation $A^c$ of the differential operator $A$, see (7) and (6). In this case, each of the two matrices $L^{(\rho-1)}$ in the relation (9), where $\rho = N$ or $\rho = N+n_0$, can be expressed in terms of the values $p_{m-1}\big(x^{(\rho)}_j\big)$ and the Christoffel numbers $\lambda^{(\rho)}_j$:
$$L^{(\rho-1)}_{mj} = \lambda^{(\rho)}_j\, p_{m-1}\big(x^{(\rho)}_j\big), \tag{10}$$
where the Christoffel numbers $\lambda^{(\rho)}_j$ are the weights of the Gaussian quadrature rule
$$\int f \, d\omega \approx \sum_{j=1}^{\rho} \lambda^{(\rho)}_j\, f\big(x^{(\rho)}_j\big) \tag{11}$$
based on the nodes $x^{(\rho)}$, see Theorem 3.1.
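In the simplest classical special case the Christoffel numbers can be computed explicitly. The sketch below (ours, not from the paper) takes $\omega$ to be the Hermite measure $e^{-x^2}dx$, uses the known zeros $0, \pm\sqrt{3/2}$ of the degree-3 Hermite polynomial as the nodes, evaluates $\lambda^{(3)}_j$ via a standard identity for orthonormal families, $\lambda_j = \big[\sum_{k=0}^{\rho-1} p_k(x_j)^2\big]^{-1}$, and verifies positivity and the degree of exactness $2\rho - 1 = 5$ of the resulting Gaussian rule (11).

```python
import math

# Orthonormal Hermite polynomials with respect to w(x) = exp(-x^2):
# p0 = pi^(-1/4), p1 = sqrt(2) x pi^(-1/4), p2 = (2x^2 - 1) pi^(-1/4)/sqrt(2)
def p(k, x):
    c = math.pi ** (-0.25)
    return [c, math.sqrt(2.0) * x * c,
            (2 * x * x - 1) / math.sqrt(2.0) * c][k]

# Zeros of the degree-3 Hermite polynomial (nodes of the 3-point Gauss rule)
nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]

# Christoffel numbers: lambda_j = 1 / sum_{k=0}^{rho-1} p_k(x_j)^2
lam = [1.0 / sum(p(k, x) ** 2 for k in range(3)) for x in nodes]

assert all(l > 0 for l in lam)                      # always positive
# Degree of exactness 5: compare moments of w with their exact values
sqrt_pi = math.sqrt(math.pi)
assert abs(sum(lam) - sqrt_pi) < 1e-12              # integral of 1
assert abs(sum(l * x**2 for l, x in zip(lam, nodes)) - sqrt_pi / 2) < 1e-12
assert abs(sum(l * x**4 for l, x in zip(lam, nodes)) - 3 * sqrt_pi / 4) < 1e-12
```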
Recall that Christoffel numbers arise as the weights in the Gaussian quadrature numerical integration formulas; they are always positive [16]. Christoffel numbers play an important role in the proof of the main identity (13) presented in this paper, although they are eliminated from that identity in the process of inversion of the matrices $L^{(\rho-1)}$: the inverses of $L^{(\rho-1)}$ are given componentwise by
$$\Big[\big(L^{(\rho-1)}\big)^{-1}\Big]_{jm} = p_{m-1}\big(x^{(\rho)}_j\big), \tag{12}$$
see (45) in the proof of Theorem 3.1. Using property (9) of the matrix representations $A^\tau$ and $A^c$, together with the neat formulas (12) for the matrices $\big(L^{(\rho-1)}\big)^{-1}$ in the case where $x^{(\rho)}$ are the zeros of $p_\rho(x)$, $\rho \in \{N, N+n_0\}$, we prove the following algebraic identities satisfied by the zeros of the polynomials in the family $\{p_\nu(x)\}_{\nu=0}^\infty$.
Theorem 1.1. The zeros of the polynomials in the family $\{p_\nu(x)\}_{\nu=0}^\infty$ of generalized eigenfunctions of the linear differential operator $A$, see (1), satisfy the following algebraic relations:
$$\sum_{j=1}^{N} A^c_{mj}\, p_{n-1}\big(x^{(N)}_j\big) = \sum_{k=n-n_0}^{n+n_0} p_{k-1}\big(x^{(N+n_0)}_m\big)\, A^\tau_{kn} \tag{13}$$
for all integer $m, n$ such that $1 \leq m \leq N+n_0$ and $1 \leq n \leq N$ (terms with $k < 1$ are absent), where the $(N+n_0) \times N$ pseudospectral and spectral matrix representations $A^c\big(x^{(N)}, x^{(N+n_0)}\big) \equiv A^c$ and $A^\tau$, respectively, are defined by (7) and (8).
Remark 1. For every pair of integers $n, m$ such that $0 \leq n \leq N-1$ and $1 \leq m \leq N+n_0$, identity (13) relates the zeros of the polynomials $p_N(x)$, $p_{N+n_0}(x)$ and $p_n(x)$ with the zeros of all the polynomials $p_k(x)$ such that the index $k \in \{0, 1, \ldots, N+n_0-1\}$ satisfies $n - n_0 \leq k \leq n + n_0$.
Remark 2. The main identity of Theorem 1.1 may be recast in the equivalent vector form (14), which expresses both sides of (13) as products involving row-vectors $w$ of polynomial values and the standard basis vectors $\{e^{(\rho)}_k\}_{k=1}^{\rho}$ of $\mathbb{R}^\rho$. Theorem 1.1 is proved in Section 3. Identities (13) or, equivalently, (14), are remarkable in the sense that they reveal a deeper structure of, and relation between, the linear operators $A^c$ and $A^\tau$, the generalized pseudospectral and spectral representations, respectively, of the differential operator $A$, for the case where these operators are constructed using two sets of zeros of orthogonal polynomials as the nodes.
Let us compare the main result of Theorem 1.1 with other results of this kind. By setting $n_0 = 0$ in identities (13), (14), we obtain
$$\sum_{j=1}^{N} A^c_{mj}\, p_{n-1}\big(x^{(N)}_j\big) = q_{n-1}\, p_{n-1}\big(x^{(N)}_m\big) \tag{15}$$
for all $m, n$ such that $1 \leq m, n \leq N$. In this case, of course, the $q_\nu$ are constants, see (1), and $\{p_\nu(x)\}_{\nu=0}^\infty$ are either classical or Krall orthogonal polynomials. In this special case where $n_0 = 0$, the identities of Theorem 1.1 reduce to those reported in [7]. It was shown in [7] that, if applied to the classical Jacobi, Hermite or Laguerre polynomials, identities (15) reduce to known identities for the zeros of these polynomials reported in [5,6].
Therefore, identities (13), (14) generalize similar results for classical orthogonal polynomials proved in [5,6] and for Krall polynomials proved in [7]. These identities may be considered analogues, for the polynomial families considered in this paper, of the properties of the zeros of the Askey scheme and generalized hypergeometric polynomials proved in [12,13,14,15]. An application of the identities proved in this paper is the study of the asymptotic behavior of algebraic expressions involving the zeros of orthogonal polynomials of degree $N$ as $N \to \infty$, see [32,33].
In Section 2 we apply Theorem 1.1 to prove new identities satisfied by the zeros of the nonclassical Sonin-Markov orthogonal polynomials. In Section 3, "Proofs", we elaborate on the proofs of most of the theorems of this paper, except for those that are straightforward consequences of another theorem. In Section 4, "Discussion and Outlook", we summarize the results proposed in this paper and discuss their importance, possible applications and further developments.

Application: Properties of the Zeros of the Sonin-Markov Orthogonal Polynomials
In this Section 2 we illustrate Theorem 1.1 by applying it to the case of the Sonin-Markov orthogonal polynomials, which are generalized eigenfunctions of a certain linear differential operator $D$, see (20). This application requires a computation of the generalized spectral and the generalized pseudospectral matrix representations of the differential operator $D$ associated with the Sonin-Markov polynomials. We thus proceed as follows. In Subsection 2.1 we define the Sonin-Markov polynomials and state their basic properties. In Subsections 2.2 and 2.3 we compute, respectively, the generalized pseudospectral and the generalized spectral representations of the differential operator $D$. Finally, in Subsection 2.4 we provide a family of new algebraic properties satisfied by the zeros of the Sonin-Markov polynomials.

Definition and Basic Properties of the Sonin-Markov Polynomials
The Sonin-Markov polynomials $\{p^\beta_\nu(x)\}_{\nu=0}^\infty$, which are also known as generalized Hermite polynomials, are orthogonal on $I = (-\infty, \infty)$ with respect to the weight $w(x) = |x|^\beta e^{-x^2}$ (see [28,16] and references therein). These polynomials can be expressed in terms of the generalized Laguerre polynomials as follows:
$$p^\beta_{2n}(x) = c_n\, L^{\alpha}_n(x^2), \qquad p^\beta_{2n+1}(x) = d_n\, x\, L^{\alpha+1}_n(x^2), \tag{16}$$
where
$$\alpha = (\beta-1)/2 \tag{17}$$
and the coefficients
$$c_n = (-1)^n \sqrt{\frac{n!}{\Gamma(n+1+\alpha)}}, \qquad d_n = (-1)^n \sqrt{\frac{n!}{\Gamma(n+2+\alpha)}} \tag{18}$$
are chosen to make the polynomials orthonormal and the leading coefficients positive. Note that the leading coefficients $K_\nu$ of $p^\beta_\nu(x)$ are given by
$$K_{2n} = \frac{1}{\sqrt{n!\,\Gamma(n+1+\alpha)}}, \qquad K_{2n+1} = \frac{1}{\sqrt{n!\,\Gamma(n+2+\alpha)}}. \tag{19}$$
The zeros of the Sonin-Markov polynomials are distinct, real and symmetric with respect to the origin. The Sonin-Markov polynomials satisfy the following differential equations:
$$D\, p^\beta_\nu(x) = q_\nu(x)\, p^\beta_\nu(x), \qquad q_\nu(x) = \mu_\nu x^2 + \eta_\nu, \qquad \mu_\nu = -2\nu, \qquad \eta_\nu = \begin{cases} 0, & \nu \text{ even}, \\ \beta, & \nu \text{ odd}, \end{cases} \tag{20a}$$
where $D$ is the linear differential operator given by
$$D = x^2 \frac{d^2}{dx^2} + \big(\beta x - 2x^3\big) \frac{d}{dx}. \tag{20b}$$
These differential equations can be derived from the corresponding differential equations satisfied by the generalized Laguerre polynomials stated in [3], see also [1] and formula (3.5) in [34]. Clearly, $D \mathcal{P}_\nu \subseteq \mathcal{P}_{\nu+2}$, so we can apply Theorem 1.1 with $n_0 = 2$ to these polynomials.


The Generalized Pseudospectral Matrix Representation of the Sonin-Markov Differential Operator (20b)

Let $x^{(N)} = \big(x^{(N)}_1, \ldots, x^{(N)}_N\big)$ be the zeros of the Sonin-Markov polynomial $p_N(x) \equiv p^\beta_N(x)$ and let $x^{(N+2)} = \big(x^{(N+2)}_1, \ldots, x^{(N+2)}_{N+2}\big)$ be the zeros of the Sonin-Markov polynomial $p_{N+2}(x) \equiv p^\beta_{N+2}(x)$. In this subsection we find the generalized pseudospectral matrix representation $D^c\big(x^{(N)}, x^{(N+2)}\big) \equiv D^c$ of the Sonin-Markov differential operator (20b) with respect to the two vectors of nodes $x^{(N)}$ and $x^{(N+2)}$. To compute this representation $D^c$, we use definition (7) with $n_0 = 2$. In the following, we use the notation of Subsection 2.1; note the appropriate definitions of $\alpha$, $c_n$, $d_n$ and $\mu_\nu$, $\eta_\nu$.
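The Laguerre expressions for the Sonin-Markov family are easy to verify numerically. The following sketch (ours, not from the paper) evaluates the generalized Laguerre polynomials via their standard three-term recurrence, assembles $p^\beta_\nu(x)$ from the even/odd expressions (16) with the coefficients (18), and checks orthonormality with respect to $w(x) = |x|^\beta e^{-x^2}$ by simple quadrature, here for $\beta = 2$.

```python
import math

def laguerre(n, a, t):
    """Generalized Laguerre L_n^a(t) via the standard three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + a - t
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 + a - t) * cur - (k + a) * prev) / (k + 1)
    return cur

def sonin_markov(nu, beta, x):
    """p_nu^beta(x) from the Laguerre expressions, alpha = (beta - 1)/2."""
    a = (beta - 1) / 2
    n, r = divmod(nu, 2)
    if r == 0:
        c = (-1) ** n * math.sqrt(math.factorial(n) / math.gamma(n + 1 + a))
        return c * laguerre(n, a, x * x)
    d = (-1) ** n * math.sqrt(math.factorial(n) / math.gamma(n + 2 + a))
    return d * x * laguerre(n, a + 1, x * x)

def inner(f, g, beta, h=1e-3, cut=10.0):
    """Trapezoidal approximation of int f g |x|^beta exp(-x^2) dx."""
    vals, x = [], -cut
    while x <= cut:
        vals.append(f(x) * g(x) * abs(x) ** beta * math.exp(-x * x))
        x += h
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

beta = 2.0
p2 = lambda x: sonin_markov(2, beta, x)
p4 = lambda x: sonin_markov(4, beta, x)
assert abs(inner(p2, p4, beta)) < 1e-6      # orthogonality
assert abs(inner(p2, p2, beta) - 1) < 1e-6  # normalization
```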
Let $\{\ell_{N-1,j}(x)\}_{j=1}^N$ be the Lagrange interpolation basis with respect to the nodes $\{x^{(N)}_j\}_{j=1}^N$, defined by (6) with $\rho = N$ and $\psi_N(x)$ replaced by $p_N(x)$. Let $Z^{(k)}\big(x^{(N)}, x^{(N+2)}\big) \equiv Z^{(k)}$ be the $(N+2) \times N$ generalized pseudospectral matrix representation of the differential operator $\frac{d^k}{dx^k}$ with respect to the two vectors of nodes $x^{(N)}$ and $x^{(N+2)}$. By definition (7), its components are given by
$$Z^{(k)}_{mn} = \frac{d^k}{dx^k}\, \ell_{N-1,n}(x) \Big|_{x = x^{(N+2)}_m}. \tag{21}$$
By (20b), the generalized pseudospectral $(N+2) \times N$ matrix representation $D^c$ of the differential operator $D$ is given componentwise by
$$D^c_{mn} = \big(x^{(N+2)}_m\big)^2\, Z^{(2)}_{mn} + \Big(\beta\, x^{(N+2)}_m - 2\big(x^{(N+2)}_m\big)^3\Big)\, Z^{(1)}_{mn}, \tag{23}$$
see definition (7). By using the fact that the Sonin-Markov polynomial $p_N(x)$ satisfies differential equation (20) and the differential equation obtained by differentiation of (20) with respect to $x$ (with $\nu = N$), we simplify the formulas for $Z^{(k)}_{mn}$, where $k = 1, 2$, to the forms (24) and (25), in which the quantities $B_{mn}$ are defined by (26). By using these expressions for the components of the matrices $Z^{(1)}$ and $Z^{(2)}$ in (23), we obtain formulas (27) for the components of the generalized pseudospectral matrix representation $D^c$, where, again, $B_{mn}$ is given by definition (26).
Remark 3. If $\rho$ is even, the roots $\{x^{(\rho)}_j\}_{j=1}^{\rho}$ of the Sonin-Markov polynomial $p_\rho(x) \equiv p^\beta_\rho(x)$ are all distinct from zero, because the roots of every generalized Laguerre polynomial are all distinct from zero, see definition (16). Therefore, case (27b) does not apply to even values of $N$.

Remark 4. It is known that two generalized Laguerre polynomials $L^\alpha_n(x)$ and $L^\alpha_{n+1}(x)$ do not have common roots [16,2]. But then, by definition (16), two Sonin-Markov polynomials $p^\beta_N(x)$ and $p^\beta_{N+2}(x)$ have a common root $\bar{x}$ if and only if $N$ is odd and $\bar{x} = 0$. This is why formulas (24) and (25) treat the case of a common node at the origin, which occurs only for odd $N$, separately.

The Generalized Spectral Matrix Representation of the Sonin-Markov Differential Operator (20b)
The generalized spectral $(N+2) \times N$ matrix representation $D^\tau$ of the operator $D$ is defined by formula (8). To find an explicit formula for the components of $D^\tau$, we will employ recurrence relations satisfied by the Sonin-Markov polynomials.
Remark 5. The coefficients $\alpha_\nu$ and $\beta_\nu$ defined in (29a) and (29b), respectively, must not be confused with the parameters $\alpha$ and $\beta$ in the definition of the Sonin-Markov polynomials, see (16) and (17).
For completeness, let us mention that, as a consequence of their orthogonality, the Sonin-Markov polynomials satisfy the standard three-term recurrence relation (28), with the convention that $p_j(x) = 0$ if $j < 0$. Of course, the three-term recurrence relation (28) is a consequence of the recurrence relation (31).

Algebraic Properties of the Zeros of the Sonin-Markov Polynomials
Having found the generalized pseudospectral and the generalized spectral matrix representations $D^c$ and $D^\tau$, respectively, of the differential operator $D$, we apply Theorem 1.1 to the case of the Sonin-Markov polynomials. We thus obtain the algebraic identities of Theorem 2.1 satisfied by their zeros, in which $B_{mn}$ are defined by (26) and, as before, $p_j(x) = 0$ if $j < 0$.

Proofs
The proof of Theorem 1.1 is based on the following result.
Expand the polynomial $u \in \mathcal{P}_{N-1}$ in the Lagrange basis and in the orthogonal polynomial basis, see (37) and (38), where the coefficients $u^c_j$ and $u^\tau_j$, respectively, are the components of the column-vectors $u^c$ and $u^\tau$, and $1 \leq j \leq N$. To prove relation (36), we will show that $u^\tau = L^{(N-1)} u^c$ and $L^{(N+n_0-1)} A^c u^c = A^\tau u^\tau$.

Let us expand each Lagrange basis polynomial in the orthogonal basis,
$$\ell_{N-1,j}(x) = \sum_{m=1}^{N} L^{(N-1)}_{mj}\, p_{m-1}(x), \tag{41}$$
where the coefficients $L^{(N-1)}_{mj}$ are given by (35). Upon a substitution of (41) into (37), we obtain $u^\tau = L^{(N-1)} u^c$.

To obtain the equality $L^{(N+n_0-1)} A^c u^c = A^\tau u^\tau$, we first notice that, because $u \in \mathcal{P}_{N-1}$ and the operator $A$ satisfies $A \mathcal{P}_{N-1} \subseteq \mathcal{P}_{N+n_0-1}$, we have
$$A u = \sum_{m=1}^{N+n_0} \big(A^\tau u^\tau\big)_m\, p_{m-1}(x). \tag{43}$$
On the other hand,
$$A u = \sum_{j=1}^{N+n_0} \big(A^c u^c\big)_j\, \ell_{N+n_0-1,j}(x) = \sum_{m=1}^{N+n_0} \big(L^{(N+n_0-1)} A^c u^c\big)_m\, p_{m-1}(x). \tag{44}$$
By comparing expansions (43) and (44), we obtain $L^{(N+n_0-1)} A^c u^c = A^\tau u^\tau$, and so finish the proof of relation (36).

Second, let us assume that $x^{(\rho)}$ are the zeros of the polynomial $p_\rho(x)$, where $\rho = N$ or $\rho = N+n_0$. Let us prove that the transition matrix $L^{(\rho-1)}$ is given componentwise by (10). The Gaussian rule for approximate integration with respect to the measure $\omega$ based on these nodes $x^{(\rho)}_1, \ldots, x^{(\rho)}_\rho$ has degree of exactness $2\rho - 1$, see, for example, Theorem 5.1.2 of [16]. Therefore, for the polynomial $\ell_{\rho-1,j}(x)\, p_{m-1}(x)$, whose degree does not exceed $2\rho - 2$, the Gaussian rule is exact and yields
$$L^{(\rho-1)}_{mj} = \big\langle \ell_{\rho-1,j},\, p_{m-1} \big\rangle = \lambda^{(\rho)}_j\, p_{m-1}\big(x^{(\rho)}_j\big),$$
which implies (10). By applying the Gaussian rule to the polynomial $p_{m-1}(x)\, p_{n-1}(x)$, where $1 \leq m, n \leq \rho$, we obtain
$$\sum_{j=1}^{\rho} \lambda^{(\rho)}_j\, p_{m-1}\big(x^{(\rho)}_j\big)\, p_{n-1}\big(x^{(\rho)}_j\big) = \big\langle p_{m-1},\, p_{n-1} \big\rangle = \delta_{mn},$$
which implies (12).

We note that the index $k \in \{1, \ldots, N+n_0\}$ in the sum on the right hand side of identity (13) ranges from $n - n_0$ to $n + n_0$ rather than from 1 to $N+n_0$. This is due to the following consideration. Because the polynomial family $\{p_\nu(x)\}_{\nu=0}^\infty$ is orthogonal and has the property that $\{p_j(x)\}_{j=0}^\nu$ is a basis of $\mathcal{P}_\nu$ for each $\nu$, this family must satisfy a three-term recurrence relation
$$x\, p_\nu(x) = \sum_{k} a_{\nu,k}\, p_k(x),$$
where the $a_{\nu,k}$ are constants that equal zero if $k < 0$ or if $k$ is outside of the range $\{\nu-1, \nu, \nu+1\}$ [16]. From the last recurrence relation, and because $\deg q_\nu \leq n_0$, we derive
$$q_\nu(x)\, p_\nu(x) = \sum_{k=\nu-n_0}^{\nu+n_0} b_{\nu,k}\, p_k(x),$$
where the $b_{\nu,k}$ are constants that equal zero if $k < 0$. Thus, $A^\tau_{kn} = 0$ for all integer $n, k$ such that $1 \leq k \leq N+n_0$ and $1 \leq n \leq N$, if the index $k$ is outside of the range $\{n - n_0, \ldots, n + n_0\}$.

Discussion and Outlook
The identities of the main Theorem 1.1 are derived using a very special exact discretization of the differential equation
$$A\, p_N(x) = q_N(x)\, p_N(x), \tag{49}$$
compare with (1). This discretization is constructed using the fact that ODE (49) has a polynomial solution $p_N(x)$, where $p_N(x)$ is a member of the nonclassical orthogonal polynomial family $\{p_\nu(x)\}_{\nu=0}^\infty$. Because the differential operator $A$ maps $\mathcal{P}_N$ to $\mathcal{P}_{N+n_0}$, see property (2), we choose to represent the operator by an $(N+n_0) \times N$ matrix $A^c\big(x^{(N)}, x^{(N+n_0)}\big) \equiv A^c$ defined by (7) in terms of two vectors of interpolation nodes, $x^{(N)}$ and $x^{(N+n_0)}$. We thus generalize the notion of pseudospectral matrix representations of a linear differential operator.
This generalization can be utilized to resolve the issues related to the incorporation of initial or boundary conditions into pseudospectral methods for solving a linear ODE $Lu = f$, where $L$ is a linear differential operator, see [30,31]. For example, if the differential operator $L$ has the property $L \mathcal{P}_\nu \subseteq \mathcal{P}_{\nu-k_0}$, where $k_0$ is a positive integer, it is beneficial to use the generalized pseudospectral $(N-k_0) \times N$ matrix representation $L^c$ of the differential operator $L$ with respect to two vectors of nodes $x^{(N)} \in \mathbb{R}^N$ and $x^{(N-k_0)} \in \mathbb{R}^{N-k_0}$. Of course, $L^c$ is defined componentwise by $L^c_{kj} = \big(L \ell_{N-1,j}\big)\big(x^{(N-k_0)}_k\big)$, compare with (7), where $\{\ell_{N-1,j}(x)\}_{j=1}^N$ is the standard Lagrange interpolation basis with respect to the nodes $x^{(N)}$.
We can then approximate the ODE $Lu = f$ by the system of linear equations $L^c u^{(N)} = f^{(N-k_0)}$, where $f^{(N-k_0)} = \big(f(x^{(N-k_0)}_1), \ldots, f(x^{(N-k_0)}_{N-k_0})\big)^T$, compare with (4). This linear system for the $N$ components of $u^{(N)}$ consists of $N - k_0$ linear equations, which makes it easy to incorporate $k_0$ boundary or initial conditions into the system. Once the solution $u^{(N)}$ of this augmented linear system is found, an approximate solution of the ODE $Lu = f$ is given by $u(x) \approx (\pi_N)^{-1} u^{(N)} = \sum_{j=1}^{N} u^{(N)}_j\, \ell_{N-1,j}(x)$, where $\{\ell_{N-1,j}(x)\}_{j=1}^N$ is the standard Lagrange interpolation basis with respect to the nodes $x^{(N)}$.
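As a minimal illustration of this strategy (our sketch, with the hypothetical choice $L = \frac{d}{dx}$, $k_0 = 1$, and arbitrarily chosen distinct nodes), the rectangular system below is augmented with one row encoding the initial condition $u(0) = 1$ and recovers a polynomial solution exactly.

```python
def lagrange_deriv(nodes, j, t):
    """Evaluate l'_{N-1,j}(t) for the Lagrange basis on `nodes`."""
    total = 0.0
    for m in range(len(nodes)):
        if m == j:
            continue
        term = 1.0 / (nodes[j] - nodes[m])
        for k in range(len(nodes)):
            if k != j and k != m:
                term *= (t - nodes[k]) / (nodes[j] - nodes[k])
        total += term
    return total

def solve(rows, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rows)
    M = [row[:] + [r] for row, r in zip(rows, rhs)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# L = d/dx lowers the degree by k0 = 1; discretize L u = f, u(0) = 1,
# with f(x) = 3 x^2, whose exact solution u(x) = x^3 + 1 lies in P_3 (N = 4).
xN = [0.0, 1.0, 2.0, 3.0]        # N nodes carrying u
xM = [0.5, 1.5, 2.5]             # N - k0 nodes carrying f
Lc = [[lagrange_deriv(xN, j, t) for j in range(4)] for t in xM]  # (N-k0) x N
rows = Lc + [[1.0, 0.0, 0.0, 0.0]]   # appended row encodes u(x_1) = u(0)
rhs = [3 * t * t for t in xM] + [1.0]
u = solve(rows, rhs)
assert all(abs(u[j] - (xN[j] ** 3 + 1)) < 1e-9 for j in range(4))
```

Because the exact solution is a polynomial of degree $N - 1$, the rectangular collocation rows are satisfied exactly, and the single appended row pins down the remaining degree of freedom.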
The ODE (49) may also be discretized using only one vector of nodes $x^{(N+n_0)}$ and the standard pseudospectral $(N+n_0) \times (N+n_0)$ matrix representation $\hat{A}^c$ of the differential operator $A$, defined componentwise by
$$\hat{A}^c_{kj} = \big(A\, \ell_{N+n_0-1,j}\big)(x) \Big|_{x = x^{(N+n_0)}_k}, \quad 1 \leq k, j \leq N+n_0, \tag{50}$$
see [7] and compare with (7). Indeed, in the proof of Theorem 3.1 we may choose to represent the polynomial $u \in \mathcal{P}_{N-1}$ in terms of the Lagrange interpolation basis $\{\ell_{N+n_0-1,j}(x)\}_{j=1}^{N+n_0}$ or the basis $\{p_{j-1}(x)\}_{j=1}^{N+n_0}$, see (37) and (38). This modification allows one to prove the relation (51), in which the standard spectral $(N+n_0) \times (N+n_0)$ matrix representation $\hat{A}^\tau$ is defined by (52), and thus to obtain the identities (53), which hold for all integer $m, n$ such that $1 \leq m, n \leq N+n_0$. Given a fixed integer $n$ such that $0 \leq n \leq N+n_0-1$, the last identity relates the zeros of the polynomials $p_{N+n_0}(x)$ and $p_n(x)$ with the zeros of all the polynomials $p_j(x)$ such that the integer index $j \in \{0, 1, \ldots, N+n_0-1\}$ satisfies $n - n_0 \leq j \leq n + n_0$.
The set of identities (53) is a straightforward generalization of Theorem 1.1 in [7], thus it is not the main focus of this paper. We invite the reader to apply identities (53) to the Sonin-Markov polynomials, replacing $\hat{A}^c$ and $\hat{A}^\tau$ with $\hat{D}^c$ and $\hat{D}^\tau$, respectively, the latter being matrix representations of the differential operator $D$ associated with the Sonin-Markov family, see (20b). The pseudospectral matrix representation $\hat{D}^c$ can be found using definition (50) and the techniques outlined in Appendix A of [7]. The spectral matrix representation $\hat{D}^\tau$ can be found using definition (52) and recurrence relations (28).
In summary, the method of generalized pseudospectral matrix representations proposed in this paper can be utilized to prove new identities satisfied by nonclassical orthogonal polynomials that are generalized, rather than standard, eigenfunctions of linear differential operators. The proposed generalized matrix representations of linear differential operators are also quite useful for incorporating initial or boundary conditions into a discretization of a linear ODE via a pseudospectral method. The latter application deserves further exploration involving numerical tests, which the authors may pursue in the near future. Other interesting developments of the method proposed in this paper may involve its generalization to exceptional orthogonal polynomials [35,36,37].