Identity-Type Functions and Polynomials

For a noncommuting product of functions, similar to a convolution, an "identity-type function" leaving a specific function invariant is defined. For any choice of the function on which it acts, it is evaluated by solving a functional equation. A closed-form representation for the identity-type function of (1 + t)^{-b} (b > 0) is obtained; it is a solution of a second-order linear differential equation with given boundary conditions. It yields orthogonal polynomials, whose graphs are also given. The relevance to the solution of boundary value problems by series, and the convergence of those series, is briefly discussed.


Introduction
Given two continuous real functions ϕ and ψ, define a noncommutative convolution-type product ϕ • ψ (see [1, page 217] and, for a more general form, [2, page 308, (13)]). This product satisfies a characteristic property. In a different context, an identity-type function e_ϕ for a given ϕ was defined in [3] by (1.6). It is to be noted that the term "identity-type" should not be taken to mean a "left identity," as e_ϕ is specific to ϕ. Taking Mellin transforms of both sides of (1.6), we get a functional equation (1.7) (see [1, page 217]) that can be regarded as the extended Riemann functional equation. We consider the particular case (1.8) (see [4, page 23, (2.7.1)]). The Mellin transform (1.9) of this function is known (see [4, page 3, (2.7.1)]). The corresponding identity-type function is found to be (1.10) (see [3]), and the Mellin transform of (1.10) can be found (see [2, page 371, (6.5)(1)]). Putting these values of Φ_M(α) and M[e_ϕ(t); α] in (1.7) leads to the Riemann functional equation (1.12) (see [3]). Note that the appearance of the Riemann zeta function in (1.9) imparts significance to the chosen ϕ(t) of (1.8).
Defining a new function, which is essentially the Weyl transform of the function ϕ evaluated at x = 0, the following theorem was proved in [3].
Theorem 1.1. Let ϕ(t) be analytic at zero and such that the corresponding series Σ_{n=0}^∞ (…) is uniformly convergent. Then it defines an identity-type function e_ϕ for ϕ in the sense that ϕ = e_ϕ • ϕ.
In this paper, we extend the above theorem. For an application of the extended result, consider the function ϕ(t) = (1 + t)^{-b} (b > 0). It is shown that its identity-type function satisfies a second-order differential equation. The identity-type function naturally leads to Jacobi-type polynomials that are solutions of a regular Sturm-Liouville system and hence form an orthogonal system, which provides a generalized Fourier series expansion for a class of functions arising in engineering and the applied sciences. A graphical representation of the polynomials is also provided.

The main results
In this section, we solve a general convolution-type equation and then deduce the result involving the identity-type functions.
Theorem 2.1. Let ϕ and ψ be given functions, analytic at zero, such that the corresponding series is uniformly convergent for t ≥ 0. Then the conclusion (2.2) holds, where the coefficients are as determined in the proof.

Proof. Since the given functions ϕ and ψ are analytic at zero, they must have unique Taylor series representations (2.4). Assume that e likewise has a Taylor series representation. Replacing the functions ϕ, ψ, and e in (2.2) by their Taylor series representations, we obtain an identity which can be rewritten as (2.7). Equating the coefficients of equal powers of t in (2.7) determines the coefficients of e. Hence the proof.
Corollary 2.2. Let ϕ(t) be a given function, analytic at zero, such that the corresponding series converges uniformly for all t ≥ 0. Then the function e is the identity-type function for ϕ, in the sense that ϕ = e • ϕ.
Remark 2.4. The representation of the identity-type function in terms of Gauss functions is important in that it satisfies a second-order homogeneous differential equation (the Gauss hypergeometric equation). For the properties of the Gauss hypergeometric and gamma functions, we refer to [7-9].
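Since the identity-type function is represented by Gauss functions, the differential equation can be checked directly on a terminating case. The sketch below assumes, consistently with the Sturm-Liouville form described in the conclusion (weight (1 − t)^{-1}, eigenvalue c²), that for y = F(c, −c; 1; t) the hypergeometric equation specializes to t(1 − t)y″ + (1 − t)y′ + c²y = 0; the test polynomial e_3(t) = 1 − 9t + 18t² − 10t³ is computed from the terminating series, and the helper names are ours, not the paper's.

```python
def polyadd(p, q):
    """Add two polynomials given as ascending coefficient lists."""
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0) for k in range(n)]

def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def deriv(p):
    """Derivative of a polynomial given as an ascending coefficient list."""
    return [k * c for k, c in enumerate(p)][1:]

# e_3(t) = F(3, -3; 1; t) = 1 - 9t + 18t^2 - 10t^3, from the terminating series
y = [1, -9, 18, -10]
c = 3

# residual of t(1 - t) y'' + (1 - t) y' + c^2 y, as a polynomial in t
residual = polyadd(
    polyadd(polymul([0, 1, -1], deriv(deriv(y))), polymul([1, -1], deriv(y))),
    [c * c * coeff for coeff in y],
)
print(residual)  # all coefficients vanish, so e_3 solves the equation
```

The residual is computed with exact integer arithmetic, so a list of zeros is a genuine verification rather than a floating-point approximation.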

The identity-type polynomials
The above analysis leads us to introduce a sequence of polynomials that are solutions of the second-order Gauss differential equation. Since the function F(c, −c; 1; t) is defined equally well for all integral values of the parameter c, we define the associated identity-type polynomials by e_n(t) = F(n, −n; 1; t) (see (2.11)) and call them identity-type polynomials. In particular, e_1(t) = 1 − t. Clearly, the values at the endpoints are given by e_n(0) = 1 and e_n(1) = 0 (for all n = 1, 2, 3, ...). The polynomials are shown graphically in Figure 3.1, for n = 1, ..., 5 in the first frame and n = 6, ..., 10 in the second.
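Because the series for F(n, −n; 1; t) terminates, the polynomials have explicit integer coefficients, (n)_k(−n)_k/(k!)² = (−1)^k C(n+k−1, k) C(n, k), and the endpoint values e_n(0) = 1, e_n(1) = 0 can be checked directly. A minimal sketch in Python (the helper names are ours, not the paper's):

```python
from math import comb

def identity_poly_coeffs(n):
    """Ascending coefficients of e_n(t) = F(n, -n; 1; t).

    The series terminates at t^n because (-n)_k vanishes for k > n;
    the k-th coefficient is (n)_k (-n)_k / (k!)^2 = (-1)^k C(n+k-1, k) C(n, k).
    """
    return [(-1) ** k * comb(n + k - 1, k) * comb(n, k) for k in range(n + 1)]

def e(n, t):
    """Evaluate the identity-type polynomial e_n at t by Horner's rule."""
    result = 0.0
    for c in reversed(identity_poly_coeffs(n)):
        result = result * t + c
    return result

# endpoint values e_n(0) = 1 and e_n(1) = 0 for n = 1, 2, 3, ...
for n in range(1, 6):
    print(n, e(n, 0.0), e(n, 1.0))
```

For instance, the coefficient list for n = 2 is [1, −4, 3], i.e., e_2(t) = 1 − 4t + 3t² = (1 − t)(1 − 3t), which indeed vanishes at t = 1.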
Writing the differential equation in self-adjoint (Sturm-Liouville) form (4.1), we obtain the orthogonality relation (4.2). For n = m in (4.2), it turns out that

∫_0^1 e_n^2(t) (1 − t)^{-1} dt = 1/(2n) (n = 1, 2, 3, ...). (4.3)
To prove this result, we note that the identity-type polynomials come from the Jacobi polynomials P_n^{(α,β)}(x) (see [7, 8, 10]) by choosing α = 0 and β = −1 and taking the argument to be 1 − 2t, that is, e_n(t) = P_n^{(0,−1)}(1 − 2t). Putting the chosen parameter values and the transformation of the variable into the formula for the norm of the Jacobi polynomial [10], we obtain (4.3). Since the polynomials satisfy the boundary conditions e_n(0) = 1, e_n(1) = 0 (for all n = 1, 2, 3, ...) and are solutions of the Sturm-Liouville differential equation (4.1), this leads to the following result.
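The orthogonality relation and the norm value (4.3) can be verified exactly for small n: since (1 − t) divides e_n(t), the singular weight (1 − t)^{-1} cancels and the integrand is a polynomial. A sketch in Python with exact rational arithmetic (coefficients taken from the terminating hypergeometric series; function names are ours):

```python
from fractions import Fraction
from math import comb

def e_coeffs(n):
    """Ascending coefficients of e_n(t) = F(n, -n; 1; t)."""
    return [Fraction((-1) ** k * comb(n + k - 1, k) * comb(n, k)) for k in range(n + 1)]

def quotient_by_one_minus_t(p):
    """Exact quotient q with p(t) = (1 - t) q(t); valid because e_n(1) = 0."""
    q, carry = [], Fraction(0)
    for k in range(len(p) - 1):   # if p_k = q_k - q_{k-1}, then q_k = p_k + q_{k-1}
        carry = p[k] + carry
        q.append(carry)
    return q

def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(n, m):
    """Exact value of the weighted integral of e_n e_m (1 - t)^{-1} over [0, 1]."""
    p = polymul(quotient_by_one_minus_t(e_coeffs(n)), e_coeffs(m))
    return sum(c / (k + 1) for k, c in enumerate(p))  # term-by-term integration

print(inner(1, 1), inner(3, 3), inner(2, 3))
```

The diagonal values come out as 1/(2n), in agreement with (4.3), and the off-diagonal values vanish.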

Conclusion
We considered a noncommuting operation for taking a convolution-type product of two functions and an identity-type function for the operation, specific to the function on which it acts. This procedure was used for a particular choice of function. The resulting identity-type function satisfies the hypergeometric equation and is, thus, a Gauss (hypergeometric) function. These functions reduce to simple polynomials for the appropriate choice of the parameters of the Gauss function.
The identity-type polynomials, despite being expressible in terms of the Jacobi polynomials, are interesting in themselves. The differential equation they satisfy can be written in a singular Sturm-Liouville form with weight factor (1 − t)^{-1} and eigenvalue n² (where n is the order of the polynomial), with the function satisfying Dirichlet boundary conditions at the ends of the interval. Consequently, they form an orthogonal basis for the expansion of functions that are required for solving any partial differential equation in which this Sturm-Liouville problem arises. Surprisingly, the norm of the nth polynomial is 1/√(2n), and hence we can easily normalize the basis elements. It is interesting to note that the nth identity-type polynomial has exactly n zeros, as may be seen from the properties of the corresponding Jacobi polynomial [10].
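The zero count can be probed numerically: count sign changes of e_n on (0, 1) and add the simple zero at t = 1. A rough sketch in Python (the grid resolution is an ad hoc choice, adequate for small n; helper names are ours):

```python
from math import comb

def e(n, t):
    """e_n(t) = F(n, -n; 1; t), evaluated from its terminating series."""
    return sum((-1) ** k * comb(n + k - 1, k) * comb(n, k) * t ** k for k in range(n + 1))

def count_zeros(n, samples=20000):
    """Count the zeros of e_n on (0, 1]: sign changes inside (0, 1) plus the zero at t = 1."""
    ts = [(i + 0.5) / samples for i in range(samples - 1)]  # grid strictly inside (0, 1)
    values = [e(n, t) for t in ts]
    sign_changes = sum(1 for a, b in zip(values, values[1:]) if a * b < 0)
    return sign_changes + 1  # e_n(1) = 0 is one additional simple zero

for n in range(1, 7):
    print(n, count_zeros(n))  # expect count_zeros(n) == n
```

The simple-zero assumption is what makes sign-change counting reliable here; it holds for Jacobi-type polynomials, whose zeros are simple.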
Unlike the Chebyshev, Hermite, and Legendre polynomials [11, pages 194-196, 218-219, 184-185], these polynomials do not form a set of either even or odd functions. Nor does the set match the Euler or Laguerre polynomials [11, pages 176-177, 210-211], for which the number of zeros fails to match the order of the polynomial beyond orders 2 and 5, respectively. The zeros are distributed closer together near the ends of the interval and more spaced out in the middle. The orthogonal basis elements start at 1 and oscillate toward zero, with the last peak value decreasing for larger n. Even for the orthonormal basis, the last peak decreases as n increases, albeit more slowly (see Figure 3.1). Factoring out (1 − t), the remnant polynomial has the interesting property that at t = 1 it takes the value (−1)^{n+1} n (see Figure 3.2).
The properties of the (nonnormalized) polynomials, namely that they are all 0 at t = 1 and 1 at t = 0, suggest that they will be especially useful for solving partial differential equations by separation of variables using series solutions, which reduce to a Sturm-Liouville problem with appropriate boundary conditions. For example, if an appropriate physical or engineering problem concerns a field between two walls of a cylinder at radial values a and b, with the boundary condition that the field is a given constant at the inner wall and vanishes at the outer wall, the required boundary conditions would be obtained by changing variable from t to x = (t − a)/(b − a). The eigenvalue n² comes automatically from the axial symmetry. Cylindrical geometry is chosen because of the strong similarity between (4.1) and the Bessel equation, which usually arises for cylindrical problems.
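The expansion implied by the orthogonality relation would read f(t) = Σ c_n e_n(t) with c_n = 2n ∫_0^1 f(t) e_n(t) (1 − t)^{-1} dt; the factor 2n is our normalization, deduced from the norm 1/√(2n) stated above (squared norm 1/(2n)), not a formula quoted from the text. For the sample function f(t) = (1 − t)², which satisfies the boundary condition f(1) = 0, the expansion terminates, as the sketch below shows (exact rational arithmetic; helper names are ours):

```python
from fractions import Fraction
from math import comb

def e_coeffs(n):
    """Ascending coefficients of e_n(t) = F(n, -n; 1; t)."""
    return [Fraction((-1) ** k * comb(n + k - 1, k) * comb(n, k)) for k in range(n + 1)]

def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def integrate01(p):
    """Exact integral over [0, 1] of a polynomial with ascending coefficients."""
    return sum(c / (k + 1) for k, c in enumerate(p))

# f(t) = (1 - t)^2; dividing out the weight's singularity: f(t)/(1 - t) = 1 - t
f_over_weight = [Fraction(1), Fraction(-1)]

# expansion coefficients c_n = 2n * integral of f e_n (1 - t)^{-1} over [0, 1]
coeffs = {n: 2 * n * integrate01(polymul(f_over_weight, e_coeffs(n))) for n in range(1, 6)}
print(coeffs)  # the expansion terminates: f = (2/3) e_1 + (1/3) e_2
```

Here f lies in the span of e_1 and e_2 (indeed (1 − t)² = (2/3)(1 − t) + (1/3)(1 − 4t + 3t²)), so all higher coefficients vanish, which is a direct check of the orthogonality.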
The question arises of the convergence properties of series in these polynomials. It is known that the "best approximation" to x^{n+1} is given by the Chebyshev polynomials [12], which are either even or odd and have to be fitted at the endpoints. By their very construction, the identity-type polynomials automatically fit at the endpoints, so convergence is not an issue for them there. Also, it is clear from the graphs that, after the first zero, they are bounded between −0.4 and 0.4, and that the first zero appears at smaller values of t as n becomes larger. Thus, whereas one would expect the Chebyshev polynomials to provide the best fit in the mean-square norm, these polynomials might provide a better fit in the supremum norm. This idea was checked for the first four polynomials. Of course, the first polynomial (adjusted for the appropriate boundary conditions) is identical for both. For the second, the supremum-norm difference is 0.36 for the Chebyshev polynomials, while it is 0.065 for these polynomials. In none of these cases is the fit better for the Chebyshev polynomials.
It might be useful to look at the other (linearly independent) solutions of the hypergeometric equation obtained here. Of course, the entire discussion in this paper was for one specific choice of ϕ(t), taken because it leads to the hypergeometric equation. Other choices could well lead to reasonable equations and Sturm-Liouville problems of interest, and these would also be worth finding.

Figure 3.1. Graphs of the first ten identity-type polynomials, from 1 to 5 in the first frame and from 6 to 10 in the second. Notice that the number of zeros is n.
Figure 3.2. Graphs of the identity-type polynomials factored by (1 − t), written here as P_n(t), from 1 to 5 in the first frame and from 6 to 10 in the second. Note that these are polynomials of order n − 1, not of order n. Further, note that the value at 1 increases in magnitude and alternates in sign with increasing n.