The distribution of Mahler's measures of reciprocal polynomials

We study the distribution of Mahler's measures of reciprocal polynomials with complex coefficients and bounded even degree. We discover that the distribution function associated to Mahler's measure restricted to monic reciprocal polynomials is a reciprocal (or anti-reciprocal) Laurent polynomial on [1,\infty) and identically zero on [0,1). Moreover, the coefficients of this Laurent polynomial are rational numbers times a power of \pi. We are led to this discovery by the computation of the Mellin transform of the distribution function. This Mellin transform is an even (or odd) rational function with poles at small integers and residues that are rational numbers times a power of \pi. We also use this Mellin transform to show that the volume of the set of reciprocal polynomials with complex coefficients, bounded degree and Mahler's measure less than or equal to one is a rational number times a power of \pi.


Introduction
The Mahler's measure of a polynomial f(x) ∈ C[x] is given by the expression

    µ(f) = exp( ∫_0^1 log|f(e^{2πit})| dt ).

It is readily apparent that Mahler's measure is a multiplicative function on C[x]: µ(fg) = µ(f)µ(g). In this sense Mahler's measure forms a natural height function on C[x]. In this paper we study the distribution of values of Mahler's measure restricted to the set of reciprocal polynomials with bounded even degree and complex coefficients.
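For readers who wish to experiment, the defining integral and the product form coming from Jensen's formula can be compared numerically. The helper names below (`mahler_measure`, `mahler_measure_integral`) are ours, not the paper's; this is a sketch, not part of the argument.

```python
import numpy as np

def mahler_measure(coeffs):
    """Mahler's measure via Jensen's formula: |lead coeff| * prod max(1, |root|)."""
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * np.prod(np.maximum(1.0, np.abs(roots)))

def mahler_measure_integral(coeffs, n=20000):
    """The defining expression: exp of the mean of log|f| over the unit circle."""
    t = (np.arange(n) + 0.5) / n
    values = np.polyval(coeffs, np.exp(2j * np.pi * t))
    return np.exp(np.mean(np.log(np.abs(values))))

f = [1.0, -7.0 / 3.0, 2.0 / 3.0]   # (x - 2)(x - 1/3): measure 2
g = [2.0, 0.0, 1.0]                # 2x^2 + 1: both roots inside the circle, measure 2
fg = np.polymul(f, g)
```

Multiplicativity, µ(fg) = µ(f)µ(g), is immediate from the product form and visible numerically.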
A polynomial f(x) ∈ C[x] of degree 2N is said to be reciprocal if it satisfies the condition x^{2N} f(1/x) = f(x). Given v = (v_0, v_1, ..., v_N) ∈ C^{N+1}, we set

    p_v(x) = v_0 + Σ_{n=1}^{N} v_n (x^n + x^{−n}),     (1.1)

so that x^N p_v(x) is a reciprocal polynomial of degree at most 2N. We call p_v(x) the reciprocal Laurent polynomial with coefficient vector v. The collection of reciprocal Laurent polynomials with complex coefficients forms a graded algebra. The integral defining Mahler's measure makes sense for reciprocal Laurent polynomials, and it is easily seen that µ(p_v) = µ(f) whenever f(x) = x^N p_v(x). It is convenient to work with reciprocal Laurent polynomials since they form an algebra (the set of reciprocal polynomials is not closed under addition). We define the reciprocal Mahler's measure to be the function µ_rec : C^{N+1} → R given by

    µ_rec(v) = exp( ∫_0^1 log| v_0 + 2 Σ_{n=1}^{N} v_n cos(2πnt) | dt ).     (1.2)
If v = (v_0, ..., v_L, 0, ..., 0) with v_L ≠ 0, then x^L p_v(x) is a polynomial of degree 2L, and there exist α_1, ..., α_{2L}, not necessarily distinct, nonzero complex roots of p_v(x). By reordering if necessary, we may assume α_{L+n} = α_n^{−1} for 1 ≤ n ≤ L, and we may write

    x^L p_v(x) = v_L Π_{n=1}^{2L} (x − α_n),

and from Jensen's formula we have

    µ_rec(v) = |v_L| Π_{n=1}^{2L} max{1, |α_n|} = |v_L| Π_{n=1}^{L} max{|α_n|, |α_n|^{−1}}.     (1.3)

From this expression we see that the reciprocal Mahler's measure satisfies:
(i) µ_rec(kv) = |k| µ_rec(v) for all v ∈ C^{N+1} and k ∈ C;
(ii) µ_rec(v) = 0 if and only if v = 0;
(iii) the set {v ∈ C^{N+1} : µ_rec(v) ≤ 1} is bounded.
In addition µ_rec is continuous, as originally proved by Mahler [3]. By properties (i), (ii) and continuity, we find that µ_rec is a symmetric distance function in the sense of the geometry of numbers (see for instance the discussion in [1, chapter IV]). µ_rec satisfies all the properties of a metric except the triangle inequality; the 'unit ball' is thus not convex. Explicitly,

    V_{N+1} = {v ∈ C^{N+1} : µ_rec(v) ≤ 1}

is a symmetric star body. By property (iii) this star body is bounded. We call V_{N+1} the complex star body determined by the reciprocal Mahler's measure. One of the principal results presented here is the computation of the volume (Lebesgue measure) of V_{N+1}.
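The pairing of the roots and the Jensen product can be checked numerically. The sketch below (our construction, assuming the definition of p_v above) builds a monic reciprocal Laurent polynomial from three root pairs, confirms that its coefficient vector is palindromic, and compares the defining integral with the product over root pairs.

```python
import numpy as np

# Root pairs alpha, 1/alpha chosen away from the unit circle; moduli 1.5, 0.5, 2.
alpha = np.array([1.5, 0.4 + 0.3j, -2.0j])
coeffs = np.poly(np.concatenate([alpha, 1.0 / alpha]))   # x^3 p_v(x): monic, degree 6

# Reciprocal polynomial <=> palindromic coefficient vector.
assert np.allclose(coeffs, coeffs[::-1], atol=1e-10)

v = coeffs[3::-1]                                        # (v_0, v_1, v_2, v_3)

def mu_rec(v, n=20000):
    """exp of the mean of log|v_0 + 2 sum v_m cos(2 pi m t)| over [0, 1)."""
    t = (np.arange(n) + 0.5) / n
    s = v[0] + sum(2 * v[m] * np.cos(2 * np.pi * m * t) for m in range(1, len(v)))
    return np.exp(np.mean(np.log(np.abs(s))))

# The Jensen product over the root pairs: max(1.5, 1/1.5) * max(0.5, 2) * max(2, 1/2) = 6.
product = np.prod(np.maximum(np.abs(alpha), 1.0 / np.abs(alpha)))
```

Homogeneity, µ_rec(kv) = |k| µ_rec(v), is also visible numerically, e.g. with k = 3 + 4i.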
We introduce the monic reciprocal Mahler's measure, ν_rec : C^N → R, defined by

    ν_rec(b) = µ_rec(b_0, ..., b_{N−1}, 1).

Thus ν_rec(b) is the Mahler's measure of the monic reciprocal Laurent polynomial

    p̃_b(x) = x^N + x^{−N} + b_0 + Σ_{n=1}^{N−1} b_n (x^n + x^{−n}).

If α_1, ..., α_N, α_1^{−1}, ..., α_N^{−1} are the roots of p̃_b(x), then by Jensen's formula

    ν_rec(b) = Π_{n=1}^{N} max{|α_n|, |α_n|^{−1}}.     (1.4)

We denote Lebesgue measure on Borel subsets of C^N by λ_{2N}, and introduce the distribution function associated with the monic reciprocal Mahler's measure,

    h_N(ξ) = λ_{2N}({b ∈ C^N : ν_rec(b) ≤ ξ}).     (1.5)

h_N(ξ) encodes statistical information about the distribution of Mahler's measures of reciprocal polynomials with complex coefficients and even degree bounded by 2N. The distribution function h_N(ξ) is increasing and continuous from the right. By equation 1.4 we have ν_rec(b) ≥ 1 for every b ∈ C^N, so h_N(ξ) is identically zero on [0, 1). In fact h_N(1) = 0. To see this, suppose b ∈ C^N with ν_rec(b) = 1. Then, from equation 1.4, p̃_b(x) has all its roots on the unit circle. Thus, if α is a root of p̃_b(x) then so is ᾱ = α^{−1}. We find that b ∈ R^N, and hence the set of b ∈ C^N such that ν_rec(b) = 1 has λ_{2N}-measure 0. Thus h_N(1) = 0, and h_N(ξ) is continuous at ξ = 1.
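For N = 1 everything can be made explicit: ν_rec(b_0) ≤ ξ exactly when b_0 = −(α + α^{−1}) with ξ^{−1} ≤ |α| ≤ ξ, and the Joukowski map sends that annulus onto the filled ellipse with semi-axes ξ ± ξ^{−1}, giving h_1(ξ) = π(ξ² − ξ^{−2}). This worked example is our own reconstruction, offered only as a plausibility check; it matches the abstract's claim that h_N is an (anti-)reciprocal Laurent polynomial with coefficients that are rational multiples of a power of π. A Monte Carlo sketch:

```python
import numpy as np

xi = 2.0
rng = np.random.default_rng(1)
n = 60000

# Sample b0 from a square containing {nu_rec <= xi}: there |b0| <= xi + 1/xi.
R = xi + 1.0 / xi
z = rng.uniform(-R, R, size=(n, 2))
b0 = z[:, 0] + 1j * z[:, 1]

# Roots of x^2 + b0 x + 1 multiply to 1, so nu_rec = max(|a|, 1/|a|) for either root.
a = (-b0 + np.sqrt(b0 * b0 - 4)) / 2
nu = np.maximum(np.abs(a), 1.0 / np.abs(a))

estimate = (2 * R) ** 2 * np.mean(nu <= xi)   # area of the square times hit fraction
exact = np.pi * (xi**2 - xi**-2)              # h_1(2) from the Joukowski computation
```

Note that `nu` is never below 1, illustrating that h_N vanishes on [0, 1).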
We recall the definition of the Mellin transform. Given a function g : [0, ∞) → R, the Mellin transform of g is the function of the complex variable s given by

    ĝ(s) = ∫_0^∞ g(ξ) ξ^{−2s−1} dξ

(note the exponent −2s − 1; this is the normalization used throughout). We will give an explicit formula for h_N(ξ) by computing its Mellin transform. We note that, since h_N(ξ) is identically zero on [0, 1], the integral defining ĥ_N(s) can be written with domain of integration [1, ∞).
The integral defining ĥ_N(s) converges in the half plane ℜ(s) > N. To see this, we use the following consequence of Jensen's formula: if f(x) has degree d, then each coefficient of f(x) is bounded in modulus by C(d, k) µ(f), where C(d, k) is the binomial coefficient, and hence

    ‖f‖_2 ≤ 2^d µ(f),

where ‖f‖_2 is the Euclidean norm of the coefficient vector of f(x). Thus from equation 1.5 we have

    {b ∈ C^N : ν_rec(b) ≤ ξ} ⊆ {b ∈ C^N : ‖x^N p̃_b(x)‖_2 ≤ 2^{2N} ξ}.

The latter set is a 'slice' of a solid sphere (the coefficient vector of x^N p̃_b(x) lies in C^{2N+1} with its extreme coordinates fixed equal to 1), and is thus contained in a ball in C^N ≅ R^{2N} whose radius is a constant multiple of ξ. Thus there exists a constant C such that

    h_N(ξ) ≤ C ξ^{2N}.

It follows that

    |ĥ_N(s)| ≤ C ∫_1^∞ ξ^{2N − 2ℜ(s) − 1} dξ.

The latter integral converges if ℜ(s) > N, and hence ĥ_N(s) is defined in the half plane ℜ(s) > N. We follow the method introduced by Chern and Vaaler in [2] to express the volume of V_{N+1} in terms of the Mellin transform of h_N(ξ).
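The precise constant in the lost display cannot be recovered from this copy, but Mahler's classical coefficient bound, |a_k| ≤ C(d, k) µ(f) (a consequence of Jensen's formula), already gives ‖f‖_2 ≤ 2^d µ(f), which is all the convergence argument needs. A numerical sanity check of both bounds:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
d = 6
roots = rng.normal(size=d) + 1j * rng.normal(size=d)
f = np.poly(roots)                                  # monic of degree d, descending order
M = np.prod(np.maximum(1.0, np.abs(roots)))         # Mahler's measure via Jensen

# |coefficient of x^(d-k)| <= C(d, k) * M, hence ||f||_2 <= 2^d M.
binoms = np.array([math.comb(d, k) for k in range(d + 1)])
coeff_bound = bool(np.all(np.abs(f) <= binoms * M + 1e-9))
norm_bound = bool(np.linalg.norm(f) <= 2**d * M)
```

Both bounds hold for every polynomial, so the check passes regardless of the random seed.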
Theorem 1.1. The volume of the star body V_{N+1} is given by λ_{2N+2}(V_{N+1}) = 2π ĥ_N(N+1).

Proof. The volume of V_{N+1} is given by

    λ_{2N+2}(V_{N+1}) = ∫_{C^{N+1}} 1{µ_rec(v) ≤ 1} dλ_{2N+2}(v).     (1.6)

By the homogeneity of µ_rec we see that, writing v = (kb_0, ..., kb_{N−1}, k) with k ∈ C^× and b ∈ C^N, we have µ_rec(v) = |k| ν_rec(b), and thus the integral in equation 1.6 can be written as

    λ_{2N+2}(V_{N+1}) = ∫_C |k|^{2N} h_N(1/|k|) dλ_2(k) = 2π ∫_0^∞ r^{2N+1} h_N(1/r) dr.

The domain of integration in the latter integral is [0, 1) since h_N(1/r) is identically zero on [1, ∞). By the change of variables r = 1/ξ we find

    λ_{2N+2}(V_{N+1}) = 2π ∫_1^∞ h_N(ξ) ξ^{−2(N+1)−1} dξ = 2π ĥ_N(N+1).

If we regard the integral defining ĥ_N(s) as a Lebesgue–Stieltjes integral, we may use integration by parts to write

    ĥ_N(s) = [ −h_N(ξ) ξ^{−2s} / (2s) ]_1^∞ + (1/2s) ∫_1^∞ ξ^{−2s} dh_N(ξ).

Since h_N(1) = 0 and h_N(ξ) is dominated by Cξ^{2N}, the first term vanishes when ℜ(s) > N. After a change of variables, we can write

    ∫_1^∞ ξ^{−2s} dh_N(ξ) = ∫_{C^N} ν_rec(b)^{−2s} dλ_{2N}(b).

The latter integral is interesting enough to name:

    H_N(s) = ∫_{C^N} ν_rec(b)^{−2s} dλ_{2N}(b),   so that   ĥ_N(s) = H_N(s)/2s.

The bulk of this paper is committed to the discovery that H_N(s) analytically continues to a rational function of s.

Theorem 1.2. For each positive integer N, the function H_N(s) extends by analytic continuation to an (even or odd) rational function of s, with poles at small integers and residues that are rational numbers times a power of π.

Corollary 1.3. The volume λ_{2N+2}(V_{N+1}) is a rational number times a power of π.

Proof. This follows immediately from Theorem 1.1 and Theorem 1.2.
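Continuing the N = 1 worked example from above: h_1(ξ) = π(ξ² − ξ^{−2}) has Mellin transform ĥ_1(s) = π/(s² − 1), so the volume relation gives λ_4(V_2) = 2π ĥ_1(2) = 2π²/3, a rational multiple of π² as the abstract promises. These closed forms are our reconstruction, offered purely as a consistency check; a Monte Carlo estimate agrees:

```python
import numpy as np

def mu_rec_2(v0, v1):
    """mu_rec(v0, v1) = |v1| * max(|a|, 1/|a|) for a root a of v1 x^2 + v0 x + v1."""
    a = (-v0 + np.sqrt(v0 * v0 - 4 * v1 * v1)) / (2 * v1)
    return np.abs(v1) * np.maximum(np.abs(a), 1.0 / np.abs(a))

rng = np.random.default_rng(3)
n = 200000
# mu_rec(v) <= 1 forces |v1| <= 1 and |v0| <= 2, so this box contains V_2.
v0 = rng.uniform(-2, 2, size=n) + 1j * rng.uniform(-2, 2, size=n)
v1 = rng.uniform(-1, 1, size=n) + 1j * rng.uniform(-1, 1, size=n)

box = 4.0**2 * 2.0**2                       # real 4-volume of the sampling box
estimate = box * np.mean(mu_rec_2(v0, v1) <= 1.0)
exact = 2 * np.pi**2 / 3
```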
Proof. ĥ_N(s) = H_N(s)/2s is a rational function whose denominator is a product of distinct linear factors of the form s − n. We use partial fraction decomposition to write

    ĥ_N(s) = Σ_n ρ(n)/(s − n),     (1.7)

and we compute the residues ρ(n) from the explicit form of H_N(s). And so, by the uniqueness of the Mellin transform, we find

    h_N(ξ) = Σ_n 2ρ(n) ξ^{2n}     (1.8)

for ξ ∈ (1, ∞). The lemma follows by substituting equation 1.7 into equation 1.8.
We outline the proof of Theorem 1.2. Given α ∈ (C \ {0})^N, we can create the unique monic reciprocal Laurent polynomial p̃_a(x) having α_1, ..., α_N, α_1^{−1}, ..., α_N^{−1} as roots. We will use the change of variables α → a to write H_N(s) as an integral over root vectors of reciprocal Laurent polynomials, as opposed to coefficient vectors. This change of variables is useful since, by equation 1.4, ν_rec(a) is a simple product in the roots of p̃_a(x) (i.e. in the coordinates of α). Analysis of the Jacobian of this change of variables will allow us to write H_N(s) as the determinant of an N × N matrix, the entries of which are Mellin transforms which evaluate to rational functions of s. Theorem 1.2 will follow from the evaluation of the determinant of this matrix.
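The α → a direction is easy to realize concretely. The following sketch (our helper code, assuming the definitions above) confirms that the coefficient vector built from a root vector is monic and palindromic, and that ν_rec is the simple product of equation 1.4.

```python
import numpy as np

alpha = np.array([2.0, 0.25 + 0.5j, -1.5j])            # a root vector, N = 3
roots = np.concatenate([alpha, 1.0 / alpha])
coeffs = np.poly(roots)                                 # x^N p_a(x): monic, degree 2N

monic = abs(coeffs[0] - 1.0) < 1e-12
palindromic = bool(np.allclose(coeffs, coeffs[::-1], atol=1e-10))

# nu_rec(a) as a simple product over the coordinates of alpha...
nu = np.prod(np.maximum(np.abs(alpha), 1.0 / np.abs(alpha)))
# ...agrees with Jensen's formula taken over all 2N roots.
jensen = np.prod(np.maximum(1.0, np.abs(roots)))
```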
Before proceeding to the proof of Theorem 1.2, we present µ_rec and V_{N+1} from another perspective. Given the positive integer M, we define the Mahler's measure function to be µ : C^{M+1} → R, where µ(u) is the Mahler's measure of the polynomial with coefficient vector u. As was shown in [2], µ is non-negative, homogeneous, positive-definite and continuous. Thus µ is a symmetric distance function and the set

    U_{M+1} = {u ∈ C^{M+1} : µ(u) ≤ 1}

is a bounded symmetric star body. Let M = 2N and consider the linear map Λ : C^{N+1} → C^{2N+1} given by Λ(v_0, ..., v_N) = (v_N, ..., v_1, v_0, v_1, ..., v_N)^T. We define V = Λ(C^{N+1}) to be the subspace of reciprocal coefficient vectors. By equations 1.1, 1.2 and 1.3 we find µ_rec(v) = µ(Λ(v)). Thus, the star body formed by the intersection of U_{2N+1} and V is related to the reciprocal star body. Specifically,

    V_{N+1} = Λ^{−1}(U_{2N+1} ∩ V).

Every bounded symmetric star body uniquely determines a symmetric distance function [1, Chapter IV.2 Theorem 1]. Thus, armed with µ and Λ, we could 'discover' µ_rec. Equation 1.4 can be recovered from the symmetry in the definition of Λ, so we would lose no information if we were to define µ_rec in this manner.
The volume of U_{M+1}, as well as the subspace volume of the star body formed by intersecting U_{M+1} with the subspace of real coefficient vectors, was investigated in [2]. Thus the computation of the volume of V_{N+1} yields subspace volume information for another 'slice' of U_{2N+1}.

A change of variables
Let ℰ_N : (C^×)^N → C^N be the map taking a root vector α to the coefficient vector a of the monic reciprocal Laurent polynomial p̃_a(x) with roots α_1, ..., α_N, α_1^{−1}, ..., α_N^{−1}. Thus the nth coordinate function of ℰ_N is given by ±ε_n(α_1, ..., α_N, α_1^{−1}, ..., α_N^{−1}), where ε_n is the nth elementary symmetric function in 2N variables. Let E_N : C^N → C^N be the function whose nth coordinate function is e_n, the nth elementary symmetric function in N variables. That is, given β = (β_1, ..., β_N),

    E_N(β) = (e_1(β), ..., e_N(β)).

It is well known that the Jacobian of E_N(β) is given by |V(β)|², where

    V(β) = Π_{1 ≤ j < k ≤ N} (β_k − β_j) = Σ_{σ ∈ S_N} sgn(σ) Π_{n=1}^{N} β_n^{σ(n)−1}     (2.1)

is the Vandermonde determinant. We will relate the Jacobian of ℰ_N to the Jacobian of E_N.
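The identity Jac E_N(β) = |V(β)|² can be checked directly: the complex Jacobian matrix has entries ∂e_j/∂β_k = e_{j−1}(β with β_k removed), its determinant is ±V(β), and the real Jacobian is the squared modulus. A numerical check for N = 4:

```python
import itertools
import numpy as np

def elem_sym(values, k):
    """k-th elementary symmetric function of the given values."""
    if k == 0:
        return 1.0 + 0.0j
    return sum(np.prod(c) for c in itertools.combinations(list(values), k))

N = 4
rng = np.random.default_rng(4)
beta = rng.normal(size=N) + 1j * rng.normal(size=N)

# Complex Jacobian of E_N: d e_j / d beta_k = e_{j-1}(beta with beta_k removed).
jac = np.array([[elem_sym(np.delete(beta, k), j - 1) for k in range(N)]
                for j in range(1, N + 1)])
vandermonde = np.prod([beta[k] - beta[j]
                       for j in range(N) for k in range(j + 1, N)])
match = abs(abs(np.linalg.det(jac)) - abs(vandermonde)) < 1e-8 * (1 + abs(vandermonde))
```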
Lemma 2.1. For each positive integer N, the Jacobian of ℰ_N(α) is given by

    |V(β)|² Π_{n=1}^{N} |1 − α_n^{−2}|²,   where β = (α_1 + α_1^{−1}, ..., α_N + α_N^{−1}).

Proof. By definition ε_n(x_1, ..., x_N, x′_1, ..., x′_N) is composed of all monomials of degree n in the variables x_1, ..., x_N, x′_1, ..., x′_N. If we impose the relation x_m x′_m = 1 for m = 1, ..., N, then ε_n(x_1, ..., x_N, x′_1, ..., x′_N) is no longer homogeneous. In this situation it is easy to see that the monomials of degree n of ε_n(x_1, ..., x_N, x′_1, ..., x′_N) consist of those monomials which do not contain both x_m and x′_m for any m = 1, ..., N. Hence, ε_n(x_1, ..., x_N, x′_1, ..., x′_N) = e_n(x_1 + x′_1, ..., x_N + x′_N) + (monomials of degree < n). In general ε_n(x_1, ..., x_N, x′_1, ..., x′_N) contains monomials of degree n − 2M formed from monomials which contain x_m and x′_m, where m runs over a subset of {1, ..., N} of cardinality M. By counting the number of times each monomial of degree n − 2M appears, we arrive at the following identity:

    ε_n(x_1, ..., x_N, x_1^{−1}, ..., x_N^{−1}) = Σ_{M=0}^{⌊n/2⌋} C(N − n + 2M, M) e_{n−2M}(x_1 + x_1^{−1}, ..., x_N + x_N^{−1}),

where C(a, b) denotes the binomial coefficient, with the convention that C(a, b) = 0 unless 0 ≤ b ≤ a.
In matrix form, this identity says that the vector (ε_1, ..., ε_N) is obtained from (e_1(β), ..., e_N(β)) by applying a triangular matrix whose diagonal entries are 1 and whose remaining entries * are not necessarily 0; such a matrix has determinant 1. The Jacobian of E_N(β) is |V(β)|², and the Jacobian of the map α → β = (α_1 + α_1^{−1}, ..., α_N + α_N^{−1}) is Π_n |1 − α_n^{−2}|²; thus by the chain rule we arrive at the formula for the Jacobian of ℰ_N(α) given in the statement of the lemma.
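The binomial coefficient in the displayed identity is our reconstruction, C(N − n + 2M, M) with C(a, b) = 0 unless 0 ≤ b ≤ a; the following sketch checks it for N = 3 and every n:

```python
import itertools
import math
import numpy as np

def elem_sym(values, k):
    if k < 0 or k > len(values):
        return 0.0 + 0.0j
    if k == 0:
        return 1.0 + 0.0j
    return sum(np.prod(c) for c in itertools.combinations(list(values), k))

def binom(a, b):
    return math.comb(a, b) if 0 <= b <= a else 0

N = 3
rng = np.random.default_rng(5)
x = rng.normal(size=N) + 1j * rng.normal(size=N)
doubled = list(x) + list(1.0 / x)          # x_1..x_N together with their inverses
beta = x + 1.0 / x

identity_holds = all(
    abs(elem_sym(doubled, n)
        - sum(binom(N - n + 2 * M, M) * elem_sym(beta, n - 2 * M)
              for M in range(n // 2 + 1))) < 1e-8 * (1 + abs(elem_sym(doubled, n)))
    for n in range(1, 2 * N + 1))
```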
The Jacobian of ℰ_N(α) is nonzero for λ_{2N}-almost all points of (C^×)^N, and there are 2^N N! preimages of λ_{2N}-almost all a ∈ C^N (obtained by permuting the coordinates of α and replacing any subset of them by their inverses). Employing the change of variables formula, we find

    H_N(s) = 1/(2^N N!) ∫_{(C^×)^N} ( Π_{n=1}^{N} max{|α_n|, |α_n|^{−1}} )^{−2s} |V(β)|² Π_{n=1}^{N} |1 − α_n^{−2}|² dλ_{2N}(α),     (2.2)

where β = (α_1 + α_1^{−1}, ..., α_N + α_N^{−1}). The latter integral admittedly looks formidable; however, this change of variables is beneficial since ν_rec is a simple product in the coordinates of α.
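The count 2^N N! comes from permuting the coordinates of α and inverting any subset of them; each such operation permutes the multiset of roots and so fixes the coefficient vector. A brute-force check for N = 2:

```python
import itertools
import numpy as np

N = 2
alpha = np.array([1.7 + 0.2j, 0.3 - 0.6j])

def coeff_vector(a):
    return np.poly(np.concatenate([a, 1.0 / a]))

target = coeff_vector(alpha)
preimages = set()
for perm in itertools.permutations(range(N)):
    for flips in itertools.product([False, True], repeat=N):
        a = np.array([1.0 / alpha[p] if flip else alpha[p]
                      for p, flip in zip(perm, flips)])
        # Every permuted/inverted root vector yields the same coefficients...
        assert np.allclose(coeff_vector(a), target, atol=1e-10)
        # ...while the root vectors themselves are pairwise distinct.
        preimages.add(tuple(np.round(a, 8)))
```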

H_N(s) is a determinant
We first prove a short technical lemma concerning determinants.

Lemma 3.1. Let N be a positive integer. If I = I(j, k) is an N × N matrix and S_N is the Nth symmetric group, then

    1/N! Σ_{σ ∈ S_N} Σ_{τ ∈ S_N} sgn(στ) Π_{n=1}^{N} I(σ(n), τ(n)) = det(I).     (3.1)

Proof. For fixed σ, reindexing the product by m = σ(n) gives

    Σ_{τ ∈ S_N} sgn(στ) Π_{n=1}^{N} I(σ(n), τ(n)) = Σ_{ρ ∈ S_N} sgn(ρ) Π_{m=1}^{N} I(m, ρ(m)),

where ρ = τσ^{−1} and sgn(ρ) = sgn(στ). Thus we can write (3.1) as

    1/N! Σ_{σ ∈ S_N} Σ_{ρ ∈ S_N} sgn(ρ) Π_{m=1}^{N} I(m, ρ(m)) = Σ_{ρ ∈ S_N} sgn(ρ) Π_{m=1}^{N} I(m, ρ(m)),

which is the familiar formula for det(I).
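Lemma 3.1, as reconstructed here, says that the double sum over S_N × S_N of sgn(στ) Π I(σ(n), τ(n)) equals N! det(I). A brute-force check for N = 3:

```python
import itertools
import math
import numpy as np

def sign(perm):
    """Sign of a permutation (given as a tuple) via its inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

N = 3
rng = np.random.default_rng(6)
mat = rng.normal(size=(N, N))

double_sum = sum(
    sign(s) * sign(t) * np.prod([mat[s[n], t[n]] for n in range(N)])
    for s in itertools.permutations(range(N))
    for t in itertools.permutations(range(N)))
agrees = abs(double_sum - math.factorial(N) * np.linalg.det(mat)) < 1e-10
```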
Using equation 2.1, we expand the Vandermonde determinant as a sum over the symmetric group, which we rewrite as

    |V(β)|² = V(β) \overline{V(β)} = Σ_{σ ∈ S_N} Σ_{τ ∈ S_N} sgn(στ) Π_{n=1}^{N} β_n^{σ(n)−1} \overline{β_n}^{τ(n)−1}.

Substituting this expression into equation 2.2, exchanging the sums and the integral, and consolidating the products, we find, by an application of Fubini's Theorem,

    H_N(s) = 1/N! Σ_{σ ∈ S_N} Σ_{τ ∈ S_N} sgn(στ) Π_{n=1}^{N} I(σ(n), τ(n)),     (3.2)

where I(J, K) is given by

    I(J, K) = 1/2 ∫_{C^×} max{|α|, |α|^{−1}}^{−2s} (α + α^{−1})^{J−1} \overline{(α + α^{−1})}^{K−1} |1 − α^{−2}|² dλ_2(α).

Applying Lemma 3.1 to equation 3.2, we find H_N(s) is the determinant of the N × N matrix I = I(J, K).

The entries of I are rational functions of s
We shall view I(J, K) not only as an entry in a matrix, but also as a function of s, and write I(J, K; s). We note that λ_2(α)/|α|² is a Haar measure on C^×, from which it follows that I(J, K; s) is invariant under the substitution α → α^{−1}, and we may write

    I(J, K; s) = ∫_D max{|α|, |α|^{−1}}^{−2s} (α + α^{−1})^{J−1} \overline{(α + α^{−1})}^{K−1} |1 − α^{−2}|² dλ_2(α),

where D is the open unit disk. By setting α = r e^{iθ} we may write I(J, K; s) = ĥ(J, K; s), where ĥ(J, K; s) is the Mellin transform of a function h(J, K; r) supported on [1, ∞) and identically zero on [0, 1).
By the change of variables θ → −θ we see that h(J, K; r) = h(K, J; r). We conclude that I is a symmetric matrix whose J, K entry is ĥ(J, K; s). Expanding (α + α^{−1})^{J−1} and its conjugate by the binomial theorem, the θ-integral appearing in this expression can be readily evaluated:

    ∫_0^{2π} e^{imθ} dθ = 2π if m = 0, and 0 otherwise.

Keeping track of which terms survive, if J ≢ K (mod 2) we see that h(J, K; r) (and hence I(J, K; s)) is identically zero.
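The evaluation in question is the orthogonality relation ∫_0^{2π} e^{imθ} dθ = 2π·[m = 0], and the parity claim follows because, with β = α + α^{−1} on |α| = r, every monomial of β^{J−1} β̄^{K−1} carries e^{imθ} with m ≡ J + K (mod 2). (The kernel used below is our assumed form, taken from the Vandermonde expansion above.) Numerically:

```python
import numpy as np

m_grid = 4096
theta = (np.arange(m_grid) + 0.5) * 2 * np.pi / m_grid
weight = 2 * np.pi / m_grid

# Orthogonality: the integral of e^{i m theta} is 2*pi exactly when m = 0.
ortho = all(
    abs(np.sum(np.exp(1j * m * theta)) * weight - (2 * np.pi if m == 0 else 0)) < 1e-10
    for m in (-2, -1, 0, 1, 3))

r = 1.3
beta = r * np.exp(1j * theta) + np.exp(-1j * theta) / r
odd = np.sum(beta * np.conj(beta) ** 2) * weight    # J = 2, K = 3: opposite parity
even = np.sum(beta * np.conj(beta)) * weight        # J = K = 2: same parity
```

The opposite-parity integral vanishes, while the same-parity one evaluates to 2π(r² + r^{−2}).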
The conditions given in equation 4.3 allow us to eliminate one of the summations in equation 4.2. We use the facts that 0 ≤ k ≤ K − 1 and 0 ≤ j ≤ J − 1, together with the conditions in 4.3, to find conditions on j. Since K ≥ J, we can write the resulting sum with the aid of δ_JK, where δ_JK = 1 if J = K and 0 otherwise. From this information we may write h(J, K; r) as a sum of binomial terms. Using the convention that C(K−1, K) = 0 and C(K−1, −1) = 0, we may eliminate δ_JK from the latter two sums. Reindexing each sum based on the powers of r which appear, and simplifying the binomial coefficients, we find that h(J, K; r) is 2π times a Laurent polynomial in r with integer coefficients. We are now in a position to compute ĥ(J, K; s). There is a correspondence between the coefficients and powers of r which appear in h(J, K; r) and the poles and residues of ĥ(J, K; s). We identify I(J, K; s) with the rational function to which it extends. When J and K are odd, I(J, K; s) has poles at ±1, ±3, ..., ± min{J, K}. When J and K are even, I(J, K; s) has poles at ±2, ±4, ..., ± min{J, K}. In either case I(J, K; s) has a zero of multiplicity one at s = 0.
We are now in a position to prove the first part of Theorem 1.2. H_N(s) is the determinant of I, and the entries of I extend to rational functions of s. Since the determinant is a polynomial in the entries of a matrix, H_N(s) itself extends to a rational function of s. In fact, since the determinant is a homogeneous polynomial of degree N in the entries of the matrix and the entries of I analytically continue to odd rational functions, H_N(s) analytically continues to an even rational function when N is even, and to an odd rational function when N is odd. We also see that H_N(s) has a zero of multiplicity N at s = 0.

H_N(s) is a simple product
In this section we express det(I) as a simple product. The structure of the poles and residues of I(J, K; s) will allow us to find linear dependence relations among the rows of I.
Let B_n be the N × N matrix whose J, K entry is the integer c_n(J) c_n(K). Then by Lemma 4.1 we have a matrix equation expressing I as a linear combination of the matrices B_n with coefficients that are rational functions of s. Define ω_n^T ∈ Q^N to be the row vector given by ω_n^T = (c_n(K))_{K=1}^{N}. It follows that the Jth row vector of B_n is given by c_n(J) ω_n^T, and thus every row of B_n is a scalar multiple of ω_n^T. Since the N − 1 vectors ω_1, ..., ω_{N−1} cannot span Q^N, we may find a nonzero vector ψ ∈ Q^N such that ω_n^T ψ = 0 for 1 ≤ n ≤ N − 1. In fact, B_n ψ = 0 for 1 ≤ n ≤ N − 1, leading us to the vector equation
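The linear algebra here does not depend on the specific integers c_n(J), which come from the residues of I(J, K; s) and are not reproduced in this copy. The sketch below uses hypothetical placeholder values to illustrate the mechanism: each B_n = ω_n ω_n^T has rank at most one, and the N − 1 vectors ω_1, ..., ω_{N−1} admit a common nonzero annihilator ψ, which then kills every B_n.

```python
import numpy as np

N = 4
rng = np.random.default_rng(7)
# Hypothetical placeholder integers standing in for c_n(J), n = 1..N-1.
c = rng.integers(-3, 4, size=(N - 1, N)).astype(float)

# B_n = outer(omega_n, omega_n) has rank at most one.
ranks_ok = all(np.linalg.matrix_rank(np.outer(row, row)) <= 1 for row in c)

# psi: a nonzero vector with omega_n^T psi = 0 for all n; then B_n psi = 0 too.
_, _, vt = np.linalg.svd(c)
psi = vt[-1]                                   # unit vector orthogonal to the row space
annihilates = np.linalg.norm(c @ psi) < 1e-10
kills_B = all(np.linalg.norm(np.outer(row, row) @ psi) < 1e-8 for row in c)
```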