## Abstract

Keating and Snaith showed that the 2*k*th absolute moment of the characteristic polynomial of a random unitary matrix evaluated on the unit circle is given by a polynomial of degree *k*^{2}. In this article, uniform asymptotics for the coefficients of that polynomial are derived, and a maximal coefficient is located. Some of the asymptotics are given in an explicit form. Numerical data to support these calculations are presented. Some apparent connections between the random matrix theory and the Riemann zeta function are discussed.

## 1. Introduction

Let *Z*_{N}(*A*,*θ*) denote the characteristic polynomial of an *N*×*N* unitary matrix *A* evaluated at e^{iθ}. Let *θ*_{n}, 1 ≤ *n* ≤ *N*, denote the eigenphases of *A*, so that *Z*_{N}(*A*,*θ*) = ∏_{n=1}^{N} (1 − e^{i(θ_{n}−θ)}). Keating & Snaith (2000) gave an explicit expression for the expected moments of |*Z*_{N}(*A*,*θ*)|:
1.1  E[ |*Z*_{N}(*A*,*θ*)|^{2k} ] = ∏_{j=1}^{N} Γ(*j*)Γ(*j*+2*k*)/Γ(*j*+*k*)^{2}.
The expectation is taken with respect to the normalized Haar measure on the group of *N*×*N* unitary matrices *U*(*N*). This can also be written in terms of the Barnes *G*-function, which is the entire function of order 2 defined by G(1+*z*) = (2π)^{z/2} exp(−(*z*+(1+*γ*)*z*^{2})/2) ∏_{n=1}^{∞} (1+*z*/*n*)^{n} e^{−z+z^{2}/(2n)} and satisfying the relations *G*(1) = 1, *G*(*z*+1) = *Γ*(*z*)*G*(*z*). The r.h.s. of equation (1.1) equals
1.2  G(*N*+1) G(*N*+2*k*+1) G(*k*+1)^{2} / ( G(*N*+*k*+1)^{2} G(2*k*+1) ).
When *k* is a positive integer, this simplifies to a polynomial in *N* of degree *k*^{2}. This polynomial can be expressed in several equivalent ways, each of which is useful:
1.3  *P*_{k}(*N*) := ∏_{i=1}^{k} ∏_{j=1}^{k} (*N*+*i*+*j*−1)/(*i*+*j*−1) = (G(*k*+1)^{2}/G(2*k*+1)) ∏_{j=1}^{2k−1} (*N*+*j*)^{min(j,2k−j)} = ∏_{j=1}^{2k−1} ((*N*+*j*)/*j*)^{min(j,2k−j)}.
Expanding, we define
1.4  *P*_{k}(*N*) = ∑_{r=0}^{k²} *c*_{r}(*k*) *N*^{k²−r}.
The asymptotics of the leading coefficient *c*_{0}(*k*) can be obtained from the asymptotics of the Barnes function: , from which it follows that .
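Many of the identities in this section are easy to check numerically. The sketch below (an illustration in Python, not code from the paper) expands the product ∏_{j}(*N*+*z*_{j})/*z*_{j} over the multiset {1, 2, 2, 3, 3, 3, …, 2*k*−2, 2*k*−2, 2*k*−1} of *k*^{2} numbers used throughout this article, and confirms that the resulting polynomial agrees with the Keating–Snaith product ∏_{j=1}^{N} Γ(*j*)Γ(*j*+2*k*)/Γ(*j*+*k*)^{2} at integer values of *N*:

```python
from fractions import Fraction
from math import factorial

def moment_poly(k):
    """Coefficients (ascending in N) of P_k(N) = prod_j (N + z_j)/z_j,
    the z_j running over the multiset {1, 2, 2, 3, 3, 3, ..., 2k-1}."""
    poly = [Fraction(1)]
    for j in range(1, 2 * k):
        for _ in range(min(j, 2 * k - j)):
            new = [Fraction(0)] * (len(poly) + 1)
            for i, c in enumerate(poly):
                new[i] += c          # the "1" in (N+j)/j = 1 + N/j
                new[i + 1] += c / j  # the "N/j" part
            poly = new
    return poly

def keating_snaith(N, k):
    """prod_{j=1}^{N} Gamma(j) Gamma(j+2k) / Gamma(j+k)^2, exactly."""
    val = Fraction(1)
    for j in range(1, N + 1):
        val *= Fraction(factorial(j - 1) * factorial(j + 2 * k - 1),
                        factorial(j + k - 1) ** 2)
    return val

k = 3
coeffs = moment_poly(k)
assert len(coeffs) == k * k + 1 and coeffs[0] == 1   # degree k^2, P_k(0) = 1
for N in range(1, 8):
    assert sum(c * N ** m for m, c in enumerate(coeffs)) == keating_snaith(N, k)
```

With this indexing, *c*_{r}(*k*) = `coeffs[k*k - r]`; exact `Fraction` arithmetic keeps the comparison free of rounding error.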

### (a) Results

The main purpose of this paper is to develop *uniform* asymptotics in *r* for the coefficients *c*_{r}(*k*), 0 ≤ *r* ≤ *k*^{2}, as *k* → ∞. We also obtain some explicit asymptotics and estimate, for given *k*, the location in *r* of the maximal *c*_{r}(*k*).

We prove the following theorem in the next two sections as a sequence of five lemmas.

### Theorem 1.1

*Let u := u*_{r,k} *be the unique positive number satisfying equation (2.16). Further, define U := U*_{r,k} *by equation (2.17). Then, uniformly in r*∈(0,*k*^{2}), *and as k* → ∞,
1.5
*Furthermore, if we fix α* < 1, *then, uniformly in* 0 ≤ *r* ≤ *k*^{α},
1.6
*and*
1.7
*with the implied constants in the O terms depending on α, and where*
1.8  *A* := (1/*k*) ∑_{j=1}^{2k−1} min(*j*, 2*k*−*j*)/*j* = 2 log 2 + *O*(1/*k*).

Regarding the location of the maximal *c*_{r}(*k*), we prove the following theorem in §5.

### Theorem 1.2

*Let*
1.9
*where γ* = 0.57721… *is the Euler constant. Then, there exists ρ* > 0 *such that, for all k sufficiently large, a maximal c*_{r}(*k*) *occurs for some*
1.10
*and no maximal c*_{r}*(k) occurs outside of that interval.*

Notice that the size of the interval in equation (1.10) is slightly larger than 1. So typically, it will only contain one integer, and the maximum will occur for *r* = ⌈*k*^{2}−*μ*⌉, but it can contain two integers in the event that *μ* is very close to an integer.
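For small *k*, the location of the maximal coefficient can be observed directly. The sketch below (illustrative Python; it does not compute *μ* from equation (1.9)) builds the *c*_{r}(*k*) exactly from the factored form over the multiset {1, 2, 2, …, 2*k*−1} described in §1*c*, verifies unimodality, and checks that the peak falls close to *r* = *k*^{2}, as theorem 1.2 predicts:

```python
from fractions import Fraction

def coeffs_c(k):
    """c_r(k), r = 0..k^2, from the factored form
    P_k(N) = prod_{j=1}^{2k-1} ((N+j)/j)^{min(j, 2k-j)}."""
    poly = [Fraction(1)]                      # ascending powers of N
    for j in range(1, 2 * k):
        for _ in range(min(j, 2 * k - j)):
            new = [Fraction(0)] * (len(poly) + 1)
            for i, c in enumerate(poly):
                new[i] += c          # the "1" in (N+j)/j = 1 + N/j
                new[i + 1] += c / j  # the "N/j" part
            poly = new
    return poly[::-1]                # index r: coefficient of N^{k^2 - r}

for k in (2, 3, 4, 5):
    c = coeffs_c(k)
    r_max = max(range(len(c)), key=c.__getitem__)
    assert all(c[r] <= c[r + 1] for r in range(r_max))          # rising
    assert all(c[r] >= c[r + 1] for r in range(r_max, k * k))   # falling
    assert k * k - 2 * k < r_max < k * k   # peak sits close to r = k^2
```

The unimodal shape is guaranteed in general by the negative real roots of *P*_{k}, as recalled in §5; the code only exhibits it for small *k*.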

In §4, we derive a more precise formula for the leading coefficients, valid for *r* = *O*(*k*^{α}) with *α* < 2, the first few terms of which are listed below.

### Theorem 1.3

1.11
*The jth term in the exponent is of the form q*_{j}(*r*)/*k*^{2j}, *where q*_{j}(*r*) *is a polynomial in r of degree j*+1, *divisible by r*(*r*−1), *and satisfying the bound* *for some λ* > 0.

The above expansion can be transformed into a similar one involving the binomial coefficient :
1.12
This can be achieved by writing , where *B*_{m}(*x*) are the Bernoulli polynomials, and we have used equation (1.18).

The apparent smaller coefficients in the exponent of equation (1.12) in comparison with equation (1.11) suggest that the binomial coefficient appears naturally as a factor in the asymptotics. A heuristic explanation is provided in the next section.

It turns out that the problem of deriving uniform asymptotics for the *c*_{r}(*k*)’s is similar to that of deriving uniform asymptotics for the unsigned Stirling numbers of the first kind. These numbers are usually denoted by *S*(*k*,*r*) and can be defined by the generating function ∑_{r=0}^{k} *S*(*k*,*r*)*x*^{r} = *x*(*x*+1)⋯(*x*+*k*−1). The above generating function bears some resemblance to the generating function of the *c*_{r}(*k*)’s given in equation (1.3), which is the reason the settings in the two problems are similar.

Our proof of lemma 2.3 parallels the proof of asymptotic (1.3) in Moser & Wyman (1958), where uniform asymptotics for the *S*(*k*,*r*)’s were developed. See also theorem 2 in the later work of Chelluri *et al.* (2000), which relies on the work of Moser & Wyman (1958).

### (b) Motivation and connections to the Riemann zeta function

For a fixed *k* with ℜ*k* > −1/2, and as *T* → ∞, the 2*k*th moment of the Riemann zeta function on the critical line *ζ*(1/2+*it*) is conjectured to have an asymptotic expansion: , where is an asymptotic series in descending powers of *x* of the form: . When *k* is a positive integer, this series terminates and is a polynomial of degree *k*^{2} in *x*: .

The leading coefficient is expected to be of the form *a*_{k}*g*_{k}/*k*^{2}!, where *a*_{k} is a certain, well-understood, ‘arithmetic factor’, and *g*_{k} is an integer for integer values of *k* (see Conrey & Ghosh 1984). It is well known that *g*_{1} = 1, *g*_{2} = 2, and it was conjectured that *g*_{3} = 42 and *g*_{4} = 24024 in Conrey & Ghosh (1998) and Conrey & Gonek (2001), respectively. But little was known about *g*_{k} in general.

Based on the analogous computation in random matrix theory, Keating & Snaith (2000) conjectured that *g*_{k}/(*k*^{2}!) = *G*^{2}(*k*+1)/*G*(2*k*+1), or *g*_{k} = *k*^{2}! *G*^{2}(*k*+1)/*G*(2*k*+1). In other words, the leading coefficient of *P*_{k}(*x*) and of are expected to coincide, except for ‘arithmetic effects’.

Recently, Conrey *et al.* (2005) conjectured, for integer *k*, a formula for the polynomial expressed in terms of a 2*k*-fold residue. The residue formula allows one to derive complicated formulae for the ’s (Conrey *et al.* 2008).

Precise information concerning the extreme values of the zeta function up to a given height could be derived from precise knowledge of the uniform asymptotics of its moments in both the *k* and *T* aspects (Farmer *et al.* 2007). However, it is not clear how to obtain the uniform asymptotics from the formulae in Conrey *et al.* (2005, 2008).

As a first step, we decided to try and understand the behaviour of the coefficients of the moment polynomials. We hoped that deriving asymptotics for the coefficients that occur on the random matrix theory side (i.e. the *c*_{r}(*k*)’s) could shed some light on the zeta function side (i.e. their zeta-side counterparts). In a sense, that turned out to be the case. In an upcoming paper (Hiary & Rubinstein, in preparation), we show that, for a fixed *α* < 1, and uniformly in 0 ≤ *r* ≤ *k*^{α},
1.13
where *γ* = 0.5772… is Euler’s constant, and is a certain explicit number (see Hiary & Rubinstein, in preparation, for details) satisfying . To facilitate comparison, we give the corresponding result for unitary random matrices (see lemma 2.2),
1.14

### (c) Basic facts

For a fixed *r*, one can easily show that *c*_{r}(*k*)/*c*_{0}(*k*) is a polynomial in *k* of degree 3*r* with leading coefficient 1/*r*!:
1.15  *b*_{r}(*k*) := *c*_{r}(*k*)/*c*_{0}(*k*) = *k*^{3r}/*r*! + ⋯.
Furthermore, pulling out the 1/*r*!, all the coefficients of *r*!*b*_{r}(*k*) are themselves polynomials in *r*. These facts can be proved by writing the *c*_{r}(*k*), which are expressible from equation (1.3) as elementary symmetric polynomials, in terms of power sums via Newton’s identities applied to the set of *k*^{2} numbers {1,2,2,3,3,3,…,2*k*−2,2*k*−2,2*k*−1}:
1.16  *r* *b*_{r}(*k*) = ∑_{n=1}^{r} (−1)^{n+1} *p*_{n}(*k*) *b*_{r−n}(*k*),
where *p*_{n}(*k*) are the sums of powers:
1.17  *p*_{n}(*k*) = ∑_{j=1}^{2k−1} min(*j*, 2*k*−*j*) *j*^{n}.
The above can be expressed in terms of Bernoulli polynomials using
1.18  ∑_{j=0}^{m−1} *j*^{n} = (*B*_{n+1}(*m*) − *B*_{n+1}(0))/(*n*+1).
Thus, breaking equation (1.17) into three sums, we find
1.19
is a polynomial in *k* of degree *n*+2 and leading coefficient (2^{n+2}−2)/((*n*+1)(*n*+2)). The last step follows from the expansion of *B*_{n}(*x*) in terms of Bernoulli numbers.

Therefore, from the *n* = 1 term in equation (1.16), we inductively get the leading term of equation (1.15), and the recursion also gives the coefficients of *b*_{r}(*k*) as rational numbers (functions of *r*). We list the first eight *p*_{n}(*k*): *p*_{1}(*k*) = *k*^{3}, *p*_{2}(*k*) = 7/6*k*^{4}−1/6*k*^{2}, *p*_{3}(*k*) = 3/2*k*^{5}−1/2*k*^{3}, *p*_{4}(*k*) = 31/15*k*^{6}−7/6*k*^{4}+1/10*k*^{2}, *p*_{5}(*k*) = 3*k*^{7}−5/2*k*^{5}+1/2*k*^{3}, *p*_{6}(*k*) = 127/28*k*^{8}−31/6*k*^{6}+7/4*k*^{4}−5/42*k*^{2}, *p*_{7}(*k*) = 85/12*k*^{9}−21/2*k*^{7} + 21/4*k*^{5} − 5/6*k*^{3}, *p*_{8}(*k*) = 511/45*k*^{10}−127/6*k*^{8}+217/15*k*^{6}−35/9*k*^{4}+7/30*k*^{2}, and the first few *b*_{r}(*k*): *b*_{1}(*k*) = *k*^{3}, *b*_{2}(*k*) = 1/2*k*^{6}−7/12*k*^{4}+1/12*k*^{2}, *b*_{3}(*k*) = 1/6*k*^{9}−7/12*k*^{7}+7/12*k*^{5} − 1/6*k*^{3}, *b*_{4}(*k*) = 1/24*k*^{12} − 7/24*k*^{10} + 205/288*k*^{8} − 527/720*k*^{6}+ 85/288*k*^{4}−1/40*k*^{2}, *b*_{5}(*k*) = 1/120*k*^{15}−7/72*k*^{13}+125/288*k*^{11}−677/720*k*^{9}+1489/1440*k*^{7}−97/180*k*^{5}+1/10*k*^{3}, *b*_{6}(*k*) = 1/720*k*^{18}−7/288*k*^{16}+11/64*k*^{14}−32927/51840*k*^{12}+22931/17280*k*^{10}−38245/24192*k*^{8}+10513/10368*k*^{6}−47/160*k*^{4}+5/252*k*^{2}.

One readily observes from the powers of *k* that appear that *b*_{r}(−*k*) = (−1)^{r}*b*_{r}(*k*), and this can be proved inductively from the recursion (first establishing this property for *p*_{n}(*k*) from its expression in terms of Bernoulli polynomials).
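The computations in the last two paragraphs are easy to reproduce. The sketch below (Python; the form *r·b*_{r} = ∑_{n=1}^{r}(−1)^{n+1}*p*_{n}*b*_{r−n} of Newton’s identities is our reading of equation (1.16)) evaluates the power sums *p*_{n}(*k*) directly over the multiset, spot-checks the listed polynomials, and regenerates *b*_{2}(*k*) from the recursion:

```python
from fractions import Fraction

def multiset(k):
    # the k^2 numbers {1, 2, 2, 3, 3, 3, ..., 2k-2, 2k-2, 2k-1}
    return [j for j in range(1, 2 * k) for _ in range(min(j, 2 * k - j))]

def power_sum(n, k):
    return sum(z ** n for z in multiset(k))

# spot-check the listed p_n(k) at k = 5
k = 5
assert power_sum(1, k) == k ** 3
assert 6 * power_sum(2, k) == 7 * k ** 4 - k ** 2
assert 2 * power_sum(3, k) == 3 * k ** 5 - k ** 3

# Newton's identities: r*b_r = sum_{n=1}^{r} (-1)^{n+1} p_n b_{r-n},
# with b_0 = 1 and b_r = c_r(k)/c_0(k)
b = [Fraction(1)]
for r in range(1, 7):
    b.append(sum((-1) ** (n + 1) * power_sum(n, k) * b[r - n]
                 for n in range(1, r + 1)) / r)

# agrees with the listed b_2(k) = k^6/2 - 7k^4/12 + k^2/12
assert 12 * b[2] == 6 * k ** 6 - 7 * k ** 4 + k ** 2
```

The same loop, run for several numeric values of *k*, is how one can also confirm the parity property *b*_{r}(−*k*) = (−1)^{r}*b*_{r}(*k*) on examples.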

The recursion also allows us to work out specific formulas for the lower terms of *b*_{r}(*k*). For example, the next to leading term has degree 3*r*−2. Writing
1.20  *b*_{r}(*k*) = *b*_{r,0}*k*^{3r} + *b*_{r,1}*k*^{3r−2} + *b*_{r,2}*k*^{3r−4} + ⋯,
we have, from the *n* = 1,2 terms of the recursion, the relation: *rb*_{r,1} = *b*_{r−1,1}−7/6*b*_{r−2,0}. But *b*_{r−2,0} = 1/(*r*−2)!, and one easily checks inductively that *b*_{r,1} = −(7/12)(1/(*r*−2)!).

More generally, writing *b*_{r}(*k*) = ∑_{j≥0} *b*_{r,j}*k*^{3r−2j} and *p*_{n}(*k*) = ∑_{a≥0} *p*_{n,a}*k*^{n+2−2a}, we have, on comparing the coefficient of *k*^{3r−2j} on both sides of equation (1.16), that
1.21  *r* *b*_{r,j} = ∑_{n≥1, a≥0, n+a−1≤j} (−1)^{n+1} *p*_{n,a} *b*_{r−n, j−a−n+1}.
One can show, inductively on *j*, that *b*_{r,j} is of the form
1.22  *b*_{r,j} = *g*_{j}(*r*)/*r*!,
where *g*_{j}(*r*) is a polynomial in *r* with rational coefficients, by plugging this into equation (1.21) and separating the (*n*,*a*) = (1,0) term from the rest as follows. Putting the other terms over a common denominator and using *p*_{1,0} = 1, we have *r*(*g*_{j}(*r*)/*r*!) = *g*_{j}(*r*−1)/(*r*−1)!+*h*_{j}(*r*)/(*r*−1)!, where, *h*_{j}(*r*) is the polynomial obtained by our inductive hypothesis from the terms (*n*,*a*)≠(1,0). Therefore *g*_{j}(*r*)−*g*_{j}(*r*−1) = *h*_{j}(*r*), and this difference equation allows us to solve for all but the constant coefficient of *g*_{j}, which can be obtained from a specific value of *b*_{r,j}.

For example, to work out *b*_{r,2}, we substitute *b*_{r,2} = *g*_{2}(*r*)/*r*! into the recursion and pull out the (*n*,*a*) = (1,0) term, giving *r*(*g*_{2}(*r*)/*r*!) = *g*_{2}(*r*−1)/(*r*−1)! + 49/(72(*r*−4)!) + 1/(6(*r*−2)!) + 3/(2(*r*−3)!). Clearing denominators, we have *g*_{2}(*r*)−*g*_{2}(*r*−1) = (49/72)(*r*−1)(*r*−2)(*r*−3) + (1/6)(*r*−1) + (3/2)(*r*−1)(*r*−2) = (49/72)*r*^{3} − (31/12)*r*^{2} + (227/72)*r* − 5/4, and solving for the coefficients of *g*_{2} yields *g*_{2}(*r*) = (49/288)*r*^{4} − (25/48)*r*^{3} + (131/288)*r*^{2} − (5/48)*r* + *g*_{2}(0). From *b*_{2,2} = *g*_{2}(2)/2! = 1/12, we get *g*_{2}(0) = 0. Hence, after factoring the above, *b*_{r,2} = (49*r*^{2}−101*r*+30)/(288(*r*−2)!). We list the first few terms in the expansion of *b*_{r}(*k*):
1.23
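The arithmetic above is mechanical and can be confirmed with exact rationals; the short check below verifies the difference equation for *g*_{2}, the factored form of *b*_{r,2}, and the spot values *b*_{2,2} = 1/12 and *b*_{3,2} = 7/12 read off from the listed *b*_{2}(*k*) and *b*_{3}(*k*):

```python
from fractions import Fraction as F
from math import factorial

def g2(r):
    return F(49,288)*r**4 - F(25,48)*r**3 + F(131,288)*r**2 - F(5,48)*r

# g2(0) = 0 and the difference equation for g2
assert g2(0) == 0
for r in range(1, 20):
    assert g2(r) - g2(r-1) == F(49,72)*r**3 - F(31,12)*r**2 + F(227,72)*r - F(5,4)

# the factored form b_{r,2} = g2(r)/r! = (49r^2 - 101r + 30)/(288(r-2)!)
for r in range(2, 12):
    assert g2(r)/factorial(r) == F(49*r*r - 101*r + 30, 288*factorial(r - 2))

# spot-checks against the listed b_r(k): b_{2,2} = 1/12, b_{3,2} = 7/12
assert g2(2)/factorial(2) == F(1, 12)
assert g2(3)/factorial(3) == F(7, 12)
```
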

## 2. Basic asymptotics

The coefficients of *P*_{k}(*x*) can be expressed as elementary symmetric polynomials in its roots. For example, if we let −*z*_{1},−*z*_{2},…,−*z*_{k²} denote the roots of *P*_{k}(*x*), then, from the last equation in equation (1.3), we have *c*_{k²}(*k*) = 1, *c*_{k²−1}(*k*) = ∑_{j} 1/*z*_{j}, and so on. The question is reduced to estimating *c*_{k²−r}(*k*) = ∑_{j₁<⋯<j_{r}} 1/(*z*_{j₁}⋯*z*_{j_{r}}). If *r* is small, then the terms 1/*z*_{j_{i}} in this expression may be thought of heuristically as independent random variables drawn from the distribution
2.1  Pr[*z* = *j*] = min(*j*, 2*k*−*j*)/*k*^{2},  1 ≤ *j* ≤ 2*k*−1.
Therefore, for small *r*, one may expect something like *c*_{k²−r}(*k*) ≈ (*kA*)^{r}/*r*! to hold, where *A* is defined in equation (1.8). This is precisely the statement of our lemma 2.1.
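The independence heuristic can be tested numerically without any asymptotic input: the sketch below (illustrative Python; the normalization by *A* is left aside) computes the elementary symmetric function *e*_{r} of the numbers 1/*z*_{j} by the standard dynamic programme and compares it against the prediction (∑_{j} 1/*z*_{j})^{r}/*r*!:

```python
from math import factorial, log

def multiset(k):
    return [j for j in range(1, 2 * k) for _ in range(min(j, 2 * k - j))]

def elem_sym(vals, rmax):
    """e_0, ..., e_rmax of the given values, by the standard DP."""
    e = [1.0] + [0.0] * rmax
    for v in vals:
        for r in range(rmax, 0, -1):
            e[r] += v * e[r - 1]
    return e

k = 40
inv = [1.0 / z for z in multiset(k)]
eta1 = sum(inv)                       # = sum_j 1/z_j
e = elem_sym(inv, 5)
for r in range(1, 6):
    pred = eta1 ** r / factorial(r)   # "independent draws" prediction
    assert abs(e[r] / pred - 1) < 0.05
# empirically, eta1 grows linearly in k (here it is about 2k log 2)
assert abs(eta1 / (2 * k * log(2)) - 1) < 0.05
```

For small *r* the agreement is already at the per-cent level, which is the content of the heuristic.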

### Lemma 2.1

*Fix α* < 1. *Assume* 0 ≤ *r* ≤ *k*^{α}. *Then, uniformly in r over that range, and as* *it holds*
2.2

### Proof.

One may write
2.3
where
2.4
By the Euler–Maclaurin summation formula, *η*_{1} = *kA*, and *η*_{m} = *O*(log *k*) for *m* > 1. So, we anticipate the main contribution will come from the term *η*_{1}*x*. We thus write
2.5
and expand *f* in a Taylor series
2.6
A direct application of Cauchy’s estimate yields the upper bound
2.7
where *C* is the circle centred at the origin of radius , and *k* sufficiently large. Now, *c*_{k²−r}(*k*) is the coefficient of *x*^{r} in the Taylor expansion about zero of the function (2.5). As *β*_{1} = 0 in expansion (2.6), it follows
2.8
Fix *α* < 1, and let *r*∈[0,*k*^{α}]. Then, estimate (2.7) yields , with the implied constant depending on *α*. Since *η*_{1} = *kA*, we arrive at
2.9
as required. Finally, to obtain the result in terms of the binomial coefficient, note
2.10
and
2.11
□

A similar heuristic applied to the leading coefficients of *P*_{k}(*x*) leads one to expect that *c*_{r}(*k*)/*c*_{0}(*k*) ≈ *k*^{3r}/*r*!, in agreement with lemma 2.2.

### Lemma 2.2

*Fix α* < 1. *Assume* 0 ≤ *r* ≤ *k*^{α}. *Then, uniformly in r over that range, and as* *it holds*
2.12

### Proof.

The leading coefficients of *P*_{k}(*x*) are the trailing coefficients of . Also, by Taylor expansions, , where . Now, *c*_{r}(*k*)/*c*_{0}(*k*) is the coefficient of *x*^{r} in the expansion about zero of
2.13
Since *p*_{m}(*k*) ≤ (2*k*)^{m+2} for *m* ≥ 1, Cauchy’s estimate supplies the bound *ν*_{m}(*k*) = *O*(*k*^{2m}), as *k* → ∞. Also, it is easy to verify that *p*_{1}(*k*) = *k*^{3} and *ν*_{1}(*k*) = 0.

Finally, suppose *r*∈[0,*k*^{α}], where *α* < 1 is a fixed constant. Then, put together, we have
2.14
▪

By taking more terms in the above lemma, we can obtain a more precise asymptotic, though still valid only up to *r* = *O*(*k*^{α}) with *α* < 1. For example, truncating the sum in equation (2.14) at *m* = 5 gives
2.15
One can also rearrange this expansion, collecting terms according to the power of 1/*k*^{2} to get another derivation of the expansion (1.23).

The situation away from the tails is more complicated. There, we apply a saddle-point technique in a similar way to Moser & Wyman (1958), where it was used to develop asymptotics for Stirling numbers of the first kind. Owing to such similarities, some of the details in the proof of lemma 2.3 are only sketched.

### Lemma 2.3

*Fix α* < 2. *Let u* := *u*_{r,k} *be the (unique) positive number satisfying*
2.16  ∑_{j=1}^{2k−1} min(*j*, 2*k*−*j*) *u*/(*u*+*j*) = *k*^{2} − *r*.
*Further, define U* := *U*_{r,k} by
2.17  *U* := ∑_{j=1}^{2k−1} min(*j*, 2*k*−*j*) *u* *j*/(*u*+*j*)^{2}.
*Then*,
2.18
*If we restrict r to lie in the interval k*^{α} < *r* < *k*^{2}−*k*^{α}, *then, as k* → ∞,
2.19

Notice that because we can take *α* < 2, the interval covered by this lemma, *k*^{α} < *r* < *k*^{2}−*k*^{α}, overlaps with the two intervals in lemmas 2.1 and 2.2.

### Proof.

By Cauchy’s theorem, *c*_{r}(*k*) = (1/2π*i*) ∮_{C} *P*_{k}(*z*)/*z*^{k²−r+1} d*z*, where *C* is any contour circling the origin once in a positive direction, and
2.20
The plan is to obtain very good approximations of the above integral via a saddle-point method. So, consider the function,
2.21
The saddle points of *f*(*z*) are the solutions of
2.22
By monotonicity of for *z* ≥ 0, equation (2.22) has a unique positive solution, which we denote by *u* := *u*_{r,k}. From here on, we follow the standard saddle-point recipe (see de Bruijn 1981), in a generally similar way to what was done in Moser & Wyman (1958). The contour *C* should cross the saddle point *u* in the direction of steepest descent. The direction of steepest descent is determined by the argument of *f*′′(*u*). Since,
2.23
then (because *uf*′′(*u*) > −*f*′(*u*) = 0, as can be seen by showing that *zf*′(*z*) is increasing). So, as in the standard saddle-point recipe, the angle of passage should be . This suggests the contour choice *C* := {*u*e^{iθ}:−*π* < *θ* < *π*}, which is the same as in Moser & Wyman (1958). Writing
2.24
we have, by calculations analogous to those in Moser & Wyman (1958),
2.25
where *ϵ* > 0 is a small number to be chosen later, and
2.26
Next, expand *f*(*u*e^{iθ}) about *θ* = 0 to obtain (for *θ* < 1/100 say)
2.27
In that expansion, we used the fact that *u* is a saddle point, i.e. (d/d*θ*)*f*(*u*e^{iθ})|_{θ = 0} = 0. Now, Cauchy’s estimate applied to (d^{2}/d*z*^{2})*f*(*u*e^{iz}) yields *γ*_{m}(*u*) = *O*(*U*), for *m* ≥ 3. So, choose *ϵ* such that,
2.28
Consequently,
2.29
Cauchy’s estimate, in combination with the previous estimate on the *γ*_{m}’s, gives *μ*_{m} = *O*(*U*^{m/3}), uniformly in *m* and *U*. We can also get a sharper estimate for *m* fixed. Exponentiating, , ensures *μ*_{m} = *O*_{m}(*U*^{⌊m/3⌋}).

So, if the sum over *m* in expression (2.29) is truncated at some integer *M* ≥ 3, then, using the first bound on *μ*_{m},
2.30
The domain of integration is extended to (−∞, ∞) to obtain
2.31
The function *θ*^{2m}*e*^{−Uθ²/2} achieves its maximum at *θ* = (2*m*/*U*)^{1/2}. It follows, on estimating the integrand, that
2.32
Thus, if the condition 2*M* < *U*^{2δ} is imposed in equation (2.31), we obtain
2.33
The first part of the lemma follows by taking *δ* = 1/12, *M* = 13 say, and the estimate *μ*_{m} = *O*_{m}(*U*^{⌊m/3⌋}). For the second part, we observe that
2.34
as *k* → ∞. This bound can be proved as follows. First, recall, by definition, . So, for *x* ≥ 0, . It is not hard to see, for *x* ≥ *k*/2, . Also, for 0 < *x* < *k*/2, . By lemma 3.1, and the monotonicity of the saddle point *u*_{r} as a function of *r*, we have *k*^{α−1} ≪ *u*_{r} ≪ *k*^{3−α} for *k*^{α} < *r* < *k*^{2}−*k*^{α}. Combined with the above two lower bounds, this yields equation (2.34). ▪
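The saddle-point recipe can be mimicked numerically. In the sketch below (illustrative Python; the displayed forms of the saddle equation and of *U* are our reading of equations (2.16) and (2.17)), *u* is found by bisection from ∑_{j} min(*j*,2*k*−*j*)·*u*/(*u*+*j*) = *k*^{2}−*r*, *U* is set to ∑_{j} min(*j*,2*k*−*j*)·*uj*/(*u*+*j*)^{2}, and the leading-order approximation *P*_{k}(*u*)*u*^{r−k²}/√(2π*U*) is compared with the exact coefficient:

```python
from fractions import Fraction
from math import sqrt, pi, exp, log

def multiset(k):
    return [j for j in range(1, 2 * k) for _ in range(min(j, 2 * k - j))]

def coeffs_c(k):
    """Exact c_r(k) from P_k(N) = prod_j (N + z_j)/z_j."""
    poly = [Fraction(1)]
    for z in multiset(k):
        new = [Fraction(0)] * (len(poly) + 1)
        for i, c in enumerate(poly):
            new[i] += c
            new[i + 1] += c / z
        poly = new
    return poly[::-1]

def saddle(k, r):
    """Solve sum_j u/(u + z_j) = k^2 - r by bisection (LHS increases in u)."""
    zs = multiset(k)
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = sqrt(lo * hi)
        if sum(mid / (mid + z) for z in zs) < k * k - r:
            lo = mid
        else:
            hi = mid
    return mid

k, r = 4, 8
zs = multiset(k)
u = saddle(k, r)
U = sum(u * z / (u + z) ** 2 for z in zs)
log_Pu = sum(log(u + z) - log(z) for z in zs)          # log P_k(u)
approx = exp(log_Pu - (k * k - r) * log(u)) / sqrt(2 * pi * U)
exact = float(coeffs_c(k)[r])
assert abs(approx / exact - 1) < 0.1   # within a few per cent here
```

Already at *k* = 4 the leading-order approximation is accurate to a few per cent in the middle range of *r*, which is the regime lemma 2.3 addresses.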

## 3. A uniform asymptotic

One unsatisfying feature of lemma 2.3 is the implicit nature of the asymptotic approximation it provides. Also, it might be desirable to unify lemmas 2.1 to 2.3 into a single asymptotic approximation applicable over all regions. The purpose of this section is to resolve these issues. Lemma 3.1 provides explicit approximations to *u*_{r,k} and *U*_{r,k}. Lemma 3.2 provides a uniform asymptotic for *c*_{r}(*k*).

### Lemma 3.1

*Fix α* < 2. *Let u*_{r,k} *and U*_{r,k} *be as in the statement of lemma 2.3, and let A be defined as in equation (1.8), so* . *Then uniformly in r*,
*and*

### Proof.

Recall the saddle point *u* := *u*_{r,k} is the positive number satisfying
3.1  ∑_{j=1}^{2k−1} min(*j*, 2*k*−*j*) *u*/(*u*+*j*) = *k*^{2} − *r*.
Since the left side of equation (3.1) is monotonically increasing for *u* ≥ 0, it is easy to see
3.2
Also by monotonicity, *u*_{r,k} is unique. Suppose *r*∈(*k*^{2}−*k*^{α},*k*^{2}), so *u* = *O*(*k*^{α−1}). Let us rewrite equation (3.1) as . Then, , and . Therefore, , and we see, in combination with equation (3.2), that
3.3
As for the case *r*∈(0,*k*^{α}), note *k*^{2}−*r* = *Ω*(*k*^{2}) for such *r*, whereas
3.4
Therefore, by the monotonicity of the left-hand side of equation (3.1), the saddle point satisfies *u* = *Ω*(*k*^{3}/*r*) = *Ω*(*k*^{3−α}). Using this estimate on *u*, we can rewrite equation (3.1) in the form , which implies . From this, it is straightforward to verify *k*^{3}/*u*+*O*(*k*^{4}/*u*^{2}) = *r*, or *u* = *k*^{3}/*r*[1+*O*(*k*^{α−2})], as claimed. Asymptotics for *U* := *U*_{r,k} are derived similarly. We have
▪

With the aid of lemma 3.1, it becomes possible to combine lemmas 2.1 to 2.3 into a single lemma.

### Lemma 3.2

*Let u* := *u*_{r,k} *and U* := *U*_{r,k} *be defined as in the statement of lemma 2.3. Then, uniformly in r*∈(0,*k*^{2}), *and as* , *it holds*
3.5

Note, we exclude *r* = 0, *k*^{2} from the statement of the lemma because the r.h.s. vanishes at those points.

### Proof.

Lemmas 2.1–2.3 give the asymptotics for *c*_{r}(*k*) in three overlapping regions that cover all 0 ≤ *r* ≤ *k*^{2}. We compare these three asymptotics to the r.h.s. above, and find the *α* in those lemmas that gives the optimal remainder term (it will transpire that *α* = 2/3).

So fix *α* < 1 and suppose *r*∈(0,*k*^{α}). By lemma 3.1, we have
3.6
Equation (3.6) implies
3.7
Using with *x* = *j*/*u*, and the above estimate for *u*, we get
3.8
And by Stirling’s formula,
3.9
Put together, for *r*∈(0,*k*^{α}),
3.10
Lastly, by lemma 2.2,
3.11
Next, we consider the case *r*∈(*k*^{α},*k*^{2}−*k*^{α}). By Stirling’s formula
3.12
So by lemma 2.3, and for *r*∈(*k*^{α},*k*^{2}−*k*^{α}),
3.13
Similar manipulations together with the remark following lemma 2.1 give for *r*∈(*k*^{2}−*k*^{α},*k*^{2}),
3.14
Finally, comparing equations (3.11), (3.13) and (3.14), we find that the optimal *α* satisfies 2−2*α* = *α* ⇒ *α* = 2/3, giving the *O* term claimed in the lemma. The second formula in the lemma follows from Stirling’s formula. ▪

## 4. A more precise asymptotic for the leading coefficients

In this section, we assume that *r* = *O*(*k*^{α}), with *α* < 2, unless otherwise stated.

As a direct application of the Lagrange inversion formula, one can obtain a more precise expression for the saddle point *u*_{r,k} over restricted ranges. This will allow us to write out series for *u*, *U*, *P*_{k}(*u*) and *μ*_{m} which will give an explicit expansion for the saddle-point approximation.

Recall *u* is the unique positive number satisfying
4.1
Since *r* < *k*^{α}, we know by lemma 3.1 that *u* > 2*k* for *k* large enough. So, we may apply Taylor expansions to the left side of equation (4.1). We therefore need to solve for *u* in the equation: (the *m* = 0 term cancels the *k*^{2} on the other side of the equation). Substituting *y* = *k*/*u* the last equation becomes . Now, trivially,
4.2
so that the sum on the l.h.s. above is analytic in |*y*| < 1/2. Furthermore, it has a non-vanishing derivative at *y* = 0, because *p*_{1}(*k*) = *k*^{3}. Using the bound on *p*_{m}(*k*), for *x* > 0 sufficiently small, say < 1/100, one can find a unique *y* > 0, in say (0,1/10), such that
4.3
We can write down the Taylor series for *y* in terms of *x* by the Lagrange inversion:
4.4
There are two formulas that are useful for finding the *λ*’s. We can either substitute the above formula for *y* into equation (4.3) and compare coefficients of powers of *x*. Alternatively, we can use the well-known formula (see Wilf 2006, §5.1):
4.5
where *w*_{k}(*z*) is given by . Letting *a*_{m} = (−1)^{m+1}*p*_{m}(*k*)/*k*^{m+2}, we have
4.6
Since *y* = *k*/*u* and *x* = *r*/*k*^{2}, we get
4.7
Because equation (4.4) is analytic in some neighbourhood of *x* = 0, we have that there exists an *η* > 0 such that the above converges geometrically for all *r* < *ηk*^{2} and, even though the coefficients depend on *k*, does so uniformly in *k*. The latter is explained by the fact that the l.h.s. of equation (4.3) has, by equation (4.2), non-zero derivative in some neighbourhood, independent of *k*, of *y* = 0, and that one can cover, independent of *k*, some sufficiently small neighbourhood of *x* = 0 by taking *y* small enough. Therefore, the inverse function specified in equation (4.4) is analytic about *x* = 0 in some neighbourhood that is independent of *k*.
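The inversion step itself is generic and can be sketched on a toy series (made up for illustration; the actual series has *k*-dependent coefficients *a*_{m} = (−1)^{m+1}*p*_{m}(*k*)/*k*^{m+2}): the code below reverts *x* = *φ*(*y*) term by term using the coefficient formula [*y*^{m}] = (1/*m*)[*w*^{m−1}](*w*/*φ*(*w*))^{m} and checks the answer on *φ*(*y*) = *y*/(1−*y*), whose inverse is *x*/(1+*x*):

```python
from fractions import Fraction as F

def mul(a, b, n):
    """Product of two truncated power series (coefficients of x^0..x^{n-1})."""
    out = [F(0)] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

def lagrange_revert(phi, n):
    """Coefficients y_0..y_{n-1} of the inverse of x = phi(y), where
    phi(0) = 0 and phi'(0) != 0, via [y^m] = (1/m)[w^{m-1}](w/phi(w))^m."""
    a = phi[1:] + [F(0)]              # phi(w)/w as a truncated series
    inv = [F(0)] * n                  # reciprocal series 1/(phi(w)/w)
    inv[0] = 1 / a[0]
    for m in range(1, n):
        inv[m] = -sum(a[j] * inv[m - j] for j in range(1, m + 1)) / a[0]
    coeffs = [F(0)] * n
    power = [F(1)] + [F(0)] * (n - 1) # (w/phi(w))^0
    for m in range(1, n):
        power = mul(power, inv, n)    # (w/phi(w))^m
        coeffs[m] = power[m - 1] / m
    return coeffs

n = 8
phi = [F(0)] + [F(1)] * (n - 1)       # phi(y) = y + y^2 + y^3 + ... = y/(1-y)
y = lagrange_revert(phi, n)
expected = [F(0)] + [F((-1) ** (m + 1)) for m in range(1, n)]  # x/(1+x)
assert y == expected
```

The geometric, *k*-uniform convergence argument in the text is exactly what licenses truncating such a reversion after finitely many terms.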

We can also write a series for *u* directly, on reciprocating equation (4.4). Using Maple, we compute
4.8
Putting the above together gives the first few terms of *u*:
4.9

A similar series for *U* can be derived from that of *u*. For *u* > 2*k*−1, we have
4.10
Substituting , the above equals
4.11

Next we work out the series for . Divide the second formula in equation (1.3) by the leading coefficient *c*_{0}(*k*), pull out *u*^{k2}, expand each and sum over *j* to get
4.12
Substituting , the above equals
4.13

Finally, we develop a series for *μ*_{m}. These are defined by , where *γ*_{n}(*u*) are, from equation (2.27), the Taylor coefficients of *f*(*u*e^{iθ}). Substituting *u*e^{iθ} for *z* in , we get, after Taylor expanding as in equation (4.12) and on expanding *e*^{iθ} in its Taylor series, that
4.14
Now each *μ*_{m} is given as a sum of products of the *γ*_{n}(*u*)’s, the specific expression obtained by exponentiating :
4.15
For example, . Notice that *γ*_{n}(*u*) is *r* times a power series in *r*/*k*^{2} with coefficients that are polynomials in 1/*k*^{2}. The latter follows from *λ*_{j}(*k*) being a polynomial in *a*_{m} = (−1)^{m+1}*p*_{m}(*k*)/*k*^{m+2}, and *p*_{m}(*k*) being a polynomial in *k* of degree *m*+2 satisfying, as described in §1, *p*_{m}(*k*) = (−1)^{m}*p*_{m}(−*k*), so that the powers of *k* that appear ‘go down by twos’. Therefore, each term in the above sum is of the form *r*^{a} times a power series in *r*/*k*^{2}, with coefficients polynomial in 1/*k*^{2}, and , *a* ≤ ⌊*m*/3⌋.

Substituting equations (4.9), (4.11), (4.13), (4.15) and (4.14) into the saddle-point formula (2.33) and taking the logarithm of the various series that appear as factors suggest the following formula for *c*_{r}(*k*):
4.16
The *j*th term appearing in the exponent is of the form *q*_{j}(*r*)/*k*^{2j}, where *q*_{j}(*r*) is a polynomial in *r* with rational coefficients and of degree *j*+1. Furthermore, when *r* = 0 or 1, *c*_{r}(*k*)/*c*_{0}(*k*) = *k*^{3r}/*r*!, and thus each term appearing in the exponent must have numerator divisible by *r*(*r*−1) so as to vanish at those two values of *r*.

Below we develop our formula further, explaining with proof how it arises. Substituting the various series as described does not immediately result in the above formula, as there are extraneous terms of the form *a*_{m,n}/*r*^{m}*k*^{2n} along with various *O* terms that need to be considered. One also needs to identify the terms of Stirling’s asymptotic formula as appearing from these substitutions so as to obtain the factor of *r*! in equation (4.16).

These extraneous terms can be eliminated by comparing these terms with the terms that appear in equation (1.23) and considering different values of *r*.

To begin, we have replaced the factors and terms with ‘just an *r* and no *k*’ with a 1/*r*!. The *P*_{k}(*u*)/(*c*_{0}(*k*)*u*^{k2−r}) contributes, from equations (4.13) and (4.9), a factor of *e*^{r}/*r*^{r}. The contributes, from equation (4.11), a , while the sum involving *μ*_{2m} in equation (2.33) contributes a factor whose initial terms are computed to be . Together with the , we recognize these as Stirling’s approximation to 1/*r*!. By taking *M* in equation (2.33) sufficiently large, letting and *r* = *k*^{ϵ} with *ϵ* sufficiently small and comparing with the asymptotic *c*_{r}(*k*)/*c*_{0}(*k*) = *k*^{3r}/*r*!(1+*O*(*k*^{2(ϵ−1)})) of theorem 1.1, we have that these terms must coincide with Stirling’s asymptotic series for 1/*r*!.

The extraneous terms can be eliminated by comparing with the full polynomial expansion of *c*_{r}(*k*)/*c*_{0}(*k*). From equations (1.20) and (1.22),
4.17
where *z* = 1/*k*^{2}, and *g*_{j}(*r*) is a polynomial in *r* of degree ≤ 2*j* (the bound on the degree of *g*_{j}(*r*) follows from the inductive procedure for determining them). Denote the above polynomial in *z* by *F*_{r}(*z*). By taking its logarithm, we can write it in the form
4.18
where *q*_{j}(*r*) is a polynomial in *r*. One can easily pass from the polynomial in equation (4.17) to an expression of the form (4.18), because the polynomial (4.17) does not vanish at *z* = 0, hence its logarithm is analytic there and may be expanded in a Taylor series. The constant term 1 in equation (4.17) implies that the Taylor series that appears in the exponent of equation (4.18) has constant term 0, and thus begins with the *q*_{1}(*r*)*z* term. Expanding out using the Taylor series for and comparing coefficients with those in the exponent of equation (4.18) shows that *q*_{j}(*r*) is a polynomial in *r* of degree ≤ 2*j*. Our goals are to show that *q*_{j}(*r*) actually has degree *j*+1 rather than ≤ 2*j* and to obtain a truncation bound for equation (4.18).

To do so, we consider more carefully the result of substituting the various series into the saddle-point formula, pulling out the various factors that comprise Stirling’s asymptotic formula for *r*! and moving everything else to the exponent by taking the logarithm of the various factors.

First, the series for is of the form *r* times a power series in *r*/*k*^{2}, whose *j*th coefficient is a polynomial in 1/*k*^{2} of degree *j*.

Next, we consider the factor *u*^{r}. Pulling out, in equation (4.9), *k*^{3}/*r* and moving the rest into the exponent, we have that it is of the same form as described in the previous paragraph.

The series for contributes, in pulling out a and moving it to the exponent, a power series in *r*/*k*^{2} with coefficients in *k* as above.

Finally, by the paragraph surrounding equation (4.15), *μ*_{2m}/*U*^{m} is equal to a sum whose terms are products of the *γ*_{n}(*u*)’s, and hence expressible as power series in *r*/*k*^{2} with coefficients in *k* as above, with each term multiplied by a factor of the form 1/*r*^{a} with *a* ≥ *m*−⌊2*m*/3⌋. Therefore, for *M* sufficiently large in equation (2.33) and also using, from equation (4.11), that *U*∼*r* when *r* = *O*(*k*^{2}), we get that the logarithm of the bracketed factor in equation (2.33) is, for any *A* > 0, of the form:
4.19
where *σ*_{j,a}(*k*) is a polynomial in 1/*k*^{2} of degree *j*.

Next, truncate all substituted power series in *x* = *r*/*k*^{2} after, say, *J* = *J*_{A,α} terms so that the remainder gets absorbed into the *O*(*r*^{−A}). This can be achieved because, by the paragraph following equation (4.7), all these power series can be shown to be analytic about *x* = 0 with their coefficients, expressed as polynomials in 1/*k*^{2}, uniformly bounded in *k*. Hence each has some radius of convergence, and the cost of truncating each power series at *J* terms is *O*((*η*′*r*/*k*^{2})^{J+1}) for some *η*′ > 0 depending on *A*. Finally, recall that *r* = *O*(*k*^{α}) with *α* < 2. Thus, taking *J* large enough (depending on *A* and *α*), we can ensure that the cost of truncating is *O*(*r*^{−A}).

Thus, collecting terms according to the power of 1/*k*^{2} and pulling out the terms corresponding to Stirling’s formula for *r*! as explained, we have obtained
4.20
where *Q*_{j}(*A*;*r*) is of the form
4.21
The largest power of *r* that appears is *j*+1 rather than *j* because the power series for and are multiplied by an extra *r*. The smallest power is greater than −*A*+*j* rather than −*A* because *r*^{m}/*k*^{2j} < *r*^{m−j} can be absorbed into the *O*(*r*^{−A}) if *m* ≤ −*A*+*j*.

Formula (4.20) essentially gives, as a function of *z* = 1/*k*^{2}, the Taylor expansion of , and comparing with equation (4.18), we would like to conclude that *q*_{j}(*r*) = *Q*_{j}(*A*;*r*), from which we would have that *q*_{j}(*r*) is a polynomial of degree *j*+1, rather than of degree 2*j*. However, the presence of extraneous terms and the remainder term in equation (4.20) necessitates a more careful comparison, and we will only conclude that the polynomial part of both coincide for given *j* and all *A* sufficiently large.

From equation (2.14), we have that *r*!*c*_{r}(*k*)/(*k*^{3r}*c*_{0}(*k*)) is asymptotically 1 when *r* = *O*(*k*^{α}), for *α* < 1. Therefore, its logarithm is analytic in *z* = 1/*k*^{2} in that region, and its Taylor series in equation (4.18) converges in at least that region.

For *r* = *O*(*k*^{α}), *α* < 1, we can truncate equation (4.18) after sufficiently many terms so as to also have remainder *O*(*r*^{−A}). By adjusting our value of *J* upwards, we can assume that both series in equations (4.18) and (4.20) truncate at *J*. We thus have, on comparing them: . We can use this to show, for any given *j* and all sufficiently large *A*, that *q*_{j}(*r*) = *Q*_{j}(*A*;*r*). We have established that *q*_{j}(*r*) is a polynomial of degree ≤ 2*j* in *r*, and that *Q*_{j}(*A*;*r*) has the form (4.21). Therefore, the above can be written as . For each choice of *A*, the l.h.s. above consists of finitely many terms. If we let *r*∼*k*^{α}, with *α* < 1 and *α* irrational so as to guarantee that all the exponents *mα*−2*j* are distinct, then each term above must individually satisfy the bound: *d*_{m,j}*k*^{mα−2j} = *O*(*k*^{−Aα}). Therefore, for fixed *j*, *m* ≥ 0 and all large enough *A*, we have that *d*_{m,j} = 0, i.e. *q*_{j}(*r*) coincides with the polynomial portion of *Q*_{j}(*A*;*r*). This shows that *q*_{j}(*r*) has degree *j*+1.

Next we obtain a bound on the rate of growth of *q*_{j}(*r*). We have seen *q*_{j}(*r*) coincides with the polynomial portion of *Q*_{j}(*A*;*r*) if we take sufficiently many terms *m* in equation (2.33) and have observed that *μ*_{2m}/*U*^{m} has a factor of 1/*r*^{a} with *a* = *m*−⌊2*m*/3⌋ ≥ *m*/3. Thus, given *j*, in order for the various substituted power series to capture the degree *j*+1 polynomial *q*_{j}(*r*), we need to take, in equation (2.33), (1/3)⌊*M*/2⌋ ≥ *j*+1, so *M* = 6(*j*+1) suffices. Thus, we let *M* = 6(*j*+1) and consider
4.22
As before, we substitute the power series in *r*/*k*^{2} for the various factors and terms of this expression and, letting *z* = 1/*k*^{2}, express it as
4.23
We will obtain a bound for *Q*_{i,j}(*r*) and then deduce a bound on *q*_{j}(*r*) by bounding the polynomial part of *Q*_{j,j}(*r*). To do so, we regard the above expansion as a function of complex variables *r* and *z* and apply Cauchy’s estimate twice: first to bound *Q*_{i,j}(*r*), and then to bound its polynomial part.
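For reference, the coefficient bound we mean by Cauchy's estimate is the standard one: if *F* is analytic on a closed disc and bounded there, its Taylor coefficients are controlled by the maximum modulus on the boundary circle.

```latex
% Cauchy's estimate, in the form applied twice in this argument
F(z)=\sum_{i\ge 0} Q_i z^{i} \ \text{analytic on } |z|\le R,
\quad \sup_{|z|=R}|F(z)|\le M
\;\Longrightarrow\;
|Q_i| \;=\; \Bigl|\frac{1}{2\pi \mathrm{i}}\oint_{|z|=R}
\frac{F(z)}{z^{i+1}}\,\mathrm{d}z\Bigr| \;\le\; \frac{M}{R^{i}} .
```

Below, the first application is in the variable *z* on the circle |*z*| = *a*/|*r*|, and the second in the variable *w* on the circle |*w*| = 2.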

From the power series expansions for , and , there exists *a* > 0 such that, for all |*z*| ≤ *a*/|*r*| (think of this as |*r*|/*k*^{2} ≤ *a*),
4.24
Furthermore, in the same kind of region, |*z*| < *a*/|*r*|, we have *U* = *Ω*(|*r*|) and *μ*_{m} = *O*(|*r*|^{m/3}), uniformly in *m*. Combining this with the elementary estimate, , gives
4.25
so long as |*r*| is bounded away from 0. Therefore, applying Cauchy’s estimate to equation (4.23), taking the circle |*z*| = *a*/|*r*| as the contour of integration, gives . In particular, .

To bound the polynomial part *q*_{j}(*r*) of *Q*_{j,j}(*r*), we take the integral
4.26
with *C* the circle |*w*| = 2. Applying the above estimate for *Q*_{j,j}(*r*) gives , for some *λ* > 0.

Presumably, there is a more direct, combinatorial derivation of equation (4.16). One can verify that expanding equation (4.16) using the Taylor series for does produce the first few terms of equation (1.23). In fact, by comparing these two expressions, one can calculate the polynomials *q*_{j}(*r*) more easily from the terms in equation (1.23).

## 5. A maximal coefficient

In this section, we obtain the estimate for the maximal *c*_{r}(*k*) given in theorem 1.2.

The *c*_{r}(*k*)’s are the coefficients of a polynomial all of whose roots are negative. This implies that the sequence {*c*_{r}(*k*)} is unimodal (see Wilf 2006): it increases monotonically until it reaches a maximum, and then it decreases monotonically. That is, there exists *s*∈*Z*^{+} such that *c*_{0}(*k*) ≤ *c*_{1}(*k*) ≤ ⋯ ≤ *c*_{s}(*k*), and *c*_{s}(*k*) ≥ *c*_{s+1}(*k*) ≥ ⋯ ≥ *c*_{k^{2}}(*k*). The idea is to combine unimodality with a saddle-point estimate for *c*_{r+1}(*k*)−*c*_{r}(*k*). Adapting the proof of lemma 2.3, we consider
5.1
where *C* := {*u*e^{iθ}:−*π* < *θ* < *π*}. Choose *u* := *u*_{r,k} to be the saddle point corresponding to *c*_{r}(*k*); that is, *u* is the positive number satisfying
5.2
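The unimodality claim is easy to check numerically for small *k*. The sketch below again assumes the standard product form ∏_{i,j=0}^{k−1} (*N*+*i*+*j*+1)/(*i*+*j*+1) for the moment polynomial (the helper names are ours), and verifies that the coefficient sequence rises to a maximum strictly inside (0, *k*^{2}) and then falls:

```python
from fractions import Fraction

def c_coeffs(k):
    """Coefficients [c_0(k), ..., c_{k^2}(k)] of the moment polynomial,
    written as sum_r c_r(k) * N^(k^2 - r); assumes the product form
    prod_{i,j=0}^{k-1} (N + i + j + 1) / (i + j + 1)."""
    poly, denom = [Fraction(1)], 1
    for i in range(k):
        for j in range(k):
            root = i + j + 1
            denom *= root
            # multiply poly by the linear factor (N + root)
            new = [Fraction(0)] * (len(poly) + 1)
            for d, a in enumerate(poly):
                new[d] += a * root
                new[d + 1] += a
            poly = new
    return [a / denom for a in reversed(poly)]

def is_unimodal(seq):
    """True iff seq weakly increases to a maximum, then weakly decreases."""
    falling = False
    for a, b in zip(seq, seq[1:]):
        if b < a:
            falling = True
        elif falling and b > a:
            return False
    return True

for k in (2, 3, 4, 5):
    c = c_coeffs(k)
    r_max = c.index(max(c))          # index of a maximal coefficient
    assert is_unimodal(c) and 0 < r_max < k * k
```

Since the roots include the value 1, one always has *c*_{k²−1}(*k*) > *c*_{k²}(*k*) = 1, so the maximum never sits at the right endpoint; the loop confirms it sits strictly inside for these small *k*.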

Let *A* be defined as in equation (1.8), so . An examination of the results in §2 reveals that a maximal coefficient *c*_{r}(*k*) occurs when the saddle point *u*_{r,k}≈1. By lemma 3.1, this happens when *k*^{2}−*r*≈*kA*, i.e. *r*≈*k*^{2}−*kA*. Guided by this, our plan is to show, for some *ρ* > 0 and all *k* sufficiently large, that there exist
5.3
such that Δ_{r1} is strictly negative, and Δ_{r2} is strictly positive. Combined with the unimodality of the sequence {*c*_{r}(*k*)}, this shows *c*_{r}(*k*) is maximal for some
and is not maximal outside of that interval. To this end, consider the behaviour of Δ_{r} for
5.4
Under assumption (5.4), it follows from lemma 3.1 that
5.5
Using the same procedure as in the proof of lemma 2.3 and choosing *δ* = 1/1000, *M* = 13, in expression (2.30), one obtains
5.6
where *ϵ* = *U*^{1/1000−1/2} and *μ*_{j} = *O*(*U*^{⌊j/3⌋}). Replacing *u*e^{iθ} with *u* + i*uθ*−*uθ*^{2}/2+*O*(*θ*^{3}) yields
5.7
The integral vanishes for odd powers of *θ*. Also, the contribution of some of the terms can be absorbed into the remainder. For example
5.8
where we made use of , which holds according to equation (5.5), the bound on *μ*_{j}, and the fact that odd powers of *θ* integrate to 0. After such reductions, the polynomial that survives is *u*−1−*uθ*^{2}/2+i*uμ*_{3}*θ*^{4}. So, when the domain of integration is extended to all of ℝ, we arrive at
5.9
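Extending the range of integration to all of ℝ costs only an exponentially small error, and the resulting integrals are evaluated by the standard Gaussian moment formulas. We record them here with a generic quadratic coefficient *U*/2 (our identification, matching the *U*^{−1/2} width of the saddle-point analysis; the exact exponent is whatever the expansion (5.7) produces):

```latex
\int_{-\infty}^{\infty}\theta^{2m}\,\mathrm{e}^{-U\theta^{2}/2}\,\mathrm{d}\theta
=\sqrt{\frac{2\pi}{U}}\,\frac{(2m-1)!!}{U^{m}},
\qquad
\int_{-\infty}^{\infty}\theta^{2m+1}\,\mathrm{e}^{-U\theta^{2}/2}\,\mathrm{d}\theta = 0 .
```

The second identity is what kills the odd powers of *θ* above, and the first converts the surviving even powers into the stated powers of *U*.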
Define
5.10
Then,
5.11
Thus, for *k* greater than some absolute constant and for *r* in the range (5.4),
5.12
where
5.13

Next we obtain a more precise estimate for *μ*_{3} = *γ*_{3}. Differentiating *f*(*u*e^{iθ}) three times with respect to *θ* and setting *θ* = 0 gives
5.14
Now, from (5.4) and equation (1.8), we get . Therefore, plugging into equation (5.13) and simplifying gives (we have dropped the *O*(1/*U*^{2}) which is subsumed by the ). Hence
5.15
The sign of Δ_{r} is, therefore, essentially determined by that of *u*−1. To obtain the maximal *c*_{r}(*k*), we develop below a stronger estimate for *u* than that of equation (5.5).

Recall *u* is the positive number satisfying
5.16
Also, one may easily verify *U* = *uh*′(*u*). Expanding in a Taylor series,
5.17
for |*x*| < 1/2, say. By direct calculation, , and .

Let *ρ* > 0 and choose so that *h*(1)−d is an integer. Consider the saddle point *u*^{−} corresponding to *c*_{k^{2}−h(1)+d}(*k*), i.e. *h*(*u*^{−}) = *h*(1)−d. By expansion (5.17), the point *u*^{−} satisfies . Since *k*^{2}−*h*(1)+d falls in the range (5.4), it follows from estimate (5.5) that . Consequently, . Furthermore, by equation (5.5), the corresponding *U* satisfies *U*∼*kA*. Thus equation (5.15) becomes
5.18
So, choosing *ρ* sufficiently large, we have for all sufficiently large *k* that Δ_{k^{2}−h(1)+d} is strictly negative. Similarly, if one chooses we obtain a strict inequality in the opposite direction, and theorem 1.2 follows.

## 6. Numerical verifications

Table 1 compares different approximations to *c*_{r}(7). Recall that *b*_{r}(7) := *c*_{r}(7)/*c*_{0}(7).

Column 3 illustrates that, for smaller *r*, the asymptotics of *b*_{r}(*k*) are simply described by , while column 7, which compares *c*_{r}(*k*) to , works best for *r* near *k*^{2}.

Column 4 shows the ratio of *b*_{r}(*k*) to the explicit formula given in equation (4.16), truncating the terms in the at those that are depicted, namely terms up to 1/*k*^{8}.

Column 5 shows the ratio of *c*_{r}(7) to the saddle-point formula (2.18), without the *O* term. It approximates well throughout, except near the two tails.

Finally, column 6 shows the performance of the uniform approximation, depicting *c*_{r}(7) divided by the r.h.s. of equation (1.5), without the *O*-term. The uniform asymptotic performs surprisingly well across all ranges, and the ratio seems to equal 1+*O*(1/*k*^{2}) uniformly in *r*, rather than as proved in lemma 3.2.

This is somewhat surprising because equation (1.5) was designed to interpolate between three formulas, *c*_{0}(*k*)*k*^{3r}/*r*!, equation (2.18) and (*kA*)^{r}/*r*!, across three asymptotic ranges. Yet, for every *r*, it seems to approximate better than any of these. Presumably the uniform approximation correctly captures lower-order terms, especially in the tails, where the improvement is substantial.

This is also illustrated by figure 1, which compares the performance of the uniform asymptotic with that of the saddle-point asymptotic for *k* = 100. We see the uniform asymptotic agreeing to within 1/*k*^{2} across all *r*.

## Acknowledgements

Both authors are supported by the National Science Foundation under awards DMS-0757627 (FRG grant) and DMS-0635607. In addition, the second author is supported by an NSERC Discovery Grant.

- Received August 12, 2010.
- Accepted September 17, 2010.

- This journal is © 2010 The Royal Society