## Abstract

In this article, we derive a generalization of the Riemann–Siegel asymptotic formula for the Riemann zeta function. By subtracting the singularities closest to the critical point, we obtain a significant reduction of the error term at the expense of a few evaluations of the error function. We illustrate the efficiency of this method by comparing it to the classical Riemann–Siegel formula.

## 1. Introduction

The Riemann–Siegel (RS) asymptotic formula is a very efficient method of computing the Riemann zeta function for large *t*, and it has been used extensively over the last 70 years to compute the non-trivial zeros of the zeta function (see Odlyzko (1994)). The RS formula consists of a main sum and asymptotic correction terms that reduce the error. The function being approximated is actually *Z*(*t*), which is defined as $Z(t)=\mathrm{e}^{\mathrm{i}\theta(t)}\zeta(\tfrac12+\mathrm{i}t)$, where

$$\theta(t)=\arg\Gamma\left(\tfrac14+\tfrac{\mathrm{i}t}{2}\right)-\frac{t}{2}\log\pi.\tag{1.1}$$

The function *Z*(*t*) is real for real *t* (this follows from the functional equation for *ζ*) and $|Z(t)|=|\zeta(\tfrac12+\mathrm{i}t)|$.

Below we present the RS formula

$$Z(t)=2\sum_{n=1}^{N}n^{-1/2}\cos(\theta(t)-t\log n)+(-1)^{N-1}\left(\frac{t}{2\pi}\right)^{-1/4}\sum_{j=0}^{M}C_{j}(\tau)\left(\frac{t}{2\pi}\right)^{-j/2}+R_{M}(t),$$

where $N=\lfloor\sqrt{t/2\pi}\rfloor$ and $\tau=\sqrt{t/2\pi}-N$, the first three correction terms are

$$C_{0}(\tau)=\Psi(\tau),\qquad C_{1}(\tau)=-\frac{\Psi^{(3)}(\tau)}{96\pi^{2}},\qquad C_{2}(\tau)=\frac{\Psi^{(2)}(\tau)}{64\pi^{2}}+\frac{\Psi^{(6)}(\tau)}{18432\pi^{4}},\tag{1.2}$$

and $\Psi(\tau)=\cos(2\pi(\tau^{2}-\tau-\tfrac{1}{16}))/\cos(2\pi\tau)$. To compute *θ*(*t*) for large *t*, one can use the asymptotic expansion derived from the Stirling series

$$\theta(t)=\frac{t}{2}\log\frac{t}{2\pi}-\frac{t}{2}-\frac{\pi}{8}+\frac{1}{48t}+\frac{7}{5760t^{3}}+\cdots.\tag{1.3}$$
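To make the ingredients concrete, the main sum, Ψ and the θ-expansion can be assembled into a short numerical sketch. This is a minimal RS[1,0]-style approximation (in the notation of §3) under the standard normalizations $N=\lfloor\sqrt{t/2\pi}\rfloor$ and $\tau=\sqrt{t/2\pi}-N$; it is not the generalized formula of §2, and the function names are our own:

```python
import math

def theta(t):
    """Riemann-Siegel theta via the Stirling-series asymptotics (1.3)."""
    return (t / 2.0) * math.log(t / (2.0 * math.pi)) - t / 2.0 - math.pi / 8.0 \
        + 1.0 / (48.0 * t) + 7.0 / (5760.0 * t ** 3)

def psi(tau):
    """Psi(tau) = cos(2*pi*(tau^2 - tau - 1/16)) / cos(2*pi*tau)."""
    return math.cos(2.0 * math.pi * (tau * tau - tau - 1.0 / 16.0)) \
        / math.cos(2.0 * math.pi * tau)

def z_rs(t):
    """Main sum of the RS formula plus the leading correction C_0 = Psi(tau)."""
    rho = math.sqrt(t / (2.0 * math.pi))
    big_n = int(rho)                 # N = floor(sqrt(t / 2 pi))
    tau = rho - big_n                # fractional part
    main = 2.0 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                     for n in range(1, big_n + 1))
    corr = (-1.0) ** (big_n - 1) * rho ** -0.5 * psi(tau)
    return main + corr
```

Even at moderate *t* this sketch reproduces the sign pattern of *Z*(*t*) around the first two non-trivial zeros (*t* ≈ 14.13 and *t* ≈ 21.02).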

In the last 15 years, several new asymptotic expansions for the Riemann zeta function have appeared. In Berry & Keating (1992), the authors use the Cauchy integral formula to derive an asymptotic expansion for *Z*(*t*) whose leading term *Z*_{0}(*t*, *K*) depends on a free real parameter *K*. Note that the leading term is similar to the main sum of the RS formula, with the sharp cut-off at *n*=*N* smoothed by the complementary error function; thus the approximation to *Z*(*t*) is a smooth function, unlike the RS formula. The authors show that increasing *K* gives better accuracy and that (for suitable *K*) the leading term *Z*_{0}(*t*, *K*) is always more accurate than the RS main sum. As *K* increases to infinity, *Z*_{0} also captures at least the first correction term *C*_{0}. The higher correction terms *Z*_{j}(*t*, *K*) provide a further significant increase in accuracy.

In Paris (1994), the author uses the Poisson summation formula and the uniform asymptotic expansion for the incomplete gamma function to derive an asymptotic expansion for *Z*(*t*). This expansion also involves the complementary error function, although in a different manner from that of Berry & Keating (1992). The free parameters in this approximation can be chosen to decrease the error significantly (though again at the expense of computing error functions; see Paris (1994) for details).

In this article, we propose another method of approximating *Z*(*t*) for large *t*. The result is a generalization of the RS formula with an additional free parameter *δ*. In essence, we replace the *δ* highest terms in the main sum by 2*δ* terms involving the incomplete gamma function, and the function *Ψ*(*τ*) in the correction terms is replaced by a similar function *Ψ*(*τ*, *δ*). This approximation is obtained by removing 2*δ* poles of the integrand around the stationary point and then performing an asymptotic expansion. We find that the correction terms have the same form as in the RS formula, and when *δ*=0 we recover the classical RS formula as a special case. Combined with the asymptotic expansion for the incomplete gamma function derived in Temme (1979; see also Dunster *et al*. 1998), we obtain a simple and efficient method of reducing the error in the RS formula (see §3 for the numerical results).

## 2. Derivation of the asymptotic formula

We start with the integral representation for the Riemann zeta function that Riemann used to prove the functional equation (see Titchmarsh 1986, p. 27), equation (2.1), where the relevant integral is given in (2.2). Note that the integral in equation (2.2) converges absolutely for all complex *s*. Throughout this article, we assume that *s* = 1/2 + i*t* and that *t* is large and positive. To find the stationary point of the integral in (2.2), we set the derivative of the phase equal to zero; thus *w*² = −2π*t* and the stationary points are *w* = ±i√(2π*t*). We cannot move the contour of integration through one of these points owing to the branch point at *w* = 0, and thus we choose the other as the stationary point *w*₀.

Note that the equation *w*² = −2π*t* has two real solutions when *t* is negative; however, this case does not yield new asymptotic formulae. We cannot move the contour of integration to one of these solutions owing to the branch point at 0, and if we move the contour of integration to the other, we would in fact obtain the same asymptotic expansion as by choosing *t* positive.

In order to obtain an asymptotic representation for the integral in equation (2.2), we need to move the contour of integration to pass through the stationary point, expand the integrand in a Taylor series around *w*₀ and integrate term by term. However, the integrand always has poles near the critical point, and this affects the accuracy of the approximation. Thus we do the following: we fix an integer *δ* ≥ 0 and subtract the 2*δ* singularities closest to the critical point. Thus we define a function *F*_{N,δ}(*w*) as (2.3).

Now we move the contour of integration through the stationary point and obtain the following decomposition (2.4).

First we simplify by computing the residues (2.5).

Next, we express the remaining term in terms of the incomplete gamma functions (2.6), where *Q*(*a*, *z*) is the normalized incomplete gamma function (see Gradshteyn & Ryzhik 2000). If *n* = 0, the corresponding term in the sum should be replaced by its limiting value. Equation (2.6) was derived using the integral (2.7), where the argument of i*b* lies in the appropriate range. To obtain (2.7), we separate (*x* + *b*i)^{−1} into real and imaginary parts and use Gradshteyn & Ryzhik (2000) to evaluate each integral.
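For readers unfamiliar with the normalized incomplete gamma function *Q*(*a*, *z*), here is a self-contained sketch of how it can be computed for real arguments by the classical series/continued-fraction pair. The paper ultimately needs complex arguments (handled via Temme's expansion in §3); the function name and tolerances below are our own:

```python
import math

def gamma_q(a, x, eps=1e-14, itmax=500):
    """Normalized upper incomplete gamma Q(a, x) = Gamma(a, x) / Gamma(a)
    for real a > 0, x >= 0, via the classical series / continued-fraction pair."""
    if x <= 0.0:
        return 1.0
    if x < a + 1.0:
        # lower series: P(a, x) = x^a e^{-x} sum_{n>=0} x^n / (a (a+1) ... (a+n)) / Gamma(a)
        term = 1.0 / a
        total = term
        denom = a
        for _ in range(itmax):
            denom += 1.0
            term *= x / denom
            total += term
            if abs(term) < abs(total) * eps:
                break
        return 1.0 - total * math.exp(-x + a * math.log(x) - math.lgamma(a))
    # upper continued fraction, evaluated by Lentz's method
    tiny = 1e-300
    b = x + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, itmax):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h * math.exp(-x + a * math.log(x) - math.lgamma(a))
```

The special cases *Q*(1, *x*) = e^{−*x*} and *Q*(1/2, *x*) = erfc(√*x*) provide convenient checks.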

Now we have to approximate the remaining term. First we perform a change of variables and obtain (2.8), where the function *g*(*t*, *x*) is defined as follows.

Expanding *g*(*t*, *x*) in a Taylor series, we find that (for every *x*) *g*(*t*, *x*) → 1 as *t* → ∞, and we obtain the following asymptotic expansion (2.9).

As a next step, we substitute (2.9) into (2.8) and integrate term by term. To achieve this, we need to compute a certain family of integrals. We follow the steps of the derivation of the RS formula in Titchmarsh (1986) and compute their exponential generating function (2.10), where the function *Ψ*(*τ*, *δ*) is defined as (2.11) and erf is the error function (see Gradshteyn & Ryzhik 2000). The first integral in (2.10) is one of the Mordell integrals (see Ramanujan 1915; Mordell 1933; Kuznetsov in press) and can be computed using the method described in Titchmarsh (1986). The second integral can be found in Gradshteyn & Ryzhik (2000).
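When *δ* = 0, the formula reduces to the classical RS case, where *Ψ*(*τ*, 0) is the familiar function *Ψ*(*τ*) = cos(2π(*τ*² − *τ* − 1/16))/cos(2π*τ*). A quick numerical check (assuming this standard normalization) confirms that its singularities at *τ* = 1/4 and *τ* = 3/4, where cos(2π*τ*) vanishes, are removable, with limiting value 1/2 by L'Hôpital's rule:

```python
import math

def psi(tau):
    """Classical Riemann-Siegel Psi(tau); the zeros of the denominator at
    tau = 1/4 and tau = 3/4 are cancelled by zeros of the numerator."""
    return math.cos(2.0 * math.pi * (tau * tau - tau - 1.0 / 16.0)) \
        / math.cos(2.0 * math.pi * tau)

# direct evaluation just off the removable singularities stays close to the limit 1/2
print(psi(0.25 + 1e-8))
print(psi(0.75 - 1e-8))
# a regular point for comparison: psi(1/2) = cos(3*pi/8)
print(psi(0.5))
```

In floating point the cancellation near *τ* = 1/4 and 3/4 is benign at moderate distances from the singularity, but an implementation evaluating *Ψ* extremely close to these points should switch to a local Taylor expansion.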

Taking the *n*th derivative of both sides of (2.10), we obtain (2.12).

Now we use (2.12), (2.9) and (2.8) to obtain the asymptotic formula (2.13), where the correction terms are as follows.

Finally, we combine (2.1), (2.4)–(2.6) and (2.13) to obtain the expression (2.14).

We have also used an identity that follows from the definition of *θ*(*t*) (see (1.1)).

The correction terms in equation (2.14) can be simplified if we use the asymptotic formula (1.3) to rewrite the relevant factor. Thus we introduce a new function, use (2.11) to simplify the expression for *Ψ*(*τ*, *δ*), and present all the results in the following theorem.

*Let t be a real positive number. Define* $N=\lfloor\sqrt{t/2\pi}\rfloor$ *and* $\tau=\sqrt{t/2\pi}-N$. *Fix an integer δ* ≥ 0. *Then* (2.15), *where the first three correction terms are given in* (2.16). *Here Q*(*a*, *z*) *is the normalized incomplete gamma function and* erf *is the error function. If δ* > *N, the n* = 0 *term in the second sum in equation* (2.15) *should be replaced by its limiting value.*

Note that the correction terms (2.16) have the same form as in the classical RS formula (see equation (1.2)). This can be explained as follows: the coefficients in front of the derivatives of *Ψ*(*τ*, *δ*) are obtained from the expansion (2.9) of *g*(*t*, *x*) and thus do not depend on *δ*. However, when *δ* = 0, equation (2.15) must reduce to the classical RS formula; thus these coefficients must be the same for all *δ*.

Note that using the asymptotic expansion (2.9) of *g*(*t*, *x*) is not the only way to approximate this term. One could prove that, for a suitable constant *a*, *g*(*t*, *x*) can be expanded in a convergent series in the Hermite polynomials *H*_{n}(*ax*). In this way, we would obtain a *convergent asymptotic series* for *Z*(*t*), where the correction terms would also involve linear combinations of the derivatives of *Ψ*(*τ*, *δ*). However, we found that such an expansion is considerably more complicated and is not as accurate as (2.15).

When *δ* → +∞, the first sum in (2.15) vanishes and the function *Ψ*(*τ*, *δ*) → 0 (this follows from equation (2.10) and the fact that *F*_{N,δ}(*w*) → 0); thus (2.15) reduces to the expansion that was used in Paris & Cang (1997) to derive an asymptotic expansion.

## 3. Numerical results

Before we use formula (2.15), we need to be able to compute efficiently the normalized incomplete gamma function *Q*(*a*, *z*) at the arguments appearing there. In applications, especially when *t* is large, we will be interested in the case when *δ* ≪ *N*. But then the ratio *z*/*a* is close to 1, and thus we need to approximate *Q*(*a*, *z*) in the transition region *z* ≈ *a*. Fortunately, we have an excellent approximation to *Q*(*a*, *z*) in this region, derived in Temme (1979); see also Dunster *et al*. (1998). Here we present an approximation to *Q*(*a*, *z*) accurate to the order sufficient for our purposes (for all the details see Temme (1979)).

*Define* $\lambda=z/a$, $\mu=\lambda-1$ *and* $\eta=\operatorname{sign}(\mu)\sqrt{2(\mu-\log(1+\mu))}$. *Then*

$$Q(a,z)\sim\frac{1}{2}\operatorname{erfc}\!\left(\eta\sqrt{\frac{a}{2}}\right)+\frac{\mathrm{e}^{-a\eta^{2}/2}}{\sqrt{2\pi a}}\sum_{k\ge 0}c_{k}\,a^{-k},\tag{3.1}$$

*where the first coefficient is given by*

$$c_{0}=\frac{1}{\mu}-\frac{1}{\eta}.\tag{3.2}$$

Note that the coefficients *c*_{k} have removable singularities at *μ* = 0; thus, when *t* is close to 2π*n*² and the parameters *μ* and *η* are close to 0, we have to use the second set of equations in (3.2), which do not involve subtracting large numbers.
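For real arguments, the leading-order term of Temme's expansion is easy to test against an exact evaluation. The sketch below (function names are our own; the paper applies the expansion to complex *a* and *z*) checks it for integer *a*, where *Q*(*a*, *z*) has the closed form $\mathrm{e}^{-z}\sum_{k<a}z^{k}/k!$:

```python
import math

def q_exact_int(n, x):
    """Exact Q(n, x) = e^{-x} * sum_{k=0}^{n-1} x^k / k!  for integer n >= 1."""
    term = math.exp(-x)
    total = term
    for k in range(1, n):
        term *= x / k
        total += term
    return total

def q_temme_leading(a, z):
    """Leading-order uniform expansion of Temme (1979) for Q(a, z), real a, z:
       Q(a, z) ~ (1/2) erfc(eta sqrt(a/2)) + exp(-a eta^2/2) / sqrt(2 pi a) * c_0,
       with mu = z/a - 1, eta = sign(mu) sqrt(2 (mu - log(1 + mu))) and
       c_0 = 1/mu - 1/eta (removable singularity at mu = 0, where c_0 -> -1/3)."""
    mu = z / a - 1.0
    eta = math.copysign(math.sqrt(2.0 * (mu - math.log1p(mu))), mu)
    c0 = 1.0 / mu - 1.0 / eta
    return 0.5 * math.erfc(eta * math.sqrt(a / 2.0)) \
        + math.exp(-0.5 * a * eta * eta) / math.sqrt(2.0 * math.pi * a) * c0
```

Note the "subtracting large numbers" issue mentioned above: as *μ* → 0, both 1/*μ* and 1/*η* blow up while their difference stays near −1/3, so near the transition point one should evaluate *c*₀ from its Taylor expansion instead.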

Below we present the numerical results. An approximation to *Z*(*t*) given by equation (2.15) with *m* correction terms and fixed *δ* will be denoted as RS[*m*,*δ*]. The classical RS formula will be denoted as RS[*m*,0].

As we will see, for different choices of the parameters *m* and *δ*, these approximations have differently shaped errors as functions of *t*: sometimes the error is smaller in the middle of the range of *τ*, while for other choices of *m* and *δ* the error is smaller at the endpoints *τ* ∼ 0 and *τ* ∼ 1. Thus it is hard to compare the efficiency of the approximation RS[*m*, *δ*] at a single point, and instead we present the error graphically over a range of *t*.

First, we compare RS[3,0] with RS[1,3] (figure 1). We find that for 10 < *t* < 70 the approximation RS[1,3] is actually better, and for 200 < *t* < 600 both approximations have comparable accuracy. For even larger *t*, RS[3,0] becomes the better approximation, since its error term is of higher order in *t*^{−1/2} than that of RS[1,3]. But we find that even at large *t*, we could increase *δ* and make RS[1,*δ*] as accurate as RS[3,0].

Second, we examine the effects of increasing *δ* while keeping *m* = 1 fixed (figure 2). We see that increasing *δ* by 1 decreases the error roughly by a factor of 10. Also note that the shape of the error (the shape of the graph of the next correction term) becomes more linear as *δ* increases. This is easy to explain if we remember that 2*δ* is the number of subtracted singularities; thus, for larger *δ*, the function *Ψ*(*τ*, *δ*) and its derivatives become less oscillatory. This means that instead of computing *Ψ*(*τ*, *δ*) and its derivatives, we can efficiently approximate the correction term by just a few terms of its expansion in a Taylor series (or Chebyshev polynomials).

Finally, we examine the effects of increasing the number of correction terms while keeping *δ* fixed (figure 3). In figure 3*a*,*b* (3*c*,*d*) we plot the absolute error of RS[*m*,0] (RS[*m*,3]) as *m* increases from 0 to 3. Note the decrease of five orders of magnitude as *m* goes from 0 to 1 in figure 3*c* (*δ* = 3), while in figure 3*a* (*δ* = 0) we have a decrease of three orders of magnitude. After this fast initial decrease, we gain roughly one order of magnitude in figure 3*c*,*d* and two orders in figure 3*a*,*b*. This seems to be a general trend: the error decreases faster with increasing *m* for small *δ* than for large *δ*. Note again that in figure 3*c*,*d* (*δ* = 3) the correction terms are smoother than in figure 3*a*,*b* (*δ* = 0).

## 4. Conclusion

In this article, we present a generalization of the Riemann–Siegel asymptotic formula, which allows us to obtain a significant increase in accuracy without much extra computational effort. The approximation has an extra free integer parameter *δ* ≥ 0, which corresponds to half the number of singularities removed around the critical point. In general, increasing *δ* decreases the error.

In this article, we compared this new approximation scheme to the classical RS formula and found that the new formula consistently gives better accuracy, even for small *δ*. We did not compare our approximation to other results, such as the approximations of Berry & Keating (1992) or Paris (1994): while it seems that our approximation can achieve the same accuracy, the question is at what computational cost, and to answer it one would have to optimize each of the approximation schemes. Note that in our scheme there are several things that can be done to reduce the computational cost to just 2*δ* evaluations of the error function.

In equation (2.15), we have two incomplete gamma functions, but when *t* is large, we can approximate them with just a few terms of their Taylor series. The almost linear form of the graphs of the correction terms (figures 2 and 3) suggests an approximation by polynomials: one could use just a few terms of either a Taylor series or a Chebyshev series to approximate *S*_{j}.

An advantage of the Berry & Keating (1992) asymptotic formula is that the approximation terms are smooth functions of *t*; in the approximation of Paris (1994), one can also choose the free parameters so as to have a smooth approximation at the transition points. There is a simple way to achieve this in our approximation as well: instead of completely removing the 2*δ* singularities around the critical point, we can assign to every singularity a weight depending on its distance from the critical point. For example, theorem 2.2 could be obtained using the following weights: if the singularity is within distance 2π*δ* of the critical point *t*₀, it is assigned weight 1 and is removed completely; otherwise the weight is 0 (see equation (2.3)). Note that this scheme is equivalent to using a sharp cut-off function with jumps at ±2π*δ* (figure 4), and these jumps create discontinuities in the RS formula. However, one could also use a smooth cut-off function, where the weight of each singularity is 1 if it is close to the critical point *t*₀ and decreases smoothly to 0 as the distance to *t*₀ increases (figure 4). The result of using a smooth cut-off function is that the error at the transition points *t* ∼ 2π*n*² is certainly smaller, but at the same time the computational complexity is increased: note that in figure 4, the sharp cut-off contains just two transition points, *n* = 23 and 24, while the smooth cut-off also contains *n* = 22, and we would need more evaluations of the error function. Another undesirable feature of the smooth cut-off is that the correction terms become dependent on *t* and not just on *τ*, as with the sharp cut-off.
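To make the sharp versus smooth cut-off concrete, here is a toy weighting scheme. It is entirely illustrative: the erfc profile and the width parameter are our own choices, not the paper's construction:

```python
import math

def sharp_weight(n, n0, delta):
    """Weight 1 for a singularity within distance delta of the critical index n0,
    weight 0 otherwise (the sharp cut-off with jumps)."""
    return 1.0 if abs(n - n0) <= delta else 0.0

def smooth_weight(n, n0, delta, width=1.0):
    """erfc-based weight decaying smoothly from 1 to 0 around distance delta;
    'width' controls how gradual the transition is."""
    return 0.5 * math.erfc((abs(n - n0) - delta) / width)

# example in the spirit of figure 4: critical index near 23.5, cut-off radius 1
for n in range(21, 27):
    print(n, sharp_weight(n, 23.5, 1.0), smooth_weight(n, 23.5, 1.0))
```

The smooth weights are strictly decreasing in distance, so more indices carry non-negligible weight than under the sharp cut-off, which is exactly the extra cost discussed above.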

## Acknowledgments

The author would like to thank anonymous referees for many helpful comments.

## Footnotes

- Received March 12, 2007.
- Accepted June 4, 2007.

- © 2007 The Royal Society