## Abstract

Let (*γ*_{n})_{n ≥ 0} be the sequence of Stieltjes constants appearing in the Laurent expansion of the Riemann zeta function. We obtain explicit upper bounds for |*γ*_{n}| and determine their order of magnitude as *n* tends to infinity. To do this, we use a probabilistic approach based on a differential calculus for the gamma process.

## 1. Introduction and main result

Let
$$\zeta(s)=\sum_{k=1}^{\infty}\frac{1}{k^{s}},\qquad \operatorname{Re} s>1,\qquad(1.1)$$
be the Riemann zeta function. The Laurent expansion of *ζ*(*s*) about its simple pole at *s* = 1 can be written as
$$\zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\gamma_{n}\,(s-1)^{n},\qquad(1.2)$$
where the constants (*γ*_{n})_{n ≥ 0} are known as Stieltjes or generalized Euler constants (in fact, *γ*_{0} is the Euler–Mascheroni constant). Stieltjes (1905) pointed out that each *γ*_{n} can be obtained as
$$\gamma_{n}=\lim_{m\to\infty}\left(\sum_{k=1}^{m}\frac{(\log k)^{n}}{k}-\frac{(\log m)^{n+1}}{n+1}\right).\qquad(1.3)$$
A proof of equation (1.3) can be found in Berndt (1972). Equations (1.2) and (1.3) have been rediscovered several times during the last century (see Berndt & Evans 1983, p. 81; Ivić 1985, p. 49, for further details). Different integral or series representations for *γ*_{n} have been given by many authors (e.g. Liang & Todd 1972; Israilov 1979, 1981; Zhang & Williams 1994; Coppo 1999; Coffey 2006*a*,*b*, 2008 and references therein). Recently, Coffey (2007, proposition 6 and corollary 13) has obtained rapidly convergent expressions of *γ*_{n} in terms of Bernoulli numbers. Also, series representations for *γ*_{0} and *γ*_{1} with an exponential rate of decay can be found in Coffey (2009, proposition 9). On the other hand, numerical computations for *γ*_{n}, *n* = 0,1,…,3200 are given in Kreminski (2003) (see also Choudhury 1995). Such numerical computations reveal, among other things, that the sign behaviour of the sequence (*γ*_{n}) is far from being trivial (see Mitrović 1962, Matsuoka 1985 and Coffey 2006*b* for some theoretical results in this direction). Fortunately, the sign behaviour of the sequence (*γ*_{n}) is now well described by the asymptotic result of Knessl & Coffey (in press; in this regard, see also the comments after theorem 1.1 below).
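As a quick numerical illustration (not part of the original analysis), the limit in equation (1.3) can be evaluated by brute force; the function name and the truncation point *m* below are arbitrary choices, and the truncation error decays only like (log *m*)^{n}/(2*m*), so only a few digits are recovered.

```python
import math

def stieltjes_approx(n, m=500000):
    """Approximate gamma_n via Stieltjes' limit formula (1.3):
    gamma_n = lim_{m->oo} ( sum_{k=1}^m (log k)^n / k - (log m)^(n+1)/(n+1) ).
    The truncation error behaves roughly like (log m)^n / (2m)."""
    s = sum(math.log(k) ** n / k for k in range(1, m + 1))
    return s - math.log(m) ** (n + 1) / (n + 1)

# gamma_0 is the Euler-Mascheroni constant 0.5772156649...,
# gamma_1 = -0.0728158454...
print(stieltjes_approx(0))
print(stieltjes_approx(1))
```

The slow convergence visible here is precisely why the rapidly convergent representations cited above (e.g. Coffey 2007) are of interest.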

The aim of this paper is to obtain explicit upper estimates for |*γ*_{n}| useful for large values of *n*. Early work in this sense goes back to Briggs (1955). Berndt (1972) gave the bound
Zhang & Williams (1994) obtained the following improvement:
An estimate in terms of Bernoulli numbers is due to Israilov (1979, 1981). This author showed, for *n* ≥ 2*k*, *k* = 1,2,…, that
where
*B*_{2k} is the 2*k*th Bernoulli number, and

As far as we know, the best explicit upper bound to date has been provided by Matsuoka (1985). This author proved that
$$|\gamma_{n}|\le 10^{-4}\,e^{n\log\log n},\qquad n\ge 10,\qquad(1.4)$$
and that, for any arbitrary *ε* > 0, the inequality
1.5
holds for infinitely many *n*.

Based on his numerical results, Kreminski (2003) conjectures that inequality (1.4) can be considerably strengthened, despite the lower bound in inequality (1.5). The following theorem gives a positive answer to Kreminski’s conjecture.

### Theorem 1.1

*For any n = 4, 5, …, we have*
1.6
*where* *and ⌊x⌋ stands for the integer part of x.*

*As a consequence, the order of magnitude, as n → ∞, of the upper bound in equation (1.6) is*
1.7

We write *f*(*n*) ∼ *g*(*n*) whenever *f*(*n*)/*g*(*n*) → 1 as *n* → ∞. Our attention has been drawn to a recent paper by Knessl & Coffey (in press), in which the authors obtain the leading asymptotic form of the constants *γ*_{n} as *n* → ∞. More precisely, they show that
$$\gamma_{n}\approx \frac{B}{\sqrt{n}}\,e^{nA}\cos(an+b),\qquad(1.8)$$
where the functions *A*, *B*, *a*, *b* depend weakly on *n*. Equation (1.8) captures both the basic growth rate and the oscillations. As follows from theorem 1.1 and the remarks following its proof in Knessl & Coffey (in press), the order of magnitude of |*γ*_{n}|, as *n* → ∞, is determined by the factor *e*^{nA}. After some computations, it turns out that
1.9

It therefore follows from equations (1.7) and (1.9) that our upper bound in theorem 1.1 overestimates the right size of |*γ*_{n}| as *n* → ∞. However, being explicit, such an upper bound may be useful to determine a zero-free region of the zeta function near the real axis in the critical strip 0 < Re *s* < 1.

Let us say some words about the proof of theorem 1.1. The key point in this paper is a probabilistic representation of *γ*_{n} in terms of the mathematical expectation of the (*n* + 1)-derivative of the function *f* defined in equation (2.6) acting on a sum of independent identically distributed random variables (see proposition 2.1 below). This is a consequence of the differential calculus for linear operators represented by stochastic processes, particularly gamma processes, developed in Adell & Lekuona (2000). We mention that such a differential calculus has already found applications in dealing with estimates of the remainder of a certain Ramanujan series connected with the median of the gamma distribution (cf. Adell & Jodrá 2008), as well as estimates of the entropy of the Poisson law in an information theory setting (cf. Adell *et al.* 2010), among other applications.

As seen in the following section, the probabilistic representation of functions involving the zeta function, such as *sζ*(*s* + 1), works only for real *s*. This is because *s* is interpreted as the real index of the stochastic process appearing in the probabilistic representation. For this reason, we take a real-variable approach, without paying attention to the full domain of validity of the various formulas (see, for example, the comments after proposition 2.1).

## 2. Differential calculus for the gamma process

Let (*X*_{t})_{t ≥ 0} be a gamma process, i.e. a stochastic process starting at the origin, with independent stationary increments, right-continuous non-decreasing paths, and such that for each *t* > 0 the random variable *X*_{t} has the gamma density
$$\rho_{t}(\theta)=\frac{\theta^{t-1}e^{-\theta}}{\Gamma(t)},\qquad \theta>0.\qquad(2.1)$$
Observe that the Laplace transform of *X*_{t} is given by
$$E\,e^{-sX_{t}}=\frac{1}{(1+s)^{t}},\qquad s\ge 0.\qquad(2.2)$$
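As a sanity check (a sketch with illustrative function names and quadrature parameters), the Laplace transform of the gamma density *ρ*_{t} can be compared with the closed form (1 + *s*)^{−t} by direct numerical integration:

```python
import math

def laplace_gamma(t, s, upper=60.0, steps=100000):
    """Numerically integrate E e^{-s X_t} = int_0^oo e^{-s b} rho_t(b) db,
    where rho_t(b) = b^{t-1} e^{-b} / Gamma(t) is the gamma density,
    using the midpoint rule on [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        b = (i + 0.5) * h
        total += b ** (t - 1) * math.exp(-b * (1.0 + s))
    return total * h / math.gamma(t)

# Compare with the closed form (1 + s)^(-t) for a few (t, s) pairs.
for t, s in [(1.0, 1.0), (2.0, 0.3), (3.7, 2.0)]:
    print(laplace_gamma(t, s), (1.0 + s) ** (-t))
```

The midpoint rule is used only to keep the sketch dependency-free; indices *t* < 1 would need a quadrature that handles the integrable singularity at the origin.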

To state the differential calculus for the gamma process, we shall need the following notations. Let *U* and *T* be two independent random variables, such that *U* is uniformly distributed on [0,1] and *T* has the exponential density *ρ*_{1}(*θ*). By (*U*_{k})_{k ≥ 1} and (*T*_{k})_{k ≥ 1}, we denote two sequences of independent copies of *U* and *T*, respectively. We assume that (*U*_{k})_{k ≥ 1}, (*T*_{k})_{k ≥ 1} and (*X*_{t})_{t ≥ 0} are mutually independent and denote by
$$S_{0}=0,\qquad S_{n}=U_{1}T_{1}+\cdots+U_{n}T_{n},\qquad n=1,2,\ldots.\qquad(2.3)$$
By Fubini’s theorem and equation (2.2), we have
$$E\,e^{-sUT}=\int_{0}^{1}\frac{du}{1+su}=\frac{\log(1+s)}{s},\qquad s>0.\qquad(2.4)$$
Thus, the Laplace transform of *X*_{t} can be rewritten as
$$E\,e^{-sX_{t}}=\exp\!\left(-ts\,E\,e^{-sUT}\right).$$
This means that the gamma process is a centred subordinator with characteristic random variable *T* (see Feller 1966, p. 450, or Adell & Lekuona 2000, §4.3.1). It therefore follows from corollary 1 and proposition 4 in Adell & Lekuona (2000) that we have the Taylor expansion
$$E\,\phi(X_{t})=\sum_{k=0}^{\infty}\frac{(t-r)^{k}}{k!}\,E\,\phi^{(k)}(X_{r}+S_{k}),\qquad 0\le r\le t,\qquad(2.5)$$
for any infinitely differentiable function for which the preceding expectations exist.

The basic fact showing the interest of gamma processes in dealing with Stieltjes constants is contained in the following result, involving the function
$$f(x)=\frac{x}{1-e^{-x}}\quad(x>0),\qquad f(0)=1.\qquad(2.6)$$

### Proposition 2.1

*For any* *t* ≥ 0, *we have*
$$E\,f(X_{t})=t\,\zeta(t+1).\qquad(2.7)$$
*As a consequence,*
$$\gamma_{n}=\frac{(-1)^{n}}{n+1}\,E\,f^{(n+1)}(S_{n+1}),\qquad n=0,1,\ldots.\qquad(2.8)$$

### Proof.

Equation (2.7) is true for *t* = 0, since *Ef*(*X*_{0}) = *f*(0) = 1. Assume that *t* > 0. Recalling equations (1.1), (2.1) and (2.2), we have from Fubini’s theorem
$$E\,f(X_{t})=\frac{1}{\Gamma(t)}\int_{0}^{\infty}\frac{\theta^{t}e^{-\theta}}{1-e^{-\theta}}\,d\theta=\frac{\Gamma(t+1)}{\Gamma(t)}\sum_{k=1}^{\infty}\frac{1}{k^{t+1}}=t\,\zeta(t+1).$$

On the other hand, the Taylor expansion in equation (2.5) for *ϕ* = *f* and *r* = 0 has the form
$$E\,f(X_{t})=\sum_{k=0}^{\infty}\frac{t^{k}}{k!}\,E\,f^{(k)}(S_{k}).$$
Comparing this expansion with that in equation (1.2) and taking into account equation (2.7), we get equation (2.8). ■
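For *n* = 0, representation (2.8) reduces, after integrating out *U* and *T*, to the classical formula *γ*_{0} = ∫_{0}^{∞} *e*^{−θ}(1/(1 − *e*^{−θ}) − 1/θ) dθ. The following sketch (illustrative function names, midpoint rule) recovers the Euler–Mascheroni constant from this integral:

```python
import math

def integrand(b):
    # e^{-b} * (1/(1 - e^{-b}) - 1/b); the limit as b -> 0 is 1/2.
    if b < 1e-8:
        return 0.5
    return math.exp(-b) * (1.0 / (1.0 - math.exp(-b)) - 1.0 / b)

def euler_gamma(upper=50.0, steps=200000):
    """Midpoint-rule quadrature of the n = 0 case of (2.8), i.e.
    gamma_0 = E f'(U T) written as an integral over the exponential law."""
    h = upper / steps
    return h * sum(integrand((i + 0.5) * h) for i in range(steps))

print(euler_gamma())  # ~ 0.5772156649 (gamma_0)
```

The integrand is smooth on [0, ∞) (it tends to 1/2 at the origin), so a naive quadrature already gives many correct digits.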

Here, we have given a proof of equation (2.7) for the sake of completeness, although the gamma representation of *sζ*(*s* + 1) for Re *s* ≥ 0 is well known (e.g. Coffey 2006*a*, equation (2.2)). In the following technical result, we shall use the so-called Poisson–Gamma relation (cf. Johnson *et al.* 1993, p. 164), that is,
$$\sum_{j=0}^{k}e^{-x}\frac{x^{j}}{j!}=\int_{x}^{\infty}\rho_{k+1}(\theta)\,d\theta=P(X_{k+1}>x),\qquad x\ge 0,\ k=0,1,\ldots.\qquad(2.9)$$
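The Poisson–Gamma relation is easy to verify numerically; the sketch below (illustrative names and quadrature parameters) compares the Poisson partial sum with the gamma tail probability for a few values of *k* and *x*:

```python
import math

def poisson_cdf(k, x):
    # Left side of (2.9): sum_{j=0}^{k} e^{-x} x^j / j!
    return sum(math.exp(-x) * x ** j / math.factorial(j) for j in range(k + 1))

def gamma_tail(k, x, upper=80.0, steps=100000):
    # Right side of (2.9): P(X_{k+1} > x) = int_x^oo b^k e^{-b} / k! db,
    # approximated by the midpoint rule on [x, upper].
    h = (upper - x) / steps
    total = 0.0
    for i in range(steps):
        b = x + (i + 0.5) * h
        total += b ** k * math.exp(-b)
    return total * h / math.factorial(k)

for k, x in [(0, 1.3), (3, 2.5), (6, 10.0)]:
    print(poisson_cdf(k, x), gamma_tail(k, x))
```

The two columns agree to quadrature accuracy, which is all the proof of lemma 2.2 below requires.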

From now on, it is understood that .

### Lemma 2.2

*Let* *U* *and* *T* *be as above. For any* *k* = 0,1,… *and* *a* > 0, *we have*

### Proof.

Since *T* has the exponential density *ρ*_{1}(*θ*), it is clear that
2.10
Fix *T* = *t* > 0. Applying equation (2.9), we see that
2.11
Replacing *t* by *T* in equation (2.11) and taking expectations, from equations (2.4) and (2.10), we have
This completes the proof. ■

An easy consequence of lemma 2.2 referring to the random variables *S*_{n} defined in equation (2.3) is the following.

### Lemma 2.3

*For any* *n* = 0,1,2,… *and* *a* > 0, *we have*
$$E\,e^{-aS_{n}}=\left(\frac{\log(1+a)}{a}\right)^{n},\qquad(2.12)$$
*and*
$$E\,S_{n}e^{-aS_{n}}=\frac{n}{a^{2}}\left(\frac{\log(1+a)}{a}\right)^{n-1}\left(\log(1+a)-\frac{a}{1+a}\right).\qquad(2.13)$$
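Assuming *S*_{n} = *U*_{1}*T*_{1} + ⋯ + *U*_{n}*T*_{n} as in equation (2.3), independence gives *E* e^{−aS_n} = (*E* e^{−aUT})^{n}, and equation (2.4) suggests the closed form (log(1 + *a*)/*a*)^{n}. The following sketch (illustrative names and parameters) checks this numerically:

```python
import math

def laplace_UT(a, upper=60.0, steps=200000):
    """E exp(-a U T) = int_0^oo e^{-t} (1 - e^{-a t}) / (a t) dt,
    after integrating out the uniform variable U; midpoint rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-t) * (1.0 - math.exp(-a * t)) / (a * t)
    return total * h

a, n = 1.7, 5
lhs = laplace_UT(a) ** n               # E exp(-a S_n), by independence
rhs = (math.log(1.0 + a) / a) ** n     # closed form suggested by (2.4)
print(lhs, rhs)
```

The agreement confirms the product structure that makes the moments of *S*_{n} explicitly computable.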

## 3. Proof of theorem 1.1

Recall Leibniz’s rule for the (*n* + 1)-derivative of the product of two functions *u* and *v*, i.e.
$$(uv)^{(n+1)}=\sum_{k=0}^{n+1}\binom{n+1}{k}\,u^{(k)}\,v^{(n+1-k)}.\qquad(3.1)$$
For each *m* = 1,2,…, to be chosen later on, we decompose the function *f* defined in equation (2.6) into *f* = *f*_{m} + *φ*_{m}, where
$$f_{m}(x)=\sum_{k=0}^{m-1}x\,e^{-kx}\qquad\text{and}\qquad \varphi_{m}(x)=\frac{x\,e^{-mx}}{1-e^{-x}}.\qquad(3.2)$$

In view of equation (2.8) and the fact that the derivatives of *f*_{m} are easy to compute, we give the following.

### Lemma 3.1

*For any* *m*, *n* = 1,2,…, *with* *e*^{n} ≥ *m* + 1, *we have*

### Proof.

Let *x* ≥ 0. For any *k* = 0,1,…,*m* − 1, consider the function *h*_{k}(*x*) = *x*e^{−kx}. By equation (3.1), we have
$$h_{k}^{(n+1)}(x)=(-k)^{n}\,e^{-kx}\,\bigl((n+1)-kx\bigr).\qquad(3.3)$$
We therefore have from lemma 2.3
3.4
Thus, the equality in lemma 3.1 follows from the fact that *f*_{m}(*x*) = *h*_{0}(*x*) + ⋯ + *h*_{m − 1}(*x*). On the other hand, the function
is increasing for 1 ≤ *x* ≤ *e*^{n}. Hence,
This completes the proof. ■

Since the derivatives of *f* are rather involved, it does not seem possible to obtain a neat result like lemma 3.1 with *f*_{m} replaced by *φ*_{m}. Instead, we will give an upper bound for the (*n* + 1)-derivative of *φ*_{m}, as an intermediate step. To this end, define the function
$$g(x)=\frac{1-e^{-x}}{x}=\int_{0}^{1}e^{-x\theta}\,d\theta\quad(x>0),\qquad g(0)=1.$$
Observe that
3.5

### Lemma 3.2

*Let* *x* ≥ 0. *For any* *k* = 0,1,…, *we have*
3.6
*As a consequence, we have for any* *m* = 1,2,… *and* *n* = 0,1,…
3.7

### Proof.

Let *x* ≥ 0. Since *fg* = 1, we have from equation (3.1)
$$f^{(k)}(x)=-\frac{1}{g(x)}\sum_{j=0}^{k-1}\binom{k}{j}\,f^{(j)}(x)\,g^{(k-j)}(x),\qquad k=1,2,\ldots.\qquad(3.8)$$
To prove inequality (3.6), we shall use induction. If *k* = 0, inequality (3.6) is obvious. Assume that inequality (3.6) holds for *k* = 0,1,…,*n*. From equations (3.5) and (3.8), we see that
Thus, the induction will be complete as soon as we show that
3.9
Inequality (3.9) is true for *x* = 1. Assume that *x* > 1. After some simple computations, inequality (3.9) is equivalent to
But this last inequality is true since
Similarly, for 0 ≤ *x* < 1, inequality (3.9) is equivalent to *r*(*x*) ≤ 0. This is also true since *r*(*x*) is convex on [0,1] with *r*(0) = *r*(1) = 0. This shows inequality (3.9) and therefore completes the induction to prove inequality (3.6).

Again by equation (3.1), we have
Hence, we have from equation (3.6)
The proof is complete. ■

### Proof of theorem 1.1.

Let *x* ≥ 0, *m* = 1,2,…, and *n* = 4,5,…. Observe that
$$e^{-x}\le\frac{1-e^{-x}}{x},\qquad x>0.\qquad(3.10)$$
Actually, equation (3.10) is equivalent to the inequality 1 − e^{−x} − *x*e^{−x} ≥ 0, which is true thanks to the Poisson–Gamma relation in equation (2.9). Thus, applying equation (3.10) and lemma 2.3, we get
We therefore have from equation (3.7)
3.11
Inequality (1.6) follows from equations (2.8) and (3.2), by choosing in lemma 3.1 and equation (3.11). With this choice, observe that
as well as
This shows equation (1.7) and completes the proof of theorem 1.1. ■

## Acknowledgements

This work has been partially supported by research grants MTM2008-06281-C02-01/MTM, DGA E–64, UJA2009/12/07 (Universidad de Jaén and Caja Rural de Jaén), and by FEDER funds. The author would like to thank the referees for their remarks and suggestions, which greatly improved the final outcome.

- Received July 27, 2010.
- Accepted September 10, 2010.

- © 2010 The Royal Society