## Abstract

We consider functions of the form $f(x)=\sum_{j=1}^{n} a_j (x+\lambda_j)^p$, where $0 \le \lambda_1 < \lambda_2 < \cdots < \lambda_n$. A version of Descartes's rule of signs applies. Further, if $p>0$ and $\sum_{j=1}^{n} a_j = 0$, then the number of zeros of *f* is bounded by the number of sign changes of the sequence of partial sums $A_k = a_1 + \cdots + a_k$. The estimate is reduced by 1 for each relation of the form $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$.

## 1. Introduction

We consider functions of the form
$$f(x)=\sum_{j=1}^{n} a_j (x+\lambda_j)^p,\qquad(1.1)$$
with *p* real, the $a_j$ non-zero and
$$0 \le \lambda_1 < \lambda_2 < \cdots < \lambda_n.\qquad(1.2)$$
We call these functions *sums of fractional powers*, though in fact we do not exclude integer values of *p*. The number of terms, *n*, is the *length* of *f*. Our objective is to give bounds for the number of zeros of such functions, counted with their orders. Denote this number by $Z(f)$. Of course, we have to exclude the case when *f* is identically zero; this can only happen if *p* is one of the integers $0, 1, \ldots, n-2$.

A special case of particular interest is when each $a_j$ is either 1 or −1, with equally many of each occurring. It is then natural to use notation like
$$f(x)=\sum_{j=1}^{k} (x+\mu_j)^p - \sum_{j=1}^{k} (x+\nu_j)^p,$$
where $n = 2k$ and the $\mu_j$, $\nu_j$ together are the $\lambda_j$. We will call this type *bipartite*.

One context in which this problem arises is in relation to the method of Bombieri & Iwaniec for exponential sums, as developed in Watt (1989) and Huxley (1996), ch. 11. For rational *p*, Watt (lemma 4.2) gives a bound for $Z(f)$ by moving to an expression involving only integer powers. The case really wanted is *f* bipartite of length 8, for which Watt's bound is 64 (though Huxley uses the rather more generous estimate 768).

A more fruitful approach was initiated by Laguerre as long ago as 1883, taking Descartes's rule of signs as the starting point. For ordinary real polynomials, Descartes's rule states that the number of *positive* zeros is no greater than the number of sign changes of the sequence of coefficients. By a method based on Rolle's theorem, Laguerre extended the rule to generalized polynomials $\sum_{j=1}^{n} a_j x^{\alpha_j}$ (with real exponents $\alpha_j$, considered for $x>0$), and to Dirichlet polynomials $\sum_{j=1}^{n} a_j e^{-\lambda_j x}$. We show that a similar result applies to functions of type (1.1), without any requirement that *p* be rational. In particular, it follows that $Z(f) \le n-1$.

For Dirichlet polynomials, Laguerre formulated a powerful variant in which the sequence $(a_j)$ is replaced by the sequence of partial sums $A_k = a_1 + \cdots + a_k$. He also obtained a result of this kind for functions of our type, but only for negative *p* (Laguerre 1898, p. 41). We show that this result extends to the case *p*>0, subject to the condition $\sum_{j=1}^{n} a_j = 0$; a completely different method seems to be needed. For the bipartite type of length $2k$, this implies that $Z(f) \le k-1$.

We then establish an extension of this theorem that has no counterpart for Dirichlet polynomials: the bound for $Z(f)$ is reduced by one for each relation of the form $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$ satisfied by the coefficients. Exactly this assumption (for *r*=1, 2) is in force for the bipartite functions of length 8 considered by Huxley & Watt, so in fact these functions have at most *one* zero, a distinct improvement on the estimate 64 (or 768)! This at least leads to some simplification, and better estimates for constants, in the ensuing results in their study.

Laguerre's work in this area does not seem to have been accorded much attention in recent literature. It is reproduced in exercise form, and partly with new methods, in Pólya & Szegö (1964) (Part V, ch. 1). The forthcoming article Jameson (in press) is an expository account with full proofs.

## 2. Preliminaries

We consider the function *f* defined by (1.1) on $(0, \infty)$, with the $a_j$ all non-zero. We exclude the trivial case *p*=0. Note that if *p* is a positive integer, then *f* is an ordinary polynomial of degree at most *p*.

One may as well assume that $\lambda_1 = 0$, since this is effected by the substitution $y = x + \lambda_1$. Effectively, this is just maximizing the domain of *f*.

We must clarify the possibility of *f* being identically zero, as in the trivial example $(x+2) - 2(x+1) + x$ (with *p*=1). This is easily resolved in the case where *p* is not a positive integer.

*If f is defined by* (1.1) *and p is not a positive integer, then f is not identically zero*.

It is enough to show that some derivative of *f* is not identically zero. Now $f^{(r)}(x)$ is a non-zero multiple of $\sum_{j=1}^{n} a_j (x+\lambda_j)^{p-r}$. For a fixed *x*, once *r* is large enough (so that $p-r$ is large enough negative), the term $a_1 (x+\lambda_1)^{p-r}$ dominates the others, so $f^{(r)}(x) \ne 0$. ▪

Now consider the case where *p* is a positive integer. With *f* as in (1.1), consider the Dirichlet polynomial
$$G(s) = \sum_{j=1}^{n} a_j \lambda_j^{\,s}.\qquad(2.1)$$
By the binomial theorem,
$$f(x) = \sum_{r=0}^{p} \binom{p}{r} G(r)\, x^{p-r},\qquad(2.2)$$
from which it is clear that *f* is not identically zero, provided that $G(r) \ne 0$ for some *r* with $0 \le r \le p$.
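
The identity (2.2) is easy to confirm in exact integer arithmetic. The sketch below uses made-up coefficients $a_j$ and shifts $\lambda_j$ (they are not taken from the text) and compares the two sides for a small positive integer *p*:

```python
from math import comb

# Made-up data: f(x) = sum_j a_j (x + lam_j)^p with a small positive integer p.
a = [2, -1, 3]
lam = [0, 1, 2]
p = 3

def G(r):
    # the Dirichlet polynomial of (2.1) at integer points (note 0**0 == 1 in Python)
    return sum(aj * lj ** r for aj, lj in zip(a, lam))

def f_direct(x):
    return sum(aj * (x + lj) ** p for aj, lj in zip(a, lam))

def f_binomial(x):
    # right-hand side of (2.2)
    return sum(comb(p, r) * G(r) * x ** (p - r) for r in range(p + 1))

agree = all(f_direct(x) == f_binomial(x) for x in range(-5, 6))
```

Since the comparison is between integers, the agreement is exact, not merely approximate.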

*Suppose that f is defined by* (1.1) *and p is a positive integer. Then f may be identically zero if* $n \ge p+2$*, but not if* $n \le p+1$.

If $n \ge p+2$, then the *n* vectors $(1, \lambda_j, \lambda_j^2, \ldots, \lambda_j^p)$ in $\mathbb{R}^{p+1}$ are linearly dependent, so there exist $a_1, \ldots, a_n$, not all 0, such that if *G* is defined by (2.1), then $G(r) = 0$, for $0 \le r \le p$. By (2.2), *f* is then identically zero.

Now suppose that $n \le p+1$, and that distinct $\lambda_j$ and non-zero $a_j$ are given. Then (as is well known) the vectors $(1, \lambda_j, \lambda_j^2, \ldots, \lambda_j^{n-1})$ $(1 \le j \le n)$ are linearly independent, so $G(r) \ne 0$ for some $r \le n-1 \le p$. ▪

Hence if *p* is not one of the integers $0, 1, \ldots, n-2$, then the representation of a function in the form (1.1) is unique.

## 3. Extension of Descartes's rule of signs

For a function *f* possessing all derivatives, we write $Z(f, I)$ for the number of zeros of *f* in an interval *I*, counted with their orders. We shorten this to $Z(f)$ when *I* is the whole domain of *f*. Recall that Rolle's theorem implies that $Z(f) \le Z(f') + 1$.

Denote by $S(a)$ the number of *sign changes* of the sequence $a = (a_1, a_2, \ldots, a_n)$, in other words, the number of terms that have the opposite sign to the previous non-zero term. Clearly, if $a$ has length *n*, then $S(a) \le n-1$.
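
In computational terms, the sign-change count just defined can be sketched as follows (zero terms are skipped, exactly as in the definition):

```python
def sign_changes(seq):
    """Number of terms having the opposite sign to the previous non-zero term."""
    count = 0
    prev = 0  # sign (+1/-1) of the last non-zero term seen; 0 if none yet
    for x in seq:
        if x == 0:
            continue
        s = 1 if x > 0 else -1
        if prev != 0 and s != prev:
            count += 1
        prev = s
    return count
```

For a sequence of length *n* this clearly returns at most $n-1$.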

For a Dirichlet polynomial $P(x) = \sum_{j=1}^{n} a_j e^{-\lambda_j x}$, with $\lambda_1 < \lambda_2 < \cdots < \lambda_n$, Laguerre's extension of Descartes's rule of signs states that $Z(P) \le S(a)$. The substitution $t = e^{-x}$ transforms *P* into the generalized polynomial $\sum_{j=1}^{n} a_j t^{\lambda_j}$. In either form, the result applies equally to infinite series (within their interval of convergence) whose coefficients have only finitely many sign changes. The following analogue of Descartes's rule applies to sums of fractional powers.

*Suppose that f is defined by* (1.1) and (1.2)*, and is not identically zero. Then* $Z(f) \le S(a)$.

The proof is by induction on the number of sign changes. If there are no sign changes, then all the $a_j$ have the same sign (say positive), so $f(x) > 0$ for all *x*>0 and *f* has no zeros. Assume that the statement is true when there are *m* sign changes, and suppose that $S(a) = m+1$. Let the last sign change occur at the term *j*=*k*, so that $a_k$ has the opposite sign to $a_{k-1}$. Choose *a*, such that $\lambda_{k-1} < a < \lambda_k$. Then *f* has the same zeros (with the same orders) as *g*, where
$$g(x) = (x+a)^{-p} f(x) = \sum_{j=1}^{n} a_j \left( \frac{x+\lambda_j}{x+a} \right)^{p}.$$
Hence
$$g'(x) = \sum_{j=1}^{n} p\, a_j \frac{a-\lambda_j}{(x+a)^2} \left( \frac{x+\lambda_j}{x+a} \right)^{p-1},$$
so that
$$(x+a)^{p+1} g'(x) = p \sum_{j=1}^{n} a_j (a-\lambda_j)(x+\lambda_j)^{p-1} = h(x),$$
say, a function of the same type with exponent $p-1$ and coefficients $b_j = p\,a_j(a-\lambda_j)$. Now $b_j$ has the same sign for $j = k-1$ and *j*=*k*. Otherwise, $(b_j)$ has the same sign changes as $(a_j)$, so it has *m* sign changes altogether. If *h* is not identically zero, then, by the induction hypothesis, it has at most *m* zeros. By Rolle's theorem, $Z(f) = Z(g) \le Z(g') + 1 \le m+1$. If *h* is identically zero, then $g(x) = K$ for a constant *K*; since *f* is not identically zero, $K \ne 0$, so $Z(f) = 0$. ▪
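
The theorem can be probed numerically. The sketch below (illustrative values of our own choosing, not from the text) counts sign crossings of *f* on a grid; each crossing certifies a zero in between, so the count is a lower bound for $Z(f)$ and, by the theorem, must not exceed the number of sign changes of $(a_j)$, here 2:

```python
a = [1.0, -3.0, 1.0]          # two sign changes
lam = [0.0, 1.0, 2.0]
p = 0.5                       # a genuinely fractional exponent

def f(x):
    return sum(aj * (x + lj) ** p for aj, lj in zip(a, lam))

crossings = 0
prev = 0
for i in range(1, 100001):
    x = i / 1000.0            # grid over (0, 100]
    v = f(x)
    s = 1 if v > 0 else (-1 if v < 0 else 0)
    if s != 0 and prev != 0 and s != prev:
        crossings += 1
    if s != 0:
        prev = s
```

A grid scan of this kind can only undercount zeros, so the assertion `crossings <= 2` is a genuine (one-sided) check of the bound.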

*Suppose that f is defined by* (1.1) and (1.2)*, and is not constant. Let* $m = S(a)$. *Then any non-zero value is assumed by f at most* $m+1$ *times*.

Since $f'(x) = p \sum_{j=1}^{n} a_j (x+\lambda_j)^{p-1}$ and is not identically zero, we have $Z(f') \le S(a) = m$, by theorem 3.1. The statement follows, by Rolle's theorem again. ▪

Corollary 3.2 clearly also holds for the function $f(x) - c$. However, such functions may also take the value zero $m+1$ (not *m*) times; for example, the single term $x^p$ gives $x^p - 1$, which is zero when $x = 1$.

Clearly, theorem 3.1 implies that $Z(f) \le n-1$, where *n* is the length of *f*. In the usual way, there is an algebraic restatement of this fact.

*Let* $x_1, \ldots, x_n$ *be distinct positive numbers and* $\lambda_1, \ldots, \lambda_n$ *distinct non-negative numbers. Let p be a real number other than* $0, 1, \ldots, n-2$. *Then the* $n \times n$ *matrix* $\left[ (x_i + \lambda_j)^p \right]$ *is non-singular*.

However, as the remarks in §2 show, if *p* had one of the excluded values, then the matrix in corollary 3.3 would be singular for all choices of $x_i$ and $\lambda_j$.

For polynomials, or Dirichlet polynomials, Descartes's rule incorporates the further feature that the difference between the number of zeros and the number of sign changes is necessarily even. It is easily seen that this statement does not transfer to functions of our type. For example, $(x+1)^p - x^p$ has one sign change, but no zeros.

## 4. Bounds in terms of the sequence $(A_k)$: Laguerre's method

Write $A_k = a_1 + a_2 + \cdots + a_k$ and $A = (A_1, A_2, \ldots, A_n)$. We will present some generalizations of theorem 3.1, in which $S(a)$ is replaced by $S(A)$.

First, some elementary facts about $S(A)$. It is not greater than $S(a)$, because each time $A_j$ takes a new sign, the corresponding term $a_j$ must have the same sign as $A_j$. It is possible to have $S(A) = 0$ while $S(a) = n-1$. In the case where $A_n = 0$, we have $\operatorname{sign} A_1 = \operatorname{sign} a_1$, while $A_{n-1} = -a_n$, from which it follows that $S(A)$ must differ from $S(a)$ by an odd integer; in particular, it is not greater than $S(a) - 1$.

Given the condition $A_n = 0$ (but not otherwise!), it makes no difference if the original terms are listed in reverse order: in fact, if $a'_j = a_{n+1-j}$, then $A'_k = A_n - A_{n-k} = -A_{n-k}$, hence
$$(A'_1, A'_2, \ldots, A'_n) = (-A_{n-1}, -A_{n-2}, \ldots, -A_1, 0),$$
which clearly has the same number of sign changes as $(A_j)$.
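
The reversal identity is easy to check numerically. In this hypothetical example (coefficients of our own choosing, summing to zero) the reversed partial sums are the negatives of the original ones, read backwards:

```python
a = [3, -1, -4, 5, -3]                      # A_n = 0
n = len(a)
A = [sum(a[:k + 1]) for k in range(n)]      # partial sums A_1..A_n
A_rev = [sum(a[::-1][:k + 1]) for k in range(n)]

# A'_k = -A_{n-k} (with A_0 = 0), so the two sequences have
# the same number of sign changes.
identity = all(A_rev[k] == -A[n - 2 - k] for k in range(n - 1)) and A_rev[-1] == 0
```

Dropping the condition $A_n = 0$ destroys the identity, which is why the reversed ordering is only harmless in that case.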

*If f is bipartite of length* $2k$*, then* $S(A) \le k-1$.

Suppose that $(A_j)$ has a sign change at *j*=*r*. Since each $a_j$ is 1 or −1, this means that $A_{r-1} = 0$ and $A_r = \pm 1$, and the next sign change cannot occur before $j = r+2$. The first sign change cannot occur until *j*=3, so the total number is at most $k-1$. This number occurs when $(a_j)$ consists of pairs $-1, -1$ alternating with pairs $1, 1$, with a single term at each end. ▪
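
For instance (an illustrative length-8 pattern of the kind described in the proof), the partial sums oscillate through $1, 0, -1, 0, \ldots$ and attain the extreme value $k-1 = 3$:

```python
a = [1, -1, -1, 1, 1, -1, -1, 1]                  # bipartite, k = 4
A = [sum(a[:j + 1]) for j in range(len(a))]

def sign_changes(seq):
    count, prev = 0, 0
    for x in seq:
        if x == 0:
            continue
        s = 1 if x > 0 else -1
        if prev != 0 and s != prev:
            count += 1
        prev = s
    return count

m = sign_changes(A)   # partial sums (1, 0, -1, 0, 1, 0, -1, 0)
```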

Laguerre's second theorem for Dirichlet polynomials is as follows. Actually, his reasoning (p. 9) depends on a limiting process which seems to the present author to need further explanation, and the proof of Pólya & Szegö (1925) may have been the first fully satisfactory one. Only version (i) below was stated by these writers, but version (ii) is readily obtained by the same proof.

*Let* $P(x) = \sum_{j=1}^{n} a_j e^{-\lambda_j x}$*, where* $\lambda_1 < \lambda_2 < \cdots < \lambda_n$*, and let* $A_k = a_1 + \cdots + a_k$. *Then:*

(i) $Z(P, (0, \infty)) \le S(A)$,

(ii) *more generally, for any* $x_0$, $Z(P, (x_0, \infty)) \le S(A(x_0))$*, where* $A_k(x_0) = \sum_{j=1}^{k} a_j e^{-\lambda_j x_0}$.

Laguerre (1898, pp. 40–41) derived a corresponding result for functions of our type (1.1) with *p*<0. His method applies equally to positive integer values of *p*, and extends naturally to include (ii) below (which he did not state). We outline it here, both because it is short and elegant, and because it serves as a pleasantly simple proof of our main theorem for these values of *p*.

*Suppose that p is a positive integer and that f is defined by* (1.1) *and is not identically zero. Let* $A_k = a_1 + \cdots + a_k$*, and let* $m = S(A)$. *Then*:

(i) $Z(f) \le m$,

(ii) *if* $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$ *for* $1 \le r \le k$*, for some* $k \ge 1$*, then* $Z(f) \le m-k$.

Under the condition in (ii), *G* has zeros at the points $1, \ldots, k$, and we know from proposition 4.2 that $Z(G) \le m$; in particular, $k \le m$.

Recall from (2.2) that
$$f(x) = \sum_{r=0}^{p} \binom{p}{r} G(r)\, x^{p-r}.$$
By Descartes's rule of signs for ordinary polynomials, $Z(f)$ is at most the number of sign changes of the sequence $(G(0), G(1), \ldots, G(p))$. By the intermediate value theorem, this is not greater than the number of zeros of *G* in $(0, \infty)$, which, by proposition 4.2, is not greater than *m*.

Under the condition in (ii), the terms $G(1), \ldots, G(k)$ are all zero, so the sign changes of the sequence $(G(0), G(1), \ldots, G(p))$ correspond to zeros of *G* other than $1, \ldots, k$, and their number is therefore at most $m-k$. ▪

For the case *p*<0, it is more convenient to express *f* as a combination of the terms $(x+\lambda_{n+1-j})^p$, that is, with the shifts taken in descending order. This means that the coefficients appearing correspond to the usual $a_j$'s taken in the opposite order; it must be remembered that unless $A_n = 0$, the number of sign changes of the partial sums is not the same for the two orderings.

*Let p*<0 *and*
$$f(x) = \sum_{j=1}^{n} b_j (x+\mu_j)^p, \qquad \mu_1 > \mu_2 > \cdots > \mu_n \ge 0,$$
*with the* $b_j$ *non-zero. Let* $B_k = b_1 + \cdots + b_k$*, and let* $m = S(B)$. *Then:*

(i) $Z(f) \le m$,

(ii) *if* $\sum_{j=1}^{n} b_j \mu_j^{\,r} = 0$ *for* $1 \le r \le k$, *for some k*>0*, then* $Z(f) \le m-k$.

By an obvious translation, we may assume that $\mu_n = 0$. We have $f(x) = x^p h(1/x)$, where $h(u) = \sum_{j=1}^{n} b_j (1+\mu_j u)^p$. Now $h(u) = \sum_{r=0}^{\infty} c_r u^r$, where, for $0 \le u < 1/\mu_1$,
$$c_r = \binom{p}{r} \sum_{j=1}^{n} b_j \mu_j^{\,r}.$$
So $Z(f, (\mu_1, \infty))$ is the same as the number of zeros of *h* for $0 < u < 1/\mu_1$. By Descartes's rule for power series, this number does not exceed $S(c_0, c_1, c_2, \ldots)$. The proof of both statements now continues as in theorem 4.3. ▪

If , then Rolle's theorem gives . In fact, the method can be modified to give a more precise statement for this case, as follows: , where for . We deduce that , which in turn is no greater than (so unless and are non-zero with the same sign).

## 5. Bounds in terms of $(A_k)$: the general case

We show that both parts of theorem 4.3 extend to any *p*, under the extra condition $A_n = 0$, that is, $\sum_{j=1}^{n} a_j = 0$. Note that $x^{-p} f(x) \to A_n$ as $x \to \infty$. The proof of the first part is an adaptation of the method of Pólya & Szegö for proposition 4.2 (Part V, exercises 80, 83).

Let *f* be defined by (1.1) and (1.2), with $A_n = 0$. Abel summation gives
$$f(x) = \sum_{j=1}^{n-1} A_j \left[ (x+\lambda_j)^p - (x+\lambda_{j+1})^p \right].\qquad(5.1)$$
Hence if $A_j \ge 0$ for all *j*, then *f* has no zeros: in fact, if *p*>0, then $f(x) < 0$ for all *x*, and if *p*<0, then $f(x) > 0$ for all *x*.
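
The Abel summation identity (5.1) is exact; the sketch below (with made-up data of our own, chosen so that $A_n = 0$) confirms it in floating point:

```python
a = [2.0, -3.0, -1.0, 2.0]      # coefficients summing to zero
lam = [0.0, 1.0, 3.0, 4.0]
p = 0.7
A = [sum(a[:j + 1]) for j in range(len(a))]   # A[-1] == 0

def f_direct(x):
    return sum(aj * (x + lj) ** p for aj, lj in zip(a, lam))

def f_abel(x):
    # right-hand side of (5.1)
    return sum(A[j] * ((x + lam[j]) ** p - (x + lam[j + 1]) ** p)
               for j in range(len(a) - 1))

max_err = max(abs(f_direct(x) - f_abel(x)) for x in (0.5, 1.0, 2.5, 10.0))
```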

We rewrite (5.1) as an integral:
$$f(x) = -p \int_{\lambda_1}^{\lambda_n} (x+t)^{p-1} \psi(t)\, \mathrm{d}t,\qquad(5.2)$$
where $\psi(t) = A_j$ for $\lambda_j \le t < \lambda_{j+1}$. Rewrite this again as
$$f(x) = \int_{0}^{\infty} (x+t)^{p-1} \phi(t)\, \mathrm{d}t,\qquad(5.3)$$
where $\phi(t) = -pA_j$ for $\lambda_j < t < \lambda_{j+1}$ $(1 \le j \le n-1)$ and $\phi(t) = 0$ for other *t*.

Of course, the function *ϕ* has sign changes exactly corresponding to those of $(A_j)$. Without any attempt at maximum generality, we now formulate a result analogous to theorem 3.1 for functions defined by an integral in this way. Suppose (with a slight change of notation) that *ϕ* is a function on $(0, \infty)$, such that

there exist points $0 = t_0 < t_1 < \cdots < t_N$, such that on each open interval $(t_{i-1}, t_i)$,

*ϕ* is bounded and continuous and either strictly positive, strictly negative or zero.

We count the point $t_i$ as a sign change of *ϕ* if it has opposite signs on $(t_i, t_{i+1})$ and on the last earlier interval where it was not zero.

*Suppose that ϕ, not identically zero, satisfies (A) and has m sign changes in* $(0, \infty)$. *Let* $q$ *be real and*
$$g(x) = \int_{0}^{\infty} (x+t)^q \phi(t)\, \mathrm{d}t \quad (x > 0),$$
*assumed convergent. If g is not identically zero, then* $Z(g) \le m$.

Induction on *m*, copying the proof of theorem 3.1. If *m*=0, then either $\phi(t) \ge 0$ for all *t* or $\phi(t) \le 0$ for all *t*: assume the first. Condition (A) now ensures that $g(x) > 0$ for all *x*, so $Z(g) = 0$.

Now assume that the theorem is correct for a certain value *m* and that *ϕ* has $m+1$ sign changes. Let one of them be at *c*, and let
$$g_1(x) = (x+c)^{-q} g(x) = \int_{0}^{\infty} \left( \frac{x+t}{x+c} \right)^{q} \phi(t)\, \mathrm{d}t.$$
Then, by differentiation under the integral sign,
$$g_1'(x) = \int_{0}^{\infty} q \left( \frac{x+t}{x+c} \right)^{q-1} \frac{c-t}{(x+c)^2}\, \phi(t)\, \mathrm{d}t,$$
so
$$(x+c)^{q+1} g_1'(x) = \int_{0}^{\infty} (x+t)^{q-1} \phi_1(t)\, \mathrm{d}t, \quad \text{where } \phi_1(t) = q(c-t)\phi(t).$$
Now $\phi_1$ satisfies condition (A) and has the same sign changes as *ϕ*, except that it does not have one at *c*. Hence it has *m* sign changes, and, by the induction hypothesis, $Z(g_1') \le m$ (unless $\phi_1$ is identically zero, in which case $g_1$ is a non-zero constant and $Z(g) = 0$). Rolle's theorem gives the required statement $Z(g) = Z(g_1) \le m+1$. ▪

By (5.3), we deduce the following immediately.

*Suppose that f, not identically zero, is defined by* (1.1) and (1.2)*, with* $A_n = 0$. *Then* $Z(f) \le S(A)$.

When *p*<0, this only reproduces theorem 4.4(i) with the extra condition $A_n = 0$, but our method can be modified to dispense with this condition. For this purpose, assume that *p*<0 and that *f* is given by (1.1), with the $\lambda_j$ in ascending order, as in (1.2). In (5.1), we can write
$$A_n (x+\lambda_n)^p = -p A_n \int_{\lambda_n}^{\infty} (x+t)^{p-1}\, \mathrm{d}t,$$
so (5.2) is modified to
$$f(x) = -p \int_{\lambda_1}^{\infty} (x+t)^{p-1} \psi(t)\, \mathrm{d}t,$$
where $\psi(t) = A_j$ for $\lambda_j \le t < \lambda_{j+1}$ $(1 \le j \le n-1)$ and $\psi(t) = A_n$ for $t \ge \lambda_n$. The proof then continues as before.

Of course, this modification is not possible when *p*>0. The following example shows that without the condition $A_n = 0$, $Z(f)$ can be greater than both $S(A)$ and $S(A')$.

Let

Then . One finds that , and , so

*f* has two zeros.

We now set out to prove our main theorem, which extends proposition 5.2 to a version incorporating the second statement in theorems 4.3 and 4.4. With *f* expressed as in (5.3) and $r \ge 0$, we have
$$\int_{0}^{\infty} t^r \phi(t)\, \mathrm{d}t = \frac{p}{r+1} \sum_{j=1}^{n} a_j \lambda_j^{\,r+1},$$
so the condition $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$ for $1 \le r \le k$ equates to $\int_{0}^{\infty} t^r \phi(t)\, \mathrm{d}t = 0$ for $0 \le r \le k-1$.
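
The correspondence between the moments of *ϕ* and the power sums $\sum_j a_j \lambda_j^{\,r+1}$ can be checked by exact piecewise integration. The data below are made up (chosen with $A_n = 0$), and $\phi(t) = -pA_j$ on $(\lambda_j, \lambda_{j+1})$ as in (5.3); the normalizing constant $p/(r+1)$ is the one obtained from integrating $t^r$ over each subinterval:

```python
a = [2.0, -3.0, -1.0, 2.0]      # A_n = 0
lam = [0.0, 1.0, 3.0, 4.0]
p = 0.7
n = len(a)
A = [sum(a[:j + 1]) for j in range(n)]

def moment(r):
    # integral of t^r * phi(t) with phi = -p*A_j on (lam_j, lam_{j+1}), done exactly
    return sum(-p * A[j] * (lam[j + 1] ** (r + 1) - lam[j] ** (r + 1)) / (r + 1)
               for j in range(n - 1))

def power_sum_side(r):
    # (p/(r+1)) * sum_j a_j lam_j^(r+1)
    return p / (r + 1) * sum(aj * lj ** (r + 1) for aj, lj in zip(a, lam))

errs = [abs(moment(r) - power_sum_side(r)) for r in range(4)]
```

In particular, the moment of order *r* vanishes precisely when the power-sum relation of order $r+1$ holds.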

The proof is by another induction process like the one in lemma 5.1, but this time on *k*, keeping $m-k$ fixed. We use the following variant of Rolle's theorem for functions that tend to 0 as $x \to \infty$.

*Suppose that f is a function on* $(0, \infty)$*, possessing all derivatives and having finitely many zeros. Suppose also that* $f(x) \to 0$ *as* $x \to \infty$. *Then* $Z(f') \ge Z(f)$.

Let the last zero of *f* occur at $x_0$, and assume (say) that $f(x) > 0$ for $x > x_0$. There is a point $x_1 > x_0$ where *f* attains its greatest value on $[x_0, \infty)$, and, clearly, $f'(x_1) = 0$. Then $f'$ has at least $Z(f) - 1$ zeros in $(0, x_0]$, hence at least $Z(f)$ zeros altogether. ▪

Reverting to the notation of lemma 5.1, we have to prove:

*Suppose that ϕ satisfies (A) and has m sign changes in* $(0, \infty)$*, and also*
$$\int_{0}^{\infty} t^r \phi(t)\, \mathrm{d}t = 0 \quad (0 \le r \le k-1).\qquad(5.4)$$
*Let* $q$ *be real and*
$$g(x) = \int_{0}^{\infty} (x+t)^q \phi(t)\, \mathrm{d}t.$$
*If g is not identically zero, then* $Z(g) \le m-k$.

Fix $m-k$ and consider pairs $(m, k)$ with $k \ge 0$. The proof is then by induction on *k*. The case *k*=0 (so that condition (5.4) is empty) is lemma 5.1. Assume, then, that the result is correct for $k-1$ (where $k \ge 1$), and that the conditions are as in the statement. Follow the proof of lemma 5.1. Now
$$(x+t)^q = x^q \sum_{r=0}^{k-1} \binom{q}{r} \frac{t^r}{x^r} + O(x^{q-k})$$
uniformly for $0 \le t \le t_N$. Hence the same is true with both sides multiplied by the bounded function $\phi(t)$. Since $\int_0^\infty t^r \phi(t)\, \mathrm{d}t = 0$ for $0 \le r \le k-1$, it follows that $g(x) = O(x^{q-k})$, so that $g_1(x) = (x+c)^{-q} g(x) \to 0$ as $x \to \infty$. As before, we have
$$(x+c)^{q+1} g_1'(x) = \int_{0}^{\infty} (x+t)^{q-1} \phi_1(t)\, \mathrm{d}t,$$
where $\phi_1(t) = q(c-t)\phi(t)$, with $m-1$ sign changes. Also,
$$\int_{0}^{\infty} t^r \phi_1(t)\, \mathrm{d}t = 0$$
for $0 \le r \le k-2$ (if *k*=1, there is no such statement, and none is needed). By the induction hypothesis, $Z(g_1') \le m-k$. By lemma 5.4, $Z(g) = Z(g_1) \le Z(g_1') \le m-k$. (Again, this still holds if $\phi_1$ is identically zero.) ▪

So we have completed the proof of our main theorem, which can now be stated.

*Suppose that f, not identically zero, is defined by* (1.1) and (1.2). *Let* $\sum_{j=1}^{n} a_j = 0$. *Write* $m = S(A)$*, and suppose that* $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$ *for* $1 \le r \le k$*, for some* $k \ge 1$. *Then* $k \le m$ *and* $Z(f) \le m-k$.

*Under the same conditions, f attains any given value at most* $m-k+1$ *times*.

Theorem 5.6 applies equally to $f'$ (with *p* replaced by $p-1$), and the statement follows, by Rolle's theorem. ▪

Let

One checks easily that *m*=2 and that the relations of theorem 5.6 hold with *k*=2. So

*f* has no positive zeros (except for the cases in which it is identically zero).

*The bipartite case*. Let *f* be bipartite of length 8, and suppose that the conditions of theorem 5.6 hold with *k*=2. By lemma 4.1, $m \le 3$, so if $m \le 2$, then *f* has no zeros, and if *m*=3, then *f* has at most one zero. The condition $\sum_{j=1}^{n} a_j \lambda_j^{\,r} = 0$ equates to $\sum_{j=1}^{4} \mu_j^{\,r} = \sum_{j=1}^{4} \nu_j^{\,r}$. This is exactly the situation considered by Huxley & Watt (specifically with *m*=3 and *k*=2).
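
A concrete illustration (our own example, not taken from Huxley & Watt): the classical split $\{1,4,6,7\}$ versus $\{2,3,5,8\}$ has equal sums and equal sums of squares, so the two relations hold with *k*=2; here *m*=2, so the theorem predicts no zeros at all, and a grid scan indeed finds no sign change:

```python
mu = [1, 4, 6, 7]
nu = [2, 3, 5, 8]
# the two relations: equal power sums for r = 1, 2 (but not r = 3)
relations = all(sum(x ** r for x in mu) == sum(x ** r for x in nu) for r in (1, 2))
p = 0.5

def f(x):
    return sum((x + m) ** p for m in mu) - sum((x + v) ** p for v in nu)

crossings, prev = 0, 0
for i in range(1, 200001):
    x = i / 1000.0              # grid over (0, 200]
    v = f(x)
    s = 1 if v > 0 else (-1 if v < 0 else 0)
    if s != 0 and prev != 0 and s != prev:
        crossings += 1
    if s != 0:
        prev = s
```

Sorting the eight shifts gives the coefficient pattern $(1, -1, -1, 1, -1, 1, 1, -1)$, whose partial sums have exactly two sign changes, which is how *m*=2 arises.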

The case *k*=*m* in theorem 5.6 can be deduced directly from proposition 4.2, as follows. Fix *x* and define (for any *q*) $H(q) = \sum_{j=1}^{n} a_j (x+\lambda_j)^q$. By the binomial theorem, the assumption implies that *H* has zeros at $q = 0, 1, \ldots, m$. By proposition 4.2, it has no other zeros, and these zeros are simple, so the sign of $H(q)$ alternates on the intervals between them (with $H(q)$ having the sign of $a_n$ for large *q*). Another choice of *x* will give a new *H*, but still with signs on these intervals determined in the same way. So, for the given *p* (assumed not to be one of $0, 1, \ldots, m$), $H(p)$ (and hence $f(x)$) has the same sign for all choices of *x*.

One can extend this argument to the case $k = m-1$, using the fact that at two successive zeros of *f* (if they exist), $f'$ has opposite signs.

*Questions about the case* $A_n \ne 0$. We have seen that proposition 5.2, as stated, does not hold when *p*>0 and $A_n \ne 0$. However, a certain amount can be said about this case. The Abel summation expression shows that if $S(A) = 0$, then $Z(f) \le 1$. A proof along the lines of the previous note shows that if *p*>1 and $S(A) = 1$, then $Z(f) \le 2$. We leave it as an open problem whether these statements can be generalized. One might also ask whether the inequality $Z(f) \le S(A) + 1$ holds for all *p*>0.

## Acknowledgments

I am grateful to Peter Walker for directing me to the relevant literature, and to the referees for several useful suggestions.

## Footnotes

- Received September 22, 2005.
- Accepted December 15, 2005.

- © 2006 The Royal Society