## Abstract

The concern of this paper is in expanding and computing initial-value problems of the form **y**′=**f**(**y**)+*h*_{ω}(*t*), where the function *h*_{ω} oscillates rapidly for *ω*≫1. Asymptotic expansions for such equations are well understood in the case of modulated Fourier oscillators and they can be used as an organizing principle for very accurate and affordable numerical solvers. However, there is no similar theory for more general oscillators, and there are sound reasons to believe that approximations of this kind are unsuitable in that setting. We follow in this paper an alternative route, demonstrating that, for a much more general family of oscillators, e.g. linear combinations of functions of the form e^{iωg_k(t)}, it is possible to expand **y**(*t*) in a different manner. The *r*th term in the expansion is O(*ω*^{−r/ς}) for some *ς*>0 and it can be represented as an *r*-dimensional highly oscillatory integral. Because the computation of multivariate highly oscillatory integrals is fairly well understood, this provides a powerful method for an effective discretization of a numerical solution for a large family of highly oscillatory ordinary differential equations.

## 1. Introduction

The subject matter of this paper is highly oscillatory systems of ordinary differential equations (ODEs) of the form
**y**′=**f**(**y**)+*h*_{ω}(*t*), **y**(0)=**y**_{0}, *t*≥0, (1.1)
where the function *h*_{ω} oscillates rapidly for *ω*≫1 and all (except possibly for a countable set) values of *t*. We assume that the functions **f** and *h*_{ω} are analytic, but our work extends straightforwardly to functions of lower smoothness.

In previous papers [1,2], we have considered the case of the *oscillator*
In that case, it is possible to represent the solution as an asymptotic expansion in inverse powers of *ω* of the form
(1.2)
where the functions *p*_{r,m} are non-oscillatory and can be derived recursively: *p*_{r,0} by solving a non-oscillatory ODE and *p*_{r,m}, *m*≠0, by recursion. Once (1.2) is truncated for *r*≥*R*+1, we obtain a numerical approximation whose error actually *improves* for growing frequency *ω*.
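The display (1.2) itself has been lost in extraction. Judging from the surrounding description (an expansion in inverse powers of *ω* with non-oscillatory coefficients *p*_{r,m} indexed by a Fourier mode *m*) and the modulated Fourier setting of [1,2], it presumably has a shape along the lines of

```latex
\mathbf{y}(t) \sim \sum_{r=0}^{\infty} \frac{1}{\omega^{r}}
\sum_{m=-\infty}^{\infty} \mathbf{p}_{r,m}(t)\, \mathrm{e}^{\mathrm{i} m \omega t},
\qquad \omega \gg 1,
```

a reconstruction offered for orientation rather than the printed formula.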

Unfortunately, (1.2) is no longer true for more general oscillators and cannot be generalized in an obvious manner. A good toy example that illustrates the problems inherent in such a generalization is the linear system
(1.3)
whose exact solution is
(1.4)

Let first *h*_{ω}(*t*)=**a**(*t*) e^{iωg(t)}, where *g*′(*t*)≠0, *t*≥0. In that case, an asymptotic expansion of the integral in (1.4) is readily available, where [3]. Note that the functions *x*_{r} are non-oscillatory. This readily provides an expansion of the solution in the form (1.2). Alternatively, we can pursue the recursive approach of Condon *et al.* [2]. However, the paradigm (1.2) fails for other oscillators. For example, suppose that *g*′(0)=0, *g*′′(0)≠0, whereas *g*′(*t*)≠0, *t*>0. Without loss of generality, we may assume that *g*(0)=0. In that case, the asymptotic expansion is
(1.5)
where
The first thing to note is that, by the *van der Corput lemma* [4],
This immediately precludes the expansion (1.2), yet we might hope that it can be generalized by expanding in *ω*^{−r/2} in place of *ω*^{−r}. Unfortunately, once we do this, we run up against another problem. The simplest example of this kind is *g*(*t*)=*t*^{2}; in that case for all *ω*≫1 and *t*>0 [5], therefore
On the face of it all is well and, upon substitution in (1.4), we have an asymptotic expansion. Unfortunately, this expansion fails at the origin and makes little sense for 0<*t*≪1. The goal surely must be an expansion which is uniformly valid for all *t*≥0!
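The phenomenon just described is easy to observe numerically. The sketch below (our illustration, using plain trapezoidal sampling dense enough to resolve the oscillation, not a method advocated here) approximates ∫_0^1 e^{iωt²} d*t* for two frequencies; in line with the van der Corput estimate, quadrupling *ω* roughly halves the modulus of the integral, the O(*ω*^{−1/2}) signature of the stationary point at the origin.

```python
import numpy as np

def osc_integral(omega, n=400_000):
    """Trapezoidal approximation of the Fresnel-type integral
    int_0^1 exp(i*omega*t^2) dt; n is large enough to resolve the
    oscillation for the frequencies used below."""
    t = np.linspace(0.0, 1.0, n + 1)
    v = np.exp(1j * omega * t * t)
    return (1.0 / n) * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1])

# moduli for omega = 100 and omega = 400: expect a ratio near 2 = sqrt(4)
m100 = abs(osc_integral(100.0))
m400 = abs(osc_integral(400.0))
```

The residual deviation of the ratio from 2 is the O(*ω*^{−1}) endpoint contribution at *t*=1.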

An obvious solution to our predicament is to forgo the asymptotic expansion of in (1.5), even when it is known. Indeed, we can go a step further and forgo an explicit asymptotic expansion of **y**(*t*) in the first place. Instead, we go back to the variation-of-constants representation (1.4) and compute the integral therein by any of the many modern quadrature methods for highly oscillatory integrals [6,7].

All this is fairly straightforward for linear equations: the goal of this paper is to accomplish this task for nonlinear functions **f**. Thus, we intend to prove that the solution **y**(*t*) of the equation (1.1) can be expanded in the form
(1.6)
where *p*_{r}(*t*,*ω*)=O(*ω*^{−κ_r}) for some 0<*κ*_{1}<*κ*_{2}<⋯ which depend on the oscillator *h*_{ω}. The above expansion is not unique because, for example, we can add an O(*ω*^{−κ_{r+1}}) term to *p*_{r} and subtract it from *p*_{r+1}. Our assertion, though, is that it is possible to define (1.6) in a particularly ‘clean’ form, whereby each *p*_{r} can be represented as a multivariate highly oscillatory integral.

We assume for the sake of simplicity that
(1.7)
where the oscillators *θ*_{m} are scalar: the sum might be finite or infinite and, in the latter case, we assume that it is convergent.

The most ubiquitous oscillators are *θ*_{m}(*t*,*ω*)=e^{iωg_m(t)}. Let *ς*≥1 be the largest integer such that , ℓ=1,…,*ς*−1, and for some . Then
uniformly for and sufficiently smooth functions *f* [4]. In that case, *κ*_{r}=*r*/*ς*, . Other examples of oscillators include *Bessel functions*, *θ*_{m}(*t*,*ω*)=J_{ν_m}(*tω*), . Because
it will follow that *κ*_{r}=*r*, .

The functions *p*_{r} can be constructed recursively:
and
where *Φ* is the solution of the linear equation
The *non-oscillatory* functions *s*_{r} can be constructed recursively from **f** and its derivatives and they are independent of *ω*.

Once the functions *s*_{r} are known for *r*=1,2,…,*R* for some , we construct an approximation to **y** with highly oscillatory quadrature over cubes of increasing dimension. The actual situation is somewhat more difficult: the functions *s*_{r} are piecewise smooth and, for a high-order approximation, we will partition cubes into simplexes. This can be accomplished by any of the familiar quadrature methods for highly oscillatory integrals or by bespoke methods which will be discussed in a follow-up paper. Note that the number of necessary quadratures, as well as their dimension, increases rapidly with *r*≥1. This means that realistic computations are likely to be restricted to modest values of *R*. Yet, even small *R*s lead to a very good approximation to **y** which, perhaps counterintuitively, becomes more accurate as the frequency *ω* grows.

Practical implementation of our methodology requires a step-by-step approach. Unlike in standard computational methods, the steps need not be small, because accuracy is ensured by asymptotics, rather than by local Taylor expansions—the need for time stepping is motivated by highly oscillatory quadrature (we wish for at most one stationary point per interval) and by the fact that asymptotics with respect to *ω* tend to deteriorate in long time intervals. Unlike in (1.1), this means that each step starts from an arbitrary non-negative point and that the initial value *y*_{0} may formally depend on *ω*, although for us it is a constant vector. Neither issue represents a real difficulty.

In §2, we develop the recursive mechanism leading to the expansion (1.6), whereas §3 is devoted to a few simple examples that illustrate the potential of our approach. Finally, in §4, we discuss briefly the computation of the highly oscillatory multivariate integrals *p*_{r}.

## 2. The recursive procedure

We assume for the sake of simplicity that *h*_{ω} includes just one oscillator, hence that we are solving the equation
(2.1)
Once we can expand (2.1) into the series (1.6), a generalization to
(2.2)
is straightforward, although the growth in the number of terms might be substantial.

Let us assume that
(2.3)
generically for all smooth functions *f*, *t*>0 and some *σ*>0. Note that might be exceeded for some functions *f* and values of *t*—for example, while in general
it is true that
for *t*∈[0,1), as well as for all *t*>0 in the case *f*(1)=0.

### (a) The first few *r*s

We wish to find functions *p*_{r}(*t*,*ω*) such that in (1.6) the function *p*_{0} is non-oscillatory (and independent of *ω*) and
(2.4)
Moreover, we stipulate that
hence the expansion is consistent with the initial conditions of (2.1). Setting
we note that, subject to (2.4), it is true that
(2.5)

For *r*=0, we let
(2.6)
Consider next *r*=1. As *q*_{1}=**y**−*p*_{0}, we have
Using (2.5), we thus deduce that *p*_{1} obeys the ODE
The solution is highly oscillatory, but it can be easily written down explicitly,
(2.7)
where *Φ* is the solution of the non-oscillatory linear equation
Note that, according to (2.3), , consistently with (2.4).
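The displays surrounding (2.7) are also lost to extraction. From the description (variation of constants, with *Φ* the fundamental solution of the linearization along *p*_{0}), the construction presumably reads, for the single-oscillator right-hand side **a**(*t*)*θ*(*t*,*ω*) of (2.1),

```latex
\mathbf{p}_1(t,\omega) \;=\; \Phi(t) \int_0^{t} \Phi^{-1}(\xi)\,
\mathbf{a}(\xi)\, \theta(\xi,\omega)\, \mathrm{d}\xi,
\qquad
\Phi' = \mathbf{f}_1(\mathbf{p}_0(t))\, \Phi, \quad \Phi(0) = I,
```

which is consistent with the explicit *Φ* and *p*_{1} computed for the examples of §3; again a reconstruction, not the printed display.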

Over to *r*=2. We denote the derivative tensors of the analytic function **f** by
and note that each *f*_{m} is linear in all its arguments except for the first. Of course, *f*_{0}(**y**)=**f**(**y**) and *f*_{1}(**y**)[**δ**]=[∂**f**(**y**)/∂**y**]**δ**.

As **y**=*p*_{0}+*p*_{1}+*q*_{2}, we have
Because, by (2.5), , we deduce that *p*_{2} obeys the ODE
Although the equation is highly oscillatory, its linearity means that it can be solved readily by quadrature,
Substituting the value of *p*_{1} from (2.7) and using the linearity of *f*_{2} in its second and third arguments, we thus obtain, exchanging the order of integration,
(2.8)
where
Note that the function *s*_{2} is non-oscillatory and independent of *ω*. Moreover, *s*_{2} is piecewise smooth in [0,*t*]^{2}: specifically, it is smooth in the simplexes with the vertices {(0,0),(0,*t*),(*t*,*t*)} and {(0,0),(*t*,0),(*t*,*t*)}, but just continuous along their joint boundary *ξ*_{1}=*ξ*_{2}.

Because of (2.3), it follows that the double integral *p*_{2} in (2.8) is , as required [8].

### (b) *p*_{r} for *r*≥1


We are now in a position to present and prove the main result of this paper. Let
denote by *ν*_{m,r} the number of distinct *m*-tuples such that and set
and, in general, for every *r*≥2,
(2.9)
where
Theorem 2.1 establishes the form of *p*_{r} as a multivariate highly oscillatory integral, consistently with (2.7) and (2.8).

### Theorem 2.1

*The functions* *p*_{r} *in (1.6) can be formally expressed as r-dimensional highly oscillatory integrals,*
(2.10)
*This implies that*
(2.11)

### Proof.

Let *r*≥1. We commence by proving by induction on *r* that *p*_{r} obeys the linear ODE
(2.12)
This is certainly true for *r*=1. Otherwise, as
exploiting the fact that for *j*=0,…,*r*−1 and that ,
where we have used for *m*≥*i*+1. Because we can discard terms, *p*_{r} obeys the linear ODE (2.12).

We now solve (2.12) using variation of constants and use induction to substitute *p*_{j} for *j*≤*r*−1 by relevant modification of (2.10),
where *k*_{0}=0 and
Therefore,
where

We next note, consecutively exchanging the order of integration, that for any *C*^{1} function
Therefore,
proving (2.10). To conclude the proof of the theorem, we invoke the standard theory of highly oscillatory integrals [8] to argue (2.11), *p*_{r} being an *r*-fold integral of a function whose univariate integral is and which has no resonance points on the boundary.^{1} □

The extension of the theorem to equation (2.2) is straightforward: for the sake of simplicity, we assume that (2.3) is true (with the same *σ*) for all the *θ*_{m}. In that case, we need to replace (2.10) by
(2.13)
and the number of multivariate highly oscillatory integrals in need of computation increases rapidly. Clearly, unless there is a fairly small number of oscillators *θ*_{j}, computing (2.13) for anything but modest values of *r* becomes impractical.

Theorem 2.1 remains valid once for some *M*≥1, except that the expansion cannot be carried beyond *p*_{M} and the error is at best *o*(*ω*^{−σM}).

## 3. Examples and numerics

The purpose of this section is to demonstrate our approach, rather than to conduct an exhaustive analytic and numerical investigation. Thus, our focus is on a single scalar equation with three different kinds of oscillators. Our first example is
(3.1)
The equation, to the best of our knowledge, cannot be integrated in terms of familiar functions. Its numerical solution for different values of *ω* is displayed in figure 1, and we note that it behaves according to the theory: the larger *ω*, the greater the frequency of oscillation (which we can observe by looking at the imaginary part), but also the smaller its departure from *p*_{0}(*t*)=(1+*t*)^{−1}, the solution of (2.6). This is demonstrated in figure 2.
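Assuming, as the surrounding computation suggests, that (3.1) reads *y*′=−*y*^{2}+e^{iωt}, *y*(0)=1 (with modulation *a*(*t*)≡1), the behaviour visible in figure 2 can be reproduced in a few lines: the deviation of *y* from *p*_{0}(*t*)=(1+*t*)^{−1} shrinks like O(*ω*^{−1}). A hypothetical RK4 check:

```python
import cmath

def max_deviation(omega, T=1.0, n=40_000):
    """Solve y' = -y^2 + exp(i*omega*t), y(0) = 1 with classical RK4
    (step small enough to resolve the oscillation) and return
    max_t |y(t) - p0(t)|, where p0(t) = 1/(1 + t) solves (2.6)."""
    h = T / n
    f = lambda t, y: -y * y + cmath.exp(1j * omega * t)
    t, y = 0.0, 1.0 + 0.0j
    dev = 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * (k2 + k3) + k4)
        t += h
        dev = max(dev, abs(y - 1.0 / (1.0 + t)))
    return dev

err50, err200 = max_deviation(50.0), max_deviation(200.0)
```

Quadrupling *ω* from 50 to 200 shrinks the maximal deviation roughly fourfold, consistent with an O(*ω*^{−1}) leading term.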

Because *f*(*y*)=−*y*^{2}, we have *f*_{1}(*y*)=−2*y*, *f*_{2}≡−2 and *f*_{m}≡0 for *m*≥3. Therefore, *Φ*′=−2(1+*t*)^{−1}*Φ*, *Φ*(0)=1, with the solution *Φ*(*t*)=(1+*t*)^{−2}. Hence,
Figure 3 displays the error |*p*_{0}(*t*)+*p*_{1}(*t*,*ω*)−*y*(*t*)| for different values of *ω*. A comparison with figure 2 demonstrates that, consistently with the theory, the error decays considerably faster as *ω* grows, specifically as rather than as .

Next, we compute
Figure 4 depicts the error |*p*_{0}(*t*)+*p*_{1}(*t*,*ω*)+*p*_{2}(*t*,*ω*)−*y*(*t*)| for ‘our’ values of *ω*. We have no easy explanation why the error does not decrease by as much as predicted by our theory for *ω*=200; a likely reason is that the reference solution (computed with a standard adaptive numerical method) is simply not accurate enough for the minute error tolerance called for in this computation or that, in the presence of a large number of minute steps, its error-control mechanism is distorted by significant round-off error.

A few observations are in order. First, it is hardly a surprise that , *r*=0,1,2, because this is predicted by general theory. Second, we can easily pluck from our expansion the leading terms of (1.2):
This is a feature exclusive to the oscillator *θ*(*t*,*ω*)=e^{iωt}. Finally, we note that there is a transfer of wavelengths: whereas *p*_{1}, like the original equation, exhibits just the frequency e^{iωt}, we have in *p*_{2} both e^{iωt} and e^{2iωt}. This feature, which we called ‘blossoming’ in Condon *et al.* [1], is characteristic of nonlinear systems and absent in (1.5), an expansion of a forced linear system.

We consider next
(3.2)
Unlike in (3.1), the oscillator has a stationary point at *t*=1. The numerical solution is displayed in figure 5 and it is instructive to compare it with figure 1. As before, the oscillation becomes faster as *ω* grows, yet its amplitude decays—however, in the present case, it decays at a different rate. In the interval [0,1) the decay, similarly to (3.1), is like but, once the solution crosses the stationary point, the amplitude increases to . In general, the entire character of the solution changes at the stationary point.

The functions *p*_{0} and *Φ* do not change, whereas *p*_{1} can be computed explicitly,
where erf is the error function [5]. Because erf *z*∼1−*π*^{−1/2}e^{−z^{2}}/*z* for , this is the case for erf((−i*ω*)^{1/2}(1−*t*)) for *t*∈(0,1), as well as for erf((−i*ω*)^{1/2}). Therefore,
and . However, for *t*>1, we need to use the parity of the error function, erf(−*z*)=−erf *z*, with *z*=(−i*ω*)^{1/2}(1−*t*) to fit within and it readily follows that for *t*≥1. The double integral *p*_{2}, however, cannot be computed explicitly, and we have resorted to high-precision numerical integration in our experiments.

Figure 6 displays the errors of truncated expansions for *R*=0,1,2 and *ω*∈{10,50,100}. All is in perfect accordance with the theory. Thus, the precision is much greater in (0,1) but, once *t* crosses a stationary point and the nature of the asymptotic expansion changes, the error is larger, whereas its decay as *ω* grows is significantly slower.

Our final example is
(3.3)
The oscillator has stationary points at the origin and at , . Thus, for all *t*≥0, and the solution undergoes an infinite sequence of changes in its behaviour at the stationary points. This can be seen vividly in figure 7 by examining the imaginary part of the solution.

Figure 8 displays the error committed by approximating *y* by for *R*∈{0,1,2}. Figure 8 is fully consistent with our theory. Thus, the error drops down for increasing *R* (in fact, it is ), but also for increasing *ω*.

## 4. The computation of *p*_{r}: preliminary ideas


Let us assume that *θ*(*t*,*ω*)=e^{iωg(t)}, because in that case the behaviour of oscillatory integrals is well understood [8].

To integrate (2.1) (or (2.2)), we advance by steps *t*_{0}=0<*t*_{1}<*t*_{2}<⋯ . The steps need not be small: the accuracy of the method follows from the asymptotic expansion (1.6) rather than from conventional concepts of order. However, it is important that all stationary points of the oscillators are in the set {*t*_{n}} of time steps and that if *t*_{n} is a stationary point then neither *t*_{n−1} nor *t*_{n+1} can be one. Therefore, we have two types of intervals of integration (*t*_{n},*t*_{n+1}): *type I,* when neither *t*_{n} nor *t*_{n+1} is a stationary point, and *type II,* when either *t*_{n} or *t*_{n+1} (but not both!) is a stationary point. We may in that case assume that *t*_{n} is a stationary point, because the other case follows by trivial change of variable.
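As a concrete (and entirely hypothetical) helper illustrating the stepping rules above, one can merge a background mesh with the stationary points and then separate any two stationary points that end up adjacent:

```python
import math

def build_mesh(T, stationary, h_target):
    """Mesh 0 = t_0 < t_1 < ... <= T for the stepping strategy sketched
    above (our own sketch, not the authors' code): every stationary point
    becomes a mesh point, a roughly uniform background mesh of spacing
    <= h_target is added, and a midpoint is inserted between any two
    consecutive stationary points, so no two adjacent mesh points are
    both stationary."""
    pts = {0.0, float(T)} | {float(s) for s in stationary if 0.0 <= s <= T}
    n = math.ceil(T / h_target)
    pts |= {T * k / n for k in range(n + 1)}
    mesh = sorted(pts)
    stat = {float(s) for s in stationary}
    out = [mesh[0]]
    for a, b in zip(mesh, mesh[1:]):
        if a in stat and b in stat:
            out.append(0.5 * (a + b))  # separate adjacent stationary points
        out.append(b)
    return out

stat_pts = [math.pi / 2, 3 * math.pi / 2]   # stationary points of g(t) = sin t on [0, 5]
mesh = build_mesh(5.0, stat_pts, 1.0)
```

With such a mesh, every interval (*t*_{n},*t*_{n+1}) is of type I or type II in the sense above.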

The function *Φ*^{−1}(*t*)*p*_{r} in (2.10) is formally an integral over the cube (*t*_{n},*t*_{n+1})^{r}, but the function *s*_{r} is piecewise smooth there, and it is a good idea to partition the cube into *r*! simplexes where *s*_{r} is smooth (or, at any rate, shares the smoothness of **f** and **a**). Each such simplex can be obtained by commencing with the vertex (*t*_{n},*t*_{n},…,*t*_{n}) and generating further *r* vertices by replacing one *t*_{n} at a time by *t*_{n+1}. After *r* such steps we reach the vertex (*t*_{n+1},*t*_{n+1},…,*t*_{n+1}).

For *r*=1, we have just one simplex, the interval with vertices *t*_{n} and *t*_{n+1}. For *r*=2, our procedure yields two simplexes, with the vertices
*r*=3 results in six simplexes, whose vertices are
and so on.
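The vertex-promotion procedure above is easy to mechanize: the order in which coordinates are promoted from *t*_{n} to *t*_{n+1} is a permutation of {1,…,*r*}, which is precisely why *r*! simplexes arise. A sketch:

```python
from itertools import permutations

def simplexes(r, tn, tn1):
    """Partition of the cube (tn, tn1)^r into r! simplexes.

    Each simplex starts at (tn, ..., tn) and promotes one coordinate at a
    time to tn1; the promotion order is a permutation of the coordinates,
    so r! simplexes are produced, each with r + 1 vertices.
    """
    out = []
    for order in permutations(range(r)):
        v = [tn] * r
        verts = [tuple(v)]
        for i in order:
            v[i] = tn1
            verts.append(tuple(v))
        out.append(verts)
    return out

# the two simplexes of Section 2 for r = 2 on [0, t]^2, here with t = 1
two = simplexes(2, 0.0, 1.0)
```

For *r*=2 this reproduces the two simplexes listed above and for *r*=3 the six.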

The asymptotic behaviour for large *ω* of an integral of the form
where is a polytope, depends on three kinds of *critical points:*

1. *Stationary points*, where **∇***g*(*x*_{*})=**0**;
2. *Resonance points*, where **∇***g*(*x*_{*})≠**0** is orthogonal to the boundary; and
3. *Vertices* of the polytope.

[7,8]. In our case
Therefore, for type I integrals, there is no stationary point in a simplex, whereas, for type II integrals, there is a single stationary point at (*t*_{n+1},*t*_{n+1},…,*t*_{n+1}). Moreover, easy calculation confirms that in neither case are there any resonance points. Therefore, the multivariate integral inherits its behaviour in a fairly straightforward manner from the univariate one. Specifically, in the case of type I integrals , whereas for type II, it is true that , where *g*′(*t*_{n})=*g*′′(*t*_{n})=⋯=*g*^{(ς)}(*t*_{n})=0 and *g*^{(ς+1)}(*t*_{n})≠0.

A word of caution is in order: although formally for type I, the amplitude of the term can be very large when either *t*_{n} or *t*_{n+1} is close to a stationary point. Therefore, a safe strategy is to use type I integrals only when the endpoints are well separated from stationary points. Once *t*_{n+2}−*t*_{n+1}, say, is small, it is a much better idea to go from *t*_{n} all the way to *t*_{n+2}, subsequently backtracking from *t*_{n+2} to *t*_{n+1}—in other words, replacing one type I and one type II integral by two type II integrals.

### (a) Asymptotic expansions

The computation of explicit asymptotic expansions of multivariate highly oscillatory integrals is possible, at least in principle, but, once the dimension increases, it rapidly becomes very complicated and arguably not very helpful [9]. An alternative is to derive asymptotic expansions with the method of stationary phase [10], except that this is known explicitly only in two dimensions.

The situation is not unduly complicated for type I integrals, once we follow the ideas of Iserles & Nørsett [3]. Let *f* be a smooth function in the closure of the simplex in question. Recalling that *g*′≠0 in (*t*_{n},*t*_{n+1}), a univariate expansion follows at once from Iserles & Nørsett [3],
where
It takes more effort to derive a bivariate expansion yet, after fairly standard algebra using the methodology of Iserles & Nørsett [9], we obtain, for example,
where
Let
Then, expanding again,
This procedure can be carried out also in higher dimensions although, needless to say, its complexity increases rapidly.

Matters become considerably more complicated for type II integrals. Assume thus that *g*′(*t*_{n+1})=0, *g*′′(*t*_{n+1})≠0, and that otherwise *g* is strictly monotone. A univariate expansion is standard,
where
[3]. Unfortunately, a generalization to a bivariate simplex, although possible in principle, rapidly leads to fairly unpleasant expressions. It is reasonable to rule out this approach as a practical computational tool for the different dimensions and diverse oscillators occurring within the framework of this paper.

### (b) Filon-type expansions

As pointed out in Iserles & Nørsett [9], whereas asymptotic expansions are next to useless as a direct computational tool in a multivariate setting, they are immensely useful in identifying the data necessary for the implementation of *Filon-type methods*.

Let and be smooth functions in a closed polytope with vertices . Assume further that **∇***g* is non-zero in , except possibly at the vertices, and is nowhere orthogonal to the faces of the polytope. Then, there exists an asymptotic expansion of the form
(4.1)
where *ς* is the highest degree of a stationary point (that is, the largest number such that ∂^{i}*g*(*v*_{j})=0 for a vertex *v*_{j}, *j*∈{1,2,…,*M*}, and all |**i**|=1,2,…,*ς*, with *ς*=0 if **∇***g*≠**0** at all the vertices). The functionals are all linear and depend on *f* and its derivatives. Let *φ* be any function that interpolates *f* and its derivatives at the vertices, specifically so that for all *m*=0,1,…,*p* and ℓ=1,…,*M*. Then, the Filon-type quadrature
(4.2)
satisfies , as can be verified at once through the substitution of *φ*−*f* in place of *f* in (4.1) [9]. Once the integral (4.2) can be evaluated explicitly, we can use it to obtain a high-quality approximation to (4.1).
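For concreteness, here is a univariate Filon-type sketch in the spirit of (4.2), with our own toy data: the interval [0,1], oscillator e^{iωt} and linear interpolation of *f* at the endpoints, so that the moments are elementary. Because *φ*−*f* vanishes at both endpoints, the error decays like O(*ω*^{−2}):

```python
import numpy as np

def filon_linear(f, omega):
    """Filon-type rule on [0, 1]: interpolate f linearly at the endpoints,
    then integrate p(t) exp(i*omega*t) exactly via the moments
    mu0 = int exp(i*w*t) dt and mu1 = int t exp(i*w*t) dt."""
    w = 1j * omega
    mu0 = (np.exp(w) - 1.0) / w
    mu1 = np.exp(w) / w - (np.exp(w) - 1.0) / w**2
    a = f(0.0)
    b = f(1.0) - f(0.0)          # interpolant p(t) = a + b*t
    return a * mu0 + b * mu1

def reference(f, omega, n=1_000_000):
    # dense trapezoidal rule, accurate well beyond the Filon error here
    t = np.linspace(0.0, 1.0, n + 1)
    v = f(t) * np.exp(1j * omega * t)
    return (1.0 / n) * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1])

f = lambda t: 1.0 / (1.0 + t)
err100 = abs(filon_linear(f, 100.0) - reference(f, 100.0))
err400 = abs(filon_linear(f, 400.0) - reference(f, 400.0))
```

Adding derivative data at the vertices, as in the bivariate schemes above, gains further powers of *ω* [9].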

An efficient computation of highly oscillatory integrals over simplexes using Filon-type and other methods is beyond the scope of this paper and a matter for future investigation. In this paper, we wish just to provide two examples, both of bivariate integrals, that demonstrate the feasibility of this approach, at least for some oscillators. In both cases, we consider a simplex with the vertices (0,0), (*t*,0) and (*t*,*t*), that is
First, consider and *g*(*x*)=*x*, a type I integral and let , to highlight the dependence upon *t* and *ω*. The behaviour of the integral for varying *t* and *ω* is highlighted in figure 9.

We consider two Filon-type schemes, both with polynomial bases. In the first case, henceforth denoted by , we use the linear bivariate basis {1,*x*_{1},*x*_{2}} to interpolate *f* at the vertices, whereas in the second, , the basis of bivariate cubics is used to interpolate to both *f* and its two directional derivatives at the vertices, as well as to the value of *f* at . In both cases, the integral (4.2) can be easily computed explicitly.

We display in figure 10 the error for both methods for fixed *t* and increasing *ω* in logarithmic scale—essentially, exhibiting the number of significant digits recovered by the numerical approximation. It is clear that, using just a small number of function values, we can recover the integral to a surprisingly high accuracy that increases with *ω*—everything is consistent with the theoretical estimates, and .

As an example of a type II integral, we consider (in the same simplex) the function *f*(*x*_{1},*x*_{2})=e^{x1−2x2} and the oscillator *g*(*x*)=*x*^{2}. We can use a polynomial basis, because the *moments* can be computed explicitly for all , e.g.
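The example display is missing here; a univariate building block for such moments (our illustration, not necessarily the one printed) is

```latex
\int_0^{t} x\, \mathrm{e}^{\mathrm{i}\omega x^{2}}\, \mathrm{d}x
\;=\; \frac{\mathrm{e}^{\mathrm{i}\omega t^{2}} - 1}{2\mathrm{i}\omega},
```

obtained at once because *x* e^{iωx²} has the explicit antiderivative e^{iωx²}/(2iω); higher moments follow by integration by parts.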
Our first example uses linear functions to interpolate *f* at the vertices and its first derivatives at (0,0)—we have five interpolation conditions and six elements in our basis and, to match conditions to degrees of freedom, we also interpolate at . Using similar notation to the previous example, we denote the exact integral and the outcome of this Filon-type method by *I*_{ω} and , respectively.

In our second example, we interpolate to *f* and its first two derivatives (six conditions altogether) at (0,0), *f* and its first derivatives at (*t*,0) and just *f* at (*t*,*t*). Thus, we have 10 conditions, matching the number of bivariate cubics. Alas, it is impossible to use a basis of cubics to interpolate to the above Hermite data. However, once we complement the cubic basis with , while adding an interpolation condition at , all is well and this yields .

Figure 11 depicts the number of significant digits in for *j*=1,2. (Unlike figure 10, there is little point in displaying the error for *t*=1/10 since then the integral is non-oscillatory.) As predicted by theory, is better. However, the slower decay for *ω*≫1 (compared with type I integrals and Filon-type methods—in the current setting we have and ) means that the accuracy is lower (although not by much), in comparison with figure 10. This emphasizes a broader point about our asymptotic expansions: once stationary points are present, we need more terms in the expansion (1.6) to attain the same accuracy—but also the approximation of each term requires more data and effort.

The purpose of this section is to emphasize the important point that the highly oscillatory integrals (2.10) (or indeed (2.13)) *can* be computed effectively by numerical approximation. We definitely do not seek here to discuss this issue at any great depth. Several important methodologies have emerged from modern theory of computational highly oscillatory quadrature—not just Filon-type [9,11] but also Levin-type [12] and stationary phase [13] methods, as well as complex-valued Gaussian quadrature [14]. An attractive option is to generalize the univariate approach of Olver [15], namely a Filon-type method without any need to calculate moments, to a multivariate setting. All this is a matter for further exploration.

## Footnotes

↵1 The definition of resonance points will be recalled in §4, where we address highly oscillatory quadrature over simplexes.

- Received July 19, 2013.
- Accepted October 8, 2013.

- © 2013 The Author(s) Published by the Royal Society. All rights reserved.