## Abstract

In this article, we present a method for factorizing *n*×*n* matrix Wiener–Hopf kernels where *n*>2 and the factors commute. We are motivated by a method proposed by Jones (Jones 1984*a Proc. R. Soc. A* **393**, 185–192) to tackle a narrower class of matrix kernels; however, no matrix of Jones' form has yet been found to arise in physical Wiener–Hopf models. In contrast, the technique proposed herein should find broad application. To illustrate the approach, we consider a 3×3 matrix kernel arising in a problem from elastostatics. While this kernel is not of Jones' form, we shall show how it can be factorized commutatively. We discuss the essential difference between our method and that of Jones and explain why our method is a generalization.

The majority of Wiener–Hopf kernels that occur in canonical diffraction problems are, however, strictly non-commutative. For 2×2 matrices, Abrahams has shown that one can overcome this difficulty using Padé approximants to rearrange a non-commutative kernel into a partial-commutative form; an approximate factorization can then be derived. By considering the dynamic analogue of Antipov's model, we show for the first time that Abrahams' Padé approximant method can also be employed within a 3×3 commutative matrix form.

## 1. Introduction

The Wiener–Hopf technique (Wiener & Hopf 1931) was invented to solve a certain integral equation—Milne's equation—occurring in connection with radiation and neutron transport problems. The technique has since then come to be recognized as one of the few methods that can determine exact solutions to two-part boundary value problems. As such, it has proved to be of great importance in solving a wide variety of problems in applied mathematics and engineering. While the method has acquired particular relevance in diffraction theory (being applied to acoustic (Abrahams & Wickham 1990), elastic (Norris & Achenbach 1984), electromagnetic and water wave phenomena), it has also found application to seemingly diverse fields such as fracture mechanics (Freund 1998), geophysics (Davis 1987) and financial mathematics (Fusai *et al*. 2006). One should also note that the field of application is much wider than purely two-part boundary value problems; many problems, when considered asymptotically, consist of ‘inner’ and ‘outer’ geometries, which often simplify to allow the application of the technique in an asymptotic matching scheme.

The principal idea of the method is that by applying Fourier transforms to the boundary value problem one derives the so-called Wiener–Hopf functional equation (1.1), where *α* is a complex variable, and a superscript +(−) denotes a function that is analytic in the upper (lower) half plane. The upper and lower half planes (henceforth denoted ^{+} and ^{−}) overlap in an infinite strip , as shown schematically in figure 1. The functions *K*(*α*), *K*^{−1}(*α*) and *C*(*α*) are all analytic in this strip. These analyticity properties, together with certain results from the theory of functions of a complex variable (see Noble (1988) for further details), ensure that the functional equation (1.1) may be solved for the previously unknown functions *Φ*^{+}, *Φ*^{−}. One may then determine the solution of the boundary value problem by inversion of the Fourier transform.

The key step in the Wiener–Hopf technique (which has also produced the greatest difficulties over the years) is the factorization of the Fourier transform of the kernel,¹ *K*(*α*). Wiener–Hopf factorization refers to the decomposition of *K*(*α*) into a product of two terms, *K*^{+}(*α*) and *K*^{−}(*α*), such that *K*(*α*)=*K*^{+}(*α*)*K*^{−}(*α*). For scalar kernels, the product factors can be expressed in terms of Cauchy-type integrals, namely (1.2), where *∪* (*∩*) denotes a contour running from −∞ to +∞ within the strip , which passes under (over) the pole at *ζ*=*α*. While there are often difficulties associated with the speed of the numerical evaluation of these integrals² and their convergence, it can be assumed that these integrals may be computed to determine the ± factors.
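The scalar split (1.2) is straightforward to verify numerically for a concrete kernel. The sketch below (Python with SciPy; the rational kernel *K*(*α*)=(*α*²+2)/(*α*²+1) is a hypothetical choice for illustration, not taken from this paper) evaluates the Cauchy integral of log *K* along the real line and checks the result against the plus factor (*α*+i√2)/(*α*+i), which can be read off by inspection since the kernel's zeros and poles split cleanly between the half planes.

```python
import numpy as np
from scipy.integrate import quad

def K(z):
    # hypothetical scalar kernel: analytic, non-zero on the real line, K -> 1 at infinity
    return (z**2 + 2) / (z**2 + 1)

def K_plus(alpha):
    """K+(alpha) for Im(alpha) > 0, via the Cauchy integral of log K as in (1.2)."""
    f = lambda t: np.log(K(t)) / (t - alpha)
    re, _ = quad(lambda t: f(t).real, -np.inf, np.inf, limit=400)
    im, _ = quad(lambda t: f(t).imag, -np.inf, np.inf, limit=400)
    return np.exp((re + 1j * im) / (2j * np.pi))

# K factorizes by inspection as (α+i√2)/(α+i) × (α−i√2)/(α−i);
# the quadrature reproduces the plus factor at the test point α = 2i
exact = (2j + 1j * np.sqrt(2)) / (2j + 1j)
assert abs(K_plus(2j) - exact) < 1e-6
```

Because log *K* decays algebraically at infinity here, the quadrature converges without difficulty; for slowly decaying kernels the convergence issues mentioned above become the practical obstacle.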

In more complex Wiener–Hopf models the situation is not so straightforward. Fourier transformation reduces the boundary value problem to a coupled system of Wiener–Hopf equations and consequently, the kernel is expressed as a *matrix* of functions of a complex variable, each element of the matrix being analytic in the strip . We also remark that the determinant of the matrix kernel is non-singular and analytic in . For an arbitrary matrix kernel the *existence* of the ± factors has been proved by Gohberg & Krein (1960); however, no method has yet been developed for *constructing* such a factorization. Nevertheless, for specific classes of 2×2 matrices, several approaches to Wiener–Hopf factorization have been suggested over the years. These include: an analytic continuation method by Rawlins (1975); the pole-removal method of Rawlins (1980), Abrahams (1987) and Idemen (1979); the Wiener–Hopf–Hilbert method of Hurd (1976) and Rawlins & Williams (1981); and the essentially equivalent commutative methods of Khrapkov (1971*a*,*b*) and Daniele (1978). It is the method of Khrapkov and Daniele (here and henceforth referred to as the Khrapkov–Daniele factorization) that we wish to focus our attention on in this paper. This method offers the most general algorithm for obtaining a commutative factorization, and therefore is the natural starting point for extending existing factorization methods. The importance of Khrapkov–Daniele's method has been further enhanced by the Wiener–Hopf approximant method of Abrahams (1997). Abrahams showed that many 2×2 matrices, which do not possess a commutative factorization and therefore were previously thought to be intractable, may be approximately placed within this construction (through the application of Padé approximants) and thence factorized to arbitrarily high accuracy.³

Following on from Abrahams' development, the aim of this paper is to extend the Khrapkov–Daniele factorization to obtain commutative factorizations for *n*×*n* matrices where *n*>2. While it must not be expected that a given higher order matrix kernel will have a commutative factorization, Abrahams' Padé approximant method justifies our approach, as we can now look to embed a difficult kernel (i.e. one which does not possess a commutative decomposition) in a relevant commutative form and factorize by an analytical-approximate procedure. The question is: what does a *relevant commutative form* look like? To the author's knowledge, Jones (1984*a*) was the first to examine the question of higher order commutative factorization, and he presented a natural extension of Khrapkov's form applicable to *n*×*n* matrices. However, it appears that no matrix of Jones' form has yet been obtained from a physical Wiener–Hopf model. Furthermore, apart from Jones' form, there are very few examples of *n*×*n* (*n*>2) matrix factorization techniques in the literature which are of direct use to workers in diffraction theory, or even to applied mathematicians in general! Perhaps this explains why there appear to be few examples of *n*×*n* (*n*>2) matrix Wiener–Hopf kernels in the diffraction theory literature. This paper is motivated by these difficulties. Extending Jones' ideas, we shall describe a broad class of *n*×*n* matrices which are commutatively factorizable and generalize the Khrapkov–Daniele form. Our aim will be to illustrate the method by combining physical examples with matrix theory, and as such we shall demonstrate that there are examples of matrix kernels which fit our general form. Throughout this paper, we will span a given kernel **K**(*α*) by a commutative *algebra* of entire matrices **J**_{i} such that (1.3). Here we follow Jones' original work by restricting our attention to the case where any one of the matrices **J**_{i} has distinct eigenvalues. In a following work (Veitch & Abrahams, submitted), we shall repeat the analysis when this restriction is removed.

The discussion will begin in §2 by reviewing the idea of commutative factorization and we shall briefly discuss the methods of Khrapkov–Daniele and Jones. Having introduced the work of these authors, in §3 we will show, for the first time, that a matrix obtained by Antipov (from methods developed by Willis (1971, 1972)) which does not fulfil Jones' criterion, can be factorized commutatively by our method. In §4, we move on to describe a general *n*×*n* matrix form, which encompasses these examples and we present a simple method for obtaining the factorization.

We close the paper in §5 by investigating a 3×3 matrix kernel, arising from the important model of elastic wave scattering by a crack between two dissimilar materials. While this kernel does not appear to have a commutative factorization, we shall show that we can place the kernel into a pseudo-commutative form, which may then be factorized approximately using Padé approximants.⁴ Thus, we extend the applicability of the approximate factorization method of Abrahams (1997) to a 3×3 kernel.

## 2. Review of commutative factorization

In this section, we present a review of the factorization methods of Khrapkov–Daniele and Jones. The authors feel that it is necessary to present these details so that a reader unfamiliar with this topic may gain a better understanding of the matrix factorization literature, and can see how the different methods interrelate.

The idea of commutative factorization was first proposed by Heins (1950). He suggested using the exponential function as a matrix operator to reduce the Wiener–Hopf product to a matrix sum split. Thence, given a matrix kernel **K**, we determine a matrix **L** such that (2.1), where exp(**L**) is the exponential operator acting on a matrix **L** and is defined by (2.2). Now, we express each element *l*_{ij} of **L** as a sum of two functions analytic in the (overlapping) upper and lower half planes, respectively, and thus obtain matrices **L**^{±} such that (2.3). If the matrices **L**^{+}, **L**^{−} commute, we can identify the Wiener–Hopf factors by the following result.

*The matrix factors* **K**^{±} *of* (2.1) *are given by* (2.4) *if* (*and only if*) **L**^{±} *commute*.

For 2×2 matrices, significant progress on the question of what form **K** must take in order to have commutative factors was made in the 1970s by Khrapkov (1971*a*,*b*) and Daniele (1978).
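Heins' reduction hinges on the identity exp(**L**^{+}+**L**^{−})=exp(**L**^{+})exp(**L**^{−}), which holds precisely when the two matrices commute. A minimal numerical illustration (Python with SciPy; the matrices are arbitrary stand-ins chosen for the demonstration, not drawn from any kernel in this paper):

```python
import numpy as np
from scipy.linalg import expm

# a commuting split: powers of a single matrix always commute
M = np.array([[0.1, 0.2],
              [0.3, 0.4]])
Lp, Lm = M, M @ M                     # stand-ins for L+ and L-
assert np.allclose(expm(Lp + Lm), expm(Lp) @ expm(Lm))

# without commutativity the identity exp(L+ + L-) = exp(L+) exp(L-) fails
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(expm(A + B), expm(A) @ expm(B))
```

The second pair shows why the commutativity hypothesis in the result above cannot be dropped: for generic non-commuting splits, the product of exponentials differs from the exponential of the sum.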

### (a) Khrapkov–Daniele factorization

The two papers of Khrapkov (1971*a*,*b*) considered the static stress fields induced by notches in elastic wedges, and examined 2×2 matrix Wiener–Hopf kernels of the form (2.5), where **J**(*α*) is an *entire* matrix such that (2.6). In (2.5), *k*_{0}(*α*) and *k*_{1}(*α*) are arbitrary functions of a complex variable, analytic in the strip with algebraic growth at infinity, and *Δ*^{2} is a *polynomial* in *α*. The reader should note that the growth of *Δ*^{2} must have a particular bound at infinity to ensure algebraic behaviour of the plus–minus factors at infinity; see Abrahams (1998) for further details. To be more precise, suppose (2.7); then if *p* is greater than some constant (2 for the 2×2 case), the plus–minus factors will have exponential growth at infinity and therefore the factorization fails.⁵ However, in this paper, we shall ignore this technical issue and suppose that the growth of the polynomial *Δ*^{2} ensures algebraic growth of the plus–minus factors as |*α*|→∞. With this provision clarified, the factorization methods of Khrapkov and Daniele are virtually identical. The idea is analogous to complex numbers; we write the kernel (2.5) in the form (2.8). Clearly (2.9). Owing to the kernel being expressible in the form (2.8), the matrix factors are commutative and so we can write (2.10), where *r*^{±}(*α*) and *θ*^{±}(*α*) can be found by solving the decoupled scalar equations (2.11). We see that the Khrapkov method decouples the Wiener–Hopf matrix equation into a scalar sum and a scalar product decomposition. The reader will note that many diffraction problems give rise to matrix kernels of the type (2.5); see, for example, Rawlins (1975, 1997) and Hurd & Lüneburg (1981).

### (b) Jones' factorization

Jones generalized the Khrapkov–Daniele class by investigating *n*×*n* matrices of the form (2.12), where the *k*_{i} are arbitrary scalar functions of a complex variable *α*, analytic in a Wiener–Hopf strip with algebraic growth at infinity. In this case, **J** is an *entire* matrix with polynomial elements such that (2.13). The closure property (2.13) identifies (2.12) as a generalization of Khrapkov's form, and as in the Khrapkov case *Δ*^{n} must be a polynomial in *α* with a bound on its growth at infinity to ensure algebraic growth of the matrix factors at infinity. Furthermore, (2.13) together with the Cayley–Hamilton theorem implies that the *n* eigenvalues of **J**, which we denote by *λ*_{i}, satisfy *λ*_{i}=*ω*^{i}*Δ*, where 0≤*i*≤*n*−1 and *ω*^{n}=1 (excluding the possibility *ω*=1). We note that the matrices {**I**, **J**, …, **J**^{n−1}} form a basis or spanning set for **K**. This is the fundamental point of this paper: we wish to investigate which classes of basis sets (and which closure properties such as (2.13)) give rise to commutatively factorizable kernels.

We shall not review Jones' method in detail here but will simply convey the essential points; for further insights on Jones' factorization method, see Rawlins (1993). Jones re-expresses the kernel in the form (2.14), where the **B**_{i} have the orthogonality property (2.15). The matrices **B**_{i} are a linear combination of {**I**, **J**, …, **J**^{n−1}} (i.e. constitute a change of basis) and are given by (2.16). To show that the **B**_{i} defined by (2.16) have the property (2.15), we note that (2.17) and hence (2.18). Therefore (2.19), and (2.15) now follows by considering different values of *i*, *j*, since (2.20). Jones used the above orthogonality relations to simplify calculations involving the exponential function; his main result was the following theorem.
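Although (2.16) is not reproduced here, for a matrix with distinct eigenvalues the matrices satisfying an orthogonality property of the type (2.15) are the familiar spectral projectors, which can be built by Lagrange interpolation on the eigenvalues. The following sketch (Python; the matrix **J** is an arbitrary illustrative choice, not Jones' matrix) verifies the orthogonality and shows why such a basis simplifies exponentials: any function of **J** becomes a scalar combination of the projectors.

```python
import numpy as np

# an illustrative matrix with distinct eigenvalues (2, 3 and 5)
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
lam = np.linalg.eigvals(J)
n = len(lam)
I = np.eye(n)

def projector(i):
    """Lagrange-interpolation form of the spectral projector for lam[i]."""
    B = I.copy()
    for j in range(n):
        if j != i:
            B = B @ (J - lam[j] * I) / (lam[i] - lam[j])
    return B

Bs = [projector(i) for i in range(n)]

# orthogonality in the sense of (2.15): B_i B_j = 0 for i != j, B_i^2 = B_i
for i in range(n):
    for j in range(n):
        expected = Bs[i] if i == j else np.zeros((n, n))
        assert np.allclose(Bs[i] @ Bs[j], expected)

# the projectors resolve the identity and reconstruct J from its spectrum,
# which is what makes exponentials easy to compute in a scheme of this kind
assert np.allclose(sum(Bs), I)
assert np.allclose(sum(l * B for l, B in zip(lam, Bs)), J)
```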

*An n*×*n matrix kernel* **K** *of the form* (2.12) *with the condition* (2.13) *may be expressed as* (2.21), *where the* **B**_{i} *are orthogonal matrices in the sense of* (2.15) *for* 0≤*i*≤*n*−1 *and* (2.22). *The matrix product factorization is then given by* (2.23), *where*  *represents the Cauchy sum split of the functions l*_{i}*, which are given by* (2.24).

While Jones' matrix (2.12) constitutes a reasonably general class, to date no matrix of Jones' form (for *n*>2) has been found for a physical Wiener–Hopf problem! This is most probably owing to the fact that *Δ*^{n} must be a polynomial in *α*. In the trivial case where *Δ* is a constant, or is itself a polynomial, this is fine, except that these cases are perhaps too simple to occur in a practical physical problem; **J** must necessarily be complicated, as it must embed within its structure the inherent coupling present within the physical model from whence it arose. In any case, the choice of polynomial for *Δ* is *severely* limited by the bound on the size of the polynomial *Δ*^{n} as |*α*|→∞.⁶ Otherwise, *Δ* must be a branch cut function with *n* different branches, and if *n*≥3 this is non-physical, since Laplace's equation, the Helmholtz equation and other physical governing equations such as Navier's equation for linear elasticity only lead to doubly branched functions.

## 3. A physical example of a commutative factorization for a 3×3 matrix

In this section, we wish to illustrate that 3×3 matrix kernels having commutative factorizations do actually arise in physical models. We show that a kernel obtained recently by Antipov (1999), developed from work by Willis (1971, 1972), can be factorized simply by a commutative method. This fact was not realized before because the kernel does not fit the constraints of Jones' form.

Consider an elastic body consisting of two distinct isotropic half spaces *S*_{1}={−∞<*x*<∞, 0<*y*<∞, −∞<*z*<∞} and *S*_{2}={−∞<*x*<∞, −∞<*y*<0, −∞<*z*<∞}. These half spaces are welded along {*y*=0, *x*<0, −∞<*z*<∞}, and the surfaces of both half spaces on *y*=0, *x*>0 are traction free. In the region *S*_{1}, the solid has Poisson ratio *ν*_{1} and shear modulus *μ*_{1}, and similarly *S*_{2} has Poisson ratio *ν*_{2} and shear modulus *μ*_{2}. The body undergoes a static deformation owing to some constant loading. A Wiener–Hopf type formulation may be employed owing to the semi-infinite nature of the boundary condition, i.e. the semi-infinite crack in the half plane {0<*x*<∞, *y*=0, −∞<*z*<∞}. Note that the *welded* condition on −∞<*x*<0, *y*=0 requires that the components of traction and displacement are continuous. One then considers double Fourier transforms (in *x* and *z*) of the tractions on the crack faces and of the jump in displacement, and these can be related to the conditions of continuity for *x*<0 to give a matrix Wiener–Hopf equation with kernel⁷ (3.1). Here, *α* and *β* are the transform parameters in the *x* and *z* directions, respectively (*β* is henceforth treated as a parameter), we have the branch cut function  and (3.2). Antipov's factorization scheme depended on deriving rational matrices **P**^{+} and **Q**^{−}, analytic in the upper and the lower half planes, respectively, such that (3.3), where **C** is in block Khrapkov–Daniele form (3.4), where *k* is a scalar function of *α*,  is a Khrapkov–Daniele matrix and . However, Antipov's method for determining the matrices **P**^{+} and **Q**^{−} was ad hoc and rather difficult and lengthy to construct. It is more straightforward, instead, to express **K**(*α*) in the form (3.5), where (3.6). With this basis set, we find that the singular matrix **J** satisfies (3.7).

We note that the closure condition (3.7) is *different* from that used by Jones (2.13), and our factorization procedure will reflect this difference. More to the point, only *doubly* branched functions are present in the characteristic polynomial of **J**, and, as we stated above, it is this that makes this factorization form possible. We thus pose the product factors as (3.8), and multiplying these gives (3.9). Comparing coefficients of **J**^{0} with (3.5) yields the relation (3.10), and so *a*^{±} may be found via a usual scalar product decomposition. The remaining equations may be written (3.11) and (3.12). Writing the unknown scalar functions as (3.13) and (3.14) reduces (3.11) and (3.12) to the Khrapkov–Daniele form. Hence, we find (3.15) and (3.16). We can now determine *r*^{±} and *ϕ*^{±} by Cauchy integrals, and from these the functions *b*^{±}, *c*^{±}. Therefore, we have fully determined the matrix factors **K**^{±}. Since the kernel **K**(*α*) has more similarity to Khrapkov–Daniele matrices than to Jones' form, we shall term matrices of the form (3.5), subject to the constraint (3.7), the ‘Khrapkov-3’ class.
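The commutativity assumed in posing the factors (3.8) costs nothing: any two polynomials in the same matrix **J** commute, so factors built on the basis {**I**, **J**, **J**²} commute automatically, and comparing coefficients in (3.9) is legitimate once higher powers of **J** are reduced by the Cayley–Hamilton theorem. A minimal check (Python; this **J** is a random stand-in for illustration, not Antipov's matrix, and the coefficients are arbitrary numbers rather than the functions *a*^{±}, *b*^{±}, *c*^{±}):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 3))       # stand-in for the matrix J of (3.6)
I = np.eye(3)

# factors posed as in (3.8): combinations a I + b J + c J^2
Kp = 0.9 * I + 0.4 * J + 0.1 * (J @ J)
Km = 1.2 * I - 0.3 * J + 0.5 * (J @ J)

# polynomials in the same matrix always commute, whatever J is
assert np.allclose(Kp @ Km, Km @ Kp)

# the product involves J^3 and J^4, which the Cayley-Hamilton theorem
# reduces back into span{I, J, J^2}; that reduction is what permits the
# coefficient comparison carried out in (3.9)
```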

## 4. A general commutative form with distinct eigenvalues

Jones (1984*a*) also considered more general *n*×*n* matrix kernels given by (4.1), where the entire matrices **J**_{s} commute for each 1≤*s*≤*m*. Following Jones, we shall assume that any one matrix **J**∈{**J**_{1}, **J**_{2}, …, **J**_{m}} has distinct eigenvalues, say *λ*_{1}, *λ*_{2}, …, *λ*_{n}. Since there are *n* distinct eigenvalues, **J** is diagonalizable, and hence there exists an invertible similarity matrix **P** such that (4.2). We now look to find what form the other **J**_{s} must take in order to commute with **J**. First, we require the following lemma.

*Two matrices* **A**, **B** *commute if, and only if, the matrices* **P**^{−1}**AP** *and* **P**^{−1}**BP** *also commute*.

Aided by this simple lemma, we now realize that **P**^{−1}**J**_{s}**P** necessarily commutes with the diagonal matrix (4.2), and it follows that each **P**^{−1}**J**_{s}**P** is also diagonal. Suppose (4.3); then we can write (4.4). This is equivalent to the linear system (4.5), where (4.6), **b**=(*b*_{0}, *b*_{1}, *b*_{2}, …, *b*_{n−1})^{T} and **a**=(*a*_{0}, *a*_{1}, *a*_{2}, …, *a*_{n−1})^{T}. The determinant of **M**_{λ} can be shown to be given by (4.7), and |**M**_{λ}|≠0, since the eigenvalues are distinct; therefore, the system (4.5) has a unique solution. Equation (4.3), therefore, implies that every matrix **P**^{−1}**J**_{s}**P** is spanned purely by powers of **Λ**. Thus, there exists a matrix **P** and arbitrary functions (*k*_{i} for *i*=0, 1, …, *n*−1) analytic in the Wiener–Hopf strip such that (4.8). Therefore, if any matrix **J**∈{**J**_{1}, **J**_{2}, …, **J**_{m}} has distinct eigenvalues, our general commutative kernel takes the form (4.9), where **J** satisfies the matrix polynomial (4.10). This is the extension of the Khrapkov–Daniele form that we shall propose in this paper, and we see that the example described in §3 is a special case of this form, when *n*=3.
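The matrix **M**_{λ} of (4.6) is a Vandermonde matrix in the eigenvalues, so (4.7) is the classical Vandermonde determinant ∏_{i<j}(*λ*_{j}−*λ*_{i}), manifestly non-zero when the eigenvalues are distinct. A numerical sketch of the solvability of (4.5) (Python; the eigenvalues and diagonal entries are illustrative numbers only, not quantities from the paper):

```python
import numpy as np

lam = np.array([2.0, 3.0, 5.0])       # illustrative distinct eigenvalues of the diagonal matrix
mu = np.array([1.0, -4.0, 7.0])       # diagonal of a matrix that commutes with it

n = len(lam)

# M_lambda of (4.6) is the Vandermonde matrix M[i, k] = lam_i**k
M = np.vander(lam, n, increasing=True)

# (4.7): the Vandermonde determinant prod_{i<j} (lam_j - lam_i) = (3-2)(5-2)(5-3) = 6
vdm = np.prod([lam[j] - lam[i] for i in range(n) for j in range(i + 1, n)])
assert abs(np.linalg.det(M) - vdm) < 1e-9

# distinct eigenvalues => M is invertible, so (4.5) fixes the coefficients uniquely
b = np.linalg.solve(M, mu)
Lam = np.diag(lam)
D = sum(b[k] * np.linalg.matrix_power(Lam, k) for k in range(n))
assert np.allclose(D, np.diag(mu))    # diag(mu) = sum_k b_k Lam^k, cf. (4.3)-(4.4)
```

This is exactly the step that shows every diagonal matrix commuting with (4.2) is a polynomial in it, and hence that the whole commuting family is spanned by powers of one matrix.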

The reader will note that the above reasoning closely resembles remarks made by Jones (1984*a*). However, the authors believe that a reappraisal of this analysis is justified, since Jones moves on to assert that *if a Wiener–Hopf matrix* **K** *has commutative factors with distinct eigenvalues then* (4.9) *is necessarily of the form* (2.12) *with the condition* (2.13)! We do not believe that this statement is always true, and we now address the question of why commutative factorizations which are not of Jones' form exist. Jones' statement appears correct in the sense that one could equally well have spanned the diagonal matrix in (4.3) by the set in which we choose (4.11), and consequently one may express **K** in the form (4.12). However, it is fundamental to note that the diagonal forms for **P**^{−1}**KP** will only yield a factorizable commutative class if each **PΛ**^{i}**P**^{−1}=**J**^{i} is *entire*. This is by no means certain; some choices of **Λ** may produce entire matrices while others do not! It is important to remember that **P** will not, generally, be a simple rational or entire matrix, since it contains the eigenvectors of **K** and thence will almost certainly possess much of the branch cut structure of **K**. Thus, **PΛP**^{−1} will only be an entire function if there exists a choice for the *λ*_{i} that satisfies the constraints of commutativity while also knocking out the inherent branch cut structure present in **P** and **P**^{−1}. Thus, the set {**I**, **Λ**, …, **Λ**^{n−1}} must be chosen so that

(i) it spans the eigenvalues of **K**, as in (4.8), and

(ii) it relates to the eigenvectors of **K** in such a way that **PΛP**^{−1} is entire.

Furthermore, there is another condition imposed on the eigenvalues *λ*_{i}: every power **J**^{i} must not only be entire but also bounded. This condition arises because the factorization will involve exponentials of powers of **J**. Referring to the closure condition (4.10), we see that this amounts to (4.13), where each *p*_{i} is a polynomial in *α*. Alternatively, (4.13) is equivalent to being able to factorize the characteristic equation of **J** in the form (4.14), where each *μ*_{i} is a polynomial and *n*=*n*_{1}+*n*_{2}+⋯+*n*_{p}. That is, if some *λ*_{i} in (4.14) is a branch cut function, then all the other branches of *λ*_{i} must be present in (4.14) as well.

*Evidently, the Khrapkov-3 matrix and Jones' form are special cases of* (4.14). These three conditions must hold in order to have a commutative factorization, which suggests that such factorizations are rather special.

Let us illustrate these points, and in particular why (4.12) is not applicable for the kernel of Antipov (3.1). Here, we had (4.15), which is diagonalizable into the form (4.16), where (4.17). We cannot put this into Jones' form, contrary to the argument of his paper, for if we looked to span a matrix **J**_{jon} in the form (4.18), where **J**_{jon} has the eigenvalues *Δ*, *ωΔ*, *ω*^{2}*Δ*, applying a similarity transformation via the matrix **P** would yield (4.19). Thus (4.20), and therefore even with the choice *Δ*=1 the functions *a*_{i} are not entire, owing to the presence of the branch cut function *γ*. Hence, **J**_{jon} cannot be an entire matrix!

### (a) The Wiener–Hopf factorization

We now investigate the factorization of the general matrix form (4.9). To recap, we are given an *n*×*n* matrix kernel **K** such that (4.21), where **J** is an entire matrix (with polynomial elements) with distinct eigenvalues *λ*_{1}, *λ*_{2}, …, *λ*_{n} such that the condition (4.14) holds. From the assumptions on the eigenvalues of **J**, it follows that **J** is diagonalizable, so there exists an invertible matrix **P** such that (4.22) and (4.23). In order to factorize this class of matrices, we look to express them in the form (4.24), where (4.25). Therefore (4.26). Comparing the diagonal elements of (4.23) and (4.26) gives the linear system (4.27), where **M**_{λ} is given by (4.6), **l**=(*l*_{0}, *l*_{1}, …, *l*_{n−1})^{T}, **k**=(*k*_{0}, *k*_{1}, …, *k*_{n−1})^{T} and exp(**v**) represents a vector whose elements are the exponentials of the elements of **v**. Since the *λ*_{i} are all distinct, (4.28), and therefore we can uniquely determine the unknown *l*_{i} by inverting the linear system (4.27). The required functions may now be determined from a sum split of the *l*_{i} by Cauchy integrals or otherwise. Consequently, the commutative factorization is determined by Heins' lemma, and this gives us the matrices (4.29).
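The content of (4.27) can be tested numerically at a single frozen value of *α*: the eigenvalues of **K**=Σ *k*_{i}**J**^{i} are **M**_{λ}**k**, those of exp(Σ *l*_{i}**J**^{i}) are exp(**M**_{λ}**l**), so **l**=**M**_{λ}^{−1} log(**M**_{λ}**k**). A sketch (Python with SciPy; **J** and the *k*_{i} are illustrative stand-ins evaluated at one point, so the subsequent Cauchy sum split of the *l*_{i} is not attempted here):

```python
import numpy as np
from scipy.linalg import expm

# illustrative entire J with distinct eigenvalues, frozen at one value of alpha
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
k = np.array([0.5, 0.2, 0.05])        # stand-ins for k_0(alpha), k_1(alpha), k_2(alpha)
n = len(k)

K = sum(k[i] * np.linalg.matrix_power(J, i) for i in range(n))   # K of (4.21)

lam = np.linalg.eigvals(J)
M = np.vander(lam, n, increasing=True)      # M_lambda of (4.6)
kappa = M @ k                               # the eigenvalues of K
l = np.linalg.solve(M, np.log(kappa))       # invert (4.27): exp(M l) = M k

L = sum(l[i] * np.linalg.matrix_power(J, i) for i in range(n))
assert np.allclose(expm(L), K)        # K = exp(sum l_i J^i), ready for the sum split
```

In the full problem the *l*_{i} are functions of *α* and their Cauchy sum split supplies the *l*_{i}^{±}; the computation above checks only the pointwise exponential representation.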

It remains to check that (4.29) is indeed analytic in the required half plane. To see this, we first note that expanding each term of this product by its power series will yield a series of powers of **J** of the form (4.30). To show that the series (4.30) is indeed analytic in the required half plane, we employ the following lemma (see Titchmarsh 1939).

*If each member of a sequence of functions u*_{i}(*z*) *is analytic in a region D and the series* (4.31) *is uniformly convergent throughout every region D*′⊂*D, then* (4.31) *is analytic in D*.

It is clear that each term in the series (4.30) is analytic in ^{±} (each power **J**^{i} is an entire matrix by virtue of (4.13) or (4.14)). However, we must show that each entry in the *n*×*n* matrix power series (4.30) is uniformly convergent throughout every subdomain *D*⊂^{±}. From the analyticity of these functions, we may determine real numbers *X*(*D*) and *Y*(*D*) such that (4.32) holds for all *α*∈*D*, and each entry in the matrix **J** is uniformly bounded in *D*; thus |(**J**)_{ij}|≤*Y*(*D*) for all 1≤*i*, *j*≤*n*. Therefore (4.33), which gives , and consequently we see that (4.34), which is convergent throughout *D*. Thence the series (4.30) is uniformly convergent in any domain *D*⊂^{±} and therefore is analytic in ^{±}. It follows that (4.29) has the required analyticity properties, since it is a finite product of functions analytic in ^{±}.

An important issue which we have not fully addressed in this paper is that the ± factors must have at most algebraic growth at infinity, else the factorization fails. As we saw for the Khrapkov–Daniele factorization, this will produce further restrictions on the polynomials *p*_{i} or *μ*_{i}, defined in equations (4.13) and (4.14), respectively. It is, therefore, essential when performing any factorization to ensure that the growth of the factors is indeed algebraic. However, as mentioned earlier, if this constraint fails it may still be possible to employ Padé approximants, as in Abrahams (1998), to complete a factorization of the kernel. Provided the matrix factors have the correct growth at infinity, explicit formulae may be found for the coefficients in **K**^{±} by noting that if (4.35), then repeating the above analysis we find that the unknown functions are related to the  through (4.36), where the subscript *i* denotes the *i*th component of the vector .
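For intuition on why Padé approximants are so useful in this setting: they replace a troublesome function by a rational one whose poles and zeros mimic its branch-cut behaviour, and, unlike the truncated Taylor series from which they are built, they remain accurate well outside the circle of convergence. A generic illustration with √(1+*x*) (Python, using SciPy's `pade` helper; the function and the order [6/6] are arbitrary choices for the demonstration, unrelated to the kernels above):

```python
import numpy as np
from scipy.interpolate import pade
from scipy.special import binom

# Taylor coefficients of sqrt(1 + x) about x = 0 (generalized binomial series)
N = 6
an = [binom(0.5, kk) for kk in range(2 * N + 1)]

p, q = pade(an, N)                    # the [N/N] Pade approximant, a rational function

# accurate inside the circle of convergence |x| < 1 of the series...
assert abs(p(0.9) / q(0.9) - np.sqrt(1.9)) < 1e-6
# ...and still accurate well outside it, where the Taylor polynomial fails
assert abs(p(3.0) / q(3.0) - 2.0) < 1e-3
```

Replacing a doubly branched function in a kernel by such a rational approximant is what converts an intractable matrix into one of (approximately) commutative form.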

We note that the commutative matrix kernel defined in (4.9) with the closure condition (4.10) appears to be very general, the Antipov matrix and Jones' form being particular cases. Furthermore, 2×2 matrices of the form (4.37), where **J** satisfies either the Khrapkov closure condition (4.38) or (see Jones 1984*b*) the condition (4.39), will fit our general form. The factorizations of matrices with conditions (4.38) and (4.39), as described by these authors, can easily be shown to be equivalent to steps (4.24)–(4.36).

## 5. A non-commutative 3×3 matrix Wiener–Hopf problem

Abrahams (2002) considered the problem of the scattering of waves by a semi-infinite crack in an interface between two dissimilar elastic materials, which is the *elastodynamic analogue* of that discussed in §3. The model consists of two elastic half spaces: *S*_{2}={−∞<*x*<∞, *y*>0, −∞<*z*<∞} and *S*_{1}={−∞<*x*<∞, *y*<0, −∞<*z*<∞}. The half space *S*_{2} has one set of material properties denoted by a suffix 2, and the material parameters of *S*_{1} are denoted by a suffix 1. The scattered displacement field within the isotropic elastic half space *S*_{i} is denoted **u**_{i} and satisfies Navier's equation(5.1)where *λ*_{i}, *μ*_{i} are the Lamé constants, *ρ*_{i} is the material density and the subscript *t* denotes differentiation with respect to time. We assume that the crack is situated on the half plane *y*=0, *x*>0, −∞<*z*<∞, and we stipulate that the scattered stresses on the top and bottom surfaces of the crack are specified. In addition, we shall also assume along *y*=0, for *x*<0, −∞<*z*<∞ the material is perfectly bonded, so that(5.2)as well as continuity of scattered normal and tangential stresses (*σ*_{yy}, *σ*_{xy}, *σ*_{zy}). The above boundary conditions imply that this problem has a Wiener–Hopf formulation. In the elastostatic case discussed by Antipov, the model gave rise to the kernel (3.1) which we factorized in §3, but the dynamic situation is somewhat more difficult.

We consider the Fourier transform (with respect to *x* and *z*) of the displacement **u**_{i} in the half space *S*_{i} and denote this as **U**_{i}, where (5.3). Similarly, we define the Fourier transform of the stresses by (5.4). As in §3, we treat *β* as a passive parameter and work in the complex *α* plane. First, let [*U*](*α*, *β*) denote the Fourier transform of the jump in the displacement field across the interface *y*=0; thus (5.5). Since the material is perfectly bonded in *x*<0 (5.2), we see that (5.6). Similarly, since the stress is specified along the crack faces, we may write (5.7). In (5.6) and (5.7), the superscripts + and − denote analyticity in the upper and lower half of the *α* plane, respectively. Thence, we can now define a Wiener–Hopf equation (in the *α* plane) for the unknown vectors **Σ**^{−} and **U**^{+}, for which **K** is the kernel; the term **F**^{+} is a known function, determined from the Fourier transform of the incident potential on *y*=0, which provides the *forcing* for the Wiener–Hopf equation. To derive the Wiener–Hopf equation, we relate the jump in the Fourier transform of the displacements across *y*=0 to the Fourier transform of the stresses on *y*=0. Abrahams (2002) has shown that these are related by the matrix equation (5.8), where (5.9) and (5.10), with *p*(*α*) given below in (5.15). Further (5.11), where *k*_{c,i} and *k*_{s,i} are the compressional and the shear wavenumbers, respectively, in medium *S*_{i}, and *R*_{i} is the Rayleigh function given by (5.12).

The solution of this Wiener–Hopf equation, and thence of the original boundary value problem, is the focus of a forthcoming paper (Abrahams *et al*. submitted). In that paper, the authors require the factorization of the kernel **K**; it is the objective here to construct this factorization. In Abrahams (2002), the suggestion is to factorize the matrix in equation (5.9) by considering each of the three factors on the right-hand side of this equation in turn. The reason for doing this is that the middle factor is in block diagonal form, consisting of a 2×2 matrix together with a scalar. Consequently, the 2×2 matrix **N**_{2}−**N**_{1} may be placed in a partial commutative Khrapkov form and factorized approximately. However, the left-hand factor of (5.9) and the inverse of the right-hand factor are not analytic in the required half planes; therefore, this approach does not produce a true factorization. Here, we shall adopt a different methodology: we multiply out the three factors of (5.9) to derive the 3×3 kernel (5.13), where (5.14), (5.15).

We shall now look to factorize **K** by the methods we have discussed in the previous sections. The critical issue is: given a Wiener–Hopf kernel **K**(*α*), how does one determine the matrices **J**^{i}? The theory suggests the possibility of using diagonalization to determine the matrices **J**, for suppose we are given a matrix kernel **K**; we then determine the eigenvalues and eigenvectors of **K**, and if **K** is diagonalizable there exists an invertible matrix **P** such that (5.16), where *κ*_{i} are the eigenvalues of **K**. Our method then requires one to find a set of diagonal matrices (hence commutative) which form a basis set of the right-hand side of (5.16). We have seen many choices for this basis set (Jones form, or Khrapkov-3 matrices), but these were all particular cases of the set **J**={**I**, **Λ**, …, **Λ**^{n−1}}, where **Λ**=diag(*λ*_{1}, *λ*_{2}, …, *λ*_{n}), and therefore this seems to be the most general available. Therefore, our matrix **J** is determined by (5.17). However, this procedure does not guarantee that **J** is entire with polynomial elements; we must now look for a choice of *λ*_{i} so that this condition is met (such a choice will only exist if the *λ*_{i} have a branch cut structure which eliminates the cuts in **P**, **P**^{−1}). Furthermore, we have seen that the *λ*_{i} must be chosen so that any power of **J**^{i} is entire. These two conditions severely restrict the choice of *λ*_{i}, and we would therefore expect that in general this approach will not be successful, i.e. there is very often no choice of *λ*_{i} which yields a commutative class! If this is the case, we need to fit the matrix into an approximate commutative form. However, it is very difficult to see how to form an approximate factorization using an algorithmic diagonalization approach. Although **J** no longer has to be entire, it must still satisfy a closure condition (such as **J**^{3}=*Δ*^{2}**J**, where *Δ*^{2} is a polynomial) to ensure that higher powers of **J** are entire. Furthermore, the form of **J** must be chosen so that the non-entire elements of **J** can be uniformly approximated throughout the Wiener–Hopf strip, for example by recourse to Padé approximants as posed by Abrahams (1997, 2000). In the experience of the authors (Veitch 2005; Veitch & Abrahams submitted), it is often impossible to meet these requirements by simply looking for a basis for the eigenvalues *κ*_{i}, owing to the complexity of the eigenvectors of **K**.

Needless to say, the correct matrix algebra will be the one that precisely relates to the symmetries in the physical model; therefore, it is probably always best to separate a matrix kernel into the parts that seem simplest and most natural, or, to be more precise, are nearly commutative. In practice, this means pulling the most complicated functions (i.e. those involving difficult mixtures of branch cut functions) out of the matrices to derive a set of simpler matrices which span **K**. Evidently, such an approach is rather ad hoc and unlikely always to yield commutative matrices, but at least by this method the essential physical symmetries will be manifest in the algebra of the matrices. It is also worth stating that in many problems it is necessary to pre- and post-multiply a matrix kernel by matrices analytic in the upper/lower half planes, respectively, before a commutative form is reached. In other words, (5.18), where **L** is in a suitable form. A diagonalization algorithm cannot determine **P**^{+}, **Q**^{−}, but it often transpires that, by splitting a matrix into the parts which seem most *natural*, one can identify a suitable **P**^{+}, **Q**^{−}.
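The pointwise content of the diagonalization route, equations (5.16) and (5.17), can be illustrated numerically. The following sketch uses a toy 3×3 matrix with distinct eigenvalues (not the kernel **K** of (5.13)): it builds the spanning set {**I**, **Λ**, **Λ**^{2}} from a choice of distinct *λ*_{i}, and verifies that the set is commutative and contains the sample kernel in its span. The analytic difficulty stressed in the text, namely choosing the *λ*_{i} so that the resulting matrices are entire, is of course invisible in such a pointwise computation.

```python
import numpy as np

# Toy instance of (5.16)-(5.17): diagonalize a sample kernel
# K = P diag(kappa_i) P^{-1}, then build the commutative basis
# {I, Lambda, Lambda^2} with Lambda = P diag(lambda_i) P^{-1}.
# The lambda_i below are an arbitrary distinct choice.

K = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

kappa, P = np.linalg.eig(K)        # eigenvalues/eigenvectors of K
lam = np.array([1.0, 2.0, 4.0])    # distinct lambda_i (illustrative)
Pinv = np.linalg.inv(P)

# Spanning set J^i = P Lambda^i P^{-1}, i = 0, 1, 2
J = [P @ np.diag(lam**i) @ Pinv for i in range(3)]

# Every pair of spanning matrices commutes, and each commutes with K
for A in J:
    for B in J:
        assert np.allclose(A @ B, B @ A)
    assert np.allclose(A @ K, K @ A)

# K lies in the span: solve the Vandermonde system
# kappa_j = sum_i c_i * lam_j**i for the scalar coefficients c_i
V = np.vander(lam, 3, increasing=True)
c = np.linalg.solve(V, kappa)
assert np.allclose(sum(ci * Ji for ci, Ji in zip(c, J)), K)
```

The Vandermonde solve at the end is the pointwise analogue of (5.17); distinctness of the *λ*_{i} is exactly what keeps that system invertible.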

Fortunately, pre- and post-multiplication are not required in the present example, and by inspection we see that the kernel of (5.13) may be expressed as (5.19), where (5.20) and (5.21), with (5.22). If *x*=1, and we define **Q**_{x}|_{x=1}=**Q**, then the matrices **Q**, **P**, **R** commute. In particular, the commutative algebra8 is summarized as follows: (5.23).

Using this algebra, we can now place the matrix (5.19) into an *approximate* commutative form. To begin, we let (5.24), where and so (5.25). We note that *x* is an even function of *α*; also (5.26). We may now introduce the two-point [*N*/*N*] Padé approximant (see Abrahams (2000), or, for further details concerning Padé approximants, Baker (1975) or Baker & Graves-Morris (1981)) to the function *f*, and denote this by *f*_{N}(*α*)9. Hence, we replace each occurrence of *f* in **R**_{f} by *f*_{N} to achieve the matrix (5.27). The matrix **R**_{N} now satisfies the commutative algebra defined by the set of equations (5.23), where each occurrence of **R** is replaced by the matrix **R**_{N}. One will also note that (5.28), where (5.29). Our kernel may thus be approximated by the matrix (5.30). Now, if the approximate matrix kernel were instead spanned by the three matrices (5.31), then we could factorize approximately, owing to the commutative algebra possessed by these matrices. While (5.30) does not satisfy this algebra, we can, however, decompose the spanning set of this matrix as (5.32), where the reader should recall that . Therefore, by virtue of the approximate algebra detailed above, we can express this set as the product (5.33). From this realization, we can *exactly* factorize the approximant matrix kernel (5.30). However, we note that **Q**, **Q**_{N}, **P**, **R**_{N} are rational, as opposed to entire, matrices, and so the +(−) partial factors will still contain sets of poles in the upper (lower) half plane which arise from the Padé approximant *f*_{N} (and its reciprocal). There are also two poles at *α*=±i*β*, from 1/(*α*^{2}+*β*^{2}), present in **Q**, **Q**_{N} and **P**. All poles must be removed from the respective regions of required analyticity; therefore, we must introduce a meromorphic matrix, say **M** (see Abrahams 1997), into the factorization to remove the offending singularities. The working is simplified, however, if we first remove the poles at ±i*β*, and then construct a pole-removal matrix to remove the two sets of poles (in the upper and lower half planes) arising from the zeros and poles of the Padé approximant *f*_{N}.
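The role played by the Padé approximant can be seen in a simplified setting. The sketch below uses a one-point [*N*/*N*] Padé approximant built from Taylor coefficients (the paper employs a two-point approximant, and the function approximated here, (1+*α*)^{1/2}, is a stand-in, not the *f* of (5.25)). It shows how the branch cut is replaced by a string of real poles and zeros lying on the cut, which is precisely the set of rational singularities that must later be removed from the half planes of required analyticity.

```python
import numpy as np
from scipy.interpolate import pade
from scipy.special import binom

# Stand-in for the two-point [N/N] approximant of the paper: a one-point
# [N/N] Pade approximant to sqrt(1 + a), built from its Taylor series.
# The approximant replaces the branch cut along a < -1 by interlacing
# poles and zeros confined to that cut.

N = 8
a_n = binom(0.5, np.arange(2 * N + 1))  # Taylor coefficients of sqrt(1+a)
p, q = pade(a_n, N)                     # numerator/denominator, both degree N

f_N = lambda a: p(a) / q(a)

# The rational approximant is accurate away from the cut ...
x = np.linspace(-0.5, 2.0, 50)
assert np.max(np.abs(f_N(x) - np.sqrt(1 + x))) < 1e-6

# ... and its poles (zeros of q) all sit on the branch cut a < -1; these
# are the singularities a pole-removal matrix must later eliminate from
# the half plane of required analyticity.
poles = np.roots(q.coeffs)
assert np.all(np.abs(poles.imag) < 1e-6) and np.all(poles.real < -1)
```

Raising *N* drives the approximant toward the branch function at the cost of more poles to remove, the same accuracy/bookkeeping trade-off as in the matrix construction above.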

Meister & Speck (1989) have devised a novel method for removing the former poles at ±i*β*; employing the analysis of these authors, we write (5.34), where, from (5.33) and (5.32), {**Q**_{N}, **P**, **R**} spans and {**Q**, **P**, **R**_{N}} spans . Hence, we can write10 (5.37), (5.38). Here, *w*_{1,2} are constants required to remove the poles at *α*=±i*β* in and are given by (5.39). Further, we set (5.40). We now multiply the three factors on the right-hand side of (5.34) and equate coefficients of **R**, **P** and **Q**_{N}. This gives (5.41), (5.42), (5.43). Clearly, one may determine *p*^{±} from the scalar factorization (5.41), and the unknowns *q*^{±}, *r*^{+}/*w*_{1}, *r*^{−}/*w*_{2} may be found from Khrapkov's form. We let (5.44), where *Δ*^{2}=*α*^{2}+*β*^{2}. Consequently, (5.45). From these identities, we can determine *q*^{±}, *r*^{+}/*w*_{1} and *r*^{−}/*w*_{2}. Since we now know *p*^{±} and *q*^{±}, we can therefore find *w*_{1}, *w*_{2} and thus determine *r*^{±}. The inner matrix **A** has been factorized by Meister & Speck (1989). For brevity, we shall write *w*_{1}*w*_{2}=*λ*^{2}, and the factorization of **A** takes the form (see Abrahams 2002) (5.46), (5.47).

To eliminate the poles from *f*_{N} in the *wrong* half planes, we may now write the approximant kernel as (5.48), and our factorization is complete once the meromorphic matrix **M** is determined such that (5.49) are analytic in the upper and lower half planes, respectively. The construction of **M** is not difficult, but the details may serve as a distraction from the central themes of this paper, and hence we confine this discussion to Appendix A. We finish by mentioning that if *f*_{N}(*α*) converges uniformly to *f*(*α*) throughout the strip, then the approximate factorization shown here can be performed to arbitrarily high accuracy.
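The scalar factorization step (5.41) is elementary once the function concerned is rational, as is the case after the Padé approximant has been introduced. The following sketch (with an illustrative function, not the *p* of the paper) splits a rational function into factors analytic and nonzero in the upper and lower half planes by assigning each zero and pole to the appropriate factor.

```python
import numpy as np

# Illustrative scalar +/- factorization of a rational function
# p(alpha) = prod(alpha - z_j) / prod(alpha - s_k), nonzero on the real
# line: each zero/pole in the LOWER half plane goes to p^+ (analytic and
# nonzero for Im(alpha) > 0), and each in the UPPER half plane to p^-.

def rational_split(zeros, poles, scale=1.0):
    """Return callables (p_plus, p_minus) with p = p_plus * p_minus."""
    zl = [z for z in zeros if z.imag < 0]  # lower zeros -> p^+
    zu = [z for z in zeros if z.imag > 0]  # upper zeros -> p^-
    pl = [s for s in poles if s.imag < 0]
    pu = [s for s in poles if s.imag > 0]

    def p_plus(a):
        num = np.prod([a - z for z in zl]) if zl else 1.0
        den = np.prod([a - s for s in pl]) if pl else 1.0
        return scale * num / den

    def p_minus(a):
        num = np.prod([a - z for z in zu]) if zu else 1.0
        den = np.prod([a - s for s in pu]) if pu else 1.0
        return num / den

    return p_plus, p_minus

# Example: p(alpha) = (alpha^2 + 4)/(alpha^2 + 1), zeros +-2i, poles +-1i
zeros, poles = [2j, -2j], [1j, -1j]
pp, pm = rational_split(zeros, poles)

a = 0.7 + 0.0j
assert np.isclose(pp(a) * pm(a), (a**2 + 4) / (a**2 + 1))
```

This is the same bookkeeping applied, in matrix form, to the rational factors above: the poles of *f*_{N} and those at ±i*β* are simply reassigned until each factor is analytic in its required half plane.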

## 6. Conclusions

In this paper, we have produced a general commutative factorization method, valid for *n*×*n* matrix kernels with distinct eigenvalues. We have shown how our method generalizes the commutative classes proposed by Khrapkov–Daniele and Jones. We have also demonstrated that an important physical model in static elasticity (Antipov 1999) gives rise to a matrix kernel factorizable by the commutative methods we have discussed here.

The technique described in this article is more general than that of Jones and allows for doubly branched Riemann surfaces in the matrix eigenvalues. During the completion of this article, the authors have discovered the work of Camara *et al*. (2001), who also noticed the importance of doubly branched eigenvalues. However, their factorization method is rather different from that discussed herein and it is believed that we are the first to ground the investigation of higher order Wiener–Hopf factorization within the framework of applied mathematics in general, and elasticity theory in particular.

It should certainly not be expected in general that a given matrix Wiener–Hopf model will give rise to a matrix kernel possessing a commutative factorization. However, we have shown by way of a 3×3 example (the dynamic generalization (Abrahams *et al*. submitted) of the model discussed by Antipov) that high order non-commutative kernels may be embedded into a partial commutative form. We showed how Padé approximants may then be employed, together with the method of Meister & Speck, to generate an approximate factorization (to arbitrarily high accuracy). The approach is similar to, but rather more complicated than, that employed for 2×2 matrices (Abrahams 1997).

As a closing point, we remind the reader that, throughout this paper, we have focused our attention on the case where any one of the matrices **J** spanning **K** has distinct eigenvalues. In a forthcoming article, we shall remove this constraint and investigate examples of spanning matrices which exhibit repeated eigenvalues (Veitch & Abrahams submitted).

## Acknowledgments

The authors gratefully acknowledge the Engineering and Physical Sciences Research Council (EPSRC) for providing a research studentship for B.H.V. and the Royal Society and Leverhulme Trust for the award of a Senior Research Fellowship for I.D.A. to undertake this research project.

## Footnotes

↵† Present address: WesternGeco International Ltd., BSEL Technology Park (2nd Floor), Plot No. 39/5 & 39/5A, Sector 30A, Vashi, Navi-Mumbai, India 400-705.

↵Henceforth referred to, for brevity, as the kernel.

↵The reader should note that several approximate methods for factorizing kernels have been suggested over the years, including those by Koiter (1954), Crighton (2001) and Abrahams (2000).

↵Padé approximants can, in principle at least, be generated to any order and therefore the Wiener–Hopf kernel may be approximated to any specified degree of accuracy within the strip; for further details, see Abrahams (1997, 2000).

↵In §5, we shall also make use of the factorization method of Meister & Speck (Abrahams 2002). The reader will note that the Meister & Speck method is only applicable in very special cases; however, it does offer a very intriguing matrix algebra which we shall require here.

↵Methods for dealing with kernels exhibiting such exponential growth in their commutative factors have been offered by Daniele (1984), Moiseyev (1989), Antipov & Moiseyev (1991) and Abrahams (1998).

↵Note that it is not germane to the present work to investigate the general question of what is the appropriate bound to ensure algebraic growth of the kernel factors for

*n*×*n*Jones matrices (*n*>2), but it is likely, however, that extensions of the ideas developed by Abrahams (1998) for the case*n*=2 will also be applicable in this situation.↵The matrix (3.1) has been obtained by pre- and post-multiplying the matrix

*G*_{0}(*α*) in Antipov (1999, p. 1055), by↵The reader will note that the algebra of the matrices

,**Q**is that discussed by Meister & Speck (1989), and so this can be seen as an extension of their methodology.**P**↵In principal, the Padé approximant

*f*_{N}can be generated for arbitrarily high Padé number*N*.↵It is illuminating to note that we can write the matrix factors (5.37) and (5.38) in an exponential form. We can see that(5.35)(5.36)

- Received July 21, 2006.
- Accepted September 21, 2006.

- © 2006 The Royal Society