## Abstract

Field patterns occur in space–time microstructures such that a disturbance propagating along a characteristic line does not evolve into a cascade of disturbances, but rather concentrates on a pattern of characteristic lines. This pattern is the field pattern. In one spatial direction plus time, field patterns occur when the slope of the characteristics is, in a sense, commensurate with the space–time microstructure. Field patterns with different spatial shifts do not generally interact, but rather evolve as if they live in separate dimensions, with as many dimensions as the number of field patterns. Alternatively, one can view a collection of field patterns as a multi-component potential, with as many components as the number of field patterns. Presumably, if one added a tiny nonlinear term to the wave equation, one would then see interactions between these field patterns in the multi-dimensional space in which one can consider them to live, or between the different field components of the multi-component potential if one views them that way.

## 1. Introduction

Here, we introduce the theory of field patterns. Field patterns develop when waves, concentrated on characteristic lines, interact with certain special space–time microstructures. They occur when the space–time microstructure is such that the propagation along characteristics does not develop into a complicated cascade of space–time lines, but rather concentrates along particular patterns: these are the field patterns. There is an obvious connection with dynamical systems. What makes field patterns mathematically novel is their appearance in wave equations and the associated multi-dimensional, or multi-component, character of a collection of field patterns even though the wave equation is, say, a scalar wave equation in one, two or three dimensions plus time. Here, for simplicity, we will focus on the appearance of field patterns in scalar wave equations in one spatial dimension plus time.

The field patterns we introduce here can have both a particle-like aspect, as the patterns are concentrated on lines in space–time, and a wave-like aspect, as the patterns at long times can develop wave-like features. This hints at a connection with quantum mechanics. Another connection is the multi-dimensional nature of field patterns. In the non-interacting model of field patterns developed here, different field patterns in a collection, while occupying the same space–time continuum, act as if they live in separate dimensions. The multi-dimensional nature of collections of field patterns is quite different from that associated with multi-scale homogenization theory. In multi-scale homogenization, with say periodicity at all but the largest length scale, the moduli of the materials and the fields are often modelled in *d*-dimensions by functions of the form

$$u_\epsilon(\mathbf{x}) = f\!\left(\mathbf{x},\, \frac{\mathbf{x}}{\epsilon},\, \frac{\mathbf{x}}{\epsilon^{2}},\, \ldots,\, \frac{\mathbf{x}}{\epsilon^{m-1}}\right), \tag{1.1}$$

where *ϵ*>0 is a very small parameter giving the ratio between length scales and *f*(**x**,**y**_{1},**y**_{2},**y**_{3},…,**y**_{m−1}) is a function living in an *md*-dimensional space, that is periodic in each of the variables **y**_{1},**y**_{2},…,**y**_{m−1}, but not necessarily periodic in **x** [1–4]. Conversely, the notions of two-scale and multi-scale convergence [5,6] allow one to go from sequences of functions *u*_{ϵ}(**x**) parametrized by *ϵ* to the multi-dimensional function *f*(**x**,**y**_{1},**y**_{2},**y**_{3},…,**y**_{m−1}). In this multi-scale homogenization, the different dimensions **x**,**y**_{1},**y**_{2},**y**_{3},…,**y**_{m−1} play very different roles, each associated with a different length scale. By contrast, the different dimensions associated with a collection of field patterns play more or less an equivalent role, as do the different dimensions associated with say the multi-electron Schrödinger equation. Rather than using a multi-dimensional space, an alternative way of viewing non-interacting field patterns is as a multi-component potential, where each component potential is associated with one field pattern.
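As a minimal illustration of such a function (our own example, with *m*=2 scales and *d*=1, so that *f* lives in an *md*=2 dimensional space):

```latex
% Two-scale example: one fine scale (m = 2), one spatial dimension (d = 1).
% u_eps oscillates on the scale eps; f lives in the two-dimensional (x,y) space.
\[
  u_\epsilon(x) = f\!\left(x, \tfrac{x}{\epsilon}\right),
  \qquad
  f(x, y) = g(x)\,\bigl(1 + \tfrac{1}{2}\cos 2\pi y\bigr),
\]
% f is 1-periodic in y but need not be periodic in x; two-scale convergence
% recovers f(x, y) from the oscillatory sequence u_eps as eps -> 0.
```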

We do not study models where there are interactions between the field patterns. This would require adding nonlinear terms to the wave equation. Without such terms, a superposition of two field pattern solutions is also a solution: the field patterns do not interact.

Some quantum mechanical aspects are notably absent from the models we study here: significantly, there is nothing analogous to the collapse of the wave function; there is no stochastic element in the evolutions we describe here; and moreover there is no quantization of the solutions. It is curious that some of these quantum mechanical aspects have been seen in the dynamics of ‘walking droplets’ of silicone oil on a vibrating silicone oil fluid surface, which may be interpreted as a space–time microstructure [7–9]. In a certain region of instability, these droplets start ‘walking’, creating their own ‘pilot wave’ on the surface. The trajectory of the droplet can seem stochastic as the motion of a droplet after a bounce depends on the orientation of the fluid surface where it lands, and this can appear quite random: nevertheless, the ensemble-averaged statistics exhibit wave-like features. The ‘holy grail’ would be that a complete explanation of quantum mechanics emerges from a combination of these ‘pilot wave’ ideas and the ideas in the theory of field patterns introduced here. The bold conjecture is that the fundamental objects in the Universe are neither particles nor waves, but field patterns. In this regard, it seems likely that the associated space–time microstructure would have a length scale comparable to the Planck length (about 10^{−35} m) and a time scale comparable to the Planck time (about 10^{−44} s).

Space–time microstructures are composites whose microstructure varies not just with respect to space but also with respect to time. There exist two natural ways of changing the material properties in time, thus leading to two types of space–time microstructure. Using the terminology in the book of Lurie [10] *activated materials* (the subject matter of this article) are immovable with respect to the laboratory frame and the space–time microstructure is realized by an external mechanism that produces a time switching (either instantaneous or gradual) of the space pattern of the material in a pre-determined manner; in *kinetic materials*, instead, the space–time microstructure is realized by an actual mechanical motion of the various parts of the composite system with respect to the laboratory frame. In other words, in activated materials the motion (without transfer of matter) is only in terms of the property pattern, whereas in kinetic materials the space–time microstructure is achieved by moving fragments of the material assemblage. Clearly, standard composites can be considered as space–time microstructures with a microstructure that depends only on space. Activated materials are most easily achieved [11] by propagating a large-amplitude wave (the ‘pump wave’) into a nonlinear homogeneous, or possibly inhomogeneous, medium. One could even propagate several large-amplitude waves that then form an interference pattern. On top of these large waves one superimposes waves that are sufficiently small that one can linearize the problem and treat them as if they are propagating in a medium with space–time variations given by the tangent moduli associated with the large-amplitude waves. There are many other ways of creating space–time geometries too. 
For instance, one could consider a spatially periodic two-phase geometry where one phase is a liquid crystal and then subject this material to an oscillating electric field that then causes a temporal modulation in the refractive index of the liquid crystal phase.

Space–time microstructures were studied as early as 1958 by Cullen [12], who noted that a transmission line, with a modulated inductance propagating along the line, could support a current wave that is periodic in time but grows exponentially along the transmission line (in space). This transmission line, with time-varying modulated inductance, can be viewed as a ‘space–time laminate with continuously varying moduli’ in which the moduli are just functions of **n**⋅**x**, where **x** and **n** are two-dimensional vectors and we identify the component *x*_{1} of **x** with space and *x*_{2} with time. If one interchanges the roles of time and space (switching *x*_{1} and *x*_{2}) in the analysis of Cullen, one naturally gets a ‘space–time laminate’ which supports a field that grows exponentially in time. Shortly afterwards, space–time laminates that interact with a field **u**=(*V*_{1},*V*_{2}) consisting of a pair of potentials *V*_{1} and *V*_{2} were investigated by Tien [13], who found that they could transfer power between waves at different frequencies. Morgenthaler [14], while studying electromagnetic wave propagation in a homogeneous dielectric medium with time-dependent permittivity and permeability, derived a solution for the cases of a two-layered temporal laminate and a temporally graded material. Subsequently, Fante [15] proposed a solution for the electromagnetic problem of a homogeneous time-varying half-space, with the dielectric constant varying first in a stepwise fashion and then, in the limit, varying continuously in time. Many more early references and an experimental validation can be found in the paper of Honey & Jones [16].

Activated space–time laminates (where the moduli only depend on **n**⋅**x** in which *x*_{d+1} is identified with time and *d* is the number of spatial dimensions) are of particular interest as they exhibit remarkable properties: indeed, by suitably controlling the design parameters it is possible to selectively screen large space–time domains from long wave disturbances [17,18], an effect that is impossible to realize with standard laminates. In particular, waves cannot travel in the forward direction when the microstructure is effectively moving faster than the speed of wave propagation in an equivalent stationary medium. Contrary to what happens in other space–time microstructures, the total energy of two-component space–time laminates is preserved for low-frequency waves: the energy pumped into the system to switch from one material to the other (say, from material 1 to material 2) is equal to the energy released at the subsequent switch (from material 2 to material 1) [19]. For such low-frequency waves, one can replace the laminated microstructure with a homogeneous one with suitable effective properties. The determination of these effective properties [17,20,21] follows the standard procedure for static laminates (see Chapter 9 of [22] and references therein) for which the homogenized parameters are easily derived through direct calculation. When the frequency is not low, one can regard the laminated structure as a spatially periodic structure, independent of time, that is rotated in space–time. Thus, one can apply standard Bloch–Floquet theory, with a harmonic dependence on time replaced with a harmonic dependence on **m**⋅**x** where **m** is perpendicular to **n**. An additional assumption, which permits a lot of explicit analysis, is to assume that the fluctuations in the moduli are sinusoidal, only involving one Fourier component. Then, the constitutive relation only couples together neighbouring Fourier modes, and is easily solved [14,23–25]. 
For high-frequency waves, the energy balance need not occur, and waves can grow exponentially in time [26].
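To illustrate the neighbouring-mode coupling in the simplest possible setting, the sketch below treats a purely time-modulated analogue rather than the laminate problem itself: a Mathieu-type equation with a single sinusoidal Fourier component in the modulus. The Bloch ansatz turns it into a tridiagonal (Hill) system, whose truncated determinant gives the quasi-frequency *μ*. The parameter values and truncation size are our own illustrative choices.

```python
# Hill-matrix sketch for a modulus with a single sinusoidal Fourier component.
# This is NOT the laminate problem itself, but the simplest time-modulated
# analogue: u''(t) + w0**2 * (1 + 2*delta*cos(t)) * u(t) = 0.
# The Bloch ansatz u(t) = exp(i*mu*t) * sum_n a_n * exp(i*n*t) gives
#   (w0**2 - (mu + n)**2) * a_n + delta * w0**2 * (a_{n-1} + a_{n+1}) = 0,
# i.e. each Fourier mode couples only to its neighbours: a tridiagonal system.
w0, delta, N = 0.7, 0.05, 4   # carrier frequency, modulation depth, truncation

def hill_matrix(mu):
    """Truncated Hill matrix acting on the modes a_n, n = -N..N."""
    size = 2 * N + 1
    M = [[0.0] * size for _ in range(size)]
    for i in range(size):
        n = i - N
        M[i][i] = w0 ** 2 - (mu + n) ** 2
        if i > 0:
            M[i][i - 1] = delta * w0 ** 2
        if i < size - 1:
            M[i][i + 1] = delta * w0 ** 2
    return M

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    d = 1.0
    for k in range(len(A)):
        p = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, len(A)):
            m = A[i][k] / A[k][k]
            for j in range(k, len(A)):
                A[i][j] -= m * A[k][j]
    return d

def quasifrequency(lo, hi, tol=1e-10):
    """Bisect det(hill_matrix(mu)) = 0 to find the Bloch quasi-frequency."""
    f = lambda mu: det(hill_matrix(mu))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mu = quasifrequency(0.6, 0.8)   # for small modulation, mu stays close to w0
print(mu)
```

For `delta = 0` the matrix is diagonal and the quasi-frequency is exactly `w0`; small modulation shifts it only at second order, which is why the coupled system is "easily solved" mode by mode.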

Besides space–time laminates, other space–time microstructures have been proposed in the literature. In particular, special attention has been drawn towards rectangular microstructures in one spatial dimension and time [27,28]: the geometry is doubly periodic in time and space, and the space period is of the same order as the time period. Specifically, the case of chequerboard patterned geometries where the two constituent materials have the same wave impedance, so that there is no reflected wave but only a transmitted wave, has been extensively investigated [27]. Interestingly, within each space period, the disturbances converge towards a so-called ‘limit cycle’ after a few time periods, if the parameters of the constituents are suitably chosen. As a group of characteristics converges to a limit cycle, the energy of the macroscopic system grows exponentially [29]. It is almost as if shocks were developing in a linear medium. (One wonders if a stochastic version of such linear shocks could be a mechanism for the collapse of the wave function in quantum mechanics. Note that at the Planck length scale it is questionable as to whether the concept of energy has any meaning.) In particular, the spatial derivative of the disturbance increases every time the wave passes through a pure spatial interface, whereas its time derivative grows every time the wave passes through a pure temporal interface [27]. Therefore, unlike space–time laminates at low frequencies, space–time chequerboard patterned geometries accumulate energy independently of the frequency, and, clearly, this does not allow one to apply the standard homogenization techniques to determine the effective properties of the material. Nevertheless, the macroscopic behaviour of such exponentially growing fields is probably described by some coarse-grained equations.
(We also find exponentially growing fields, yet it seems likely that macroscopic features, such as the conical shape seen later in figure 13, might be describable by some coarse-grained equations.) It is worth noting that another consequence of the exponential accumulation of energy is that such materials are not in thermodynamic equilibrium: they behave as thermodynamically open systems, and only when analysed together with the surrounding environment can they be considered thermodynamically closed. We should point out that the crossing of pure temporal interfaces, at which the properties of the material change instantaneously, leads to very interesting phenomena, such as the Doppler effect analysed, for instance, by Rousseau *et al.* [30].

The extension of the theory to space–time microstructures with properties varying in a two-dimensional space and in time reveals new aspects totally absent in the one-dimensional case. For instance, in the work of Sanguinet [31], the homogenization of an elastic laminate in plane strain (with both inertial and elastic properties varying in time and space) leads to two new additional forces, one of which is of the Coriolis type. This force, arising from the dynamics and the plane strain hypothesis, vanishes when the model is restricted to the one-dimensional case.

On the other hand, specific unconventional space–time microstructures have been designed to optimize certain properties by means of topology optimization. Following the pioneering papers of Maestre *et al.* [32] and Maestre & Pedregal [33], where the optimization of the distributions of materials in one-dimensional and two-dimensional space and in time has been studied, Jensen [34] proposed an optimized dynamic structure with time-varying stiffness that prohibits wave propagation: it consists of a moving bandgap with layers of stiff inclusions moving with the propagating wave. These results have also been extended to the case of time-varying mass density [35].

As space–time microstructures can break time reversal invariance, non-reciprocal frequency conversion [36,37] and other non-reciprocal effects ([38] and references therein) can occur. Moreover, most remarkably, Fang *et al.* [39] show that one can get effective magnetic fields for photons by dynamic modulation. Also Yuan *et al.* [40], following related ideas of Boada *et al.* [41] and Celi *et al.*[42], find that it is useful to introduce an extra ‘synthetic frequency’ dimension in the modelling of the behaviour of arrays of resonators undergoing a time-harmonic refractive index modulation, and this has led to interesting ideas such as optical models of four-dimensional quantum Hall physics [43] and simulated Weyl points [44].

Irrespective of the fascinating possible connection of the theory of field patterns to quantum mechanics, the theory of field patterns is intrinsically interesting and could prove relevant in the study of spatially periodic composites of hyperbolic materials. Hyperbolic materials are materials where the dielectric tensor has both positive and negative eigenvalues. They were studied in the context of anisotropic plasmas by Fisher & Gould [45] back in 1969, albeit with an antisymmetric part added to the dielectric tensor. In any physical hyperbolic material, these eigenvalues also have a small imaginary part that causes absorption of electromagnetic energy into heat. The simplest way to obtain hyperbolic materials is simply to laminate an isotropic material with positive dielectric constant with a material with negative dielectric constant at the frequency under consideration: candidates for materials with negative dielectric constant over a frequency range include silver, gold and silicon carbide as well as a host of novel materials [46]. If one assumes that homogenization theory applies (and the conditions for this assumption to be valid warrant further investigation), then the effective dielectric constant of the laminate is given by the arithmetic average in any direction parallel to the layers, and by the harmonic mean in the direction orthogonal to the layers. The key point is that the harmonic averages and arithmetic averages are sometimes of opposite sign, giving rise to a hyperbolic effective dielectric tensor. There are also naturally occurring hyperbolic materials [47]. Hyperbolic materials have generated considerable interest as they can direct radiation along the ‘characteristic lines’ in the hyperbolic medium. 
With the hyperbolic medium in a shell configuration, and the material oriented so an eigenvector of the dielectric tensor points in the radial direction, sources near the inner boundary of the shell can be spaced less than a wavelength apart, yet radiate along the characteristic lines to the outer surface of the shell where they can be greater than half a wavelength apart and thus detectable through conventional microscopic techniques. Thus this ‘hyperlens’, proposed by Jacob *et al.* [48] and by Salandrino & Engheta [49] and that was subsequently experimentally validated [50,51], allows one to resolve sources that are less than a wavelength apart. In the quasi-static regime, where the wavelength is much larger than the body under consideration, the governing equations are
$$\nabla\cdot\mathbf{d}(\mathbf{x})=0,\qquad \mathbf{d}(\mathbf{x})=\boldsymbol{\varepsilon}(\mathbf{x})\,\mathbf{e}(\mathbf{x}),\qquad \mathbf{e}(\mathbf{x})=-\nabla V(\mathbf{x}),\tag{1.2}$$

where **d**(**x**), the electric displacement field, **e**(**x**), the electric field, *V*(**x**), the electric potential, and **ε**(**x**), the dielectric tensor, are all complex valued. Consider the simplest case where the material and fields are two-dimensional (or three-dimensional but independent of one spatial coordinate), and the dielectric tensor is diagonal and hyperbolic,

$$\boldsymbol{\varepsilon}(\mathbf{x})=\begin{pmatrix}\alpha(\mathbf{x}) & 0\\ 0 & -\beta(\mathbf{x})\end{pmatrix},\tag{1.3}$$

where *α*(**x**) and −*β*(**x**) are complex valued functions with non-negative imaginary part. Then, (1.2) reduces to

$$\frac{\partial}{\partial x_1}\!\left(\alpha(\mathbf{x})\frac{\partial V}{\partial x_1}\right)-\frac{\partial}{\partial x_2}\!\left(\beta(\mathbf{x})\frac{\partial V}{\partial x_2}\right)=0.\tag{1.4}$$

When *α*(**x**) and *β*(**x**) are constant, real and positive this is just the standard wave equation if one reinterprets *x*_{2} as a time coordinate, keeping *x*_{1} as a spatial coordinate. So a spatially periodic hyperbolic material where the dielectric tensor takes the form (1.3) can be viewed in the limit where the loss goes to zero as a space–time microstructure. An interesting question is whether in spatially periodic composites of hyperbolic materials one can get unusual homogenized equations in the limit where the imaginary part of *α*(**x**) and *β*(**x**) tends to zero. One might expect that such unusual homogenized behaviour could occur if the ‘cell problem’, where one looks for periodic **d**(**x**) and **e**(**x**) that solve (1.2) with a prescribed non-zero average value of **e**(**x**), has a non-unique solution when the loss is zero.
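The sign pattern behind hyperbolic laminates can be checked directly with the arithmetic and harmonic means mentioned above (a sketch in the lossless limit; the constituent values and function name are our own illustrative choices):

```python
# Effective dielectric constants of a two-phase laminate (lossless limit):
# arithmetic mean parallel to the layers, harmonic mean perpendicular to them.
def laminate_effective(eps1, eps2, f):
    """f = volume fraction of material 1; returns (eps_parallel, eps_perp)."""
    eps_par = f * eps1 + (1 - f) * eps2            # arithmetic mean
    eps_perp = 1.0 / (f / eps1 + (1 - f) / eps2)   # harmonic mean
    return eps_par, eps_perp

# A silver-like negative constituent layered with a positive dielectric:
eps_par, eps_perp = laminate_effective(-1.2, 2.0, 0.5)
print(eps_par, eps_perp)   # opposite signs: a hyperbolic effective tensor
```

Here the arithmetic mean is positive while the harmonic mean is negative, so the effective dielectric tensor is indeed hyperbolic for this choice of constituents.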

We should point out that, besides the study of space–time microstructures, there is a large body of work on active control by using structures with moduli that vary in space and time [52–54]. Also, interestingly, space–time boundaries have been found to be important for time reversal [55,56]: a wave pulse hitting a space–time boundary divides into two wave pulses—one continues to spread outwards and the other converges to the position of the original source. This division of wave pulses is also apparent in the work of Lurie and co-workers [17,18,28,57].

Subsequent to the initial phases of this work, it was discovered by Alexander and Natalia Movchan that field patterns occur in space–time geometries that are as simple as a two-phase laminate with boundaries that are normal to the time axis (i.e. at a periodic set of times). The analysis of this will be explored elsewhere [58].

## 2. Statement of the problem

To begin with, and to keep things simple, our first interest is in the two-dimensional conductivity equation
$$\nabla\cdot\mathbf{j}(\mathbf{x})=0,\qquad \mathbf{j}(\mathbf{x})=\boldsymbol{\sigma}(\mathbf{x})\,\mathbf{e}(\mathbf{x}),\qquad \mathbf{e}(\mathbf{x})=-\nabla V(\mathbf{x}),\tag{2.1}$$

with **j**(**x**) the electric current, **e**(**x**) the electric field, *V*(**x**) the electric potential and **σ**(**x**) the conductivity tensor. Here, we focus only on the conductivity problem of an assemblage of two materials, that is, we study the case where **σ**(**x**) can take only two values,

$$\boldsymbol{\sigma}(\mathbf{x})=\chi(\mathbf{x})\,\boldsymbol{\sigma}_1+\bigl(1-\chi(\mathbf{x})\bigr)\,\boldsymbol{\sigma}_2,\tag{2.2}$$

where *χ*(**x**) is a characteristic function that takes the value 1 in the region of the material where **σ**(**x**)=**σ**_{1} and takes the value 0 in the region where **σ**(**x**)=**σ**_{2}, with the conductivities **σ**_{1} and **σ**_{2} taking the form

$$\boldsymbol{\sigma}_i=\begin{pmatrix}\alpha_i & 0\\ 0 & -\beta_i\end{pmatrix},\qquad i=1,2,\tag{2.3}$$

where *α*_{i} and *β*_{i}, *i*=1,2, are, in general, real and positive. The reason why the conductivity coefficient in the *x*_{2} direction is taken to be negative is merely for the sake of convenience. In fact, in such a way, it is easy to see that, by combining equations (2.1) and (2.3), the potential *V*_{i}(**x**) in phase *i*, *i*=1,2, satisfies the following wave equation:

$$\alpha_i\frac{\partial^2 V_i}{\partial x_1^2}-\beta_i\frac{\partial^2 V_i}{\partial x_2^2}=0.\tag{2.4}$$

Let us interpret *x*_{1} as representing the space variable *x*, and *x*_{2} as representing the time variable *t*. Consequently, the parameters *α*_{i}, *i*=1,2, are nothing but the usual conductivity coefficients in space, whereas the *β*_{i}, *i*=1,2, have to be interpreted as the conductivity coefficients in time. In other words, we are considering a one-dimensional distribution of the two materials and we suppose that such a configuration varies not just with respect to the spatial coordinate *x* but also with respect to time, thus giving rise to a dynamic material.

It is well known that the local solution in phase *i* of equation (2.4), deduced via the d’Alembert method, is simply given by

$$V_i(x,t)=V^{+}_{i}(x-c_i t)+V^{-}_{i}(x+c_i t),\tag{2.5}$$

where *V*^{+}_{i}(*x*−*c*_{i}*t*) is the wave moving upwards to the right in a space–time diagram, *V*^{−}_{i}(*x*+*c*_{i}*t*) is the wave moving upwards to the left in a space–time diagram and

$$c_i=\sqrt{\alpha_i/\beta_i}\tag{2.6}$$

is the characteristic speed in phase *i*. Associated with *V*^{+}_{i}(*x*−*c*_{i}*t*) and *V*^{−}_{i}(*x*+*c*_{i}*t*) are currents

$$j^{+}_{i}=-\sqrt{\alpha_i\beta_i}\,V^{+}_{i}{}'(x-c_i t),\qquad j^{-}_{i}=\sqrt{\alpha_i\beta_i}\,V^{-}_{i}{}'(x+c_i t),\tag{2.7}$$

flowing along the characteristic directions (±*c*_{i},1), where *V*^{+}_{i}′(*s*) and *V*^{−}_{i}′(*s*) denote the derivatives of *V*^{+}_{i}(*s*) and *V*^{−}_{i}(*s*). It is clear that the currents do not interact and the potentials do not interact and, therefore, we can, in a sense, think of the medium as composed of two parallel independent sets of wires aligned with the characteristic directions (figure 1).
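As a quick numerical sanity check (our own sketch, with arbitrary smooth profiles `F` and `G` standing in for *V*^{+}_{i} and *V*^{−}_{i}), one can verify by finite differences that the d’Alembert form with *c* = √(*α*/*β*) satisfies *α* ∂²*V*/∂*x*² − *β* ∂²*V*/∂*t*² = 0:

```python
import math

# Check numerically that V(x, t) = F(x - c t) + G(x + c t), with
# c = sqrt(alpha/beta), satisfies alpha * V_xx - beta * V_tt = 0,
# the wave equation of a phase with conductivity diag(alpha, -beta).
alpha, beta = 2.0, 0.5
c = math.sqrt(alpha / beta)           # characteristic speed (here c = 2)

F = lambda s: math.exp(-s * s)        # right-moving profile (arbitrary)
G = lambda s: math.sin(s)             # left-moving profile (arbitrary)
V = lambda x, t: F(x - c * t) + G(x + c * t)

def second_derivative(f, s, h=1e-4):
    """Central-difference estimate of f''(s)."""
    return (f(s + h) - 2.0 * f(s) + f(s - h)) / (h * h)

def residual(x, t, h=1e-4):
    """alpha * V_xx - beta * V_tt, estimated by finite differences."""
    V_xx = second_derivative(lambda s: V(s, t), x, h)
    V_tt = second_derivative(lambda s: V(x, s), t, h)
    return alpha * V_xx - beta * V_tt

# The residual should vanish everywhere, up to discretization error.
print(max(abs(residual(x / 7.0, t / 5.0))
          for x in range(-10, 11) for t in range(-10, 11)))
```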

Clearly, the explicit expression of the d’Alembert solution (2.5), and therefore of (2.7), depends on the initial conditions chosen (§2b). For what concerns the boundary conditions, we assume here that the medium is infinite with respect to *x*. (Later, in the numerical section, we will assume periodic boundary conditions in *x*.)

### (a) Transmission conditions at the interfaces

First of all, we recall that, in principle, the space–time distribution of the two materials is arbitrary, provided that the existence and uniqueness of the solution *V*(*x*,*t*) is ensured. This places a constraint on the shape of the interfaces: each interface has to be such that there are always two incoming and two outgoing characteristics [21]. To attain such a requirement, it is sufficient that the slope *w* of each interface in a space–time diagram (measured as distance travelled per unit time) fulfils the following relationship [17]:

$$|w|<\min(c_1,c_2)\qquad\text{or}\qquad |w|>\max(c_1,c_2).\tag{2.8}$$

In the case of a pure spatial interface, corresponding to a vertical line in a space–time diagram, we have *w*=0 and the above condition is trivially satisfied. Similarly, in the case of a pure temporal interface, corresponding to a horizontal line in a space–time diagram, we have that *w*=∞, so that (2.8) is again trivially satisfied.
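A one-line admissibility check along these lines can be sketched as follows (our own helper, assuming the condition (2.8) takes the form |*w*| < min(*c*₁,*c*₂) or |*w*| > max(*c*₁,*c*₂), with a pure temporal interface represented by *w* = ∞):

```python
# Check whether an interface of slope w (distance per unit time) in a
# two-phase space-time microstructure admits two incoming and two outgoing
# characteristics, given the wave speeds c1 and c2 of the two phases.
# Assumed condition: |w| < min(c1, c2) or |w| > max(c1, c2).
def interface_admissible(w, c1, c2):
    return abs(w) < min(c1, c2) or abs(w) > max(c1, c2)

print(interface_admissible(0.0, 1.0, 2.0))           # pure spatial interface
print(interface_admissible(float('inf'), 1.0, 2.0))  # pure temporal interface
print(interface_admissible(1.5, 1.0, 2.0))           # slope between the speeds
```

The first two cases (vertical and horizontal interfaces) pass, while a slope lying between the two characteristic speeds fails.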

For what concerns the transmission conditions for the potential across the interfaces, we require that the potential be continuous across each interface,

$$V_1(\mathbf{x})=V_2(\mathbf{x}),\tag{2.9}$$

and such that the continuity of the current flux is preserved, i.e.

$$\mathbf{j}_1(\mathbf{x})\cdot\mathbf{n}=\mathbf{j}_2(\mathbf{x})\cdot\mathbf{n},\tag{2.10}$$

with **n** being the normal vector to the interface.

### (b) Initial conditions

Let us start by considering initial conditions of the type

$$V(x,0)=V_0\,H(x-a),\tag{2.11}$$

where *H*(*y*) is the Heaviside function, together with an injected current flux

$$\mathbf{j}(x,0)\cdot\mathbf{e}_2=j_0\,\delta(x-a),\tag{2.12}$$

where *δ*(*y*) is the Dirac delta function. Thus, we are injecting a total current flux *j*_{0} at time *t*=0 concentrated at *x*=*a*. In general, as illustrated in figure 2, this generates a cascade of current lines, branching as *t* increases, that is difficult to analyse. If the process is ergodic, then one can probably understand the dynamics in an ensemble-averaged sense, similarly to what is done in statistical physics.

### (c) Field patterns

What is remarkable is that there are special space–time geometries, as illustrated in figures 3 and 4, where the dynamics is especially simple (note that the restriction (2.8) is trivially satisfied by both geometries having only horizontal and vertical interfaces). In particular, what causes the orderly pattern of characteristics in figures 3 and 4 is the special relation between the characteristic lines and the geometry of the microstructure, as displayed in figure 5. From the configuration of the characteristic lines it is clear that, denoting by *x*_{0} and *t*_{0} the spatial and time dimensions of the unit cell of each space–time microstructure, their ratio reads

$$\frac{x_0}{t_0}=\frac{c_1(c_1+2c_2)}{c_1+c_2}\tag{2.13}$$

for the geometry of figure 3, and *x*_{0}/*t*_{0}=*c*_{1}(*c*_{1}+3*c*_{2})/(*c*_{1}+*c*_{2}) for that of figure 4 (note that the dimensions of the unit cell of the space–time microstructure are *x*_{0} and *t*_{0}, whereas those of the unit cell of the network dynamics are 2*x*_{0} and *t*_{0}).

Note that, by fixing the microgeometry, we fix the volume fraction *f* of the space–time inclusions which, for the microstructure in figure 3, is equal to *f*=*c*_{1}*c*_{2}/[(*c*_{1}+2*c*_{2})(*c*_{1}+*c*_{2})], while for the geometry in figure 4 it is equal to *f*=*c*_{1}*c*_{2}/[(*c*_{1}+3*c*_{2})(*c*_{1}+*c*_{2})].
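These formulae can be cross-checked against the cell dimensions (a sketch under our own assumptions: an inclusion of base *c*₂ and unit height, with *x*₀ = *c*₁ + 2*c*₂ or *x*₀ = *c*₁ + 3*c*₂ and *t*₀ = (*c*₁ + *c*₂)/*c*₁ inferred from the volume-fraction formulae themselves):

```python
# Volume fractions of the space-time inclusions for the two microgeometries,
# computed two ways: from the formulae in the text, and as
# (inclusion area)/(unit-cell area) under the assumed cell dimensions.
def f_fig3(c1, c2):
    return c1 * c2 / ((c1 + 2 * c2) * (c1 + c2))

def f_fig4(c1, c2):
    return c1 * c2 / ((c1 + 3 * c2) * (c1 + c2))

def f_from_cell(c1, c2, n):
    # Assumed cell: x0 = c1 + n*c2 (n = 2 for figure 3, n = 3 for figure 4),
    # t0 = (c1 + c2)/c1; inclusion of base c2 and height one time unit.
    x0 = c1 + n * c2
    t0 = (c1 + c2) / c1
    return (c2 * 1.0) / (x0 * t0)

c1, c2 = 1.3, 0.7
print(abs(f_fig3(c1, c2) - f_from_cell(c1, c2, 2)))  # should be ~0
print(abs(f_fig4(c1, c2) - f_from_cell(c1, c2, 3)))  # should be ~0
```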

## 3. Collections of field patterns as multi-dimensional or multi-component objects

Let us begin by focusing on figure 3, where the origin of the reference system *x*=*t*=0 corresponds to the bottom-left corner of an inclusion. The periodic pattern of characteristic lines that arises after a sufficiently large amount of time is totally determined by the sole index parameter *ϕ*, which is the distance between the bottom-left corner of a specific inclusion and the intersection point between the base of that inclusion and the characteristic lines (figure 3). Note that, for all the other inclusions struck by characteristic lines, such a distance is either *ϕ* or *c*_{2}−*ϕ*, where *c*_{2} is the length of the base of the inclusion, if its height is set equal to one time unit. This is due to the fact that, for this space–time microstructure, the unit cell of the network dynamics (see also figure 7 in the next section) has length 2*x*_{0} and it contains two inclusions. Here, for simplicity, if we count the columns of inclusions starting from the right of the *t*-axis, those in an odd position will be parametrized by *ϕ*, whereas those in an even position by *c*_{2}−*ϕ*.

Note that the index parameter *ϕ* determines in a unique way the periodic pattern of characteristic lines that arises after a sufficiently large amount of time, as it is not related to the specific initial conditions chosen: the *ϕ*-field pattern in figure 3 can indeed be obtained in different ways and each way will have a different current distribution. For instance, in figure 6*a* the same *ϕ*-field pattern of figure 3 is launched by injecting current at a point (*a*,0) belonging to the matrix, whereas in figure 6*b* it is launched by injecting current at three points at a different time (note that in this figure the reference system is different from the one in figure 3). We will see in §6 that some patterns parametrized by the same parameter *ϕ* blow up in time, while others have a wave-like behaviour which does not blow up.

If the initial conditions are suitably chosen, more than one field pattern can be launched. This gives rise to a collection of field patterns, each one parametrized by its index parameter *ϕ*_{i}, with *i*=1,2,…,*m*, *m* being the number of field patterns launched. For instance, pick two parameters *ϵ*_{1} and *ϵ*_{2} lying between 0 and 1, and choose real amplitudes *s*_{1}, *s*_{2}, *j*_{1} and *j*_{2}. Then, the initial conditions

$$V(x,0)=s_1\,H\bigl(x-2x_0(l+\epsilon_1)\bigr)+s_2\,H\bigl(x-2x_0(n+\epsilon_2)\bigr),\tag{3.1}$$

together with current fluxes *j*_{1} and *j*_{2} injected at *x*=2*x*_{0}(*l*+*ϵ*_{1}) and *x*=2*x*_{0}(*n*+*ϵ*_{2}), where *l* and *n* are integers, can launch anywhere from one to four field patterns in the space–time microstructure of figure 3, depending on the value of the parameters *ϵ*_{1} and *ϵ*_{2}. Suppose, first, that the points where the current is injected, (*x*,*t*)=(2*x*_{0}(*l*+*ϵ*_{1}),0) and (*x*,*t*)=(2*x*_{0}(*n*+*ϵ*_{2}),0), lie at the base of the inclusions. If they both belong to inclusions in odd columns, we have *ϵ*_{i}<*c*_{2}/(2*x*_{0})=*c*_{2}/(2(*c*_{1}+2*c*_{2})), whereas if they both belong to an inclusion in an even column we have 1/2<*ϵ*_{i}<1/2+*c*_{2}/(2(*c*_{1}+2*c*_{2})). The index parameters *ϕ*_{1} and *ϕ*_{2} are easily identified as *ϕ*_{i}=2*ϵ*_{i}*x*_{0}=2*ϵ*_{i}(*c*_{1}+2*c*_{2}), in the first case, and *c*_{2}−*ϕ*_{i}=2*ϵ*_{i}*x*_{0}−*x*_{0}=(2*ϵ*_{i}−1)(*c*_{1}+2*c*_{2}), in the second case. Note that if *ϕ*_{1}=*ϕ*_{2} then only one field pattern is generated. Finally, if (*x*,*t*)=(2*x*_{0}(*l*+*ϵ*_{1}),0) and (*x*,*t*)=(2*x*_{0}(*n*+*ϵ*_{2}),0) belong to the matrix, then there could be up to four associated field patterns: the index parameters are found by tracing the relevant characteristic or characteristics that originate from the source at *t*=0 until they strike the base of an inclusion.
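The classification of *ϕ* in terms of *ϵ* can be collected into a small helper (a sketch for the figure 3 geometry; the function name and the handling of matrix points are our own):

```python
# Index parameter phi of the field pattern launched by injecting current at
# x = 2*x0*(l + eps), t = 0, when that point lies at the base of an inclusion
# (figure 3 geometry, x0 = c1 + 2*c2). Returns None for points in the matrix,
# where one must instead trace characteristics to the base of an inclusion.
def phi_from_epsilon(eps, c1, c2):
    x0 = c1 + 2 * c2          # spatial period of the microstructure
    frac = eps % 1.0          # only the fractional part matters (l is integer)
    if frac < c2 / (2 * x0):                   # odd column
        return 2 * frac * x0                   # phi = 2*eps*x0
    if 0.5 < frac < 0.5 + c2 / (2 * x0):       # even column
        return c2 - (2 * frac - 1) * x0        # c2 - phi = (2*eps - 1)*x0
    return None                                # point lies in the matrix

c1, c2 = 1.0, 0.5   # illustrative speeds, so x0 = 2.0
print(phi_from_epsilon(0.1, c1, c2))   # odd column
print(phi_from_epsilon(0.6, c1, c2))   # even column
print(phi_from_epsilon(0.3, c1, c2))   # matrix point
```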

Collections of field patterns are especially interesting because of their multi-dimensional nature. They are similar in some respects to non-interacting particles in quantum mechanics. The idea is that one can split the space–time diagram into an infinite number of disjoint ‘conducting networks’ that do not interact. Each field pattern in a collection then lives on one of these conducting networks.

Consider a collection of *m* field patterns, parametrized by the index parameters *ϕ*_{1},*ϕ*_{2},…,*ϕ*_{m}. The potential *V*_{i}(*x*,*t*) associated with the field pattern with index *ϕ*_{i} satisfies the wave equation and, by the superposition principle and the linearity of the problem, so does the total potential

$$V(x,t)=\sum_{i=1}^{m}V_i(x,t).\tag{3.2}$$

Although *V*(*x*,*t*) is governed by the wave equation, it is far simpler to track the dynamics of *V*_{i}(*x*,*t*) for each field pattern, *i*=1,2,…,*m*, because the field patterns do not interact. We can think of *V*_{i}(*x*,*t*) as living in its separate dimension, *x*_{i}, and rather than considering the potential *V*(*x*,*t*) one can consider the multi-dimensional potential

$$\mathcal{V}(x_1,x_2,\ldots,x_m,t)=\sum_{i=1}^{m}V_i(x_i,t),\tag{3.3}$$

which reduces to *V*(*x*,*t*) when *x*_{1}=*x*_{2}=⋯=*x*_{m}=*x*. It is expected that one will be able, in appropriate conditions, to homogenize the behaviour of field patterns so that *V*_{i}(*x*_{i},*t*) is replaced by some suitably defined ‘coarse-grained’ potential *V̄*_{i}(*x*_{i},*t*) that satisfies an appropriate homogenized equation (still to be found). One could then speak about the dynamics of the multi-dimensional coarse-grained potential *V̄*(*x*_{1},*x*_{2},…,*x*_{m},*t*).

An alternative to introducing these extra dimensions is to introduce a vector potential **V**(*x*,*t*) with *m* components *V*_{i}(*x*,*t*). With coarse-grained potentials (and an associated coarse-grained conductivity **σ**(*x*,*t*)), the work of Khruslov [59] and Briane and Mazliak [60,61] (see also [41] where related ideas are embodied in a quantum mechanical context) suggests that the homogenized equations may simply couple these field components. If this turns out to be the case, then the full multi-variate potential may not be needed.

## 4. Solving the cell problem

In this section, we take steps towards determining the effective properties of the material with the space–time microstructure shown in figure 3. The results concerning the microgeometry in figure 4 are given in the electronic supplementary material, as they are very similar to those presented here. We concentrate on the case where the fields **e**(**x**) and **j**(**x**) have the same periodicity as the field pattern, i.e. their cell of periodicity contains two inclusions.

As already noted in the previous section, the special geometry in figure 3 is such that the characteristic lines follow a certain periodic pattern. Let us consider, then, the unit cell for the network dynamics (figure 7; note that such a unit cell is twice the size of the unit cell of the space–time microstructure, that is, 2*x*_{0} by *t*_{0}) and let us determine the solution of the problem on the unit cell in the case in which the initial conditions are
The potentials *V*^{+}_{i}(*x*,*t*) and *V*^{−}_{i}(*x*,*t*) in phase *i* then take the following expressions:
in which the coefficients *a*^{±}_{i} are to be determined, and *γ*_{1} and *γ*_{2} relate the potential jumps across the characteristic lines to the currents flowing through them.

We highlight that the case analysed in detail in this section is of particular interest because its solution can be used as a starting point to build the solutions corresponding to other choices of initial conditions, as shown in §5, where, instead of a prescribed jump in potential at *t*=0, we consider the case of a linear potential.

The symmetry of the unit cell with respect to the vertical centreline suggests splitting the dynamics represented in figure 7 into dynamics that are symmetric and dynamics that are antisymmetric: every type of dynamics can be recovered by a suitable linear combination of these two elementary cases.
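This splitting can be sketched in a few lines. Here the mirror operation on the 'wire' currents is modelled abstractly as an involution *R* (the toy reflection below is an assumption; the true pairing of wires would be read off figure 7):

```python
import numpy as np

# Illustrative splitting of a current state into symmetric and antisymmetric
# parts with respect to the vertical centreline of the unit cell.  The
# reflection R is any involution (R @ R = I); here it simply reverses the
# (hypothetical) wire ordering.
n = 6
R = np.fliplr(np.eye(n))

j = np.array([1.0, -2.0, 0.5, 3.0, 0.0, -1.0])   # arbitrary dynamics
j_sym = 0.5 * (j + R @ j)     # symmetric part:      R @ j_sym ==  j_sym
j_anti = 0.5 * (j - R @ j)    # antisymmetric part:  R @ j_anti == -j_anti

# Any dynamics is recovered as the linear combination of the two cases.
assert np.allclose(j, j_sym + j_anti)
assert np.allclose(R @ j_sym, j_sym)
assert np.allclose(R @ j_anti, -j_anti)
```

Because the network dynamics is linear, each part can be evolved separately, which is exactly why only the symmetric and antisymmetric cell problems need to be solved.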

### (a) Symmetric dynamics

To solve for the currents and potentials associated with the symmetric dynamics with periodic currents (and hence periodic electric fields), we suppose all the currents are flowing in the direction of positive time, and have the magnitudes *j*_{0}, *j*_{1}, *j*_{2} and *j*_{0}′, *j*_{1}′, *j*_{2}′ as indicated on the lines in figure 8. Periodicity and symmetry ensure the current flow has this structure. By conservation of current at the interfaces between the phases (see equation (2.10)), we require that

With reference to figure 8, we let *V*_{i}, *i*=1,2,…,9, denote the potentials in each of the regions 1,2,…,9 and we let *V*_{i}′, *i*=2,4,6,7,8,9, denote the potentials in each of the regions 2′,4′,6′,7′,8′,9′. For simplicity, we set *V*_{1}=0. Then, from the aforementioned rule that the potential jump across a characteristic line is *γ*_{i} times the current flowing through it, we obtain successively
In particular, *j*_{0}+*j*_{1}=*j*_{0}′+*j*_{1}′, as can be seen by multiplying the first formula in (4.7) by *γ*_{1}. So all the currents can be expressed in terms of only two independent currents, say *j*_{0} and *j*_{0}′. Then,

The advantage of studying the symmetric dynamics in figure 8, rather than the general dynamics in figure 7, lies in the fact that the symmetry with respect to the centreline of the unit cell leads to average current fields and average electric fields that have non-zero components only in the time direction. In particular, the average current field is easily worked out from the flux of current into the lower boundary of the cell of figure 8, and is (2*j*_{0}+2*j*_{0}′+*j*_{1}+*j*_{1}′)/*x*_{0} in the time direction. The average electric field is easily worked out from the potential jump across the cell in the vertical direction, and is (*V*_{1}−*V*_{5})/*t*_{0}. The ratio of the average current field to the average electric field is
We denote this ratio by *β*_{*}.

### (b) Antisymmetric dynamics

To solve for the currents and potentials associated with the antisymmetric dynamics with periodic currents (and hence periodic electric fields), we assign the convention that currents flowing in the direction of positive time have positive sign, and currents flowing in the direction of negative time have negative sign. The currents are labelled *j*_{0}, *j*_{1}, *j*_{2} and *j*_{0}′, *j*_{1}′, *j*_{2}′ on the lines in figure 9. Periodicity and antisymmetry ensure the current flow has this structure. By conservation of current at the interfaces between the phases (see equation (2.10)) we require that
With reference to figure 9, we let *V*_{i}, *i*=1,2,…,9, denote the potentials in each of the regions 1,2,…,9 and we let *V*_{i}′, *i*=2,4,6,7,8,9, denote the potentials in each of the regions 2′,4′,6′,7′,8′,9′. Once again, for simplicity, we set *V*_{1}=0. Then, from the rule that the potential jump across a characteristic line is *γ*_{i} times the current flowing through it, we obtain successively
*j*_{0} and *j*_{0}′ can be chosen independently, and in terms of these
Since *V*_{5}=*V*_{1}, there is no average electric field in the vertical direction. Similarly, there is no average current field in the vertical direction. In the horizontal direction, by looking at the current flux out of the unit cell, one sees that the average current to the right is 2(*j*_{1}−*j*_{0})/*t*_{0}. The average electric field pointing to the right is (*V*_{9}′−*V*_{9})/*x*_{0}. The ratio of the average current field to the average electric field is then
We denote this ratio by *α*_{*}.

Therefore, from (2.13), (4.9) and (4.14) we see that the ‘effective conductivity tensor’ is
Note that this speed differs from *c*_{1}, even when the parameters of phase 2 approach those of phase 1. If we had chosen our units of space and time so that *x*_{0}=1 and *t*_{0}=1, then the corresponding dimensionless speed would be
If **σ**(**x**) had a very tiny imaginary part (as relevant to composites of hyperbolic materials, when **σ**(**x**) is replaced by the dielectric tensor field **ε**(**x**) and time is replaced by a spatial variable), then indeed *σ*_{*} would be the appropriate ‘speed’ giving the effective ‘characteristic lines’ of propagation. In the absence of such an imaginary part one needs to derive the appropriate homogenized equations; the homogenized equation is unlikely to be simply the original wave equation with **σ**(**x**) replaced by *σ*_{*}.

## 5. Associated field patterns

As before, let us assume the origin *x*=*t*=0 coincides with the bottom-left corner of an inclusion, as in figure 3. Now, given a non-negative index parameter *ϕ*<*c*_{2}, suppose we launch a field pattern *V* (*x*,*t*) by setting the initial conditions
Then *V* (*x*,*t*) is just equal to *H*(*x*−*ϕ*−*c*_{2}*t*) until the time 1−*ϕ*/*c*_{2}, when the discontinuity strikes the right-hand side of the inclusion. Let us label this potential *V* (*x*,*t*,*ϕ*) and its associated current **j**(*x*,*t*,*ϕ*) to make the dependence on *ϕ* explicit. Now, given two index parameters *ϕ*_{1} and *ϕ*_{2}, where 0<*ϕ*_{1}<*ϕ*_{2}<*c*_{2}, we can consider the potential
*W*(*x*,*t*,*ϕ*_{1},*ϕ*_{2}) is clearly a solution of the wave equation being a superposition of solutions. It has discontinuities in slope along the lines indicated in figure 10. Let the associated current field be labelled **J**(*x*,*t*,*ϕ*_{1},*ϕ*_{2}). It is given by
Moreover, *W*(*x*,*t*) satisfies the corresponding initial conditions.

By considering the evolution of this disturbance, it is clear that the associated field pattern *W*(*x*,*t*,*ϕ*_{1},*ϕ*_{2}) is a linear function of *x* and *t* in each space–time polygonal region whose boundaries are marked by the lines of discontinuity of slope in figure 10. Thus, the lines of discontinuity in the associated field pattern can be considered as sources and sinks of the ‘electric field’ −∂*W*(*x*,*t*,*ϕ*_{1},*ϕ*_{2})/∂*x*, like electric charges. We call *W*(*x*,*t*,*ϕ*_{1},*ϕ*_{2}) an associated field pattern of the first degree. An example of a periodic associated field pattern of the first degree is graphed in figure 11.

An associated field pattern of the second degree would be the function
where *ϕ*_{11},*ϕ*_{12},*ϕ*_{21} and *ϕ*_{22} are parameters such that
*Y* (*x*,*t*,*ϕ*_{11},*ϕ*_{12},*ϕ*_{21},*ϕ*_{22}) will have discontinuities in its second derivatives across the lines in the field pattern. Field patterns of higher degree can then be defined in the obvious way.

## 6. Numerical results

To test the theoretical results obtained in §4a and explore the subject of field patterns further, we perform some numerical analyses. Because a field pattern lives on its own discrete network, it is only necessary to study the dynamics of field patterns on these discrete networks. On each discrete network the evolution of the currents in the ‘wires’ can be viewed as a dynamical system, and due to the periodicity in time it suffices to study the currents in the ‘wires’ at times *t*=*τ*+*nt*_{0} for *n*=0,1,2,…, where *τ* is some fixed time. To simplify the description, we choose *τ* to be a time where none of the ‘wires’ in the discrete network intersect, as shown for example by the horizontal line at *t*=*τ* in figure 7. Then, the state of the system at these discrete times is entirely determined by specifying the currents in each of the ‘wires’: it is captured by the function *j*(*k*,*m*,*n*), where *k*, taking integer values between 1 and 12, indexes the ‘wires’ in each unit cell, the integer *m* indexes the cell and the integer *n* indexes the discrete time. The state at any intervening time between the discrete times is easily determined from the dynamics: we can think of the set of currents at the discrete times as being representative of the full dynamics in much the same way that a Poincaré map helps one visualize the dynamics of a dynamical system.

Our aim is to determine the evolution of the state function *j*(*k*,*m*,*n*) as *n*=0,1,2,…, which represents the discrete time, increases. To achieve this goal, we calculate the Green function that allows one to recover the currents at time *t*=*τ*+*nt*_{0}, with *n* fixed, given the currents at time *t*=*τ*+(*n*−1)*t*_{0}. Clearly, this is done by taking one unit cell, say with *m*=*m*_{0}, injecting at *t*=*τ* (*n*=0) a unit current at each of the 12 points *k*=1,2,…,12 marked by the green dots on the line *t*=*τ* in figure 7, one at a time, and calculating how such a current flows along the characteristic lines to determine the currents at *t*=*τ*+*t*_{0} (*n*=1). Note that the current injected at some of the 12 points may cross the boundary of the unit cell: if one injects, for instance, a current at point 1, this will flow towards points 9, 10 and 12 of the adjacent cell on the left, with *m*=*m*_{0}−1, whereas, if one injects current at point 12, for example, this will flow towards points 1, 3 and 4 of the adjacent cell on the right, with *m*=*m*_{0}+1. This means that the currents injected in a unit cell at *t*=*τ* (*n*=0) may influence, at *t*=*τ*+*t*_{0} (*n*=1), the currents in up to three unit cells.

The Green function so calculated can then be denoted by *G*_{k,k′}(*m*−*m*′), to indicate that it provides the current at point *k*, with *k*=1,2,…,12, of cell *m*, given the current at point *k*′, with *k*′=1,2,…,12, of cell *m*′. Such a function obviously depends only on the difference *m*−*m*′, and its explicit expression is given in appendix A. Then, the current at point *k* of cell *m* at time *t*=*τ*+*nt*_{0} is determined by the currents at points *k*′ of cells *m*′ at the previous time *t*=*τ*+(*n*−1)*t*_{0} by
To mimic an infinite structure in the *x* direction, we consider a very large number *M* of cells. It is then natural to take periodic boundary conditions, so that we can think of the discrete network as lying on the surface of a cylinder, with the axis variable corresponding to time and the angle variable corresponding to space. Thus, in (6.1) cell *M* is identified with cell 0 and cell −1 is identified with cell *M*−1: equivalently, the argument *m*−*m*′ of *G*_{k,k′}(*m*−*m*′) in (6.1) should be replaced with (*m*−*m*′) mod *M*.
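In code, one time step of (6.1) with periodic boundary conditions is a block circular convolution. The Green-function blocks below are random stand-ins (the true entries, given in appendix A, depend on *γ*_{1} and *γ*_{2} and may be signed); we only build in locality, i.e. that currents reach at most the two neighbouring cells, and conservation of total current:

```python
import numpy as np

K, M = 12, 10                      # wires per cell, number of cells
rng = np.random.default_rng(0)

# Stand-in for the Green function G_{k,k'}(m - m'), with shift d = m - m'
# restricted to {-1, 0, +1} (locality).  Each injected unit current splits
# into pieces summing to 1 over the three reachable cells (conservation);
# the true, possibly signed, splitting ratios come from appendix A.
G = {d: rng.uniform(size=(K, K)) for d in (-1, 0, 1)}
col_sums = sum(G.values()).sum(axis=0)
for d in G:                        # normalize columns over all shifts to 1
    G[d] /= col_sums

def step(j):
    """One period t_0 of the network dynamics, eq. (6.1), periodic cells."""
    j_new = np.zeros_like(j)
    for d, Gd in G.items():
        # cell m receives G(d) times the currents of cell (m - d) mod M
        j_new += Gd @ np.roll(j, d, axis=1)
    return j_new

j = rng.normal(size=(K, M))        # state j(k, m, n) at one discrete time
j1 = step(j)
print(j.sum(), j1.sum())           # total current is conserved
```

The modular wrap of (*m*−*m*′) is handled by `np.roll`, which is exactly the cylinder picture described above.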

### (a) On testing the periodic dynamics

To test the periodic solution corresponding to the symmetric dynamics derived in the previous section, we just have to impose that, at *t*=*τ*, the distribution of currents at the 12 points of each unit cell be equal to the periodic distribution of currents given by the symmetric dynamics of figure 8, and check that, for *n*=1,2,…, such a distribution does not change. In figure 12, we represent the time evolution of the currents for *n*=0,1,…,100 in 10 cells (so that the total number of points is 120), for the case when *j*_{0}=1, *j*_{0}′=−2, *γ*_{1}=1 and *γ*_{2}=3. As expected, the solution depicted in figure 12 is clearly periodic in both space and time.

A similar result is obtained when one considers as initial distribution of currents that corresponding to the antisymmetric dynamics of figure 9.

### (b) Blow-up

At this point one may ask what happens when, at *t*=*τ*, the current is injected at only one of the 120 points. To address this question, we supposed that at time *t*=*τ* the only non-zero current is the one at point 1 of cell 5, that is, at the point labelled 61. Obviously, owing to the periodicity condition whereby cell 1 is treated as adjacent to cell 10, the point of injection could equally be point 1 of any cell. We found that the solution blows up exponentially with time. To better appreciate this behaviour, in figure 13 we report the solution for the case of 100 cells with the current injected only at point 601, that is, point 1 of cell 50.
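The coexistence of current conservation and exponential blow-up can be seen in a deliberately simple toy transfer matrix (the numbers are illustrative and are not the paper's Green function): each column sums to 1, so the total signed current is conserved at every step, yet an eigenvalue of modulus greater than 1 makes a localized injection grow.

```python
import numpy as np

# Toy 2x2 "transfer matrix": columns sum to 1 (conservation of total
# signed current), but the eigenvalues are 1 and 3, so a growing mode exists.
T = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

j = np.array([1.0, 0.0])              # unit current injected at one point
norms = []
for n in range(10):
    norms.append(np.linalg.norm(j))
    j = T @ j

print(j.sum())                        # still 1: total current conserved
print(norms[-1] / norms[-2])          # growth factor -> dominant eigenvalue 3
```

Decomposing the initial state over the eigenvectors [1,1] (eigenvalue 1) and [1,−1] (eigenvalue 3) shows why: the conserved sum lives on the first mode while the norm is dominated by the second.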

One way to avoid this blow-up is to set *γ*_{1}=*γ*_{2}, so there is no impedance mismatch, as done in the analysis of Lurie [17]. Then, if we start by injecting current at a single point, the field pattern degenerates to a single trajectory. The current flows either to the right or to the left in the space–time diagram: there are no reflected waves but only a transmitted wave. Conservation of current then implies that the current along the trajectory remains constant: there is no blow-up.

### (c) Eigenvalues and eigenvectors of the transfer matrix

Now the question is: is it possible to obtain a solution that does not blow up without imposing any special constraint on the material parameters? To this purpose, let us calculate the eigenvalues of the transfer matrix. Clearly, with periodic boundary conditions, the transfer matrix is a 12*M*×12*M* matrix, and (6.2) gives its elements in terms of the Green function, whose components are explicitly given in appendix A. For simplicity, suppose we consider only 10 cells, for a total of 120 points (12 points for each unit cell), so that the number of eigenvalues is equal to 120 (we write the transfer matrix as a 120×120 matrix that, applied to a vector with 120 components describing the current distribution at the 120 points at a certain time, provides the vector with 120 components representing the currents at the same 120 points after a period of time). For the particular case when *γ*_{1}=1 and *γ*_{2}=3, these eigenvalues are plotted in figure 14. From the graph, it is clear that many eigenvalues are on the unit circle. This is a consequence of the PT-symmetry of the problem: invariance under *x*→−*x* when the origin is chosen at the centre of an inclusion, and under *t*→−*t* (when again the origin is chosen at the centre of an inclusion). Excellent reviews of PT-symmetry are given in [62] and by Konotop *et al.* [63]. It may seem that the time reversal symmetry is broken in our discrete dynamics by the choice of *τ*. Note, however, that nothing really changes if we modify *τ*, so long as the line *t*=*τ* does not intersect the inclusions: one just has to relabel the currents so they are consistent at the different values of *τ*. Thus, if the line *t*=*τ* moves through a point where two current lines cross, then the ordering of the current numbering labels undergoes a swap. One can even choose *τ* so that the line *t*=*τ* is exactly midway between two rows of inclusions, and then one clearly has time reversal symmetry.
Nothing really changes, but at the points of intersection of two current lines one has to be careful to distinguish the current flowing upwards to the right from the current flowing upwards to the left.

Clearly, modes having eigenvalues with modulus greater than 1 blow up in time, those with modulus less than 1 decay in time, while those with modulus 1 oscillate in time. Among all the eigenvalues, we select those with modulus equal to 1, and we apply the corresponding eigenvector as the initial current distribution in each unit cell. If *γ*_{1}=1 and *γ*_{2}=3, there are 99 eigenvalues with modulus 1 in total: the real one has multiplicity 3, whereas among the 96 complex eigenvalues the distinct ones form 13 conjugate pairs (each pair comprising an eigenvalue and its complex conjugate). In particular, seven pairs have negative real part and six have positive real part. The results can be grouped into three classes: solutions periodic in both time and space with time period equal to *t*_{0} and space period equal to 2*x*_{0}; solutions periodic in both time and space but with periods larger than *t*_{0} and 2*x*_{0}; and, finally, solutions that are not periodic. To the first class belong the symmetric (figure 8) and antisymmetric (figure 9) dynamics considered in the two previous subsections and, therefore, we refer to figure 12. Solutions of the second type are shown, as an example, in figure 15, whereas solutions of the third type are given, for instance, in figures 16 and 17.
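A sketch of this spectral analysis, again with stand-in Green-function blocks (the true entries from appendix A depend on *γ*_{1} and *γ*_{2}; here we only impose locality and conservation of current, which guarantees that 1 is an eigenvalue), assembles the block-circulant transfer matrix and classifies its eigenvalues by modulus:

```python
import numpy as np

K, M = 12, 10                       # wires per cell, cells: a 120 x 120 matrix
rng = np.random.default_rng(1)

# Stand-in Green function blocks G_{k,k'}(d), d = m - m' in {-1, 0, +1},
# normalized so each column, summed over all three blocks, equals 1
# (conservation of the injected current).
G = {d: rng.uniform(size=(K, K)) for d in (-1, 0, 1)}
col_sums = sum(G.values()).sum(axis=0)
for d in G:
    G[d] /= col_sums

# Assemble the 12M x 12M block-circulant transfer matrix of eq. (6.2).
T = np.zeros((K * M, K * M))
for m in range(M):
    for mp in range(M):
        d = (m - mp) % M
        if d == M - 1:
            d = -1                  # periodic wrap: left neighbour
        if d in G:
            T[m*K:(m+1)*K, mp*K:(mp+1)*K] = G[d]

lam = np.linalg.eigvals(T)
tol = 1e-9
print(int(np.sum(np.abs(lam) > 1 + tol)), "growing,",
      int(np.sum(np.abs(np.abs(lam) - 1) <= tol)), "neutral,",
      int(np.sum(np.abs(lam) < 1 - tol)), "decaying modes")
```

With the paper's actual Green function the neutral modes are the ones of interest: applying their eigenvectors as initial data yields the periodic and quasi-periodic solutions of figures 12 and 15–17.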

## 7. Conclusion

This paper launches the study of field patterns, but leaves many avenues for further research. In particular, it should motivate subsequent investigations on how things change if there is a small nonlinearity (of possible relevance to quantum mechanics); how things change if the tensor **σ**(**x**) has a small imaginary part (of possible relevance to determining the effective behaviour of composites of hyperbolic metamaterials, when **σ**(**x**) is replaced by the dielectric tensor field **ε**(**x**) and time is replaced by a spatial variable); how things change if one considers the wave equation in two or three spatial dimensions, rather than just one; and how things change if one looks at field patterns associated with other equations, such as Maxwell's equations with a space–time microstructure, or perhaps a modified version of Dirac's equation that allows one to insert some space–time microstructure. A key point is that the underlying wave equation, in the ideal case, should not have any dispersion, because otherwise the field patterns will lose their structure. Our analysis also begs the question of whether there are space–time microstructures, with appropriate moduli, that give rise to field patterns for which the transfer matrix only has eigenvalues with modulus 1, so that there are no growing modes. Additionally, from the viewpoint of homogenization, and high-frequency homogenization [1,64–70] in particular, one would like to know whether there are solutions that have rapid oscillations at the scale of the cells but macroscopic modulations, and one would want to derive the effective equations that describe how these modulations propagate. It will be exciting to see how our understanding develops.

## Authors' contributions

G.W.M. conceived the mathematical model. O.M. implemented and performed the simulations. G.W.M. and O.M. interpreted the computational results and wrote the paper. Both authors gave final approval for publication.

## Competing interests

We declare we have no competing interests.

## Funding

The authors are grateful to the Minneapolis Institute for Mathematics and its Applications for support as part of the special year on Mathematics and Optics. They also thank the National Science Foundation of the USA for support through grant no. DMS-1211359.

## Acknowledgements

Alexander and Natasha Movchan and Hoai-Minh Nguyen are thanked for comments on the manuscript. Maxence Cassier is thanked for suggesting that we plot the eigenvalues of the transfer matrix. Carme Calderer is thanked for interesting discussions about liquid crystals with space–time microstructures.

## Appendix A. Green’s function for the space–time microstructure with aligned geometry

In this appendix, we give the components of the Green function associated with the space–time microstructure illustrated in figure 3, where the rectangular inclusions are aligned. We inject a unit current at time *t*=*τ* at each of the 12 points marked by the 12 green dots on the line *t*=*τ* in figure 7, and recall that at each time step currents injected into one cell can generate currents at time *t*=*τ*+*t*_{0} not only in that cell but also in the two neighbouring cells. The Green function is calculated by determining how the currents split along the characteristics. Clearly, as the Green function only depends on *m*−*m*′, the case where the currents are injected at points in other cells is straightforward: one has just to suitably translate the expressions of the components of the Green function. Recall that *G*_{k,k′}(*m*−*m*′) gives the current at point *k*, with *k*=1,2,…,12, of cell *m*, given the current at point *k*′, with *k*′=1,2,…,12, of cell *m*′. Because this only depends on *m*−*m*′ (more precisely (*m*−*m*′) mod *M*), it suffices to take *m*′=0. Then, *G*_{k,k′}(*m*) gives the current at point *k*, with *k*=1,2,…,12, of cell *m*, given the current at point *k*′, with *k*′=1,2,…,12, of cell 0: *m*=−1 then refers to the cell on the left of cell 0, i.e. cell *M*−1, and *m*=+1 refers to the cell on the right of cell 0, i.e. cell 1. So we can follow, step by step, where the current flows when it is injected at each of the 12 points in turn.

## Footnotes

Electronic supplementary material is available online at https://dx.doi.org/10.6084/m9.figshare.c.3680647.

- Received November 9, 2016.
- Accepted January 18, 2017.

- © 2017 The Author(s)

Published by the Royal Society. All rights reserved.