## Abstract

We build efficient and unitary (hence stable) methods for the solution of the linear time-dependent Schrödinger equation with explicitly time-dependent potentials in a semiclassical regime. The Magnus–Zassenhaus schemes presented here are based on a combination of the Zassenhaus decomposition (Bader *et al.* 2014 *Found. Comput. Math.* **14**, 689–720. (doi:10.1007/s10208-013-9182-8)) with the Magnus expansion of the time-dependent Hamiltonian. We conclude with numerical experiments.

## 1. Introduction

Rapid advances in laser technologies over recent years have led to significant progress in the control of systems at the molecular level [1]. Pioneering work in the control of chemical systems at the quantum level was done in the study of photo-dissociation and bimolecular reactions. Various control techniques, such as the pump-dump quantum control scheme [2] and coherent control schemes [3], have had numerous experimental validations and applications [4,5].

These experimental successes, together with a dramatic improvement in our ability to shape femtosecond laser pulses over recent years, have led to a great deal of interest in the development of systematic ways of designing controls (shaping laser pulses) and to a requirement for rigorous mathematical analysis of issues such as controllability [6].

In the case of laser-induced breakdown (photo-dissociation) of a molecule, for instance, there is a great deal of interest in designing lasers that achieve efficient breakdown. The fact that the dissociation timescales are often themselves in femtoseconds means that it cannot generally be assumed that the laser pulse causes near-instantaneous and efficient excitation of a molecule sitting in the ground state, having no other influence thereafter—the correct dynamics require taking into account the time-dependent nature of the electric potential (laser) throughout the evolution of the wave function.

To analyse the control exerted by these lasers, we need efficient means of solving the Schrödinger equation featuring time-dependent Hamiltonians; existing strategies are either of low accuracy or become prohibitively expensive at higher orders of accuracy.

The design of optimal laser pulses is often posed as an inverse problem that is solved via optimization schemes requiring repeated solutions of Schrödinger equations with modified time-dependent Hamiltonians. The ability to solve these Schrödinger equations efficiently, with moderately large time steps and high accuracy, becomes crucial here, creating a need for high-order methods [7].

In this paper, we are interested in the numerical computation of the linear, time-dependent Schrödinger equation in a semiclassical regime,

iε ∂u(x,t)/∂t = −ε² ∂²u(x,t)/∂x² + V(x,t) u(x,t),  x ∈ [−1,1],  t ≥ 0,  (1.1)

where 0 < ε ≪ 1 is the semiclassical parameter, with initial condition *u*(*x*,0)=*u*_{0}(*x*) and periodic boundary conditions. We assume that the potential function *V*(*x*,*t*) is sufficiently smooth for all *t*≥0.

Solving the Schrödinger equation on the torus in this manner is standard practice in state-of-the-art numerical methods, particularly in the semiclassical regime, even though the true problem ought to be solved on the whole real line.

In principle, as the splittings presented in this paper are developed without discretizing in space first, it should be possible to combine them with other discretization strategies and boundary conditions—so long as the spatial oscillations are resolved correctly and the differential operator has a skew-symmetric discretization.

Equation (1.1) is posed on the Hilbert space L²([−1,1]); the quantity |*u*(*x*,*t*)|² is interpreted as the probability density of finding the particle at position *x* at time *t*. For this reason, the initial condition *u*_{0}(*x*) is normalized to one and it is easy to see that the L² norm of the solution is an invariant of the evolution.

The regularity that we require from the potential *V* depends on the desired order of accuracy of the Magnus–Zassenhaus scheme. For convenience, however, we assume that it is smooth in its domain. The initial condition is usually a high-frequency wave packet, but even if it is non-oscillatory it can be shown, cf. the analysis in [8], that the solution to this Schrödinger equation is highly oscillatory, with spatial frequencies of at least 𝒪(*ε*^{−1}).

The first step in approximating (1.1) is usually spatial discretization, which yields the following system of ODEs:

**u**′(t) = iεK **u**(t) − iε⁻¹D(t) **u**(t),  t ≥ 0,  **u**(0) = **u**₀,  (1.2)

where *K* and *D*(*t*) are *M*×*M* matrices representing the discretization of the second derivative and of the multiplication by *V*(⋅,*t*), respectively. We understand that **u**(*t*) approximates the solution at the grid points at time *t* and **u**(0) is derived from the initial conditions.

The exponential midpoint rule, a standard second-order method, can be obtained by freezing the matrix *D*(*t*) at the midpoint of the interval [*t*, *t*+*h*] and applying the Strang splitting

**u**(t+h) ≈ e^{ihεK/2} e^{−ihε⁻¹D(t+h/2)} e^{ihεK/2} **u**(t).  (1.3)

This scheme features exponents that are easily analysed (in terms of powers of *ε* and *ε*^{−1}) as well as easily computable exponentials. Using spectral collocation or spectral spatial discretization methods, the matrices *K* and *D*(*t*) are circulant and diagonal, respectively, so that their exponentials can be computed via the FFT or directly.
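To make the scheme concrete, here is a minimal sketch of one step of the exponential midpoint rule (1.3) on a uniform periodic grid over [−1,1], applying the kinetic exponential via the FFT. This is our own illustration, not the authors' code; the function name `midpoint_strang_step` and the wavenumber convention for the period-2 domain are ours.

```python
import numpy as np

def midpoint_strang_step(u, V, t, h, eps, x):
    """One step of the exponential midpoint rule: freeze V at t + h/2,
    then Strang-split the kinetic and potential parts."""
    M = len(u)
    # integer Fourier modes n, scaled to wavenumbers k = pi*n for a period-2 domain
    k = np.pi * np.fft.fftfreq(M, d=1.0 / M)
    # half-step of the kinetic part exp(i*eps*(h/2)*d^2/dx^2), diagonal in Fourier space
    kinetic_half = np.exp(-0.5j * h * eps * k**2)
    u = np.fft.ifft(kinetic_half * np.fft.fft(u))
    # full step of the potential part, frozen at the midpoint of the interval
    u = np.exp(-1j * h * V(x, t + 0.5 * h) / eps) * u
    # second kinetic half-step
    u = np.fft.ifft(kinetic_half * np.fft.fft(u))
    return u
```

A single step preserves the l² norm of the solution vector exactly (up to round-off), since all three factors are unitary.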

The exponential midpoint rule is the lowest-order method in a family of methods based on the Magnus expansion that have been found to be effective in computational chemistry [11]. Higher-order methods of this type are obtained by representing the solution to (1.2) through a Magnus expansion,

**u**(t) = e^{Θ(t)} **u**(0),

where *Θ*(*t*) is an *M*×*M* skew-Hermitian matrix obtained as an infinite series, with each term *Θ*^{[k]}(*t*) composed of *k* nested integrals and commutators of the matrix A(t) = iεK − iε⁻¹D(t) evaluated at different times. In practice, the series is truncated and applied over small time steps *h*, in order to keep truncation errors small while ensuring convergence of the series.

The convergence of Magnus-expansion-based methods for solving linear Schrödinger equations with *ε*=1 was analysed by Hochbruck & Lubich [12], who conclude that Magnus-expansion-based methods achieve their prescribed orders of accuracy when the time step *h* satisfies, for some constant *c*, an inequality of the form *h* ≤ *c*.

Another serious drawback of this approach lies in the costly approximation of the exponential *e*^{Θ(t)}. As it happens, the exponent *Θ*(*t*) ends up being of a large size (both spectral and dimensional), and neither diagonal nor circulant. Indeed, observe that the highly oscillatory nature of the solution to (1.1) requires a large number of degrees of freedom in the spatial discretization, while *Θ*(*t*), as a sum of nested commutators of the circulant *K* and the diagonal *D*, is a dense matrix with a large spectral radius, whose exponential is expensive to approximate.

The standard Magnus–Lanczos schemes, which rely on exponentiation of the truncated Magnus expansion via Lanczos iterations, become prohibitively expensive in the semiclassical regime due to the large spectral radius, while Yošida-type splittings feature an exponential growth in cost as we seek higher orders of accuracy.

Powerful tools such as the Zassenhaus splitting or the Baker–Campbell–Hausdorff formula were historically avoided in splitting methods due to the large computational cost of nested commutators. However, upon choosing the correct infinite-dimensional Lie algebra for the Schrödinger vector field, these commutators lose their unwelcome features and enable the derivation of effective asymptotic splittings.

In [13], the current authors established a new framework for a numerical approach to the linear time-dependent problem with an autonomous potential *V*(*x*), based on asymptotic exponential splittings whose exponents are built from the differentiation operator and the multiplication by *V*, respectively. The asymptotic exponential splittings derived in [13] are superior to standard exponential splittings in a number of ways.

Firstly, instead of quantifying the errors in terms of the step size, *h*, which could be misleading due to large hidden constants, the errors are quantified in terms of the inherent semiclassical parameter *ε*, taking into account the natural scaling of the discretization parameters with *ε*.

Secondly, these splittings require far fewer exponentials than classical splittings to attain a given order. To be precise, the number of exponentials is shown to grow linearly, rather than exponentially, with the order. Moreover, the exponents decay increasingly rapidly in powers of *ε*, yielding an asymptotic splitting.

Thirdly, each of these exponentials can be computed fairly easily. The exponents *W*^{[0]} and *W*^{[1]} are either diagonal or circulant matrices and their exponentials can be computed either directly or through the FFT, respectively. The remaining exponents are very small and their exponentials can be computed cheaply using low-dimensional Lanczos methods.

The overall cost is quadratic in the desired order, in contrast to the exponential cost of Yošida-type splittings, which becomes increasingly prohibitive once the Hamiltonian to be split features more than two terms. Moreover, the resulting Magnus–Zassenhaus schemes are effective for much larger time steps, such as *h* = 𝒪(*ε*^{1/2}).

The aim of the paper is to derive asymptotic exponential splittings for Schrödinger equations with time-varying potentials. To develop such a splitting, we must first resort to the Magnus expansion. We follow the approach of [14] in §3, discretizing the integrals in the Magnus expansion using Gauss–Legendre quadratures. However, unlike the traditional Magnus expansion for ODEs, we work with infinite-dimensional operators to evaluate the commutators. To arrive at such a commutator-free expression, we work in the free Lie algebra of the infinite-dimensional operators ∂_{x}² and *V* discussed in §2. Following the framework of [13], a symmetric Zassenhaus splitting is carried out on the commutator-free Magnus expansion to present, eventually, the Magnus–Zassenhaus scheme of the fifth order (3.22). Following this derivation, one can obtain a method of any desired order (table 1). Implementation and numerical examples are discussed in §4.

Convergence and unitarity of our method follow from exactly the same argument as presented in [13]. Namely, it can easily be shown that all the exponents appearing in the derived splitting (3.22) are skew-Hermitian, hence the exponentials are unitary, which suffices for the stability of the method. Given consistency of our method (indeed, our scheme will be shown to be of local accuracy much higher than the first order required for the method to be consistent), we can invoke the Lax equivalence theorem and conclude the convergence of the method.

Realistic systems in quantum chemistry could involve time-dependent matrix-valued, highly oscillatory and stochastic potentials, among others. The first of these will require an extension of our Lie algebraic framework and is under active investigation, while extensions of an alternative scheme that was developed in a recent work [15] could prove promising for oscillatory and low regularity potentials. In this approach, the integrals appearing in the Magnus expansion are discretized at the very last stage, following a symmetric Zassenhaus splitting.

## 2. Lie-group setting

Following the established framework in [13], we suppress the dependence on *x* in (1.1) and analyse the following abstract ODE:

∂_t u(t) = A(t) u(t),  t ≥ 0,  u(0) = u₀,  (2.1)

where the operator *A*(*t*) = iε∂_{x}² − iε⁻¹*V*(⋅,*t*) belongs to a Lie algebra of operators described below.

The vector field in the Schrödinger equation is a linear combination of the action of two operators, ∂_{x}² and multiplication by *V*. Since our main tools, the Magnus expansion and exponential splitting methods, entail nested commutation, we consider the free Lie algebra generated by ∂_{x}² and *V*. Following [13], we describe their action on sufficiently smooth functions, e.g. functions in C_{p}^{∞}[−1,1], where the subscript *p* means periodicity in [−1,1]. It is trivial to observe that these two operators do not commute.

In a similar vein to [13], we proceed, in the pursuit of stability, to replace all odd powers of ∂_{x} that are accompanied by *i*. A small number of identities, valid whenever *y* is a *C*^{1} function, suffice for our presentation; the general form for expressing arbitrary odd derivatives in this way can be derived along the lines of [13].

In the Zassenhaus splitting for time-independent potentials [13], the commutators arise solely from the symmetric Baker–Campbell–Hausdorff formula, where each commutator has an odd number of letters. In the case of the Schrödinger equation, where our operators ∂_{x}² and *V* are each multiplied by *i*, this translates into an odd power of *i* for each commutator.

The Magnus expansion, however, does not possess such a desirable structure—it has commutators with odd as well as even number of letters. As a consequence, we have odd and even powers of *i* accompanying our terms and it is not enough to blindly replace odd powers of ∂_{x}. Instead, we replace all odd powers of ∂_{x} when accompanied by an odd power of *i* and all even powers of ∂_{x} when accompanied by an even power of *i*. A general formula for the replacement of even derivatives by odd derivatives can be proven along similar lines as [13]. For all practical purposes, however, we only require the identities

Once appropriate odd and even differential operators are replaced, we are left with operators built from powers of ∂_{x} and multiplications by functions derived from *V*—these algebraic forms also appear in Zassenhaus splittings for time-independent potentials [13]. We introduce a convenient notation,

〈f〉_k := ½ ( f ∘ ∂_x^k + ∂_x^k ∘ f ),  (2.2)

so that, in particular, 〈*V*〉_{0}=*V*.

It is worth noting that there is a rich algebraic theory behind these structures, which will feature in another publication, but not much is lost here by considering them merely a notational convenience. For the purpose of this work, we make observations which can be verified using the machinery of (2.2) in conjunction with the odd and even derivative replacement rules, and we present identities which suffice for simplifying all commutators appearing in this work. In particular, both *i*∂_{x}² = *i*〈1〉_{2} and *iV* = *i*〈*V*〉_{0} reside in the Lie algebra spanned by terms of the form *i*^{k+1}〈*f*〉_{k}.

For a real-valued *f*, 〈*f*〉_{k} is symmetric if *k* is even and skew-symmetric otherwise. This property is preserved under discretization once we use spectral collocation on a uniform grid. In that case, ∂_{x} is discretized as a skew-symmetric matrix 𝒦, multiplication by *V* is discretized as a diagonal matrix 𝒟_{V}, and 〈*f*〉_{k} is discretized as ½(𝒟_{f}𝒦^{k} + 𝒦^{k}𝒟_{f}), which is symmetric if *k* is even and skew-symmetric otherwise. Consequently, elements of the Lie algebra of the form *i*^{k+1}〈*f*〉_{k}, which are skew-Hermitian operators, discretize to skew-Hermitian matrices of the form ½ *i*^{k+1}(𝒟_{f}𝒦^{k} + 𝒦^{k}𝒟_{f}).
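These symmetry claims can be checked numerically. The sketch below (our own naming, assuming spectral collocation on an odd-sized uniform grid over [−1,1]) assembles the differentiation matrix via the FFT and verifies the parity of 〈f〉_k under transposition.

```python
import numpy as np

def spectral_diff_matrix(M):
    """Skew-symmetric spectral differentiation matrix for d/dx on a uniform
    periodic grid of M (odd) points over [-1, 1], assembled via the FFT."""
    k = np.pi * np.fft.fftfreq(M, d=1.0 / M)   # Fourier symbol of d/dx is i*k
    return np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(np.eye(M), axis=0), axis=0))

def bracket(f_vals, Dx, k):
    """Discretization of <f>_k = (f o d^k/dx^k + d^k/dx^k o f)/2
    as 0.5*(D_f Dx^k + Dx^k D_f)."""
    Df = np.diag(f_vals)
    Dk = np.linalg.matrix_power(Dx, k)
    return 0.5 * (Df @ Dk + Dk @ Df)
```

For odd *k* the resulting matrix is skew-symmetric and for even *k* symmetric, so *i*^{k+1}·(discretized 〈f〉_k) is skew-Hermitian in both cases.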

This structural property of the Lie algebra guarantees the unitarity of the exponentials appearing in our splittings and, consequently, the stability of the resulting methods.

### Definition 2.1

The height of a term *i*^{k+1}〈*f*〉_{k} is defined as *k*, the order of the differential operator featuring in it.

These terms benefit from a remarkable property of height reduction, which is stated here without proof: the commutator of two terms of heights *k* and *l* is a linear combination of terms of height at most *k* + *l* − 1, one lower than the height *k* + *l* that might naively be expected.
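To see height reduction at work in a special case (our own illustration, not from the paper): writing 〈f〉₁ = f∂_x + ½f′, a direct computation gives [〈f〉₁, 〈g〉₁] = 〈fg′ − f′g〉₁, an operator of height 1 rather than the naively expected 2. With f = cos πx and g = sin πx we get fg′ − f′g ≡ π, which can be confirmed with discretized operators applied to a band-limited test vector:

```python
import numpy as np

# spectral differentiation matrix on a uniform periodic grid over [-1, 1]
M = 31
x = -1 + 2 * np.arange(M) / M
k = np.pi * np.fft.fftfreq(M, d=1.0 / M)
Dx = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(np.eye(M), axis=0), axis=0))

def bracket1(f_vals):
    """Discretized <f>_1 = (f o d/dx + d/dx o f)/2."""
    Df = np.diag(f_vals)
    return 0.5 * (Df @ Dx + Dx @ Df)

f, g = np.cos(np.pi * x), np.sin(np.pi * x)
Bf, Bg = bracket1(f), bracket1(g)
v = np.sin(np.pi * x)          # band-limited test vector, so no aliasing occurs
comm_v = Bf @ (Bg @ v) - Bg @ (Bf @ v)
# here f*g' - f'*g = pi identically, so [<f>_1, <g>_1] = <pi>_1 = pi * d/dx:
# the commutator has height 1, not height 2
```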

For the largest part, our work will proceed in the language of the undiscretized operators introduced in this section. At the very last stage, we will resort to spectral collocation on the uniform grid over [−1,1] for spatial discretization. For this purpose, we will need at least 𝒪(*ε*^{−1}) grid points in order to resolve the spatial oscillations of the solution.

More formally, following [10] we assume that the solution *u*(*t*) features spatial oscillations of frequency 𝒪(*ε*^{−1}); higher frequencies, if present at all, are irrelevant in the asymptotic limit of *ε* → 0 and do not affect the asymptotic analysis carried out here.

The property of height reduction leads to a systematic decrease in the size of terms under commutation. We analyse the sizes of all terms in powers of *ε* and assume that our choice of the time step, *h*, is governed by a scaling law *h* = 𝒪(*ε*^{σ}) with 0 < *σ* ≤ 1. Larger values of *σ* correspond to very small time steps and are best avoided.

## 3. The solution

### (a) The Magnus expansion

To look for the solution of (2.1), one needs to take into account the fact that the operator *A*(*t*) does not, in general, commute across different times. For small *h*, we write the solution in the form

u(t+h) = e^{Θ(t,t+h)} u(t),  (3.2)

where *e*^{Θ(t,t+h)} is a flow evolving the solution from *t* to *t*+*h*. Let us observe now that the aim of the paper consists in the derivation of an asymptotic exponential splitting for an exponential of the type *e*^{Θ(t,t+h)}. This means that the algorithm we are going to present will advance in small time steps *h* (exactly as method (1.3) does). For clarity of exposition, however, we will focus on the first step of the Magnus expansion, i.e. (3.3), noting that (3.2), when required for any time window [*t*,*t*+*h*], is easily recovered from (3.3) by a straightforward translation of the vector field in time. For brevity, we write *Θ*(*h*) instead of *Θ*(*h*,0).

Simple differentiation of the ansatz in (3.3), together with elementary algebra (see [17] or [19] for details), leads to the conclusion that the exponent *Θ*(*t*) satisfies the dexpinv equation

Θ′(t) = Σ_{k=0}^{∞} (B_k/k!) ad_{Θ(t)}^{k}(A(t)),  Θ(0) = O,  (3.4)

where the *B*_{k} are Bernoulli numbers (*B*_{0}=1, *B*_{1}=−1/2, *B*_{2}=1/6, *B*_{3}=0, *B*_{4}=−1/30, *B*_{5}=0, *B*_{6}=1/42) and the adjoint representation is defined recursively by

ad_Θ^{0}(A) = A,  ad_Θ^{k+1}(A) = [Θ, ad_Θ^{k}(A)].
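The adjoint recursion and the truncated dexpinv right-hand side are straightforward to code. The sketch below (function names ours) checks the sanity case of commuting matrices, where all higher terms vanish and the series collapses to Θ′ = A.

```python
import numpy as np
from math import factorial

def ad(Theta, A, k):
    """k-fold adjoint: ad^0(A) = A, ad^{k+1}(A) = [Theta, ad^k(A)]."""
    for _ in range(k):
        A = Theta @ A - A @ Theta
    return A

# Bernoulli numbers B_0..B_6, with B_1 = -1/2 (the dexpinv convention)
BERNOULLI = [1.0, -0.5, 1.0 / 6, 0.0, -1.0 / 30, 0.0, 1.0 / 42]

def dexpinv_truncated(Theta, A):
    """Right-hand side of the dexpinv equation, truncated at k = 6."""
    return sum(BERNOULLI[k] / factorial(k) * ad(Theta, A, k) for k in range(7))
```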

The first few terms of the Magnus expansion, ordered by size in *h*, are

Θ(h) = ∫₀^h A(ξ) dξ − ½ ∫₀^h ∫₀^ξ [A(ζ), A(ξ)] dζ dξ + ⋯.  (3.5)

We say that a multivariate integral of a nested commutator is of grade *m* if it is of size 𝒪(*h*^{m}). Truncating the Magnus expansion by power *p* ensures that not only the exact flow *φ*, but also the numerical flow *Φ*=*e*^{Ωp(h)}, satisfy an error bound of the form *Φ*(*h*) − *φ*(*h*) = 𝒪(*h*^{p+1}); moreover, the time symmetry of the truncation by power *p* leads to a gain of an extra unit of order [20]. This means that if we aim for a numerical method of order six, it suffices to consider the truncation of the Magnus expansion only to the terms listed in (3.5).
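The order claim for the lowest truncation can be checked on a toy problem: for the exponential midpoint rule, Θ ≈ hA(h/2), the local error is 𝒪(h³), so halving *h* should reduce it roughly eightfold. The 2×2 skew-Hermitian family A(t) below is a hypothetical example of ours, not the Schrödinger operator.

```python
import numpy as np
from scipy.linalg import expm

def A(t):
    # a hypothetical skew-Hermitian family with [A(t), A(s)] != 0 for t != s
    return 1j * np.array([[np.cos(t), np.sin(t)],
                          [np.sin(t), -np.cos(t)]])

def magnus_midpoint_step(h, u0):
    """Lowest Magnus truncation Theta ~ h*A(h/2); local error O(h^3)."""
    return expm(h * A(h / 2)) @ u0

def reference_flow(h, u0, n=2000):
    """Brute-force reference over [0, h] via n tiny midpoint sub-steps."""
    u = u0.astype(complex)
    dt = h / n
    for j in range(n):
        u = expm(dt * A((j + 0.5) * dt)) @ u
    return u
```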

### (b) Magnus expansion in practice

It turns out that the multivariate integrals can be efficiently computed using the simple univariate quadrature rules of Munthe-Kaas & Owren [14]. We will follow their approach and evaluate the potential at the Gauss–Legendre quadrature points *t*_{1} = 1/2 − √15/10, *t*_{2} = 1/2 and *t*_{3} = 1/2 + √15/10, scaled to the interval of integration.
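These nodes (and the corresponding weights) can be generated rather than hard-coded; below we rescale the standard three-point Gauss–Legendre rule from [−1,1] to the unit interval [0,1].

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1], rescaled to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(3)
t_nodes = 0.5 * (nodes + 1.0)
w_nodes = 0.5 * weights
```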

Substituting *V*_{j} = 〈*V*_{j}〉_{0}, the potential evaluated at the *j*th quadrature point, we arrive at a Magnus expansion in the format

The grade one commutators of the self-adjoint basis appearing in (3.8), for instance, can be simplified as follows:

Substituting (3.10)–(3.16) in (3.8) gives us a truncated Magnus expansion for the Schrödinger equation (1.1) in the Lie algebra introduced in §2.

For *σ*≤1, the last two terms in *Ω*_{5} are small enough to be discarded without affecting the accuracy of the resulting scheme.

We note that, due to the property of height reduction discussed in §2, a grade-*n* commutator in the Magnus expansion is much smaller than a naive estimate would suggest. Measured in powers of *ε*, the terms in the expansion are decreasing in size with increasing *n* for any *σ*>0, so that convergence of the Magnus expansion also occurs for much larger time steps, such as *h* = 𝒪(*ε*^{1/2}).

Since *Ω*_{5} includes the term

### (c) The Zassenhaus splitting algorithm

Let us recall the basic principle of the iterative symmetric Zassenhaus splitting [13]. Our goal is to compute an exponential *e*^{X+Y}, where *X* and *Y* are at most 𝒪(*ε*^{σ−1}). Using the symmetric Baker–Campbell–Hausdorff formula, we pull the exponential of *X* out of the exponent symmetrically,

e^{X+Y} = e^{X/2} e^{Y + corrections} e^{X/2},

at the cost of correction terms in the form of higher-order commutators. Assuming that the corrections are decreasing in size, it is then enough to identify the largest terms as *W*^{[1]} in the central exponent and repeat the procedure. The splitting after *s* steps can be written as

e^{W^{[0]}/2} e^{W^{[1]}/2} ⋯ e^{W^{[s]}} ⋯ e^{W^{[1]}/2} e^{W^{[0]}/2},

where the *W*^{[k]} are the exponents that we want to extract. Except for some special cases, at least one of the exponents in this splitting will feature an infinite series of terms. To construct a finite splitting scheme featuring a certain accuracy, we may discard, at each stage, all terms smaller than the desired threshold.
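The cubic decay of the discarded corrections in a single symmetric extraction can be observed numerically. The sketch below (random 4×4 matrices, our own illustration) compares e^{X+Y} with e^{X/2}e^{Y}e^{X/2} and checks that the error shrinks roughly eightfold when both exponents are halved.

```python
import numpy as np
from scipy.linalg import expm

def symmetric_extraction_error(X, Y):
    """Error of pulling e^X out symmetrically, e^{X+Y} vs e^{X/2} e^Y e^{X/2},
    when all commutator corrections are discarded."""
    approx = expm(X / 2) @ expm(Y) @ expm(X / 2)
    return np.linalg.norm(approx - expm(X + Y))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
e1 = symmetric_extraction_error(0.1 * A, 0.1 * B)
e2 = symmetric_extraction_error(0.05 * A, 0.05 * B)
```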

Assuming that a grade-*k* commutator of *X* and *Y* scales as 𝒪(*ε*^{kp}), decay of the correction terms requires *p*>0 at the very least. In the case of the Schrödinger equation, we choose *p*=*σ*−1, and this naively translates to a very stringent time-step restriction: *σ*>1. However, the remarkable feature of height reduction means that a grade-*k* commutator in this context gains an additional power of *ε* with each commutation, so that decay of the corrections merely requires *p*+1>0, which translates to *σ*>0.

### (d) Applying the Zassenhaus algorithm to Magnus expansions

We perform a Zassenhaus splitting on *Ω*_{5}, choosing to extract the largest terms—analysed in powers of *ε*—first. One could equally well commence the splitting with *W*^{[0]}=−*ihε*^{−1}*V*_{0}, for instance, and arrive at a variant of the splitting presented here. The terms of *W*^{[2]} consist of exponents of different sizes; for *σ*≤1, however, combining them in this way is not a cause for concern. The outcome is the Magnus–Zassenhaus splitting (3.22).

## 4. A numerical scheme

As we have seen, the derivation of the method has two components. First, we choose the desired order of accuracy in the small parameter *ε* and compute the Magnus expansion *Ω*_{p} up to this order. This leads to an effective exponent of the form (3.8), detailed steps for which can be found in [14,19]. Commencing from these expansions, we compute the commutator-free Magnus expansion using the rules (2.3) of the Lie algebra; a symmetric Zassenhaus splitting of the result then yields the scheme.

For the numerical realization of these splitting schemes, it is typical to impose periodic boundary conditions in order to resolve spatial oscillations with spectral accuracy. Recall that we restrict the domain to [−1,1], imposing periodic boundaries at *x*=±1. We discretize using spectral collocation on the equispaced grid *x*_{n}=*n*/(*N*+1/2), |*n*|≤*N*, where *M*=2*N*+1 is the number of grid points. The unknowns are *u*_{n}≈*u*(*x*_{n}), |*n*|≤*N*. The differential operator ∂_{x} is discretized as a circulant matrix 𝒦 and the multiplication by *V* as a diagonal matrix 𝒟_{V}.

All exponents in our splitting (3.22) are of the form *i*^{k+1}〈*f*〉_{k} and are discretized as skew-Hermitian matrices of the form ½ *i*^{k+1}(𝒟_{f}𝒦^{k} + 𝒦^{k}𝒟_{f}), whose exponentials are unitary.

The outermost exponentials, those of *W*^{[0]} and *W*^{[1]}, involve a circulant and a diagonal matrix, respectively, and the crudest scheme discards all exponents from *W*^{[2]} onwards. Clearly, this is the exponential midpoint rule, which resorts to a Strang splitting after freezing the potential in the middle of the interval.

The exponential of the circulant matrix is computed via FFTs, since circulants are diagonalized by the discrete Fourier transform, while the exponential of the diagonal matrix is computed directly.
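A sketch of this FFT-based exponential (function name ours): if *c* is the first column of a circulant matrix *C*, the eigenvalues of *C* are the DFT of *c*, so applying exp(*C*) to a vector needs only two FFTs.

```python
import numpy as np

def expm_circulant_apply(c, v):
    """Apply exp(C) to v, where C is the circulant matrix with first column c.
    Circulants are diagonalized by the DFT: exp(C) v = ifft(exp(fft(c)) * fft(v))."""
    return np.fft.ifft(np.exp(np.fft.fft(c)) * np.fft.fft(v))
```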

The first non-trivial splitting is obtained upon including the exponential of *W*^{[2]}. This splitting commits an error small enough that it could equally well have been derived from *Ω*_{3}, a lower-order Magnus truncation that is easier to obtain and uses merely two Gauss–Legendre quadrature knots; to obtain splittings of higher order than this, however, the full truncation *Ω*_{5} becomes necessary.

The exponents *W*^{[2]} and beyond are small in size. For *σ*=1, the most costly case we consider, the exponentials of these terms can be evaluated to the required accuracy using a small number of Lanczos iterations; under *σ*=1/2, their exponentiation to the required accuracy is cheaper still.
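A minimal Krylov-subspace sketch of this exponentiation (our own simplified implementation: for skew-Hermitian input the Arnoldi loop below would reduce to a short Lanczos recurrence, but we orthogonalize fully for clarity):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_apply(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace:
    exp(A) v ~ ||v|| * V_m exp(H_m) e_1."""
    n = len(v)
    V = np.zeros((n, m), dtype=complex)
    H = np.zeros((m, m), dtype=complex)
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # full orthogonalization (Arnoldi)
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        if j < m - 1:
            h = np.linalg.norm(w)
            H[j + 1, j] = h
            V[:, j + 1] = w / h
    # only the small m x m matrix H is exponentiated
    return beta * (V @ expm(H)[:, 0])
```

For exponents of small norm, a handful of iterations already yields errors far below the splitting error, which is what makes the inner exponentials cheap.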

We refer the curious reader to [13,15] where semi-discretization strategies, stability analysis and exponentiation methods are addressed in greater detail.

### (a) A numerical example

Consider the evolution of a Gaussian wave-packet, with *x*_{0}=−0.3 and *k*_{0}=0.1 specifying the initial mean position and momentum, respectively, heading towards the lattice potential *V*_{0}.

In figure 1, we show the evolution of the wavepacket from *u*(0) to *u*(*T*) at *T*=0.75 under the influence of the time-independent potential *V* _{0} alone. In these examples, we use *δ*_{1}=10^{−3} and *δ*_{2}=10^{−4} (specifying the localization of the wavepacket) while the semiclassical parameter is chosen to be *ε*_{1}=2^{−8}≈0.0039 and *ε*_{2}=5×10^{−4}—the same order of magnitude as the choices of *δ*. The corresponding wavepackets are labelled *u*^{1} and *u*^{2}, respectively.

When we excite the wavepacket using an additional time-varying potential *E*(*x*,*t*), so that *V*_{E}(*x*,*t*)=*V*_{0}(*x*)+*E*(*x*,*t*), a significantly larger part of the wave packet is able to make it across the lattice to the right-hand side (see figure 1).

The excitation pulse is not active for the entire duration since *ρ*(3*t*−1) acts as a smooth envelope simulating the switching on and off of the excitation. The excited potential is evident at *t*=*T*/2 in figure 1*c*.

In figure 2*a*, we present the global error at time *T*=0.75 in the propagation under *V*_{E} using the scheme (3.22). Under the scaling *σ*=1, the scheme commits a small local *L*_{2} error; the concrete discretization used for *σ*=1 is *M*∼5*ε*^{−1} and *h*∼*ε*/2.

We remind the reader that the Magnus–Zassenhaus methods developed in this paper are highly effective in the semiclassical regime, where *ε* becomes very small. Moreover, the accuracy of these methods is analysed asymptotically. Where *ε* is moderate and considerations of spatial accuracy are governed by initial conditions rather than by the high oscillations induced by a small *ε*, the number of grid points *M* may end up being much larger than *ε*^{−1}. In this case, different scaling laws for the time step *h* might be found to be optimal and the number of Lanczos iterations required will need to be re-analysed. While this analysis is beyond the scope of this paper, in principle it should be possible to develop effective Magnus–Zassenhaus schemes for moderate *ε* by using the techniques discussed here.

In figure 2*b*, we depict the global *L*_{2} errors at *T*=0.75 under different choices of *ε*, letting the time step vary independently of *ε*. The number of grid points for this experiment was fixed at *M*=1001. As expected, the accuracy of these methods is found to be higher when *ε* is of the same order as 1/*M*; for *ε*=0.1 and *M*=1001, for instance, the exponents *W*^{[2]} and beyond are no longer small.

In figure 3*a*, we present the errors under the scaling *σ*=1/2, where the local *L*_{2} error remains small for discretization parameters scaling as *M*∼*ε*^{−1} and *h*∼*ε*^{1/2}.

### (b) Finding a reference solution

Since no analytic solution of (1.1) is available, reference solutions must also be obtained via a numerical approach. We obtain the reference solution *u*_{R} for our numerical experiments by resorting to the exponential midpoint rule (1.3) with a very small time step *h*_{R}. In each of the *T*/*h*_{R} time steps required for finding the solution *u*_{R}(*T*), the potential is frozen in the middle of the interval [*t*,*t*+*h*_{R}] and a Strang splitting is used.

Since the exponential midpoint rule is also the lowest order method in the Magnus–Zassenhaus family of schemes, we require very small time steps for convergence—certainly *h*_{R}≪*h* is required for the reference solution to possess an error smaller than the scheme (3.22) whose error we are attempting to quantify.

We rely on this method for producing reliable reference solutions since it is simple and its error is easily analysed. Directly exponentiating a Hamiltonian (via MATLAB's expm, for instance) with the potential frozen at the middle of the interval is more expensive but no more accurate than the Strang splitting—this is because freezing the potential is akin to disregarding the nested integrals and commutators in the Magnus expansion, which are of the same size (in powers of *ε*) as the error committed in the Strang splitting.

Another factor we must take into account is the growth of spatial oscillations with decreasing *ε*. To capture this, starting from *M*_{R}=3*M*=15*ε*^{−1}, we iteratively increase the grid resolution for the reference solution until no high frequencies are clipped and convergence is achieved. In the end, the spatial resolution used for obtaining a reference solution is much greater than that used for (3.22), *M*_{R}≫*M*∼5*ε*^{−1}.

Using such a low-order method for generating reference solutions to a high degree of accuracy in a brute-force manner means that generating reference solutions is orders of magnitude slower than the splitting method (3.22) that requires validation. The exorbitant cost of reference solutions is what restricts a rigorous experimental study of numerical errors to moderate values of *ε* and *T*.

Having provided experimental evidence for the accuracy of the scheme under *σ*=1, we use (3.22) with very small time steps and a fine grid resolution (finer than prescribed by *σ*=1) for generating reference solutions while analysing the accuracy of our splitting under *σ*=1/2 (figure 3*a*).

In figure 3*b*, we explore the convergence rate for larger values of *T* by resorting to the splitting (3.22) for generating reference solutions by using higher spatio-temporal resolutions than prescribed by our scaling laws.

## Authors' contributions

All the authors have contributed equally to this work.

## Competing interests

We have no competing interests.

## Funding

P.B. has been supported by the Australian Research Council. P.S. has been supported by a King's College Cambridge Studentship.

- Received October 22, 2015.
- Accepted August 15, 2016.

- © 2016 The Author(s)

Published by the Royal Society. All rights reserved.