## Abstract

We present a new method for computing spectra and pseudospectra of bounded operators on separable Hilbert spaces. The core of this theory is a generalization of the pseudospectrum called the *n*-pseudospectrum.

## 1. Introduction

Let *T* be a bounded operator on a separable Hilbert space with an orthonormal basis {*e*_{j}} and suppose that we wish to compute the spectrum of *T*. We are faced with the slightly unpleasant problem of computing a quantity that may depend discontinuously on the matrix elements 〈*Te*_{j},*e*_{i}〉. In particular, the following example illustrates the problem: Let *A*_{ϵ} on *l*^{2}(ℤ) be defined by

*A*_{ϵ}*e*_{j}=*e*_{j+1} (*j*≠0),  *A*_{ϵ}*e*_{0}=*ϵe*_{1}. (1.1)
Now for *ϵ*≠0 we have *σ*(*A*_{ϵ})={*z*:|*z*|=1}, but for *ϵ*=0 we have *σ*(*A*_{0})={*z*:|*z*|≤1}. A numerical analyst may well be concerned by this. When the spectrum is computed on a computer, the inexactness of the arithmetic operations means that one may obtain the true solution to a slightly perturbed problem; this type of analysis is often referred to as backward error analysis in the numerical linear algebra literature. As this example suggests, getting the answer to a slightly perturbed problem can be disastrous.
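The sensitivity that backward error analysis warns about can be seen already in finite dimensions. The following sketch (a hypothetical finite-dimensional analogue, not the operator *A*_{ϵ} itself) perturbs a nilpotent *m*×*m* shift block, whose spectrum is {0}, by a single corner entry of size 10^{−8}; the eigenvalues jump to a circle of radius 10^{−8/m}:

```python
import numpy as np

m = 8
J = np.diag(np.ones(m - 1), -1)     # m x m shift block: sigma(J) = {0}
eps = 1e-8
Jp = J.copy()
Jp[0, -1] = eps                     # one tiny corner perturbation

# eigenvalues of the perturbed matrix are eps**(1/m) times the m-th
# roots of unity, so a perturbation of size 1e-8 moves the spectrum by 0.1
radius = np.abs(np.linalg.eigvals(Jp)).max()
print(radius)                       # ~ 1e-8 ** (1/8) = 0.1
```

A backward stable eigensolver thus returns an answer that is exact for a nearby matrix yet far from the true spectrum, which is precisely the difficulty described above.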

This poses a slightly philosophical question: is it impossible to compute spectra of arbitrary operators? And if so, does that mean that there are operators whose spectral theory might be crucial for understanding physical phenomena, but whose spectra will never be determined? This would imply that there is a rather unpleasant barrier between what we can compute and what we want to compute. In Hansen (2008, submitted *a*) several new methods for estimating spectra and pseudospectra of operators were presented. Our goal in this article is to show that these results can be used for actual computations and that, indeed, it is possible to compute spectra of arbitrary bounded operators on separable Hilbert spaces. We will emphasize the computational task and refer to Hansen (submitted *a*) for justifications of the mathematical statements that will be presented. Our theory is very much inspired by the pseudospectral theory that has emerged over the last two decades (Trefethen 1999; Trefethen & Embree 2005; Davies 2007). The main reason is that, to overcome the discontinuity problem suggested above, one is forced to consider the computation of a different set than the spectrum, even though estimating the spectrum may be the main goal. As we will see in §4, variants of the pseudospectra, namely the *n*-pseudospectra, are excellent candidates for sets that approximate the spectrum well. Moreover, these sets do not behave discontinuously with the operator. Let us recall the definition of the pseudospectrum.

### Definition 1.1.

*Let T be a closed operator on a Hilbert space* ℋ *such that σ*(*T*)≠ℂ, *and let ϵ*>0. *The ϵ-pseudospectrum of T is defined as the set*

*σ*_{ϵ}(*T*)=*σ*(*T*)∪{*z*∈ℂ:∥(*T*−*z*)^{−1}∥>1/*ϵ*}.

If we were to estimate the pseudospectrum instead of the spectrum, the problem suggested by the example above would not occur. The reason is that the pseudospectrum varies continuously with the operator *T* if *T* is bounded (we will be more specific regarding the continuity below).

## 2. Notation

Before we continue let us recall some basic notation and definitions. Throughout the paper, ℋ will always denote a separable Hilbert space and ℬ(ℋ) the set of bounded linear operators on ℋ. For *T*∈ℬ(ℋ) we denote the spectrum by *σ*(*T*), and if *T*−*z* is invertible, for *z*∈ℂ, we use the notation *R*(*z*,*T*)=(*T*−*z*)^{−1}. The closure of a set *Ω* will be denoted by *Ω̄*; however, when convenient, the notation *cl*(*Ω*) may be used. We will denote orthonormal basis elements of ℋ by *e*_{j}, and if {*e*_{j}} is a basis and *ξ*∈ℋ then *ξ*_{j}=〈*ξ*,*e*_{j}〉. The word basis will always refer to an orthonormal basis. Convergence of sets in the complex plane will be quite crucial in our analysis, and hence we need the Hausdorff metric, defined as follows.

### Definition 2.1.

*For a set Σ*⊂ℂ *and δ*>0 *we will let ω*_{δ}(*Σ*) *denote the δ-neighbourhood of Σ* (*i.e. the union of all δ-balls centred at points of Σ*). *Given two sets Σ*,*Λ*⊂ℂ, *we say that Σ is δ-contained in Λ if Σ*⊂*ω*_{δ}(*Λ*). *Given two compact sets Σ*,*Λ*⊂ℂ, *their Hausdorff distance is*

*d*_{H}(*Σ*,*Λ*)=max{sup_{σ∈Σ} *dist*(*σ*,*Λ*), sup_{λ∈Λ} *dist*(*λ*,*Σ*)},

*where* *dist*(*σ*,*Λ*)=inf_{λ∈Λ}|*σ*−*λ*|.

If {*Σ*_{m}} is a sequence of compact subsets of ℂ and *Σ*⊂ℂ is compact such that *d*_{H}(*Σ*_{m},*Σ*)→0 as *m*→∞, we may use the notation *Σ*_{m}→*Σ*. The fact that arithmetic operations may not be carried out exactly on a computer is crucial in our analysis, and *ϵ*_{mach} will always denote the machine epsilon of the computer software used. The software of choice is Matlab, and in that case *ϵ*_{mach}≈10^{−16}. Before we continue with pseudospectral theory, we would like to make a short detour via the finite section method and try to convince the reader that the finite section method is not a serious contender for the ‘method of the month’ award among algorithms for the general computational spectral problem, although it is of great importance for some self-adjoint problems and all compact problems.
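For the finite point sets that arise on a computational grid, the Hausdorff metric of definition 2.1 reduces to a pair of max–min computations. A small sketch, using hypothetical sample sets:

```python
import numpy as np

def hausdorff(sigma, lam):
    """Hausdorff distance between two finite subsets of the complex plane:
    the larger of the two one-sided sup-inf distances."""
    sigma, lam = np.asarray(sigma), np.asarray(lam)
    D = np.abs(sigma[:, None] - lam[None, :])      # all pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# sampled unit circle vs. the singleton {0}: every point of either set
# is at distance exactly 1 from the other set
circle = np.exp(2j * np.pi * np.arange(64) / 64)
print(hausdorff(circle, [0.0]))    # 1.0
```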

## 3. The finite section method

Suppose that we have an operator *A*∈ℬ(ℋ) and that we know the matrix elements *a*_{ij}=〈*Ae*_{j},*e*_{i}〉 with respect to some basis {*e*_{j}}. The question is then: how do we compute the spectrum and the pseudospectra of *A* using {*a*_{ij}}? A natural thought may be to reduce this to a finite-dimensional spectral problem by constructing (using {*e*_{j}}) a sequence of finite rank projections {*P*_{m}} such that *P*_{m+1}≥*P*_{m} and *P*_{m}→*I* strongly, where *I* is the identity, and then compute the spectrum and pseudospectra of *P*_{m}*AP*_{m}. Typically *P*_{m} would be the projection onto *span*{*e*_{1},…,*e*_{m}}. This is often referred to as the finite section method in the literature. Now, this may work in some cases, e.g. if the operator is compact or, in the case of computing pseudospectra, if one is considering a Toeplitz operator, see Böttcher (1994). However, one must be very careful using the finite section method, and it should not be used unless accompanied by a rigorous analysis that justifies the convergence.
It is quite easy to find elementary counter-examples showing that the finite section method can fail dramatically. Consider the shift operator defined by *Se*_{n}=*e*_{n+1} on *l*^{2}(ℕ). With respect to {*e*_{n}}, this operator has the matrix representation with ones on the first subdiagonal and zeros elsewhere.
Thus, if *P*_{m} is the projection onto *span*{*e*_{1},…,*e*_{m}}, we would get that *σ*(*P*_{m}*SP*_{m})={0} for all *m*, but *σ*(*S*) is the closed unit disc. To find examples where the finite section method fails when wanting to compute the pseudospectrum, one does not have to go very far away from the Toeplitz operators. The finite section method may have serious trouble finding the right pseudospectra of Laurent operators. Note that if we have a Laurent operator *A*_{L} given in its matrix representation with respect to the basis {*e*_{j}}_{j∈ℤ} and choose *P*_{m} to be the projection onto *span*{*e*_{j}:|*j*|≤*m*}, then *P*_{m}*A*_{L}*P*_{m} is a Toeplitz matrix. So, if *A*_{T} is the Toeplitz variant of *A*_{L}, meaning that it has the same matrix elements but is an operator on *l*^{2}(ℕ) instead of *l*^{2}(ℤ), then, as shown by Böttcher (1994),
but we may have that *σ*_{ϵ}(*A*_{L})≠*σ*_{ϵ}(*A*_{T}), and in this case the finite section method will fail. This is visualized in the following example. Define the Laurent operator *A*_{L} by
then *σ*_{ϵ}(*P*_{m}*A*_{L}*P*_{m}) is far from *σ*_{ϵ}(*A*_{L}), as visualized in figure 1 for *ϵ*=0.1.
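The failure for the shift can be checked directly: every section *P*_{m}*SP*_{m} is strictly lower triangular, so its spectrum is {0}, yet points well inside the unit disc are nearly singular for the section, i.e. the sections mimic the pseudospectrum rather than the spectrum. A small sketch:

```python
import numpy as np

m = 40
S_m = np.diag(np.ones(m - 1), -1)     # finite section of S e_n = e_{n+1}

# strictly lower triangular, hence sigma(P_m S P_m) = {0} for every m,
# while sigma(S) is the whole closed unit disc
assert np.count_nonzero(np.triu(S_m)) == 0

# yet z = 1/2 (inside the disc) is almost an eigenvalue of the section:
# the smallest singular value of S_m - z decays like |z|^m
z = 0.5
smin = np.linalg.svd(S_m - z * np.eye(m), compute_uv=False)[-1]
print(smin)     # of order 1e-12
```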

## 4. The *n*-pseudospectrum

Given a closed operator *T* on ℋ, the motivation for the *n*-pseudospectrum is the desire to approximate the function *z*↦*dist*(*z*,*σ*(*T*)) in order to estimate *σ*(*T*). A convenient formula for this is *dist*(*z*,*σ*(*T*))=1/*ρ*(*R*(*z*,*T*)), where *ρ* denotes the spectral radius. Thus, in principle, we have reduced the problem of estimating the distance from *z* to *σ*(*T*) to a problem of estimating the spectral radius of a bounded operator. Now, numerically that is a non-trivial task, but keeping in mind the spectral radius formula, namely *ρ*(*A*)=lim_{k→∞}∥*A*^{k}∥^{1/k} for *A*∈ℬ(ℋ), we can approximate the spectral radius by estimating the norm of powers of the operator. By choosing a subsequence of {∥*A*^{k}∥^{1/k}}, namely {∥*A*^{2^{n}}∥^{1/2^{n}}}, we get a decreasing sequence ∥*A*^{2^{n}}∥^{1/2^{n}}≥∥*A*^{2^{n+1}}∥^{1/2^{n+1}} and ∥*A*^{2^{n}}∥^{1/2^{n}}→*ρ*(*A*). Hence, we have
1/∥*R*(*z*,*T*)^{2^{n}}∥^{1/2^{n}}≤*dist*(*z*,*σ*(*T*)) and 1/∥*R*(*z*,*T*)^{2^{n}}∥^{1/2^{n}}→*dist*(*z*,*σ*(*T*)) as *n*→∞. This gives the motivation for the following definition of the (*n*,*ϵ*)-pseudospectrum, or the *n*-pseudospectrum for short.
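The repeated-squaring estimate of the spectral radius is cheap to illustrate. In this sketch (with a hypothetical 2×2 non-normal matrix), ∥*A*∥ overestimates *ρ*(*A*) by two orders of magnitude, but a few squarings bring ∥*A*^{2^{n}}∥^{1/2^{n}} down towards *ρ*(*A*):

```python
import numpy as np

A = np.array([[0.5, 100.0],
              [0.0, 0.25]])          # triangular: rho(A) = 0.5, but ||A|| ~ 100

P, estimates = A.copy(), []
for n in range(7):                    # n = 0, ..., 6
    estimates.append(np.linalg.norm(P, 2) ** (1.0 / 2 ** n))
    P = P @ P                         # P is now A^(2^(n+1))

print(estimates[0], estimates[-1])    # ~100 at n = 0, ~0.55 at n = 6
```

The sequence decreases monotonically, as guaranteed by ∥*A*^{2^{n+1}}∥≤∥*A*^{2^{n}}∥^{2}, and is always an upper bound for *ρ*(*A*).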

### Definition 4.1.

*Let T be a closed operator on a Hilbert space* ℋ, *let n*≥0 *be an integer and let ϵ*>0. *The* (*n*,*ϵ*)-*pseudospectrum of T is defined as the set*

*σ*_{n,ϵ}(*T*)=*σ*(*T*)∪{*z*∈ℂ:∥(*T*−*z*)^{−2^{n}}∥^{1/2^{n}}>1/*ϵ*}.

As we see, the *n*-pseudospectrum is a generalization of the pseudospectrum. By the analysis above, one can deduce that the *n*-pseudospectrum should be a better approximation to the spectrum than the pseudospectrum, and hopefully also share its nice continuity properties. In particular, one should expect and hope for
*σ*_{n,ϵ}(*T*)→*cl*(*ω*_{ϵ}(*σ*(*T*))) as *n*→∞,
where *ω*_{ϵ}(*σ*(*T*)) denotes the *ϵ*-neighbourhood around *σ*(*T*). A famous example in pseudospectral theory is the complex harmonic oscillator
*H*=−d^{2}/d*x*^{2}+*cx*^{2},
acting on *L*^{2}(ℝ), see Davies & Kuijlaars (2004). To visualize the difference between the pseudospectrum and the *n*-pseudospectrum, we have computed the pseudospectrum and the 1-pseudospectrum of *H* for *c*=*i* in figure 2. The figures have been produced using a discretization technique established in Hansen (submitted *b*) (where convergence is proved). It is the compactness of the resolvent of *H* that allows such discretizations; such techniques will not work in general.
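For a matrix, the quantity behind definition 4.1 can be evaluated as the smallest singular value of (*T*−*z*)^{2^{n}} raised to the power 2^{−n}, and it increases with *n* towards *dist*(*z*,*σ*(*T*)). A sketch on a hypothetical Jordan-type matrix, where the ordinary pseudospectrum (*n*=0) badly underestimates the distance to the spectrum:

```python
import numpy as np

def gamma_n(T, z, n):
    """(smallest singular value of (T - z)^(2^n)) ** (2 ** -n); for a square
    matrix both orderings of the products give the same value."""
    B = np.linalg.matrix_power(T - z * np.eye(T.shape[0]), 2 ** n)
    return np.linalg.svd(B, compute_uv=False)[-1] ** (1.0 / 2 ** n)

T = np.diag(np.ones(9), -1)           # 10 x 10 Jordan block, sigma(T) = {0}
z = 0.9                               # so dist(z, sigma(T)) = 0.9
g0, g1, g2 = (gamma_n(T, z, n) for n in range(3))
print(g0, g1, g2)                     # increasing in n, all bounded by 0.9
```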

### Remark 4.2.

Note that the 0-pseudospectrum is just the classical pseudospectrum, so *n*-pseudospectral theory is an extension of the classical pseudospectral theory. Thus, one should by no means give up the classical pseudospectrum. In fact, if the operator is normal, there is obviously no point in using the *n*-pseudospectrum for *n*>0; for more details see §9. Note also that the algorithm we will develop in §6 is valid for *n*-pseudospectra for all *n*≥0. Thus, we automatically get an algorithm for the classical pseudospectrum.

## 5. Properties of the *n*-pseudospectra of bounded operators

The *n*-pseudospectrum was first introduced in Hansen (2008) as a tool for the general spectral problem. It was further investigated in Hansen (submitted *a*), and in this section we will discuss some of the nice properties of these sets. The discussion will be accompanied by examples. Before we start let us define, for *T*∈ℬ(ℋ), the function *γ*_{n}. In particular, let *z*∈ℂ and define

*γ*_{n}(*z*)=min{min *σ*(((*T*−*z*)*)^{2^{n}}(*T*−*z*)^{2^{n}})^{1/2^{n+1}}, min *σ*((*T*−*z*)^{2^{n}}((*T*−*z*)*)^{2^{n}})^{1/2^{n+1}}}. (5.1)

### Theorem 5.1 (Hansen submitted *a*).

*Let* *T*∈ℬ(ℋ) *and* {*T*_{m}}⊂ℬ(ℋ) *be such that* *T*_{m}→*T* *in norm. For* *n*≥0 *let* γ_{n} *be as in equation (5.1). Then the following is true*:

*where**in (iv)*,*in (v) and*ω_{ϵ}(σ(*T*))*denotes the*ϵ-*neighbourhood around*σ(*T*).

Theorem 5.1 provides several important observations. Firstly, (iv) allows us to use the *n*-pseudospectrum as an approximation to the spectrum. Secondly, the problem of inexact arithmetic is solved by (v). Thus, in theory, we can get arbitrarily close to the spectrum by computing the *n*-pseudospectrum while still allowing the computation to be carried out in inexact arithmetic. Of course, *ϵ*_{mach} will have to decrease as *n* grows.

The function *γ*_{n} and the fact that *σ*_{n,ϵ}(*T*)={*z*:*γ*_{n}(*z*)≤*ϵ*} provide us with a tool for estimating the *n*-pseudospectrum. In fact, by recalling equation (5.1), we have reduced the problem of finding the spectrum of a non-normal operator to the problem of finding the smallest element in the spectrum of a self-adjoint operator. In the following examples, we will show some of the properties of the *n*-pseudospectra listed in theorem 5.1.

### Example 5.2.

To demonstrate the approximation properties of the *n*-pseudospectra, we have chosen the following operator:
where *a*_{1}=2, *a*_{2}=0.5, *b*_{j}=1+*i*(2/*j*)^{1/6} and *c*_{j}=1/*j*^{1/2}. Now, *T* can be written as a sum of two operators, one of which is compact while the other has only essential spectrum, and thus *T* should have plenty of isolated eigenvalues. The four largest eigenvalues with corresponding *n*-pseudospectra are displayed in figure 3.

### Example 5.3.

To visualize the property that if *T*_{m},*T*∈ℬ(ℋ) and *T*_{m}→*T* in norm, it follows that
*σ*_{n,ϵ}(*T*_{m})→*σ*_{n,ϵ}(*T*),
a natural test object is the example considered in the introduction. The discontinuity of the spectrum shown in that example was a strong motivation for the introduction of the *n*-pseudospectrum. Recall that we define *A*_{δ} as in equation (1.1), and that for *δ*≠0 we have *σ*(*A*_{δ})={*z*:|*z*|=1}, but for *δ*=0 we have *σ*(*A*_{0})={*z*:|*z*|≤1}. We have computed the *n*-pseudospectrum of *A*_{0} for *ϵ*=0.025 and *n*=2, which coincides with the closed *ϵ*-neighbourhood of the unit disc. We have also computed the *n*-pseudospectrum of *A*_{10^{−16}} to demonstrate that, at least up to the accuracy of the grid size we have chosen, the computed results are identical. Now, if we actually wanted to compute the spectrum of *A*_{10^{−16}}, we would have to choose computer software with higher precision and also take *n* much larger. The *ϵ*_{mach} in Matlab limits us to *n*≤2, since our computation requires operations with (*ϵ*_{mach})^{1/2^{n+1}}. Hence, since (*ϵ*_{mach})^{1/2^{3+1}}=0.1, we may experience that for *n*=3 our computation will be accurate only up to the first decimal. However, we have visualized (in figure 4) (iii) in theorem 5.1 by computing the *n*-pseudospectra of *A*_{0.005} for *n*=1,2, in which case the *n*-pseudospectra approximate the spectrum of *A*_{0.005} quite well even for small values of *n*.

## 6. Computing the *n*-pseudospectrum

### (a) Designing the algorithm

Numerically, a self-adjoint spectral problem is much easier to deal with than a non-self-adjoint problem, but we cannot attack the task of computing equation (5.1) as it stands, since this is an infinite-dimensional problem. We therefore need to find an approximation to *γ*_{n} (as defined in equation (5.1)) that is suitable for computations. A natural choice is a sequence of finite rank projections {*P*_{m}} such that *P*_{m+1}≥*P*_{m} and *P*_{m}→*I* strongly; e.g. we may choose a basis {*e*_{j}} and let *P*_{m} be the projection onto *span*{*e*_{1},…,*e*_{m}}. Now we can try to approximate *γ*_{n} by the function

*γ*_{n,m}(*z*)=min{min *σ*(*P*_{m}((*T*−*z*)*)^{2^{n}}(*T*−*z*)^{2^{n}}*P*_{m}|_{P_{m}ℋ})^{1/2^{n+1}}, min *σ*(*P*_{m}(*T*−*z*)^{2^{n}}((*T*−*z*)*)^{2^{n}}*P*_{m}|_{P_{m}ℋ})^{1/2^{n+1}}}, (6.1)

and if *γ*_{n,m}→*γ*_{n} in some sense, we can hope that
{*z*:*γ*_{n,m}(*z*)≤*ϵ*}→{*z*:*γ*_{n}(*z*)≤*ϵ*}.
In fact this is almost the case, as the following theorem guarantees.
In fact this is almost the case as the following theorem guarantees.

### Theorem 6.1.

*Let* *T*∈ℬ(ℋ) *and let {P*_{m}*} be an increasing sequence of finite rank projections converging strongly to the identity such that P*_{m+1}*≥P*_{m}*. Define γ*_{n,m} *as in equation (6.1); then* *γ*_{n,m}→*γ*_{n} *locally uniformly as* *m*→∞*, and for compact* *K* *such that* *σ*_{n,ϵ}(*T*)⊂*K* *we have*
*where the convergence is understood to be in the Hausdorff metric.*

The following proposition from Hansen (submitted *a*) is crucial in the proof of theorems 6.1 and 6.3.

### Proposition 6.2.

*Let γ*:ℂ→[0,∞) *be continuous and let* {*γ*_{m}} *be a sequence of continuous functions γ*_{m}:ℂ→[0,∞) *such that γ*_{m}→*γ locally uniformly. Suppose also that for ϵ*>0 *we have cl*({*z*:*γ*(*z*)<*ϵ*})={*z*:*γ*(*z*)≤*ϵ*}. *Then for any compact set K such that* {*z*:*γ*(*z*)≤*ϵ*}⊂*K* *it follows that*

### Proof of theorem 6.1.

This theorem follows by the techniques used in the proof of theorem 6.3 in Hansen (submitted *a*). We will sketch the ideas. One first establishes that *γ*_{n,m}→*γ*_{n} locally uniformly. Note that this fact alone is not enough to deduce the claim of the theorem: one also needs the facts that *γ*_{n} is continuous and that *cl*({*z*:*γ*_{n}(*z*)<*ϵ*})={*z*:*γ*_{n}(*z*)≤*ϵ*} in order to invoke proposition 6.2. The former has already been established in theorem 5.1. To prove the latter one uses some abstract operator theory from Treil (2004) together with the following result of Shargorodsky (2008): if *Ω*_{0} is a connected open subset of ℂ, *Z* is a Banach space, *F*:*Ω*_{0}→*Z* is an analytic vector-valued function with ∥*F*(*z*)∥≤*M* for all *z*∈*Ω*_{0}, and ∥*F*(*z*_{0})∥<*M* for some *z*_{0}∈*Ω*_{0}, then ∥*F*(*z*)∥<*M* for all *z*∈*Ω*_{0}. ■

Now, computing *γ*_{n,m} still involves the products ((*T*−*z*)*)^{2^{n}}(*T*−*z*)^{2^{n}} and (*T*−*z*)^{2^{n}}((*T*−*z*)*)^{2^{n}}, which may be challenging to compute as *T* acts on an infinite-dimensional space. However, there is a solution to this problem. Instead of computing the products ((*T*−*z*)*)^{2^{n}}(*T*−*z*)^{2^{n}} and (*T*−*z*)^{2^{n}}((*T*−*z*)*)^{2^{n}} we will compute
where *P*_{k} is a finite rank projection as in theorem 6.1. As *P*_{k} has finite rank this is feasible; in particular, we can define the function
6.2
and argue that *γ*_{n,m,k}→*γ*_{n,m} as *k*→∞. In fact we have the following.

### Theorem 6.3.

*Let* *and let {P*_{m}*} be an increasing sequence of finite rank projections converging strongly to the identity such that P*_{m+1}*≥P*_{m}*. Define γ*_{n,m,k} *as in equation (6.2), then* *locally uniformly as* *, and for compact* *such that* *we have*
*where the convergence is understood to be in the Hausdorff metric.*

### Proof.

This theorem follows by the techniques used in the proof of theorem 6.3 in Hansen (submitted *a*). We will sketch the ideas. One first establishes that *γ*_{n,m,k}→*γ*_{n,m} locally uniformly. Note that we have already established the fact that *cl*({*z*:*γ*_{n}(*z*)<*ϵ*})={*z*:*γ*_{n}(*z*)≤*ϵ*} in the proof of theorem 6.1, and one observes that for *k*≥*m* the latter fact actually implies the corresponding property for *γ*_{n,m,k} (just replace *T* by *P*_{k}*TP*_{k}). We can now invoke proposition 6.2 to establish the theorem. ■

For a full infinite matrix, *γ*_{n,m,k} can be a tough challenge to compute, since there are two limit processes going on at the same time, namely *m*→∞ and *k*→∞. It is therefore important to take advantage of structured problems.

### Definition 6.4.

*Let* {*e*_{j}} *be a basis for* ℋ *and let T*∈ℬ(ℋ). *If*
〈*Te*_{j},*e*_{i}〉=0 *whenever* |*i*−*j*|>*d*,
*then T is said to be banded with bandwidth d*.

The following theorem is important for the computation of *γ*_{n,m,k} when the infinite matrix is banded.

### Theorem 6.5 (Hansen submitted *a*).

*Let* *T*∈ℬ(ℋ) *and let {e*_{j}*} be a basis for* ℋ*. Let P*_{m} *be the projection onto* *span*{*e*_{1},…,*e*_{m}}. *Define γ*_{n,m} *and γ*_{n,m,k} *as in equations (6.1) and (6.2). Suppose that the matrix representation of T with respect to {e*_{j}*} is banded with bandwidth d. Then, for m>d,*
*γ*_{n,m,k}=*γ*_{n,m} *for all k*≥2^{n}*d*+*m*.

### (b) The algorithm

As theorem 6.1 suggests, the estimation of the *n*-pseudospectra can be done by computing values of *γ*_{n,m,k} on a grid in the complex plane. Also, by theorem 6.5, if the matrix of *T* is banded with bandwidth *d* then
*γ*_{n,m,2^{n}d+m}=*γ*_{n,m},
where *d* is the number of off-diagonals. Thus, we are left with the task of computing
6.3
and
6.4
As *m* becomes large, both equations (6.3) and (6.4) are difficult to compute, since
6.5
may have many eigenvalues very close to zero, and standard numerical routines such as Matlab’s `eigs` will have trouble detecting the smallest eigenvalue to sufficient precision. In fact, it is not hard to cook up examples such that Matlab’s `eigs` crashes completely. If one wants a contour plot of the *n*-pseudospectrum, there is no way around this problem, but if one only wants the *n*-pseudospectrum for one specific *ϵ*>0, it is unnecessary to compute the smallest eigenvalue in equations (6.3) and (6.4). In fact, since we are only interested in knowing whether *γ*_{n,m,k}(*z*)≤*ϵ* for some complex *z*, we only need to check whether the self-adjoint matrices
and
are both positive definite. If both are, then *z*∉{*z*:*γ*_{n,m,k}(*z*)≤*ϵ*}. This is where the Cholesky decomposition comes in handy and saves the day.
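In floating point, the test *γ*_{n,m,k}(*z*)≤*ϵ* thus becomes two Cholesky attempts. The sketch below (Python rather than the paper's Matlab, with a hypothetical matrix standing in for the finite sections) checks whether *B***B*−*ϵ*^{2^{n+1}}*I* and *BB**−*ϵ*^{2^{n+1}}*I*, with *B*=(*T*−*z*)^{2^{n}}, are positive definite; failure of either factorization flags *z* as belonging to the computed set:

```python
import numpy as np

def outside(T, z, n, eps):
    """True iff the gamma value at z exceeds eps, tested without an eigensolve:
    gamma <= eps iff B*B - eps^(2^(n+1)) I (or B B* - ...) with
    B = (T - z)^(2^n) fails to be positive definite."""
    m = T.shape[0]
    B = np.linalg.matrix_power(T - z * np.eye(m), 2 ** n)
    shift = eps ** (2 ** (n + 1)) * np.eye(m)
    for M in (B.conj().T @ B - shift, B @ B.conj().T - shift):
        try:
            np.linalg.cholesky(M)        # succeeds iff positive definite
        except np.linalg.LinAlgError:
            return False                 # z is in the (n, eps)-level set
    return True

T = np.diag(np.ones(7), -1)              # nilpotent shift section
print(outside(T, 2.0, 0, 0.5), outside(T, 0.0, 0, 0.5))   # True False
```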

### (c) The Cholesky decomposition

It is well known that a self-adjoint matrix *A* is positive definite if and only if it has a Cholesky decomposition *A*=*GG**, where *G* is lower triangular with positive elements on the diagonal, see Golub & Van Loan (1996). Thus, to determine whether *A* is positive definite or not, we need to find out if the decomposition *A*=*GG** exists. This can be done in the following way. Let

*A*=(*α* *v**; *v* *B*), (6.6)

where *α* is a scalar, *v* a column vector and *B* a self-adjoint matrix, and note that *α*>0 if *A* is positive definite. If *α*≤0 we conclude that *A* is not positive definite and we are done. Now, *B*−*vv**/*α* is positive definite if and only if *A* is positive definite, since it is a principal submatrix of *U***AU*, where

*U*=(1 −*v**/*α*; 0 *I*).

If there is a Cholesky factorization *B*−*vv**/*α*=*G*_{1}*G*_{1}*, then it follows from equation (6.6) that *A*=*GG**, where

*G*=(*α*^{1/2} 0; *v*/*α*^{1/2} *G*_{1}).

We can continue this argument with *B*−*vv**/*α* in place of *A* and proceed recursively to obtain *G*. Thus, if every pivot *α* encountered turns out to be positive, then *A* is positive definite, and if some pivot fails to be positive, then *A* cannot be positive definite. The standard algorithm for this requires *n*^{3}/3 flops. A neat tool for determining whether or not *A* is positive definite is Matlab’s `chol` routine, which has a built-in check for positive definiteness.
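The recursion just described is easy to write down directly. The following sketch implements the pivot-and-Schur-complement loop (a plain Python/NumPy version of the test that Matlab's `chol` performs, without the optimized blocking):

```python
import numpy as np

def is_positive_definite(A):
    """Run the Cholesky recursion: peel off the pivot alpha = A[j, j] and
    continue on the Schur complement B - v v* / alpha.  A self-adjoint A is
    positive definite iff every pivot stays strictly positive."""
    A = np.array(A, dtype=complex)
    for j in range(A.shape[0]):
        alpha = A[j, j].real
        if alpha <= 0:
            return False                         # no Cholesky factor exists
        v = A[j + 1:, j]
        A[j + 1:, j + 1:] -= np.outer(v, v.conj()) / alpha
    return True

print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]),   # True
      is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))   # False
```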

Suppose that *T* is a banded infinite matrix with bandwidth *d*. The following Matlab program will plot the set
{*z*∈*K*:*γ*_{n,m,2^{n}d+m}(*z*)≤*ϵ*},
where *K* is a rectangle in ℂ and *γ*_{n,m,2^{n}d+m} is defined in equation (6.2).

### Algorithm 6.6.
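The procedure can be sketched as follows (a Python/NumPy rendering of the approach rather than the Matlab listing itself, using square finite sections for simplicity instead of the rectangular *k*=2^{n}*d*+*m* sections of theorem 6.5, and a hypothetical shift-section test matrix):

```python
import numpy as np

def level_set_grid(T, n, eps, re, im):
    """Boolean grid marking the points where the Cholesky test reports that
    the gamma value at z is <= eps for the finite section T: the test fails
    for B*B - eps^(2^(n+1)) I or B B* - ..., with B = (T - z)^(2^n)."""
    m = T.shape[0]
    shift = eps ** (2 ** (n + 1)) * np.eye(m)
    hit = np.zeros((len(im), len(re)), dtype=bool)
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            B = np.linalg.matrix_power(T - (x + 1j * y) * np.eye(m), 2 ** n)
            for M in (B.conj().T @ B, B @ B.conj().T):
                try:
                    np.linalg.cholesky(M - shift)
                except np.linalg.LinAlgError:
                    hit[i, j] = True          # z is in the computed set
                    break
    return hit

T = np.diag(np.ones(19), -1)                  # section of the shift operator
re = im = np.linspace(-1.5, 1.5, 31)
hit = level_set_grid(T, 1, 0.1, re, im)
print(hit[15, 15], hit[0, 0])                 # True False: 0 flagged, -1.5-1.5i not
```

Plotting the boolean grid (e.g. with a contour or image routine) then reproduces the kind of figures shown below.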

### (d) Tests on Laurent and Toeplitz matrices

The spectral theory of Laurent and Toeplitz operators is very well understood, and they are therefore a natural choice as test objects for numerical algorithms. We briefly recall some basics of Laurent and Toeplitz operator theory from Böttcher & Silbermann (1999). Given a Laurent operator *A*_{L} on *l*^{2}(ℤ),
it is well known that *A*_{L} is a bounded operator if and only if there is a function *f*∈*L*^{∞}(𝕋), where 𝕋 denotes the unit circle, such that the matrix elements of *A*_{L} form the sequence of Fourier coefficients of *f*, that is
Also, *σ*(*A*_{L}) is the essential range of *f*. For a Toeplitz operator *A*_{T} on *l*^{2}(ℕ), given by
we have a similar result, namely, *A*_{T} is bounded if and only if there is a function *f*∈*L*^{∞}(𝕋) whose Fourier coefficients are the matrix elements of *A*_{T}. The function *f* is called the symbol of the Laurent or Toeplitz operator.

As for the spectrum of *A*_{T}, note that if *t*↦*f*(e^{i*t*}), *t*∈[0,2*π*], is a continuous function, then *f*(𝕋) is a closed curve in ℂ, and hence we can assign a winding number with respect to the curve to every point not on it. We then have that *σ*(*A*_{T}) is equal to this curve together with all complex numbers having non-zero winding number with respect to the curve. In our examples (displayed in figures 5 and 6) we have chosen Laurent and Toeplitz operators with symbols
and
where the corresponding winding numbers are displayed in the figures. Our numerical computation is done as suggested in algorithm 6.6: we check whether *γ*_{n,m}(*z*)
is less than or equal to *ϵ*, for some *ϵ*>0, on a grid in the complex plane, where *T* here is either *A*_{L} or *A*_{T}. If *γ*_{n,m}(*z*)≤*ϵ*, the point *z* is assigned the colour black.
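The winding-number description of *σ*(*A*_{T}) can itself be checked numerically by accumulating the change of argument of *f*(e^{i*t*})−*z* around the circle. A sketch (using the shift symbol *f*(*s*)=*s* as a hypothetical example, for which *σ*(*A*_{T}) is the closed unit disc):

```python
import numpy as np

def winding_number(f, z, N=4096):
    """Winding number of the curve t -> f(e^{it}) about z, from the
    accumulated change of argument over N small steps."""
    t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    w = f(np.exp(1j * t)) - z
    steps = np.angle(np.roll(w, -1) / w)     # argument increments
    return int(round(steps.sum() / (2 * np.pi)))

f = lambda s: s                 # symbol of the shift
print(winding_number(f, 0.5))   # 1: inside the curve, so 0.5 is in sigma(A_T)
print(winding_number(f, 2.0))   # 0: outside the curve, and not in the spectrum
```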

The choice of the projections is the natural one. Namely, in the case of Laurent operators, *P*_{m} is the projection onto the span of {*e*_{j}}_{|j|≤m}, where {*e*_{j}}_{j∈ℤ} is the obvious basis for *l*^{2}(ℤ) (that is, *e*_{j} has a one in the *j*-th coordinate and zeros elsewhere). For Toeplitz operators this is done similarly, but with *P*_{m} being the projection onto *span*{*e*_{1},…,*e*_{m}} and {*e*_{j}}_{j∈ℕ} being the obvious basis for *l*^{2}(ℕ).

The computational cost of these figures is quite high owing to the large number of grid evaluations, and the computation for some of them typically takes one night on a desktop computer. It is therefore difficult to get really accurate results. However, the computations done with the symbol *f*_{2} in figure 3 are done with a small grid size to show accuracy.

### (e) Tests with the operator *Ψ*(*Q*)

In this example we consider the operator *Ψ*(*Q*) on *L*^{2}(ℝ), where *Ψ*∈*L*^{∞}(ℝ) and *Q* is the self-adjoint operator on *L*^{2}(ℝ) (on its appropriate domain) defined by (*Qf*)(*x*)=*xf*(*x*). When constructing such examples, the functional calculus and the spectral mapping theorem come in handy. In particular, we have that *σ*(*Ψ*(*Q*)) is the essential range of *Ψ*.
In this example, we have chosen
We have visualized *σ*(*Ψ*(*Q*)) in figure 7*a*. To obtain a matrix representation of *Ψ*(*Q*), we have chosen a basis for *L*^{2}(ℝ) by first considering a basic Gabor basis, namely, a basis of the form
{e^{2*π*i*mx*}*χ*_{[k,k+1]}(*x*)}_{m,k∈ℤ}
(where *χ* is the characteristic function) and then choosing some enumeration of ℤ×ℤ into ℕ to obtain a basis {*ψ*_{j}} indexed over ℕ. To get our basis we let *ϕ*_{j}=ℱ*ψ*_{j}, where ℱ is the Fourier transform. The matrix representation {〈*Ψ*(*Q*)*ϕ*_{j},*ϕ*_{i}〉} of *Ψ*(*Q*) is a full infinite matrix. Hence, we cannot use *γ*_{n,m}, as defined in equation (6.1), but the function *γ*_{n,m,k} (recall equation (6.2)) comes to our rescue. In figure 7*b* we have shown {*z*∈*Θ*:*γ*_{n,m,k}(*z*)≤*ϵ*} (black dots) for *ϵ*=0.0073, *n*=0, *m*=10 000 and *k*=15 000, where *Θ* is a quadratic grid with grid size 0.004.

## 7. Other types of pseudospectra

### (a) The residual pseudospectrum

Even though the previous examples show very good results, one must be aware that our approach using the *n*-pseudospectrum can only give an estimate of the position of the spectrum. The reason why the computations in the previous section are so close to the spectrum is simply that the *n*-pseudospectrum is close to the spectrum even for small *n*. This may of course not be the case in general, and we will now show how one can use the computations of *γ*_{n,m} to determine subsets of the spectrum. The disadvantage of the *n*-pseudospectrum is that even though one can estimate the spectrum by taking *n* very large, *n* may have to be too large for practical purposes. Thus, since we only have the estimate *σ*(*T*)⊂*σ*_{n,ϵ}(*T*), it is important to get a ‘lower’ bound on *σ*(*T*), i.e. we want to find a set *Ω* such that *Ω*⊂*σ*(*T*). A candidate for this is described in the following.

### Definition 7.1.

Let *T*∈ℬ(ℋ) and define
and
*Let ϵ>0 and define the ϵ-residual pseudospectrum to be the set*
*and the adjoint ϵ-residual pseudospectrum to be the set*

### Theorem 7.2 (Hansen submitted *a*).

*Let* *T*∈ℬ(ℋ) *and let* {*T*_{m}}⊂ℬ(ℋ) *be such that* *T*_{m}→*T* *in norm, as* *m*→∞. *Then for ϵ*>0 *we have the following:*

*For any compact**such that**it follows that**For any compact**such that**it follows that*

The previous theorem shows that the residual and the adjoint residual pseudospectra have continuity properties similar to those of the pseudospectra. Hence, these sets are suitable for computations. The approximation techniques are very similar to those used in the previous sections.

### Theorem 7.3.

*Let* *T*∈ℬ(ℋ) *and suppose that {P*_{m}*} is a sequence of finite rank projections converging strongly to the identity such that P*_{m+1}*≥P*_{m}*. Define*
*Let δ∈(0,ϵ). Then we have the following.*

*If K is compact such that**then**If K is compact such that**then**If K is compact such that**then**If K is compact such that**then*

### Proof.

The theorem follows by using the techniques in the proof of theorem 7.3 in Hansen (submitted *a*). The techniques are almost identical to the techniques used in the proofs of theorems 6.1 and 6.3. In particular, one uses theorem 7.2 and proposition 6.2. ■

### (b) Examples with the residual pseudospectrum

We now have a computational tool for estimating the spectrum both from ‘above’ and from ‘below’, meaning that for *T*∈ℬ(ℋ) we have
Thus, it would be natural to compute, for *ϵ*>0 and *δ*∈(0,*ϵ*), both
and {*z*:*γ*_{n,m,k}(*z*)≤*ϵ*}, where *γ*_{n,m,k} is defined as in equation (6.2), to get an estimate for the spectrum. To simplify the notation we define
7.1
and
7.2

Given an infinite matrix *T*, we will in this example show how computations of
can give quite good estimates of the position of the spectrum. As test objects we have chosen Toeplitz-like operators, where we have kept much of the Toeplitz structure but let some of the subdiagonals alternate between two different numbers instead of being constant. As we are left with few (if any) mathematical tools to estimate the spectrum, we can only rely on the computed estimate, which in some cases seems quite acceptable. Consider the three infinite matrices *T*_{1}, *T*_{2} and *T*_{3} given by
where *a*=1+2*i*, *b*=−1, *c*=5+*i*, *d*=−2, *e*=1+2*i*, *f*=−4, *g*=−1−2*i*, *ϕ*_{j}=−2+(−5+15*i*)/*j*^{1/6} and *ψ*_{j}=1+2*i*+(5+15*i*)/*j*^{1/3}. Figure 8 shows computations of *Ω*_{ϵ,δ,m}(*T*_{j}) (where *Ω*_{ϵ,δ,m}(·) is defined as in equation (7.2)) and {*z*:*γ*_{0,m}(*z*)≤*ϵ*} for *T*_{j} (*j*=1,2,3), *m*=3000 and *ϵ*=10^{−5}, *δ*=*ϵ*−10^{−10}. Since
and
it is reasonable to believe that the computation displays the following relation
for some *ν*. We tried this with several larger values of *m*, up to *m*=10 000, without noticing any change, which suggests that *ν* is small in the experiment with *T*_{1}, where ‘small’ means relative to the resolution of the figures displayed.

## 8. Discrete Schrödinger operators

### (a) The non-self-adjoint almost Mathieu operator

An important operator in non-self-adjoint spectral theory is the non-self-adjoint harmonic oscillator *H*, defined by
*H*=−d^{2}/d*x*^{2}+i*x*^{2},
acting on *L*^{2}(ℝ). One of the motivations for this operator was to take a well-known self-adjoint operator, alter it slightly so that it becomes non-self-adjoint, and then see how the spectral properties change. Indeed, the spectral properties of the non-self-adjoint harmonic oscillator are very different from those of the usual harmonic oscillator, as discussed in Davies & Kuijlaars (2004), see also Trefethen & Embree (2005). Our approach is to do the same with discrete Schrödinger operators.

The almost Mathieu operator on *l*^{2}(ℤ) is known from the Ten Martini Problem, a problem initiated in 1981 by Kac and Simon and finally solved in 2003 by Puig (2004). The operator is defined as
(*H*_{b,ϕ,ω}*x*)_{n}=*x*_{n−1}+*x*_{n+1}+*b* cos(2*πωn*+*ϕ*)*x*_{n},
where *ω*>0 is an irrational number, *b*∈ℂ and *ϕ*∈[0,2*π*). The usual almost Mathieu operator has *b*∈ℝ, so that *H*_{b,ϕ,ω} is self-adjoint, and the Ten Martini problem was to show that for real non-zero *b*, *σ*(*H*_{b,ϕ,ω}) is a Cantor set.

We do not claim anything about the spectral properties of the non-self-adjoint almost Mathieu operator (NSAM operator). However, we use it as an example of an operator for which there did not previously exist computational tools for numerically estimating the spectrum. Arveson (1994) gave a complete theory of how to handle the computational aspects of the spectral theory of the self-adjoint almost Mathieu operator. However, self-adjointness is crucial in Arveson's theory, which is therefore not suitable for our problems. But with the techniques suggested in the earlier sections of this article we can get numerical approximations to the spectra of these non-self-adjoint Schrödinger operators. In figure 9 we have computed pseudospectra and 1-pseudospectra of the NSAM operator for different values of *b* and *ω*.
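For the numerical experiments one only needs finite sections of the operator. The following sketch builds such a section of an almost Mathieu-type matrix (assuming the standard form: nearest-neighbour hopping plus the potential *b* cos(2*πωn*+*ϕ*) on the diagonal); taking *b* complex makes the section non-self-adjoint:

```python
import numpy as np

def almost_mathieu_section(N, b, omega, phi=0.0):
    """Finite section (indices |n| <= N) of an almost Mathieu-type operator
    (H x)_n = x_{n-1} + x_{n+1} + b cos(2 pi omega n + phi) x_n."""
    n = np.arange(-N, N + 1)
    m = len(n)
    hop = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return hop + np.diag(b * np.cos(2 * np.pi * omega * n + phi))

omega = (np.sqrt(5.0) - 1) / 2                     # an irrational frequency
H_sa = almost_mathieu_section(100, 1.0, omega)     # real b: self-adjoint
H_ns = almost_mathieu_section(100, 1.0j, omega)    # complex b: non-self-adjoint
print(np.allclose(H_sa, H_sa.conj().T), np.allclose(H_ns, H_ns.conj().T))  # True False
```

Pseudospectra of such sections can then be estimated with the Cholesky test of §6, bearing in mind that the use of finite sections must itself be justified along the lines of theorems 6.1 and 6.3.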

## 9. Pros and cons of the *n*-pseudospectrum

### (a) The curse of the root

There are some limitations to the use of *n*-pseudospectra as approximations to the spectrum. The problem is related to the fact that we do not have infinite precision in the computations carried out, and hence our computations will depend on *ϵ*_{mach}. Thus, if *T*∈ℬ(ℋ) and *ϵ* is the smallest positive number such that *σ*_{ϵ}(*T*) (or rather an approximation using equations (6.3) and (6.4)) can be evaluated, we cannot evaluate *σ*_{1,ϵ}(*T*) because of the root(s) in equations (6.3) and (6.4). However, we may be able to evaluate *σ*_{1,ϵ^{1/2}}(*T*).
So the best approximation we can get to the spectrum is *σ*_{n,ϵ^{1/2^{n}}}(*T*). Now, this raises the question: are there operators such that there is an *ϵ*>0 for which *σ*_{ϵ}(*T*) is strictly contained in *σ*_{1,ϵ^{1/2}}(*T*)? The answer is affirmative, as the following example shows. Define *B* by
Then *σ*_{ϵ}(*B*) is strictly contained in *σ*_{1,ϵ^{1/2}}(*B*) for such *ϵ*. This is visualized in figure 10.

### (b) The power of the *n*-pseudospectrum

It is, in fact, not the last example above where the *n*-pseudospectrum shows its true strength. Note that in a numerical computation of the *n*-pseudospectrum the choice of the grid is crucial. In particular, if one computes *σ*_{n,ϵ}(*T*) for some *n*≥0 and *ϵ*>0, then the grid size must be chosen according to *ϵ*. This is not so important if one computes equations (6.3) and (6.4); however, it is absolutely crucial in the case where one uses the Cholesky decomposition as in §6*c*. (Since equations (6.3) and (6.4) are sometimes incomputable in practice, this is very important.) More specifically, the grid size must be smaller than *ϵ* in order to ensure that one captures the whole of *σ*_{n,ϵ}(*T*). To visualize this, we have chosen the following example. Let *T*=*T*_{1}⊕*T*_{2}⊕*T*_{3}, where
Computing *σ*_{ϵ}(*T*) (using the Cholesky decomposition as in §6*c*) with *ϵ*=0.0005 and *g*=0.03 (here *g* denotes the grid size and the grid is quadratic) gives the left plot in figure 11. The problem is that the contribution from *T*_{2} is missing. This is due to the fact that *g*>*ϵ*, and thus one may lose important spectral information. If one insists on doing computations only with the pseudospectrum, there are two possibilities for overcoming this obstacle: the first is to increase *ϵ*, and the second is to decrease *g*. If the former is chosen, the highly non-normal *T*_{1} will dominate and yield a rather poor approximation to the spectrum. Choosing the latter is computationally very demanding, as the number of grid points will increase dramatically. A better choice is therefore to compute the *n*-pseudospectra for different *n*. In particular, the middle and right plots in figure 11 show *σ*_{1,ϵ_{1}}(*T*) and *σ*_{2,ϵ_{2}}(*T*), with *ϵ*_{2}=0.1, computed with grid size *g*=0.03.

## Acknowledgements

The author would like to thank Erik Bédos, Brian Davies, Arieh Iserles, Marco Marletta, Olavi Nevanlinna and Nick Trefethen for useful discussions and comments, as well as the referees for valuable suggestions.

- Received November 20, 2009.
- Accepted March 25, 2010.

- © 2010 The Royal Society