## Abstract

A pseudo-dynamical approach for a class of inverse problems involving static measurements is proposed and explored. Following linearization of the minimizing functional associated with the underlying optimization problem, the new strategy results in a system of linearized ordinary differential equations (ODEs) whose steady-state solutions yield the desired reconstruction. We consider some explicit and implicit schemes for integrating the ODEs and thus establish a deterministic reconstruction strategy without an explicit use of regularization. A stochastic reconstruction strategy is then developed making use of an ensemble Kalman filter wherein these ODEs serve as the measurement model. Finally, we assess the numerical efficacy of the developed tools against a few linear and nonlinear inverse problems of engineering interest.

## 1. Introduction

We are interested in a class of inverse problems that arise in the study of many engineering systems and typically involve determining unknown coefficient (parameter) functions of imperfectly known mathematical models of such systems, given a partial and noisy set of statically obtained measurements of the system response. Numerical solutions of such problems have great practical significance in diverse engineering applications, *viz*. seismic imaging, biomedical imaging and structural health monitoring. However, the available numerical algorithms are often challenged by the inherent non-uniqueness and ill-posedness (in the sense of solutions not depending continuously on the observation data) of inverse problems. In particular, even in the presence of small measurement noise, the solution may drift into a region that is practically infeasible. A confinement of the solution can sometimes be achieved by imposing *a priori* known constraints. For nonlinear inverse problems, the deterministic inversion algorithm usually evolves within a (variant of the) Gauss–Newton iterative framework. For linear problems, however, a non-iterative strategy suffices, as the associated Jacobian matrix remains unchanged. In all cases, a major challenge is to tackle the numerical difficulties posed by the null space of the system operator and to address noise effects in the solution. The ill-posed nature of the inverse operator is usually handled through a regularizing term, such as that used in Tikhonov regularization (Engl *et al*. 2000). Based on the ‘true’ distribution of the model parameters, regularization through full- or semi-norm total variation minimization has also been proposed (Vogel 2002). Owing to the limited prior information available about the estimation model, an optimal choice of the regularization operator is seldom found, and this may sensitively affect the solutions (Engl *et al*. 2000).
Broadly, two approaches exist for arriving at a suitable regularization operator. The most common approach is based on knowledge of the measurement noise characteristics (e.g. noise variance) and is known as the discrepancy principle (due to Morozov; Vogel 2002). The other class of methods does not demand knowledge of the noise characteristics and merely attempts to extract the regularization parameters that yield an ‘optimal’ smoothness of the solution, given *a priori* model information. For instance, the *L*-curve method and generalized cross validation (Engl *et al*. 2000) are established methods within this class.

A limitation of the deterministic approach lies in tackling the measurement error within a finite-dimensional Hilbert space setting, which does not admit realizations of white noise (Vogel 2002). A stochastic filtering approach, traditionally applicable only within a dynamical systems setting, thus provides a more rigorous probabilistic framework for obtaining the estimates as well as their error characteristics (Kallianpur 1980). Here, the aim is to obtain the conditional probability density function of the model parameters, given the observations up to the current instant. While the Kalman filter provides the optimal solution to a linear inverse problem, nonlinear problems may be tackled either via variants of the optimal particle filter (PF; Doucet *et al*. 2000) and the geometric filter (Brigo *et al*. 1998) or via several suboptimal strategies, including the extended and ensemble Kalman filters (EnKFs; Evensen 1994). A remarkable advantage of such filters, especially those that use Monte Carlo simulations through an ensemble of trajectories, is that they do not require explicit regularization.

Our present objective is to consider a novel family of pseudo-dynamical systems whose numerical integration in time provides an asymptotic solution to the inverse problem at hand. In particular, given the Gauss–Newton equation for updates, we convert it into a pseudo-dynamical form by explicitly adding a time derivative term. Then, we propose appropriate linearizations for the resulting dynamical system (for nonlinear inverse problems) and arrive at a few explicit and implicit strategies for integrating the linearized form. We evaluate the performance of this approach (vis-à-vis a few existing approaches) in the context of an exponentially ill-posed linear inverse problem. Following this, we exploit the proposed dynamical system within a suboptimal EnKF to solve the inverse problem. This also interfaces the deterministic and stochastic filtering approaches for this class of inverse problems. A common feature in all the methods is that we do not explicitly use any regularization while integrating the dynamical system. Finally, we employ the new schemes for solving a nonlinear inverse problem of interest in structural mechanics.

## 2. A pseudo-dynamical approach

Consider the problem of reconstructing the parameter vector **μ**∈*R*^{n}, which denotes the finite-dimensional (nodal) discretization of the associated function *μ*(**x**) via finite-element shape functions, with **x**∈*R*^{q} denoting the spatial variable. A similar discretization of the response function *u*(*μ*(**x**)) (possibly governed by a partial differential equation (PDE)) yields the nodal vector **U**∈*R*^{d}. Assuming, without loss of generality, that the elements of the (noisy) measurement vector *u*^{m}∈*R*^{m} are at nodal locations, we take the subset of **U** corresponding to *u*^{m}; *m*≤*d*. The usual route to reconstructing **μ** is to solve either a constrained or an unconstrained optimization problem, i.e. to minimize the mean square error norm between the measured data and the response computed through a mathematical model of the system of interest. The linearization of the resulting functional has a large null space and hence cannot be inverted without regularization. A common regularization technique is to penalize the optimization functional with extra terms, broadly known as Tikhonov regularization, leading to the following optimization problem:

(2.1a) minimize over **μ**: Π(**μ**) = ‖*u*(**μ**)−*u*^{m}‖^{2} + *λ*‖**L**​**μ**‖^{2},

subject to the discretized forward model

(2.1b) **K**(**μ**)**U** = **f**.

Here **L** denotes the penalization operator on **μ** and ‖·‖ the *L*_{2} norm; *λ* is the regularization parameter. Moreover, **K** denotes the parameter-dependent system (stiffness) matrix and **f** the non-parametric source (force) vector. The essential optimality condition (normal equation) for the problem is

(2.2) **g**(**μ**) ≔ **J**^{T}(*u*(**μ**)−*u*^{m}) + *λ***L**^{T}**L**​**μ** = **0**,

where **J** = ∂*u*/∂**μ** is the Jacobian matrix. The above nonlinear equation is generally solved using successive linearization of equation (2.2) with the Hessian approximated as **J**^{T}**J** (Gauss–Newton approximation). This leads to the following linear equation in the increment Δ**μ**:

(2.3) (**J**^{T}**J** + *λ***L**^{T}**L**)Δ**μ** = −**g**(**μ**),

with **μ** being the linearization point. Instead of explicitly computing the inverse of the Hessian, it may be directly approximated using only information on the gradient **g**(**μ**) over successive iterations, leading to the so-called quasi-Newton method (QNM). However, the rate of convergence of the QNM is lower than that of the Gauss–Newton method. Note that the addition of a regularization term to the optimization functional leads to a minimization problem different from the one originally envisaged. The closeness of the solutions of the regularized and original problems is greatly influenced by the penalization terms and, in particular, by the penalization intensity *λ*. The operator **L** helps tackle numerical difficulties owing to the null space of the Jacobian matrix. Moreover, it should be chosen so as to account for available *a priori* knowledge of the spatial distribution of the parameters to be recovered. While an appropriate choice of regularization terms can lead to useful solutions, small variations in these terms can equally lead to severe degradation of the solutions.

One way of obtaining an iterative solution to a (nonlinear) inverse problem while avoiding an explicit inversion of the linearized operator would be to introduce artificial dynamics and consider the solution to be the steady-state response (if it exists) of the artificially evolving dynamical system. For instance, the solution of a system of algebraic equations (with positive semi-definite coefficient matrix following linearization) may be thought of as the steady-state solution of a system of ordinary differential equations (ODEs). In this context, ‘time’ is a pseudo-variable and recursion through numerical integration plays the role of iterations. Moreover, this artificial dynamics may evolve on an invariant manifold (a Lie manifold) and thus may enable development of robust numerical schemes that may not diverge. It is of interest to note that Landweber iterations, which correspond to an explicit Euler time discretization of a system of ODEs describing the artificial dynamics of an inverse problem (Vogel 2002; Ascher *et al*. 2006; van den Doel & Ascher 2006), have a self-regularized character depending upon the time step. Keeping these in mind, we form a pseudo-dynamical counterpart of the Gauss–Newton iterative update equation. In particular, following the discretization of *μ*(**x**) and the addition of a fictitious time derivative term in equation (2.3) with *λ*=0, we obtain the linearized ODE in (*t*, *t*+Δ*t*]:

(2.4) **μ̇** = −**M**​**μ** + **b**,

where **M** ≔ **J**^{T}**J** and **b** ≔ **M**​**μ**^{*} − **J**^{T}(*u*(**μ**^{*})−*u*^{m}), both evaluated at the linearization point. The overdot represents the time derivative and **μ**^{*} the linearization point in the *n*-dimensional vector space *V*_{μ}. Note that we have removed the regularization terms from the inverse operator while forming the dynamical system and that the ODE contains *u*^{m}, which is noisy. Upon integration over indefinite time intervals, solutions may diverge (Ascher *et al*. 2006). In order to avoid such blow-up of solutions, we use a stopping criterion and thus integrate only over a finite time interval. The stopping criterion may be arrived at through the discrepancy principle or any other scheme suited to the user's requirements.

The solution of the linearized ODE (2.4) may be written as

(2.5) **μ**(*t*+Δ*t*) = e^{−**M**Δ*t*}**μ**(*t*) + ∫_{0}^{Δ*t*} e^{−(Δ*t*−*s*)**M**} **b** d*s*,

where **b** is a constant forcing. Following the concept of local linearization (Roy 2001), the linearization point *t*^{*} (such that **μ**^{*}≔**μ**(*t*^{*})) can be chosen anywhere in the closed interval [*t*,*t*+Δ*t*] without affecting the formal error order. While choosing *t*^{*}=*t* yields the explicit phase space linearization (PSL; Roy 2000), *t*^{*}=*t*+Δ*t* results in the implicit locally transversal linearization (LTL; Roy 2004; Saha & Roy 2007). Denoting *h* ≔ Δ*t* to be the time step and *t*_{k} ≔ *kh*, the explicit PSL map corresponding to the continuous update (2.5) is written as

(2.6a) **μ**_{k+1} = e^{−*h***M**_{k}}**μ**_{k} + ∫_{0}^{*h*} e^{−(*h*−*s*)**M**_{k}} **b**_{k} d*s*,

with

(2.6b) **M**_{k} ≔ **M**(**μ**_{k}), **b**_{k} ≔ **b**(**μ**_{k}).

We thus bypass computing the inverse of the Hessian (**J**^{T}**J**) or its modifications used with the regularization terms. The fact that exponentiation is used in obtaining the time-stepping algorithm should enable a geometrically consistent progression on the associated (nonlinear) Lie manifold and thus provide a ‘natural’ stabilization for the inverse problem at hand. In this sense, a connection with the Tikhonov-type approach could also be brought out. However, a detailed exploration of such issues is outside the scope of this work, and we presently emphasize the development of deterministic and stochastic numerical schemes based on equations (2.6a) and (2.6b). From the numerical standpoint, the computation of matrix exponentials is the key cost that might limit the viability of the method for higher dimensional problems. In such cases, one may exploit the sparse structure of **J** along with a Krylov subspace projection to speed up computations (Saad 1989; Sidje 1998). Both the Gauss–Newton method (which requires matrix inversion and hence *O*(*n*^{3}) operations) and the pseudo-dynamical strategy (which employs matrix exponentiation, again involving *O*(*n*^{3}) operations; Moler & Van Loan 2003) have similar computational overheads per iteration. Thus, any difference in computational cost depends on the number of iterations needed for convergence, given a tolerance, in each method. Our limited numerical exploration suggests that the pseudo-dynamical approach typically needs a marginally higher number of iterations than its Gauss–Newton counterpart. However, an adaptive strategy for choosing the time step in the pseudo-dynamical approach could reduce the number of iterations, making it as efficient as the Gauss–Newton method.
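As a minimal sketch of the exponential update (2.6a) for the linear case (where **M** and **b** are parameter independent), the particular-solution integral can be evaluated without inverting **M** by embedding the forcing in an augmented matrix exponential; the toy operator `A` and all names below are our illustrative assumptions, not the paper's notation.

```python
import numpy as np
from scipy.linalg import expm

def psl_step(mu, A, u_m, h):
    """One exponential (PSL-type) step for the linear pseudo-dynamics
    mu' = -M mu + b, with M = A^T A and b = A^T u_m.

    The integral term of the variation-of-constants formula is read off an
    augmented matrix exponential, so M is never inverted."""
    M = A.T @ A
    b = A.T @ u_m
    n = len(mu)
    Z = np.zeros((n + 1, n + 1))
    Z[:n, :n] = -h * M          # top-left block: generator of e^{-hM}
    Z[:n, n] = h * b            # last column: carries the constant forcing
    E = expm(Z)
    return E[:n, :n] @ mu + E[:n, n]

# toy reconstruction: recover mu from noise-free data u_m = A mu_true
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
mu_true = np.array([1.0, -0.5])
u_m = A @ mu_true
mu = np.zeros(2)
for _ in range(200):
    mu = psl_step(mu, A, u_m, h=1.0)
```

At steady state the iterate coincides with the least-squares solution, consistent with the ODE's equilibrium **M**​**μ** = **b**.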

We now consider several other numerical schemes for integrating equation (2.4). The forward (explicit) Euler discretization of equation (2.4) leads to the damped Landweber iteration,

(2.7) **μ**_{k+1} = **μ**_{k} + *h*(**b**_{k} − **M**_{k}**μ**_{k}).

This is also similar to the steepest descent method, wherein the descent direction is the gradient direction. One aspect of the numerical integration of ODE (2.4) is that we wish to arrive at the steady state with the least computational effort, as the transient part of the trajectory is not of much interest. Thus, while developing the time-marching scheme, it may suffice to use the linearized ODE (2.4) (instead of the originally nonlinear ODE) and to design the scheme to employ larger time-step sizes until we approach the steady state. A difficulty with an explicit method is that the choice of the time step is restricted by the spectral radius of the locally linearized operator. This restriction, and the consequent instability at larger time-step sizes, may be bypassed through implicit methods. However, the convergence rate for exponentially ill-posed problems is typically very slow (Vogel 2002), and hence the performance of implicit schemes in such cases must be assessed afresh. For instance, a backward Euler scheme, derivable through a backward Taylor expansion of the vector field over (*t*_{k},*t*_{k+1}], is implicit and should allow stable simulations even with larger step sizes. The backward Taylor discretization yields

(2.8) **μ**_{k} = **μ**_{k+1} − *h***μ̇**_{k+1} + *O*(*h*^{2}).

We note that, for a first-order Taylor expansion, variations in selecting the linearization point in the first derivative term lead to an error of *O*(*h*^{2}), and hence do not affect the formal order of the first-order expansion itself. Depending on this selection, one may construct explicit or implicit time-marching schemes. Of current interest is the implicit Euler scheme derivable through a backward Taylor expansion of the linearized vector field of ODE (2.4) over (*t*_{k},*t*_{k+1}] with **μ**^{*}=**μ**_{k}. This leads to the implicit map

(2.9) **μ**_{k+1} = **μ**_{k} + *h*(**b**_{k} − **M**_{k}**μ**_{k+1}).

After some manipulations, we obtain

(2.10) **μ**_{k+1} = (**I** + *h***M**_{k})^{−1}(**μ**_{k} + *h***b**_{k}).

A downside of this scheme, unlike the update via equation (2.6a), is that matrix inversion is needed at each step. Curiously, this expression may be shown to be similar to the Gauss–Newton update equation (2.3) if we choose *λ*=1/*h* and **L**=**I**. Indeed, a similarity with the Levenberg–Marquardt algorithm is also readily discernible if we recast equation (2.10) as

(2.11) (**M**_{k} + (1/*h*)**I**)**μ**_{k+1} = (1/*h*)**μ**_{k} + **b**_{k}.

We may thus look upon *h* as a regularization parameter. A somewhat analogous form of artificial dynamics (derived from an entirely different perspective) is also proposed in Ascher *et al*. (2006). Now, for a linear inverse problem, we have the pseudo-dynamical system

(2.12) **μ̇** = −**M**_{L}**μ** + **b**_{L},

where the subscript ‘L’ denotes **μ**-independence. The implicit Euler scheme for this case is

(2.13) **μ**_{k+1} = (**I** + *h***M**_{L})^{−1}(**μ**_{k} + *h***b**_{L}).

As demonstrated in the numerical section, this expression affords greater flexibility in choosing *h* than the damped Landweber scheme. Thus, while the convergence rate is (formally) higher, an additional cost lies in inverting the matrix **I** + *h***M**. However, for ill-posed problems, an estimate of the formal order of accuracy must be interpreted carefully. This is because the *O*(*h*^{2}) term in a Taylor expansion may not be smaller than the *O*(*h*) term (even for small *h*), unless the linearized operator is Lipschitz and the domain of the nonlinear minimizer is strictly non-empty.

Finally, we consider the implicit LTL-based linearization (Roy & Dash 2005) of the pseudo-dynamical system corresponding to the normal equation (2.2),

(2.14) **μ̇** = −**J**^{T}(*u*(**μ**)−*u*^{m}) ≕ **V**(**μ**).

The corresponding implicit map over (*t*_{k},*t*_{k+1}] is given by

(2.15) **μ**_{k+1} = **μ**_{k} + *h***V**(**μ**_{k+1}).

Equation (2.15) constitutes a system of nonlinear algebraic equations in **μ**_{k+1}, whose zeros may be obtained through Newton–Raphson or fixed-point iteration techniques. The main cost of evaluating equation (2.15) lies in computing **M** and **V** at each Newton step, which can be efficiently approximated through the finite-difference formulae (2.16) and (2.17), in which **J**_{i} represents the *i*th column of the matrix **J** and *μ*_{k,i} denotes the *i*th element of **μ**_{k}, and so on.

## 3. A suboptimal pseudo-dynamic ensemble Kalman filter

We now exploit the pseudo-dynamical systems, developed in §2, within a stochastic filtering framework. In order to avoid the sensitive dependence of estimates through the (extended) Kalman filter ((E)KF) to process noise covariance (Grewal & Andrews 2001), we prefer to use an ensemble-based (Monte Carlo) filter. While the family of PFs (Manohar & Roy 2006; Ghosh *et al*. 2008) fits the bill admirably, we presently stick to the suboptimal EnKF (Evensen 1994, 2003), especially as the latter partially makes use of the analytical formulae provided by the KF and this, in turn, should help reduce the sampling variance. This is an important feature, given that the sampling error (standard deviation) reduces proportionally to the inverse of the square root of the ensemble size. Indeed, the fact that the EnKF combines the closed-form features of the KF-based estimate update with the PF-like simulation of the error covariance makes it insensitive to process noise covariance. Nevertheless, the PF-like simulation in the conventional EnKF also requires that multiple measurement datasets are artificially created in order that the error covariance update remains consistent with the KF formula. This in turn implies a huge computational overhead, especially for higher dimensional problems. A way out is provided by the so-called deterministic ensemble Kalman filter (DEnKF; Sakov & Oke 2008), wherein the requirement of multiple measurement sets is bypassed via an appropriate modification of the gain term. Given this advantage, we propose to make use of the DEnKF, even though the formulation, outlined below, remains applicable within any stochastic filter.

Within a stochastic filter, a parameter identification problem is typically tackled by including the parameters as additional states, which co-evolve with the other physical states (displacements/velocities) as zero-mean martingales (e.g. Wiener processes). However, in order to retain the computational expediency of the deterministic approach within the filtering setup, we remove all the physical states from the process equations and use only the parameters to be reconstructed as the filter variables in the process as well as observation equations. Simultaneously, we wish to exploit the Gauss–Newton pseudo-dynamical ODE (2.4) as part of the measurement model in the filter. This scenario leads to a filtering problem, wherein the process stochastic differential equations (SDEs) trivially describe the parameter states as a vector of Wiener noises, along with a system of locally linearized measurement SDEs, as shown below for *t*∈(*t*_{k},*t*_{k+1}]:

(3.1a) d**μ**(*t*) = d**ξ**(*t*),

(3.1b) d**z**(*t*) = −**M**(**μ**^{*})**μ**(*t*) d*t* + **b**(**μ**^{*}) d*t* + d**η**(*t*),

where the two contributions to the drift of equation (3.1b) are referred to below as drift terms 1 and 2, respectively. **ξ**(*t*) and **η**(*t*) are vectors of independently evolving zero-mean Wiener processes. The noise vector **ξ**(*t*) enables an artificial evolution of **μ**(*t*) as a Wiener martingale (of non-zero mean; Klebaner 2001) over time. **μ**^{*} is the linearization point, which may be chosen arbitrarily over [*t*_{k},*t*_{k+1}]. Incidentally, the artificially constructed process and measurement SDEs attempt to evolve the same variable **μ**(*t*) using completely different drift fields, which appears dichotomous. In other words, while **μ**(*t*) via equation (3.1a) is just a Wiener process (a martingale), **z**(*t*) via equation (3.1b) has a semi-martingale structure owing to the presence of a non-zero drift field. However, this discrepancy is acceptable if we note that the aim of the filter is to obtain the conditional posterior (filtering) density of **μ**(*t*) given the observation history, and thus the measurement SDE (3.1b) may be looked upon as a constraint on the evolution of **μ**(*t*) governed by the process SDE (3.1a).
Here, the observation sequence is numerically generated by solving the SDE (3.1b), which contains the statically measured data *u*^{m}. The estimate of **μ**(*t*) is presently given by the conditional mean, where *E* denotes the expectation operator under the measure corresponding to the filtering density. Equations (3.1a) and (3.1b) must be discretized before obtaining the DEnKF-based updates and, in particular, the linearized SDE (3.1b) must be brought to the form

(3.2) **z**_{k+1} = **H**_{k+1}**μ**_{k+1} + *ψ*_{k+1} + *γ*_{k+1},

where *ψ*_{k+1} is a deterministic forcing term, *γ*_{k+1} is a zero-mean noise term (an Ito integral) and **H**_{k+1} is the coefficient matrix, independent of **μ**_{k+1}. **H**_{k+1} is recoverable only from drift term 1 in equation (3.1b). Drift term 2 arises merely as the first (constant) term of a two-term Taylor expansion of the minimizing functional corresponding to the inverse problem. Presently, we consider an explicit linearization by choosing **μ**^{*}=**μ**_{k} in equation (3.1b), which makes the task of separating out **H**_{k+1} easier. We note that the randomness in *u*^{m} does not contribute to the increasing *σ*-algebra (filtration) generated by the fictitious noise processes **ξ**(*t*) and **η**(*t*). Since **η**(*t*) is fictitiously introduced, we should choose its intensity (as measured by the variances of its scalar components) to be small. This artificially induced dynamical character of the measurement model enables the Bayesian update of **μ**(*t*) in time.

The EnKF approximates the error covariance through Monte Carlo simulation, bypassing the Riccati update of the KF. Denoting the ensemble size by the integer *N*, the error covariance (**P**) is approximated as

(3.3) **P** ≈ **A**​**A**^{T}/(*N*−1).

Here, **X** = [**μ**^{(1)}, …, **μ**^{(N)}] is the matrix of realizations **μ**^{(j)}∈*R*^{n}, *j*∈[1,*N*], the ensemble mean is defined as

(3.4) **μ̄** = (1/*N*)∑_{j=1}^{N}**μ**^{(j)},

and the columns of **A**, i.e. **μ**^{(j)}−**μ̄**, denote the anomalies. Following integration of the associated SDEs, discrete versions of the process and measurement equations may be written as

(3.5a) **μ**_{k+1} = **μ**_{k} + **ξ**_{k+1},

(3.5b) **z**_{k+1} = **H**_{k+1}**μ**_{k+1} + *ψ*_{k+1} + *γ*_{k+1},

where **H**_{k+1} involves the explicitly derived fundamental solution matrix of the linearized measurement SDE, and *t*_{k}=*kh*, with *h* the pseudo time step (presently assumed to be uniform). Over each time step parametrized by *k*=1, 2, 3, …, *m* (*t*∈(0,*t*_{m+1})), we recursively estimate **μ**_{k+1} as follows.

*Step 1*. Using the last analysed ensemble **X**^{a}_{k}, obtain the forecast ensemble **X**^{f}_{k+1} by equation (3.5a) as

(3.6) **X**^{f}_{k+1} = **X**^{a}_{k} + **Q**_{k+1},

and compute the forecast mean **μ̄**^{f}_{k+1} by equation (3.4). Here, the *j*th column of the matrix **Q**_{k+1} contains the *j*th realization of the Wiener vector increment. Now obtain the forecast anomalies as

(3.7) **A**^{f}_{k+1} = **X**^{f}_{k+1}(**I**_{N} − **1**_{N}),

where **I**_{N} is the *N*×*N* identity matrix and **1**_{N} is an *N*×*N* matrix with each entry being 1/*N*.

*Step 2*. The Kalman gain is computed as

(3.8) **K**_{k+1} = **P**^{f}_{k+1}**H**^{T}_{k+1}(**H**_{k+1}**P**^{f}_{k+1}**H**^{T}_{k+1} + **R**)^{−1},

where **P**^{f}_{k+1} = **A**^{f}_{k+1}(**A**^{f}_{k+1})^{T}/(*N*−1) is the forecast error covariance and **R** the measurement noise covariance. Using the discretized solution (3.5b) of the measurement SDE, **H**_{k+1} may be approximately obtained from drift term 1 evaluated at the forecast mean (equation (3.9)). While deriving this approximation, we have made use of the fact that **μ**_{k} and **μ**_{k+1} can be related via a backward Taylor expansion and the vector field of the ODE (2.4) as

(3.10) **μ**_{k} ≈ **μ**_{k+1} − *h***V**_{L}(**μ**_{k+1}),

where **V**_{L} denotes the vector field of ODE (2.4) evaluated at the forecast mean (with **μ**^{*} set to the forecast mean). Since **J** is computed using the forecast mean, we avoid computationally intensive multiple evaluations of **J**.

*Step 3*. For obtaining the filter estimate corresponding to the filtering density, we follow the DEnKF strategy (Sakov & Oke 2008), updating the ensemble mean with the full gain and the anomalies with half the gain. Thus the filter estimate at *t*_{k+1} is given by

(3.11) **μ̄**^{a}_{k+1} = **μ̄**^{f}_{k+1} + **K**_{k+1}(**z**_{k+1} − **H**_{k+1}**μ̄**^{f}_{k+1} − *ψ*_{k+1}).

The filter anomalies, needed for the ensemble spread over pseudo time, are obtained as

(3.12) **A**^{a}_{k+1} = **A**^{f}_{k+1} − 0.5**K**_{k+1}**H**_{k+1}**A**^{f}_{k+1}.

Note that, following the DEnKF scheme (Sakov & Oke 2008), the factor 0.5 in the second term on the r.h.s. of equation (3.12) produces realizations whose error covariance is the same as that via a KF-based Riccati update, modulo a linear approximation in *h*. Then, we may obtain the realizations from the filter density as

(3.13) **X**^{a}_{k+1} = **μ̄**^{a}_{k+1}**1**^{T} + **A**^{a}_{k+1},

where **1** is the *N*-vector of ones. Note that the essence of the DEnKF lies in the correction factor 0.5 in equation (3.12), which enables the correct ensemble spread without the need for multiple measurement datasets.
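A single forecast–analysis cycle of the steps above can be sketched as follows; the linear measurement operator `H`, the noise levels and all names are our illustrative assumptions, with the measurement construction of equation (3.5b) reduced to a generic linear map.

```python
import numpy as np

rng = np.random.default_rng(1)

def denkf_step(X, H, z, R, q_std):
    """One pseudo-dynamic DEnKF cycle; columns of X are parameter realizations."""
    n, N = X.shape
    # Step 1: forecast by the random-walk process equation (3.5a)/(3.6)
    Xf = X + q_std * rng.standard_normal((n, N))
    mf = Xf.mean(axis=1, keepdims=True)
    Af = Xf - mf                                   # forecast anomalies (3.7)
    # Step 2: Kalman gain (3.8) from the ensemble covariance (3.3)
    Pf = Af @ Af.T / (N - 1)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    # Step 3: full gain for the mean (3.11), half gain for anomalies (3.12)
    ma = mf + K @ (z.reshape(-1, 1) - H @ mf)
    Aa = Af - 0.5 * K @ (H @ Af)
    return ma + Aa                                 # analysed ensemble (3.13)

mu_true = np.array([1.0, 2.0])
H = np.eye(2)                                      # assumed measurement operator
R = 0.01 * np.eye(2)
X = rng.standard_normal((2, 200))                  # initial ensemble about zero
for _ in range(30):
    X = denkf_step(X, H, mu_true, R, q_std=0.01)   # noise-free data, for illustration
est = X.mean(axis=1)
```

The half-gain anomaly update is what distinguishes the DEnKF from the stochastic EnKF: no perturbed measurement replicates are drawn, yet the ensemble spread contracts appropriately.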

Another way of obtaining the coefficient matrix **H**_{k+1} is to use an implicitly linearized form of the measurement SDE. Thus, we may use **μ**^{*}=**μ**_{k+1} in drift term 1 of equation (3.1b) and obtain the measurement map of equation (3.14), from which the coefficient matrix **G**_{k+1} may be extracted as in equation (3.15). Since **μ**_{k+1} and its ensemble are not known to begin with, one must use an inner iteration over each time step. While this approach could yield a more accurate description of **G**_{k+1}, it carries a significantly higher computational overhead.

In terms of the standard Wiener noise (i.e. noise with unit standard deviation) corresponding to **ξ**(*t*), the covariance matrix *Q*_{k+1} may be written as

(3.16) *Q*_{k+1} = *a*_{k+1}*h***I**,

where the sequence of positive real numbers *a*_{k} implies that the process noise could have a time-varying intensity. However, we may go a step further and introduce a structure of spatial dependency among the various elements of the vector process **ξ**(*t*). Noting that **ξ**(*t*) may be looked upon as a discretized form of a random field **ξ**(*t*, **x**) (say, via a Karhunen–Loève scheme), we may require that a measure of the spatial coherence of this field decay exponentially in **x**. Post discretization, this leads to a so-called coloured noise **ξ**(*t*), whose covariance matrix *Q*_{k+1} is given by

(3.17) *Q*_{k+1} = *a*_{k+1}*h***C**,

where the matrix **C** is presently given by an exponentially decaying function of the inter-nodal distance, e.g. *C*_{ij} = exp(−*b*​*d*_{ij}/*l*), with *d*_{ij} representing the distance between nodes *i* and *j* (in an FE-based spatial discretization of the forward problem); *b* a positive constant; and *l* a reference distance (usually taken as the characteristic mesh size).
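Assuming the exponential-decay form of the spatial coherence described above, the coloured-noise covariance of equation (3.17) can be assembled as follows; the function name and the one-dimensional nodal layout are our assumptions.

```python
import numpy as np

def coloured_cov(coords, a, b, l):
    """Coloured process-noise covariance Q = a * C with
    C_ij = exp(-b * d_ij / l); the exponential-decay kernel is an
    assumed concrete choice for the spatial coherence measure."""
    coords = np.asarray(coords, dtype=float)
    d = np.abs(coords[:, None] - coords[None, :])   # 1D inter-nodal distances
    C = np.exp(-b * d / l)
    return a * C

nodes = np.linspace(0.0, 1.0, 51)    # e.g. 50 equal elements -> 51 nodes
Q = coloured_cov(nodes, a=1e-4, b=0.5, l=0.02)
```

In one dimension this kernel is positive definite (it is the covariance of a sampled Ornstein–Uhlenbeck process), so correlated noise realizations can be drawn via a Cholesky factor of `Q`.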

## 4. Numerical experiments

Before presenting the numerical results, a word about the stopping rule is in order. Owing to the unbounded inverse operator, an improper stopping criterion may lead to divergence of solutions even when the forward operator is well posed (Engl *et al*. 2000). We have presently taken the measure of misfit M, defined in equation (4.1) following Ascher *et al*. (2006), to stop the progression over pseudo time, where *u*^{c} is the computed response and *u*^{m} the measured response. A necessary (though not sufficient) condition to stop the iterations is that M fall below a specified tolerance, fixed on the basis of the noise level in the measurement (Morozov's discrepancy principle). However, even before the above stopping tolerance is reached, the inverse solution could become unphysical and oscillatory. Thus, over and above this criterion, we also stop whenever the numerical solution either stalls or becomes unphysical.
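The discrepancy-based stopping test can be sketched as below; since the exact normalization of the misfit in equation (4.1) is not reproduced here, the relative-norm form, the safety factor `tau` and the function names are our assumptions.

```python
import numpy as np

def misfit(u_c, u_m):
    # assumed relative-norm form of the misfit measure (4.1)
    return np.linalg.norm(u_c - u_m) / np.linalg.norm(u_m)

def should_stop(u_c, u_m, noise_level, tau=1.1):
    # Morozov-style discrepancy test: stop once the misfit falls to
    # tau times the (relative) measurement noise level.
    return misfit(u_c, u_m) <= tau * noise_level
```

In practice this test would be combined, as stated above, with checks that the iterate has neither stalled nor turned unphysical.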

### (a) Example 1. Fredholm integral of the first kind: a linear inverse problem

The relative performance of the proposed approach vis-à-vis conventional Tikhonov regularization is first assessed against a linear inverse problem that is exponentially ill-posed. In particular, we consider a Fredholm integral equation of the first kind with *x*∈*R*. It is basically a simplified one-dimensional version of the model that often occurs in two-dimensional optical imaging. The integral equation is of the form

(4.2) *g*(*x*) = ∫ *k*(*x*−*x*′)*f*(*x*′) d*x*′,

where *k* is the kernel (point spread) function, with *C*, *γ* being positive parameters (see Vogel 2002, ch. 5); *f* is the light-source intensity; and *g* is the image. The forward problem is to find the blurred image, given *f* and *k*. Using the mid-point quadrature rule, a discrete form of the above equation is written as **g** = **K**​**f**, with 1≤*i*, *j*≤*n* indexing the entries of **K**. Given the noisy measurement, the inverse problem is to recover the source profile. As *n* increases, so does the null-space dimension, and one needs regularization to obtain a meaningful **f**. For the numerical work, we take *γ*=0.05 and *n*=257. The QNM-based reconstructed profiles of **f**, using different Tikhonov parameters, are shown in figure 1, and the high sensitivity of the solutions to variations in this parameter is self-evident. However, an optimal choice of this parameter, available from the *L*-curve analysis, yields good reconstruction. In order to assess similar sensitivity characteristics of the present method, we consider the effect of the stopping rule on the pseudo-dynamic iterations. Although practically infeasible, we stop iterations when the Euclidean distance between the reconstructed and reference (exact) parameter profiles is minimum. Figure 2*a* reveals that the solutions are not significantly influenced by the time step when this stopping rule is applied. Figure 2*b* reports similar results based on the misfit criterion (equation (4.1)). The pseudo-dynamical approach is thus seen to possess better stability than the regularization-based method, which exhibits more oscillations. For the present scheme, the time step plays the role of regularizing the reconstruction, as evidenced in figure 3. In particular, we plot the *L*_{2} norms of the solutions against the residuals for different time steps on a log–log scale (similar to an *L*-curve analysis). The time step that best regularizes the solution may be read off the corner of the curve in figure 3. For instance, for 1 per cent noise in the measurement, this value is approximately 100 for the present case. For different time steps, figure 4*a*,*b* shows the drop in the misfit and the reconstruction error, respectively. Clearly, the minimum reconstruction error is attainable only for a particular time step. Moreover, as the noise magnitude increases, so does the optimum time step. In any case, the sensitivity of the method to this choice is certainly lower than that of the conventional Tikhonov approach (figure 1). Contributing factors to the improved performance could be that the present strategy avoids spuriously modifying the Hessian matrix through regularization and that its dynamic nature bypasses matrix inversions. For larger time-step sizes, the better performance of the implicit pseudo-dynamical strategy via the backward Euler expansion, vis-à-vis the damped Landweber method, is illustrated in figure 5. While larger step sizes in the present method do not spoil the reconstruction, damped Landweber iterations are inapplicable at these steps because of the spectral stability requirement. For this problem, results via the implicit dynamical scheme using the backward expansion appear good even with *h*=100 (figure 5*a*). Figure 5*b* shows the misfit drop, indicating that fewer iterations suffice with the backward scheme. However, the computational cost per step of the backward scheme exceeds that of an explicit approach, as the former needs a matrix inversion at each time step.
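For concreteness, the mid-point discretization of equation (4.2) may be sketched as below, assuming a Gaussian point-spread kernel k(x) = C·exp(−x²/(2γ²)) with unit normalization `C` and an illustrative source profile; both assumptions are ours.

```python
import numpy as np

n, gamma, C = 257, 0.05, 1.0             # C is an assumed normalization constant
x = (np.arange(n) + 0.5) / n             # mid-points of [0, 1]

# mid-point quadrature on [0, 1]: K_ij = k(x_i - x_j) * (1/n), Gaussian PSF assumed
K = C * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * gamma ** 2)) / n

f = np.exp(-((x - 0.4) / 0.08) ** 2)     # an illustrative light-source profile
g = K @ f                                # blurred "image" (forward problem)

cond = np.linalg.cond(K)                 # exponential ill-posedness shows up here
```

The enormous condition number of `K` is what makes naive inversion of the discrete system hopeless and motivates either regularization or the pseudo-dynamical treatment discussed above.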

### (b) Example 2. Parameter identification of a beam: a nonlinear inverse problem

We discretize a planar Euler–Bernoulli beam using standard beam elements involving Hermitian shape functions and two d.f. (displacement and rotation) per node. The skeletal structural model is adopted with element-wise constant material property distributions. Denoting **α** as the damage index vector, the discrete forward problem is

(4.3) **K**(**α**)**U** = **f**.

The element damage index *α*_{e}, with 0≤*α*_{e}≤1 and ‘e’ referring to the *e*th element, quantifies the local reduction in the element stiffness. The goal is to determine the discontinuous damage index profile from a partial measurement of the deformation quantities. We use 50 beam elements and allow only the displacement components as the measured d.f. Results via the deterministic pseudo-dynamical approach, shown in figure 6, demonstrate that the damage profile is well reproduced and quantified. The sensitivity of the reconstruction to the time step is shown in figure 6*b*. We observe that, even with a very large step size (e.g. 10^{5}), the algorithm correctly identifies the damage zone in the structure.

Now, we try the DEnKF-based stochastic approach on the same problem. The result for *N*=100 is shown in figure 7*a*. A low-intensity noise (*σ*_{η}, denoting the standard deviation of the scalar Wiener increment characterizing the scalar component processes of **η**(*t*), is kept small) is used to ensure an inflating filtration and thus to avoid singularities in evaluating the gain matrix. We follow the same stopping rule as for the deterministic scheme. The results indicate that the pseudo-dynamic DEnKF identifies the location and reproduces the damage profile quite accurately. We also experiment with the pseudo-dynamic DEnKF involving coloured process noise (*b*=0.5, *l*=0.02, *N*=100). The results, shown in figure 7, bring out a significantly better performance of the DEnKF using coloured noise in terms of the accuracy of reconstruction. Results in figure 8, which show the DEnKF-based reconstruction with *N*=500 along with that via the deterministic pseudo-dynamic scheme, are indicative of a superior performance of the DEnKF over the deterministic scheme.

## 5. Concluding remarks

For inverse problems involving statically recorded measurements, we have reported a pseudo-dynamical approach that combines deterministic Gauss–Newton (or QNM) and stochastic filtering strategies. Posing such inverse problems in the form of ODEs (or SDEs) enables the vast available resources of powerful numerical schemes for time marching, as well as filtering, to be brought to bear on the present context. While the proposed schemes bypass explicit regularization of the inverse operator, considerable regularization (or smoothing) is nevertheless achieved by tuning the time-step size (during time marching) or the characteristics of the fictitiously introduced process noise vector (in the stochastic filter). For nonlinear inverse problems, another important feature of the proposed method is the use of explicit or implicit forms of local linearization to construct the vector field of the pseudo-dynamical system. The resulting semi-analytical solutions, constructed through matrix exponentiation, may enable future research on stabilized Lie group-based algorithms for this class of inverse problems. It would also be interesting to implement the stochastic filtering scheme via variants of the geometric filters or PFs.

## Acknowledgments

D.R. and R.M.V. are thankful to the Council for Scientific and Industrial Research (CSIR), Govt. of India, for partially funding this research.

## Footnotes

- Received December 4, 2008.
- Accepted January 22, 2009.

- © 2009 The Royal Society