## Abstract

We describe a novel technique that we call tilt scanning interferometry to measure depth-resolved structure and displacement fields within semi-transparent scattering materials. The method differs significantly from conventional optical coherence tomography, in that only one wavelength is used throughout the whole measurement process. Temporal sequences of speckle interferograms are recorded while the illumination angle is tilted at constant rate. Fourier transformation of the resulting three-dimensional intensity distribution along the time axis reconstructs the scattering potential within the medium. Repeating the measurements with the object wave at equal and opposite angles about the observation direction results in two three-dimensional phase-change volumes, the sum of which gives the out-of-plane-sensitive phase volume and the difference between which gives the in-plane phase volume. From these phase-change volumes the in-plane and out-of-plane depth-resolved displacement fields are obtained. The theoretical framework for the technique is explained in detail and a practical optical implementation is described. Finally, results from proof-of-principle experiments involving a semi-transparent beam undergoing bending are presented.

## 1. Introduction

The ability to measure internal displacements and strains within a material has many potential applications in engineering and medical sciences. A broad range of methods has been developed in the last few decades, such as neutron diffraction (ND; Fitzpatrick & Lodini 2003; Hutchings *et al*. 2005), photoelastic tomography (Aben *et al*. 1986; Aben *et al*. 2004), phase contrast magnetic resonance imaging (PCMRI; Steele *et al*. 2000; Draney *et al*. 2002) and three-dimensional digital image correlation (DIC) using data acquired with X-ray computed tomography (Bay *et al*. 1999) and optical coherence tomography (OCT; Schmitt 1998; Fercher *et al*. 2003). Each technique has a restricted range of materials to which it can be applied: ND, for example, is suitable for polycrystalline metals; PCMRI requires significant water or fat content in the sample. For many technologically and medically important materials (optically scattering polymers, composites and biological tissues), the existing techniques are often either not applicable or else too insensitive. The lack of sensitivity arises because, in DIC, the displacement sensitivity is coupled to, and therefore limited by, the spatial resolution of the technique.

Recent promising developments in OCT technology sidestep this problem by making use of the phase information within the OCT signal, thereby decoupling the displacement sensitivity and depth resolution (Gastinger *et al*. 2003; Gülker & Kraft 2003; Ruiz *et al*. 2004, 2005). As a result, displacement sensitivity is improved by 2–3 orders of magnitude compared to the depth resolution of state-of-the-art OCT systems.

The resolution of depth in OCT is achieved through the use of multiple wavelengths which are present either simultaneously (Born & Wolf 1959; Dresel *et al*. 1992), leading to a narrow-gated region of modulating scatterers, or else sequentially. The latter case is known as wavelength scanning interferometry (WSI; Takeda *et al*. 1982; Fercher *et al*. 1995; Lexer *et al*. 1997; Kuwamura & Yamaguchi 1997; de Groot 2000) and relies on a depth-encoding frequency shift in the interference signal as the wavelength of the light source is swept through a narrow range (see figure 1*a*). While the WSI form of OCT offers a superior signal-to-noise ratio to standard OCT for displacement sensing, it suffers from a significant practical drawback: tunable light sources are expensive and difficult to operate without mode-hopping.

In this paper we present a different approach to measuring depth-resolved displacements within optically scattering materials, which is based on tilting the illuminating beam during the acquisition of the image sequences. This provides the necessary depth-dependent phase shifts that allow the reconstruction of the object structure and its internal displacements. The method is quite distinct from all the previous OCT literature, in that only a single wavelength is present throughout the entire data recording process, thereby considerably reducing the expense and complexity of the light source. The depth-encoding frequency shift can now be regarded as coming from the Doppler shift of the photons reflected from the tilting mirror in the object illumination beam path. This is shown schematically and in a simplified way in figure 1*b*, where different heterodyne frequencies are produced from scattering points lying at different depths within the material. As with the WSI version of OCT, the displacement sensitivity of this new technique, which we denote tilt scanning interferometry (TSI), is decoupled from the intrinsic depth resolution and is a few tens of nanometres at optical wavelengths. Extraction of depth and displacement information of non-transparent object surfaces has been previously reported using speckle contouring techniques based on source displacement (Rodríguez-Vera *et al*. 1992). TSI extends this capability to study depth-resolved displacement fields inside semi-transparent materials.

In §2, we provide a mathematical description of the technique following a ray-tracing approach to model the problem. We also explain how to obtain the scattering potential of the sample as well as depth-resolved displacement fields and introduce some important parameters of the technique, namely the depth resolution, gauge volume, depth range and displacement sensitivity. In §3, we present a proof-of-principle experiment based on a TSI approach and show encouraging results of depth-resolved displacements measured within a beam under three-point bending up to a depth of over 5 mm. We compare the measurements with finite-element predictions, discuss some error sources, and finally in §4 draw some conclusions.

## 2. Tilt scanning interferometry

### (a) Depth-dependent phase shift introduced by a tilting wavefront

Figure 2*a* shows a semi-transparent scattering material of refractive index *n*_{1} immersed in a medium of refractive index *n*_{0}, illuminated by a collimated beam of wavelength *λ* at an angle *θ* to the optical axis of the system. We assume that the surface is flat, though this restriction can be relaxed through a suitable extension of the analysis presented here. It is convenient to place a flat but microscopically rough opaque surface over the region *y*≤0 of the plane *z*=0, where the coordinate system (*x*, *y*, *z*) is as defined in figure 2*a*. This reference surface serves two purposes: first, it allows for correction of the nonlinearity of the tilting device; second, since it does not strain during the loading of the sample, it enables registration of the before- and after-load scans of the sample. We should emphasize, however, that it does not provide the reference wave for the interferometer, which rather is introduced by means of a separate beam splitter, BS. While the reference surface has the obvious drawback of obscuring the sample for some of the pixels in the field of view (FOV), it could, in principle, be dispensed with given a sufficiently well-calibrated tilting device.

The illumination beam is refracted at the object surface *z*_{1}(*x*, *y*) and reaches point F with coordinates (*x*, *y*, *z*) at an angle *θ*_{r} to the optical axis. Some of the light scattered at F travels vertically downwards, is recombined by BS with a reference wave derived from the same laser light source as the object beam, and is imaged onto a pixel lying within a two-dimensional photodetector array.

The phase difference between light scattered at F and a reference wavefront with phase *ϕ*_{r} can be expressed relative to the phase difference at point G, which lies on a rough opaque reference surface R at the origin (0, 0, 0), as expressed in equation (2.1), where we assumed that *z*≥*z*_{1}≥0. The phase differences due to the first, second and third terms between square brackets in equation (2.1) account for the corresponding optical paths, respectively. The random distribution of scattering centres within the material gives rise to a speckle phase distribution along *x* in this two-dimensional representation.

If the illumination angle *θ* changes linearly with time about the centre angle *θ*_{c}, i.e. as in equation (2.2), *T* being the time it takes to scan through the tilt angle Δ*θ* and *t* the time variable (−*T*/2≤*t*≤*T*/2), then the phase *ϕ*(*x*, *y*, *z*) will vary as in equation (2.3). In the last term between the square brackets the following relationship, derived from Snell's law of refraction, was used: equation (2.4). For convenience, we define a parameter *ξ* as in equation (2.5). Equation (2.3) can then be expressed in terms of temporal frequencies as in equation (2.6).

(2.7)

*f*(0, 0, 0) is a carrier frequency due to the rigid body translation or piston term of wavefront GA as it tilts around an axis perpendicular to the plane of figure 2*a*, and is zero if that axis passes through point G. The frequency of the second term in equation (2.7) varies linearly with *x*, whereas *f*_{z1}(*y*, *z*_{1}) accounts for the distance from the object surface to the reference surface, *z*_{1}, and *f*_{z}(*y*, *z*−*z*_{1}) for the depth *z*−*z*_{1} of scattering points within the sample. The last two terms can be interpreted as depth-encoding heterodyne frequency shifts due to the Doppler shift of the tilting beam. *f*(*x*, *y*, *z*) in equations (2.6) and (2.7) represents the absolute modulation frequency produced by interference between light coming from point F at position (*x*, *y*, *z*) within the object and the plane reference wavefront with phase *ϕ*_{r} coming from the recombining beam splitter.

At *z*=*z*_{1}, the modulation frequency is that associated with point D at position (*x*, *y*, *z*_{1}) on the object surface in figure 2*a*. If we put *z*=*z*_{1}=0, we obtain the frequency associated with point B lying at (*x*, 0, 0) on the reference surface R in figure 2*a*.

### (b) Extraction of the scattering potential

The scattering potential of the sample can be extracted from the Doppler shifts by effectively mapping depth from frequency. In what follows, we will assume, as in standard OCT, that the contributions from multiple scattering within the material can be neglected. The interference between light coming from all the scattering points along the illumination path in figure 2*a* and the reference wavefront *ϕ*_{r} gives rise to an intensity signal modulated with multiple frequencies, as described by equation (2.8). While the first term on the right-hand side of equation (2.8) represents the dc component of the reference beam, the second term corresponds to the modulation due to interference between the reference beam and light scattered within the material. The integration limit *z*_{max} represents either the object back surface, the maximum penetration depth (which depends on the absorbance of the material and the laser power) or the depth range of the system (as discussed in §2*d*), whichever is the minimum. The double integral in the third term is due to cross-interference between light coming from within the object and contributes to the dc component and low-frequency components in the interference signal.

The frequency of each term on the right-hand side of equations (2.6) and (2.7) is, in general, dependent on *θ* and therefore changes during the course of the scan. We assume for now that Δ*θ* is small enough so that the resulting frequency shifts can be neglected.

A one-dimensional Fourier transform of signal *I*(*x*, *y*, *t*) (the light intensity measured at a single pixel) along the time axis gives rise to a spectrum as shown schematically in figure 2*b*. Any given pixel either sees the reference surface or the sample, but not both; this figure can therefore be interpreted as a top view of the spectrum over all *y* onto the (*f*, *x*) plane, i.e. as a superposition of the spectra corresponding to the reference surface R and the interior of the sample. There is a dc peak at *f*=0 for all *x*, another peak at *f*(*x*, 0) corresponding to the reference surface R, and a band associated with the object. The leading edge at *f*(*x*, *z*_{1}) generally shows the highest amplitude of the band due to the refractive index change at the surface of the sample. For higher frequencies, the amplitude decreases due to absorption and multiple scattering within the material. The position of the peaks is therefore associated with the internal structure of the object and its position relative to the reference surface, whereas their amplitudes are related to the degree of scattering or reflection coefficient at each point within the object or at the reference surface.

Usually, the spectrum (where ∼ indicates a Fourier transformed variable) is obtained through the Fourier transform of the product of the intensity signal *I*(*x*, *t*) with a window function *W*(*t*). In this way, the Fourier transform of *I*(*x*, *t*) is convolved in the frequency domain with the Fourier transform of the window function, . A Hanning window was used in this work in order to reduce the cross-talk between secondary peaks that would be present if a rectangular window, or equivalently no window at all, were used.
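The benefit of the window choice can be seen in a short numpy sketch (all frequencies here are illustrative and unrelated to the experimental values): a weak spectral component near a strong one is buried under the sidelobes of a rectangular window but survives a Hanning window.

```python
import numpy as np

N = 256
n = np.arange(N)
# strong tone midway between FFT bins (worst-case leakage) plus a weak tone
# on an exact bin; frequencies are illustrative, not the experimental ones
strong = np.cos(2 * np.pi * 32.5 * n / N)
weak = 0.01 * np.cos(2 * np.pi * 48 * n / N)
sig = strong + weak

rect = np.abs(np.fft.rfft(sig))                  # rectangular (no) window
hann = np.abs(np.fft.rfft(sig * np.hanning(N)))  # Hanning window

# far from both tones, the rectangular spectrum is dominated by leakage
# from the strong tone, while the Hanning spectrum is close to zero
print(rect[60], hann[60])
```

The rectangular window's sidelobe floor at bin 60 is comparable to the weak tone itself, whereas the Hanning sidelobes decay fast enough for the weak peak to stand clear of them.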

The position of the object surface relative to the reference surface R, *z*_{1}, is proportional to *f*_{z1}, which is the frequency difference between the spectral peak due to scattering at the sample surface and that due to the reference surface. This difference is not present in a spectrum from a single pixel, but rather is calculated from two or more pixels imaging the reference and sample at the same *x*-value (i.e. from the same column of the photodetector array), so that all other terms on the right-hand side of equation (2.7) are common to both. From the third term of equation (2.7), we obtain equation (2.9).

For simplicity, the following analysis will be limited to the case where *z*_{1}(*x*) is a constant. For objects with a surface of arbitrary shape, the refracted angle *θ*_{r} at point F would depend on the coordinates *x*, *y*, *z* and on the angle between the incident illumination beam and the normal to the object surface at the point where the ray that ends at F intersects the surface. In this case, the expression for the optical phase difference *ϕ*(*x*, *y*, *z*) in equation (2.1) would include extra terms. This case can be simplified in two different ways:

1. *Experimentally*, by immersing the object in a cell filled with index-matching fluid. This approach is equivalent to having a flat object surface which is parallel to the reference surface R.
2. *Numerically*, by solving the refraction problem in two stages. The problem of an object with a flat surface parallel to the reference surface is solved first, and the refraction effects due to a curved (or otherwise arbitrary) surface are then evaluated to correct the first approximation.

For the initially flat samples used in the experiments presented here, such corrections were relatively minor and are neglected for the rest of the paper.

Once *z*_{1} has been evaluated through equation (2.9), the position *z* of a scattering point underneath the object surface can be obtained from equation (2.10).

The difference between the terms in square brackets (as seen in figure 2*b*) depends on the refractive indices of the material and surrounding medium and on the illumination angle through the parameter *ξ*.

The spectral bandwidth Δ*f* associated with a thickness Δ*z* within the object can be obtained from equation (2.10) as equation (2.11). This is illustrated in figure 3 for an object of constant thickness and different refractive index at different values of *x*. The spectrum looks narrower for the material with the higher refractive index *n*_{2}, for which *ξ* is lower.

The scattering potential is obtained simply by a mapping of the modulation amplitude from spectrum coordinates (*x*, *y*, *f*) into spatial coordinates (*x*, *y*, *z*) through the relationship between frequency and position shown in equation (2.10).
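This mapping can be illustrated with a small numerical sketch of the signal at one pixel. The linear frequency-to-depth relation used below, *f* = *f*_{surf} + *ξ*Δ*θ*(*z*−*z*_{1})/(*λT*), is our reading of equations (2.6)–(2.10), and the values of *ξ*, *T* and the carrier frequency are assumptions loosely based on the parameters given later in §3:

```python
import numpy as np

# Illustrative forward model of the TSI signal at one pixel: each scatterer
# at depth z - z1 below the surface modulates the intensity at a frequency
# f = f_surf + xi*dtheta*(z - z1)/(lam*T).  This linear frequency-depth
# mapping is our reconstruction of equations (2.6)-(2.10); xi = 0.41 is an
# assumed value of the refraction parameter for theta = 45 deg, n1 = 1.4.
lam, T, dtheta, xi = 532e-9, 67.0, 0.0048, 0.41
N = 480                               # frames, as in section 3
t = np.linspace(-T / 2, T / 2, N, endpoint=False)

f_surf = 1.0                          # carrier + surface frequency (Hz), assumed
depths = np.array([0.0, 2e-3, 5e-3])  # scatterer depths z - z1 (m)
amps = np.array([1.0, 0.6, 0.4])      # scattering amplitudes (arbitrary)
freqs = f_surf + xi * dtheta * depths / (lam * T)

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, depths.size)  # random speckle phases
I = 1.0 + sum(a * np.cos(2 * np.pi * f * t + p)
              for a, f, p in zip(amps, freqs, phases))

# Fourier transform along time with a Hanning window, then map the
# frequency axis back to depth to recover the scattering profile
spec = np.abs(np.fft.rfft((I - I.mean()) * np.hanning(N)))
f_axis = np.fft.rfftfreq(N, d=T / N)
z_axis = (f_axis - f_surf) * lam * T / (xi * dtheta)

z_est = z_axis[np.argmax(spec)]       # strongest peak -> surface scatterer
```

The recovered spectrum shows one peak per scatterer, and relabelling the frequency axis as depth gives the one-dimensional scattering profile directly.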

While a single illumination direction is sufficient to obtain the scattering potential, multiple directions can be used to improve its estimation. A symmetric lateral illumination configuration has the advantage of providing spectra with the same bandwidth for a given object thickness, assuming a uniform refractive index distribution. If the left and right illumination angles *θ*_{c}=*θ*_{L} and *θ*_{c}=*θ*_{R} are set so that *ξ* is a maximum, then small differences between *θ*_{L} and *θ*_{R} will produce negligible differences between *ξ*_{L} and *ξ*_{R}, thus relaxing alignment requirements. A more important consequence of this choice of *θ*_{L} and *θ*_{R} is that the depth resolution of the system is also optimized. Figure 4 shows the variation of *ξ* with *θ* for a range of typical refractive index ratios. The variation with refractive index means that the depth resolution outside the object is better than that within it.
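As an illustration, the following sketch locates the maximum of *ξ*(*θ*) numerically. The functional form of *ξ* used here is an assumption on our part (it is not reproduced in the text above, but it is consistent with the depth resolutions quoted in §3); the point of the example is the flatness of the maximum, which is what relaxes the alignment requirements:

```python
import numpy as np

def xi(theta, n1, n0=1.0):
    """Assumed form of the refraction parameter (illustrative only):
    xi = sin(theta)*cos(theta)/sqrt((n1/n0)**2 - sin(theta)**2)."""
    s = np.sin(theta)
    return s * np.cos(theta) / np.sqrt((n1 / n0) ** 2 - s ** 2)

theta = np.linspace(0.01, np.pi / 2 - 0.01, 5000)
x = xi(theta, n1=1.4)
theta_opt = theta[np.argmax(x)]

# the maximum is broad: a 1 degree error in one illumination angle changes
# xi by well under 1 percent, so xi_L and xi_R stay effectively equal
dx = abs(xi(theta_opt + np.deg2rad(1.0), 1.4) - x.max()) / x.max()
```

With this assumed form and *n*_{1}=1.4, the optimum lies a few degrees above 45°, and *ξ* at 45° is within a few per cent of the maximum, consistent with the remark in §3*b* that the experimental angle was close to optimal.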

### (c) Depth-resolved displacements

The position of scattering points within the object will change according to the mechanical properties of the material and the loads applied to it. The value of *ϕ* at a particular voxel (*x*, *y*, *z*) in the specimen will reflect that change and can be calculated from the real and imaginary parts of the spectrum at a frequency given by equation (2.10).

For the system shown in figure 5, displacements in the *y* direction cause no phase change due to the illumination geometry. If point F at (*x*, *y*, *z*) moves to a new position F′ with coordinates (*x*+*u*, *y*, *z*+*w*) after deformation (see figure 6), then from equation (2.1) the phase difference after displacement can be written as in equation (2.12).

While a single illumination direction is sufficient to extract the scattering potential, at least two illumination directions are essential to determine the in-plane (*x*) and the out-of-plane (*z*) displacement components. For right and left lateral illumination, where *θ*_{R}>0 and *θ*_{L}<0, respectively, the phase difference due to object deformation is given by equation (2.13), where *w*(*x*, *y*, *z*_{1}) is the out-of-plane displacement of point D at (*x*, *y*, *z*_{1}) to D′ at (*x*, *y*, *z*_{1}+*w*(*z*_{1})). By choosing *θ*_{R}=−*θ*_{L}=*θ*, the in-plane (*x*) and out-of-plane (*z*) phase difference components are obtained as equations (2.14) and (2.15).

The first term within the {…} in equation (2.15) corresponds to the phase change due to a displacement of the object surface relative to the reference surface, while the second one is due to the displacement of point F to F′ along the optical axis. The in-plane and out-of-plane displacements can be extracted from equations (2.14) and (2.15) as equations (2.16) and (2.17). Putting *z*=*z*_{1} into equation (2.17) gives equation (2.18), and substitution of equation (2.18) into equation (2.17) then leads to equation (2.19). This suggests that in the case of a sample with a lay-up of different refractive indices, or an index gradient in the depth dimension, the displacements evaluated for shallow slices should be used to correct the displacements at deeper ones. In the case of rigid body translation of the sample along +*z*, equation (2.15) reduces to the standard expression for out-of-plane sensitivity (Huntley 2001).
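A minimal numerical sketch of the sum/difference decomposition is given below. The sensitivity factors are the standard ones for symmetric dual-beam illumination of surface points in air (*n*_{0}=1) and are assumptions on our part; inside the material, equations (2.16)–(2.19) introduce additional refraction-dependent factors not modelled here:

```python
import numpy as np

lam = 532e-9
theta = np.deg2rad(45.0)

def phase_changes(u, w):
    """Assumed forward model for surface points in air: each illumination
    direction contributes a sensitivity vector k_ill - k_obs, giving
    dphi = (2*pi/lam)*(+/- sin(theta)*u + (1 + cos(theta))*w)."""
    dphi_R = 2 * np.pi / lam * (np.sin(theta) * u + (1 + np.cos(theta)) * w)
    dphi_L = 2 * np.pi / lam * (-np.sin(theta) * u + (1 + np.cos(theta)) * w)
    return dphi_L, dphi_R

def displacements(dphi_L, dphi_R):
    """Invert the model: the difference of the two phase-change volumes is
    in-plane sensitive, the sum out-of-plane sensitive."""
    u = lam * (dphi_R - dphi_L) / (4 * np.pi * np.sin(theta))
    w = lam * (dphi_R + dphi_L) / (4 * np.pi * (1 + np.cos(theta)))
    return u, w

# round trip with displacements of a few tens of nanometres
u0, w0 = 30e-9, 50e-9
u1, w1 = displacements(*phase_changes(u0, w0))
```

The round trip recovers the imposed displacements exactly, confirming that the sum and difference channels are decoupled for this symmetric geometry.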

### (d) Gauge volume, depth range and displacement sensitivity

In a symmetric lateral illumination setup, the measurement volume is defined by the intersection of the illumination beams during the whole tilt scan (see figure 7). The depth resolution can be defined as the minimum distance between surfaces inside the measuring volume whose corresponding interference signals can be fully resolved in the frequency domain. The usual resolution criterion is that the frequency difference δ*f* between two neighbouring peaks has to be at least twice the distance from their centres to their first zero, which is equivalent to the width of the central lobe. If a window function *W*(*t*) is used, apart from reducing the leakage through secondary lobes, it has the effect of broadening the spectral lines. A rectangular window of duration *T*, for example, results in a sinc function of width δ*f*=2/*T*, while a Hanning window has a spectral width of δ*f*=4/*T*. From equation (2.11), the depth resolution is therefore given by equation (2.20), where *γ*=2, 4 for rectangular or Hanning windows, respectively. Although the rectangular window has superior depth resolution, this is accompanied by the undesirable presence of large secondary lobes, which may interfere with other peaks, leading to phase errors. This effect is significantly reduced if a Hanning window is used instead.
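As a numerical check, the sketch below evaluates a reconstructed form of equation (2.20), δ*z* = *γλ*/(*ξ*Δ*θ*), with an assumed value *ξ*≈0.41 for *θ*=45° and *n*_{1}=1.4. It reproduces both depth resolutions quoted later in the paper (about 1.1 mm for the experimental scan of §3*b* and about 50 μm for a 100 mrad scan):

```python
def depth_resolution(lam, dtheta, xi, gamma=4):
    """delta_z = gamma * lam / (xi * dtheta): our reading of equation
    (2.20), with gamma = 2 (rectangular) or gamma = 4 (Hanning window)."""
    return gamma * lam / (xi * dtheta)

xi_45 = 0.41  # assumed refraction parameter for theta = 45 deg, n1 = 1.4

dz_expt = depth_resolution(532e-9, 0.0048, xi_45)  # experimental scan, section 3
dz_fast = depth_resolution(532e-9, 0.1, xi_45)     # 100 mrad scan, section 3b
```

That a single assumed *ξ* reproduces both quoted figures lends some support to this reading of equation (2.20), but the expression remains a reconstruction rather than the paper's own formula.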

If the layout in figure 1 is imaged by a telecentric optical system with magnification *M*, then the lateral resolutions δ*x* and δ*y* will depend on *M* and the spatial resolution of the sensor used, in the same way as for a conventional camera. The lateral and depth resolution define a gauge volume of size δ*x*×δ*y*×δ*z* within the measurement volume, as shown in figure 6. All scattering points within the gauge volume centred at (*x*, *y*, *z*) contribute to the interference signal at position (*Mx*, *My*) in the image plane, modulated with frequencies within the corresponding range. After deformation, the scattering points initially inside the gauge volume centred at (*x*, *y*, *z*) move to a new position, indicated with a dashed line rectangle in figure 6. *w*(*x*, *y*, *z*) then represents the out-of-plane component of displacement vector **d**, which is an average displacement of all scattering points inside the intersection of the solid line and the dashed line rectangles.

From equation (2.3), the total phase change Δ*ϕ* introduced in the wavefront coming from point F, by a tilt angle Δ*θ* introduced in the illumination angle in a time *T*, is given by equation (2.21). This phase change introduces Δ*ϕ*/2π modulation cycles in the interference intensity signal. In order to comply with the Shannon sampling condition, the number of samples required has to be at least twice the number of cycles, as expressed in equation (2.22).

It follows that for a given central illumination angle and tilting range, the number of samples *N*_{f} determines the maximum depth a slice can be within the material in order to be able to determine its displacements, assuming the scattering, absorption and laser coherence length do not set a lower limit. This distance is known as the depth range, and is given by equation (2.23).

If we think in terms of frequencies, it is easy to see that the offset frequency (equations (2.6) and (2.7)) pushes the spectrum towards the Nyquist frequency, thus reducing the depth range. In order to reduce this wasted bandwidth, *f*(*x*, *y*, 0) should be as low as possible, but without interfering with the cross-interference peaks close to the dc.
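The Shannon argument can be turned into a quick estimate of the depth range. The expression below is our reconstruction of equations (2.21)–(2.23) and deliberately ignores the bandwidth consumed by the offset frequency, so it should be read as an upper bound:

```python
# Depth range from the Shannon condition (our reconstruction of equations
# (2.21)-(2.23)): a scatterer at depth z - z1 modulates roughly
# xi*dtheta*(z - z1)/lam cycles over the scan, and N_f samples can
# represent at most N_f/2 cycles.  The offset (carrier) frequency consumes
# part of this bandwidth and is ignored here; xi = 0.41 is assumed.
lam, dtheta, xi = 532e-9, 0.0048, 0.41
N_f = 480

dz_range = N_f * lam / (2 * xi * dtheta)  # maximum resolvable depth (m)
```

With the §3 parameters this bound comfortably exceeds the 7.8 mm beam thickness, consistent with the statement there that *N*_{f} was chosen to guarantee a sufficient depth range; the usable range in practice is smaller once the carrier offset is accounted for.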

The displacement resolution *σ*_{z} (sometimes called sensitivity of the technique) is decoupled from the depth resolution δ*z* and only depends on the wavelength of the laser source and the degree of speckle decorrelation. For out-of-plane sensitivity, as is the case in figures 2*a* and 5, it is typically better than *λ*_{c}/30.

## 3. Proof-of-principle experiments

### (a) Experimental setup

In order to test the technique described in §2, we carried out proof-of-principle experiments in which the tilt scanning technique was used to measure displacement fields within a partially transparent scattering sample. The optical setup is shown in figure 5. A collimated continuous wave (CW) beam of wavelength *λ*=532 nm and power of approximately 100 mW is steered by mirror TM mounted on a tilting stage. Left and right illumination beams are obtained with the aid of a cube beam splitter CBS and mirrors M_{1} and M_{2}. TM is tilted by means of a piezoelectric lead–zirconate–titanate (PZT) actuator controlled by a ramp generator RG.

The imaging system consists of an imaging lens L_{1}, field and relay lenses L_{2} and L_{3}, respectively, wedge beam splitter WBS and CMOS high speed camera C (HCC-1000 Vosskühler). WBS is used with the reflective surface closer to the object, to avoid multiple reflections of the object beam within the glass thickness. This is crucial in order to avoid multiple peaks in the spectrum corresponding to the same depths in the sample. Preliminary experiments with an external BS in front of L_{1} resulted in multiple reflections of the object beam, causing mixing of the spatial information from the interference intensity signal. WBS serves to recombine reference and object beams onto the camera sensor. A smooth on-axis reference wave was used rather than a second speckled wave in order to maximize the signal-to-noise ratio. The purpose of lenses L_{2} and L_{3} is to place the WBS between the imaging lens and the camera sensor. A screen S by the CBS serves for alignment purposes: if the object is replaced by an optical flat aligned with its normal parallel to the optical axis, then the left and right illumination beams return after reflection off (or transmission through) the optical flat, via M_{2}, M_{1} and CBS, and interfere at S. When the left and right illumination beams subtend the same angle to the optical axis, the fringes at S are nulled. This is important because equations (2.14) and (2.15) assume that the angle is the same for both illumination directions.

Figure 7 shows a close up of the illumination beams in the region surrounding the object, before and after tilt of the beams by an angle Δ*θ*. It can be seen that the measurement volume is the intersection of the beams at the beginning and end of the scan range of the tilt angle. Scattering points within this volume give rise to a continuously modulated interference signal throughout the whole tilt scanning sequence.

The test object was a beam manufactured in-house with ‘water clear casting epoxy resin’ (from CFS Fibreglass Partnership), seeded with a small amount of titanium oxide white pigment to increase the scattering within the material. The front and back surfaces of the beam were polished to reduce superficial scattering and the back face was painted black to eliminate the scattering produced by the reflected component due to internal reflection. Under a three-point bending test, the beam was loaded with a ball-tipped (6 mm diameter) micrometer against two cylindrical rods (12 mm diameter), as shown in figures 7 and 8. A rough reference surface R was placed just in front of the object, so as to cover approximately 20% of the lower portion of the area imaged. As described earlier, this served to compensate for the shift of the peaks along the horizontal axis *x* and to allow for correction of the nonlinear response of the tilting stage at TM.

The measurement sequence was as follows:

1. The beam was preloaded by displacing the spherical indenter at the point of contact by −0.500 mm to bring the beam into position against the support rods.
2. Two ‘reference-state’ three-dimensional data volumes *I*_{L1}(*x*, *y*, *t*) and *I*_{R1}(*x*, *y*, *t*) were recorded sequentially with the left and right illumination beams (with the right and left beams blocked, respectively, by means of a screen between CBS and M_{1} and between CBS and M_{2}, respectively).
3. The pin was displaced 40 μm along the −*z*-axis to bend the beam.
4. As in step 2, two ‘loaded-state’ three-dimensional data volumes *I*_{L2}(*x*, *y*, *t*) and *I*_{R2}(*x*, *y*, *t*) were recorded with the left and right illumination beams.

The main measurement parameters were set as follows: camera exposure time ; framing rate *F*_{R}=7.16 fps; acquired frames *N*_{f}=480 frames; acquisition time ; spatial resolution of FOV: 256×256 pixels; size of FOV: 7.2×7.2 mm^{2}; driver ramp voltage range *V*_{pp}=0–80 V; tilt angle scanning range Δ*θ*=0.0048 rad; illumination angle *θ*=45°; refractive indices n_{0}=1 and *n*_{1}=1.4; laser wavelength *λ*=532 nm; laser power per beam: ∼35 mW CW.

### (b) Results and discussion

From equation (2.5), the refraction parameter *ξ* is close to its maximum for the corresponding illumination angle and refractive index. The resulting depth resolution of the system was therefore δ*z*∼1.1 mm, which also sets the depth dimension of the gauge volume. The number of frames *N*_{f} was adjusted to guarantee a depth range Δ*z* bigger than the object depth *d*=7.8 mm.

Figure 9 shows the interference intensity signal from a pixel imaging part of the reference surface (bottom), and from another pixel imaging part of the epoxy resin beam (top). The former shows a single frequency and corresponds to the signal coming from a single depth, while the latter shows a more complex wave train due to a mixing of frequencies coming from within the thickness of the sample. Figure 10 shows the magnitude spectrum (also the scattering potential) for left and right illumination obtained along the horizontal axis *x*, averaged along the columns to reduce the noise content. These correspond closely to the scheme shown in figure 2*c*. The peak due to the reference surface and the band corresponding to scattering points through the whole thickness *d* of the beam can be clearly seen. Figure 10 should be interpreted as a ‘top view’ in which both the reference surface and the beam cross-section are visible.

The noisy appearance in the right-hand side of figure 10*a* is an artefact of the linearization routine we had to use to correct for the nonlinearity of the tilting stage. The effect of the linearization routine on the reference surface peak is shown in figure 11. The reference surface has no appreciable thickness, so it should appear in the spectrum as a narrow peak. The nonlinear tilt scan causes an asymmetric broadening of the spectral lines. We have numerically corrected for this effect by using the signal from the reference surface, as it does not change after sample deformation. First, the intensity signal *I*(*x*, *y*, *t*), with *x* fixed and *y* corresponding to coordinates in the reference surface, is evaluated along the time axis. Using the Takeda method of phase evaluation (Takeda *et al*. 1982), the phase shift *ϕ*(*x*, *y*, *t*) introduced by the tilting mirror is obtained from *I*(*x*, *y*, *t*). In an ideal system this should be a linear function, but it looks like a slanted ‘S’ due to our nonlinear tilting stage. A polynomial is then fitted to the inverse function *t*(*x*, *y*, *ϕ*) to get an analytical expression used to interpolate *t*(*x*, *y*, *ϕ*) and resample it at equally spaced phase shift values *ϕ*′. These new phase shift values *ϕ*′ lead to non-equally spaced sampling times *t*′. Finally, the original signal *I*(*x*, *y*, *t*) is interpolated and resampled at times *t*′ to get a linear phase shift between consecutive samples. Owing to the frequency dependence on *x* (equation (2.6)), resampling times *t*′ are evaluated for each position *x* along the horizontal axis and used to linearize all signals *I*(*x*, *y*, *t*) in the corresponding column. This linearization considerably reduces the leakage between spectral components and improves the depth resolution.
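The linearization procedure can be sketched end-to-end on a synthetic signal. Everything below is illustrative (a single-depth signal and an invented ‘slanted S’ nonlinearity); the steps mirror the description above: Takeda-style phase extraction, a polynomial fit to the inverse function *t*(*ϕ*), and resampling at equally spaced phase increments:

```python
import numpy as np

# Synthetic single-depth signal swept with a nonlinear ('slanted S') phase
# ramp, mimicking an open-loop PZT tilting stage; all values illustrative.
N, f0 = 512, 60.0
t = np.linspace(0.0, 1.0, N, endpoint=False)
phi_true = 2 * np.pi * f0 * (t + 0.02 * np.sin(2 * np.pi * t))
I = 1.0 + np.cos(phi_true)

# Takeda method: keep the positive-frequency lobe, inverse transform,
# take the unwrapped angle to recover the (nonlinear) phase ramp
S = np.fft.fft(I - I.mean())
S[N // 2:] = 0.0
phi = np.unwrap(np.angle(np.fft.ifft(S)))

# Fit a polynomial to the inverse function t(phi) (normalized abscissa for
# conditioning) and resample the signal at times t' that give equally
# spaced phase increments
x = (phi - phi[0]) / (phi[-1] - phi[0])
coef = np.polyfit(x, t, deg=7)
t_lin = np.polyval(coef, np.linspace(0.0, 1.0, N))
I_lin = np.interp(t_lin, t, I)

# the corrected spectrum concentrates the energy into one narrow peak
w = np.hanning(N)
peak_raw = np.abs(np.fft.rfft((I - 1.0) * w)).max()
peak_lin = np.abs(np.fft.rfft((I_lin - I_lin.mean()) * w)).max()
```

After resampling, the smeared spectral line collapses back to a single sharp peak, which is the effect shown for the reference surface in figure 11.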

The bandwidth associated with the beam thickness (marked with arrows in figure 10*a*), through equation (2.11), corresponds to a beam thickness Δ*z*=7.46 mm. This is within 4% of the actual beam thickness *d*=7.8 mm.

A one-dimensional Fourier transform was performed along the time axis of the four data volumes *I*_{L1}, *I*_{R1}, *I*_{L2} and *I*_{R2}, allowing the calculation of the optical phase for each (*x*, *y*, *f*) coordinate in the conjugate spectrum-volumes. This resulted in two three-dimensional phase-change volumes Δ*ϕ*_{L}(*x*, *y*, *f*) and Δ*ϕ*_{R}(*x*, *y*, *f*) corresponding to each illumination direction, the sum of which gave the out-of-plane-sensitive phase-change volume and the difference between them gave the in-plane phase-change volume. The top row of figure 12 shows the wrapped in-plane phase-change distribution for different slices within the epoxy resin beam starting at the object surface *z*−*z*_{1}=0 mm (left) in steps of 1.74 mm down to *z*−*z*_{1}=5.22 mm (right). The wrapped out-of-plane phase-change distribution for the same depth slices are shown in the bottom row of figure 12.

The depth-resolved in-plane and out-of-plane displacement fields within the beam corresponding to the wrapped phase-change slices shown in figure 12 are presented in the top and bottom rows of figure 13, respectively. Phase unwrapping of each slice was carried out using the algorithm described in Buckland *et al*. (1995) as implemented in the Phase Vision Ltd MATLAB phase unwrapping toolbox. It can be seen that the gradient of the in-plane displacements reduces as we move from the front to the back surface of the beam, indicating the expected variation of the tensile state for the first front slices. However, the gradient does not reverse to show a compressive state for the slices behind the neutral axis at *z*−*z*_{1}=3.8 mm. As discussed later, this discrepancy is likely caused by phase changes due to refraction in the curved surface of the beam and possibly also due to stress-optic coupling. The out-of-plane displacements show different levels of bending as we approach the back surface from the front surface. The asymmetry of the distribution is produced by the position of the point of contact between the loading pin and the beam, which was approximately 2 mm below the horizontal symmetry axis of the beam. The last slice at *z*−*z*_{1}=5.22 mm starts to reveal detail of the local deformation around the point of contact, marked with a cross in figure 13.
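The step from wrapped phase-change slices to displacement maps can be sketched as below. For simplicity a basic row/column (Itoh) unwrapper stands in for the Buckland *et al*. (1995) algorithm used in the paper, and the sensitivity factors are the standard dual-beam formulas for symmetric illumination at ±*θ*; both are assumptions of this sketch.

```python
import numpy as np

LAM = 532e-9             # wavelength (m)
THETA = np.deg2rad(45)   # illumination angle about the observation direction

def unwrap2d(p):
    """Simple row-then-column (Itoh) unwrapper; real speckle data needs a
    noise-robust algorithm such as that of Buckland et al. (1995)."""
    return np.unwrap(np.unwrap(p, axis=0), axis=1)

def slice_displacements(dphi_in, dphi_out):
    """Convert wrapped in-plane / out-of-plane phase-change slices into
    displacement maps, assuming symmetric dual-beam sensitivity vectors:
    dphi_in = (4*pi/lam) u sin(theta), dphi_out = (4*pi/lam) w (1+cos(theta))."""
    u = LAM * unwrap2d(dphi_in) / (4 * np.pi * np.sin(THETA))         # in-plane
    w = LAM * unwrap2d(dphi_out) / (4 * np.pi * (1 + np.cos(THETA)))  # out-of-plane
    return u, w
```

Note that each slice is unwrapped independently, so the recovered displacements carry an unknown constant offset per slice, as mentioned in the discussion of figure 16.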

It is worth noting that the problems with the nonlinear scans described earlier arose from the use of a home-made open-loop tilting stage. Closed-loop PZT tilt stages with sub-microradian resolution that can scan a beam through 100 mrad in a fraction of a second are now available commercially. At an angle of incidence *θ* of 45°, a wavelength of 532 nm, *n*_{0}=1 and a tilt range of 100 mrad, the effective depth resolution given by equation (2.20) is 50 μm, equivalent to that provided in WSI by a tuning range of approximately 20 nm for a material with the same index of refraction. External cavity diode lasers in the visible or near IR are unable to provide such a wide range in a single sweep, requiring the use of expensive dye or Ti : sapphire lasers.
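The 50 μm figure is consistent with the experimental numbers quoted later (δ*z*∼1.1 mm for a 0.0048 rad scan) under the assumption, consistent with equation (2.20), that the depth resolution scales inversely with the tilt range:

```python
# Depth resolution scales inversely with the tilt range (assumption):
# the experiment achieved dz ~ 1.1 mm with a 0.0048 rad scan, so a
# 100 mrad closed-loop stage should reach roughly 50 um.
dz_exp = 1.1e-3       # experimental depth resolution (m)
dtheta_exp = 0.0048   # experimental tilt range (rad)
dtheta_new = 0.100    # tilt range of a commercial closed-loop stage (rad)
dz_new = dz_exp * dtheta_exp / dtheta_new
print(f"predicted depth resolution: {dz_new * 1e6:.0f} um")
```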

To study objects of arbitrary shape, index matching fluid can be used, as suggested in §2*b*, to eliminate distortions in the illumination angle due to refraction at the material interface. The matching fluid would also eliminate the problem due to interface deformation between a reference and a loaded state of the object.

### (c) Finite-element modelling

In order to validate the results from TSI, we performed a three-dimensional finite-element analysis of the geometry measured experimentally. Simulation of the displacement field distributions resulting from the contact interaction between two bodies requires, in general, the finite-element discretization of both the target surface (in this case the spherical indenter) and the contact surface (epoxy beam). In addition, it is necessary to include contact elements between the target and contact surfaces to adjust the level of interpenetration between them. Even though this modelling technique provides good predictions of displacements and stresses, it is computationally expensive, as it requires high levels of mesh refinement and nonlinear solution algorithms that dramatically increase the solution time required for a given problem. In this work a simpler, yet effective, technique was used. Given the large ratio between the moduli of elasticity of the steel indenter and the epoxy beam (*E*_{s}=270 GPa; *E*_{g}=2.41 GPa; *E*_{s}/*E*_{g}>100), the interaction between them can be simulated using the rigid-to-flexible contact approximation (Johnson 1985), in which the indenter is modelled as a single rigid element and the beam is treated as an elastic medium. This approximation dramatically reduced the solution times.

The general purpose finite-element software ANSYS was used. The target surface and the spherical indenter were treated in the original geometric configuration, and the motion of the entire surface was then controlled by the displacements imposed on a *pilot node* that represents the kinematics of the surface. Accordingly, the steel indenter was modelled using the 1-node element TARGE170 with standard spherical shape of radius 3 mm. The contact surface (the surface of the epoxy beam) was modelled using the 8-node element CONTA174, and the contact kinematics was governed by the pure Lagrange multiplier method (Bathe 1996), which prescribes zero penetration and zero slip when contact occurs. The epoxy beam was modelled using the 20-node isoparametric element SOLID95, which has three degrees of freedom per node: translations in the nodal *x*, *y* and *z* directions. The elastic modulus of the epoxy beam was measured using the three-point bending test (result reported above).

The main objective of the finite-element modelling was to extract the displacements occurring in the same FOV used during the experiments. In order to minimize the discretization error (the discrepancy between the discretized FE model and the mathematical model, considering the latter as exact), the FE mesh was refined inside the FOV using the regular refinement method described in Cook *et al*. (1989). The number of elements in this area was incremented iteratively until the absolute difference between the maximum out-of-plane displacements of two successive iterations was less than 1%.
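The refinement loop amounts to a standard convergence study, which can be sketched generically as follows. Here `solve_model` is a hypothetical stand-in for a call into the FE package; the starting element count and growth factor are illustrative assumptions.

```python
def refine_until_converged(solve_model, n_start=1000, growth=1.5,
                           tol=0.01, max_iter=20):
    """Regular h-refinement loop: grow the element count inside the FOV
    until the peak out-of-plane displacement changes by less than tol (1%)
    between successive meshes.

    solve_model(n) : placeholder for an FE solve returning max |w|
                     for a mesh with n elements in the region of interest.
    """
    n = n_start
    w_prev = solve_model(n)
    for _ in range(max_iter):
        n = int(n * growth)
        w = solve_model(n)
        if abs(w - w_prev) / abs(w_prev) < tol:
            return n, w   # converged mesh size and displacement
        w_prev = w
    raise RuntimeError("mesh refinement did not converge")
```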

The finite-element representation of the indenter–beam system is illustrated in figure 14. The epoxy beam had working volume dimensions of 75×22.8×7.6 mm^{3} between the two cylindrical supports shown in figure 8. It is convenient to define a local coordinate system (*x*′, *y*′, *z*′) whose origin is at the bottom of the left-hand cylinder. In this system, the boundary conditions corresponding to the cylindrical supports are specified as follows: *u*=*w*=0 along the line formed by the intersection of the planes *x*′=0 and *z*′=0; and *w*=0 along the line formed by the intersection of the planes *x*′=75 mm and *z*′=0. The initial preload of the sample was sufficiently large to cause large deformations. Under these conditions the structure's changing geometric configuration can cause it to respond nonlinearly, an effect referred to as geometric nonlinearity. Two static finite-element models were therefore solved to simulate the experimentally measured incremental displacement fields. In the first, the indenter displacement was specified as *w*=0.5 mm, and in the second as *w*=0.504 mm. Both models were solved consecutively (Dell Precision M70, 1.86 GHz, 512 MB RAM). The difference in the displacement fields (*u*, *v*, *w*) between the two models was compared with the experimental incremental displacement fields over the experimental observation window (7.2×7.2 mm^{2}, centred at *x*′=36 mm, *y*′=8.85 mm). Figure 15 shows the in-plane and out-of-plane displacements within the beam as predicted by finite-element analysis. This figure can be compared directly with figure 13, and it is observed that the predicted behaviour is consistent with the measured results, even to the extent of showing the asymmetry of the out-of-plane displacement field with respect to the horizontal axis of the beam.
Figure 16 shows four horizontal profiles of the in-plane and out-of-plane displacements—measured experimentally and predicted with the finite-element model—at a height equal to that of the pin loading contact point. The experimental two-dimensional phase maps were unwrapped independently of one another and as a result the phase offset between slices is unknown. For clarity, arbitrary offsets were therefore added to both the experimental and numerical results at different depths within the beam. Agreement between the experimental and predicted out-of-plane displacement fields is excellent. The agreement is somewhat less convincing for the in-plane displacements, a result that we attribute to the load-induced curvature of the sample surface. A relatively simple algorithm to correct for this effect was developed, which produced significantly closer correspondence between the experimental and numerical in-plane profiles, but degraded somewhat the agreement between the corresponding out-of-plane profiles. No account is taken of load-induced spatial variations in refractive index within the beam, and this may be a worthwhile avenue for further development of the technique.

## 4. Conclusions

We have proposed a novel technique that we call tilt scanning interferometry to measure three-dimensional depth-resolved displacement fields within semi-transparent scattering materials. A simple proof-of-principle experiment showed promising results even with a home-made open-loop PZT-driven tilting stage. Among the system parameters, the depth resolution is the most important from a practical point of view, and in our experiment it was δ*z*∼1.1 mm for a tilting range of 0.0048 rad. By means of TSI, the scattering potential within the sample can be reconstructed in a three-dimensional data volume as in scanning OCT. Most importantly, in-plane and out-of-plane displacements can be measured within the object under study with sub-wavelength sensitivity (decoupled from the depth resolution) and up to a depth exceeding 5 mm. This sensitivity is some four to five orders of magnitude better than the intrinsic depth resolution and could not therefore be achieved by image correlation techniques based on the magnitude of the spectrum.

## Acknowledgments

We are grateful to the Engineering and Physical Sciences Research Council for financial support. J.M.H. also acknowledges support from The Royal Society and Wolfson Foundation in the form of a Royal Society-Wolfson Research Merit Award.

## Footnotes

- Received October 26, 2005.
- Accepted January 27, 2006.

- © 2006 The Royal Society