This paper describes the formulation of a maximum-likelihood estimate of damage location for guided-wave structural health monitoring (GWSHM) using a minimally informed, Rayleigh-based statistical model of scattered wave measurements. Also introduced are two statistics-based methods for evaluating localization performance: the localization probability density function estimate and the localizer operating characteristic curve. Using an ensemble of measurements from an instrumented plate with stiffening stringers, the statistical performance of the so-called Rayleigh maximum-likelihood estimate (RMLE) is compared with that of seven previously reported localization methods. The RMLE proves superior in all test cases, and is particularly effective in localizing damage using very sparse arrays consisting of as few as three transducers. The probabilistic basis used for modelling the complicated wave scattering behaviour makes the algorithm especially suited for localizing damage in complicated structures, with the potential for improved performance with increasing structure complexity.
This study considers ultrasonic guided-wave structural health monitoring (GWSHM) with sparse transducer arrays (Michaels 2008a). In GWSHM, a permanent network of transducers excites and senses elastic waves in a structure in order to identify damage through the detection of wave scattering. Here, sparse arrays are defined as arrays in which the transducers are too far separated from one another to use wave phase information through phased-array beamforming (Holmes et al. 2005). Sparse networks provide higher coverage with fewer transducer elements at the cost of imaging resolution.
Most current GWSHM algorithms begin with ad hoc ideas based on intuition and approaches taken from the radar field and are then modified iteratively to improve performance for specific applications. This paper attempts to follow a more fundamental and structured approach by first establishing a basic statistical model of the active sensing process and then deriving an asymptotically optimal estimate of the damage parameters by maximum-likelihood estimation.
Damage assessment can be broken into three components: detection, localization and characterization. An optimal test (in the case of detection; Flynn et al. in press) or estimator (in the case of localization and characterization) is the one that minimizes the expectation of some function of the error. Since the forms of the error for these three components are unique, their optimal tests or estimators are necessarily different. With that in mind, this paper considers damage localization in isolation, so that the estimator is derived according to the condition that damage is present somewhere in the structure. This focused treatment will more clearly develop the basic principles of the presented approach while promoting its application to the tasks of detection and characterization in future studies.
2. Guided-wave structural health monitoring process
The GWSHM procedure considered here assumes a plate-like structure instrumented with a permanent network of omni-directional, piezoelectric transducers that can both actuate and sense elastic waves. One at a time, each actuating transducer excites a high-frequency, narrow-band, short-duration mechanical pulse. Each resulting wave travels radially from the actuating transducer, interacts with the structure and the damage (if present) through reflection and scattering, and is both detected and digitally recorded by all the sensing transducers. Each transducer takes turns serving as both actuator and sensor in order to produce a vector of time histories, ym, for each of M possible actuator–sensor transducer pairs, with elements ym[n], n=1,…,N. These may be assembled into a complete N×M measurement matrix Y.
While a maximum-likelihood estimate (MLE) could be derived from this set of raw time series, a simplified set of feature vectors, V, will be used instead by removing information that is irrelevant to damage location. First, an expectation on the time histories when the structure is damage free is subtracted from the current, or ‘test’, time histories, Y(T). Since this expectation is difficult to model, a set of ‘baseline’ time histories, Y(B), recorded when the structure was in a damage-free state is used as an estimate. Note that parenthetical superscripts will consistently be used to differentiate related but distinct variables or functions. This baseline ideally was recorded when the structure was in a state as similar to the current structural state as possible in terms of environmental and operational conditions. In optimal baseline subtraction (OBS), a database of baseline time histories is stored, and for each test, the baseline time history that minimizes the error between the current and the baseline time history is used (Croxford et al. 2010). Ultimately, however, error in the estimation of the damage-free expectation (imperfect baseline subtraction) is one of the largest contributors to measurement ‘noise’.
Two assumptions are made in the further processing of the data: (i) the scattering process transfers minimal wave energy to other frequencies and (ii) the relative phase of measured scattered waveforms between any two sparse array elements cannot be predicted. With this in mind, the differenced time histories may then be band-pass filtered about the excitation frequency and subsequently envelope detected with minimal loss of information (Kay 1998) to produce the final set of so-called feature vectors, vm, with elements vm[n], which form the full N×M feature matrix V.
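As a concrete illustration of this feature-extraction chain, the sketch below computes a differenced, band-passed envelope using an FFT-based ideal band-pass filter followed by analytic-signal envelope detection. The function name, the ideal-filter implementation and the parameter interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def envelope_features(y_test, y_base, fc, bw, fs):
    """Differenced, band-passed, envelope-detected feature vector (sketch).

    y_test, y_base : test and baseline time histories (1-D arrays)
    fc, bw, fs     : centre frequency, bandwidth and sampling rate (Hz)
    """
    d = np.asarray(y_test, float) - np.asarray(y_base, float)  # baseline subtraction
    n = d.size
    f = np.fft.fftfreq(n, 1.0 / fs)
    D = np.fft.fft(d)
    D[np.abs(np.abs(f) - fc) > bw / 2] = 0.0   # ideal band-pass about fc
    # analytic signal: zero the negative frequencies, double the positive ones
    D[f < 0] = 0.0
    D[f > 0] *= 2.0
    return np.abs(np.fft.ifft(D))              # envelope v_m[n]
```

In practice a proper FIR/IIR band-pass and `scipy.signal.hilbert` would typically replace the ideal FFT masking used here for brevity.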
3. Deriving the maximum-likelihood estimate
Maximum-likelihood estimation is a popular and asymptotically optimal statistical approach to fitting model parameters using data (Lehmann 1998). For example, consider a sequential set of sonar range measurements to a target (observations) that follows the model y[n] = f(r) = r + w[n], where w[n] is a white Gaussian noise process, so that the observations are distributed according to y[n] ∼ N(r, σ²). The MLE of the true range, which can be thought of as just a parameter in the measurement model, is then

\hat{r} = \frac{1}{N} \sum_{n=1}^{N} y[n], \qquad (3.1)

which is just the arithmetic mean of the observations. For the present application, given that damage is centred at some location x, the feature matrix V (observations) will follow some model, V = f(x), parameterized by that damage location. The damage location can then be said to follow some likelihood function L(x|V). The MLE of the damage location is then

\hat{x} = \arg\max_{x} L(x \mid V). \qquad (3.2)
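The sonar range example can be checked numerically; the noise level and sample count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
r_true = 42.0                                      # true range (arbitrary)
sigma = 0.5                                        # noise standard deviation
y = r_true + sigma * rng.standard_normal(10_000)   # y[n] ~ N(r, sigma^2)

# For Gaussian observations, the MLE of the mean is the arithmetic mean
r_mle = y.mean()
```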
The derivation of the exact representation of the MLE begins by assuming a form for the likelihood function of the damage location given the feature vector. Each element of the feature vector is the magnitude of an analytical signal value of a narrow-band waveform. If there is a sufficient superposition of scattered and/or interfering waveforms, then the real and imaginary components of the analytical signal samples are approximately normal, independent and identically distributed. The envelope values are then Rayleigh distributed according to some Rayleigh parameter σ, i.e. vm[n]∼Rayleigh (σm[n]) (Rappaport 2002).
When damage is present at a single location, the received waveform can be divided into two sections: (i) the time before the arrival of any scatter from the damage and (ii) the arrival of the first, directly scattered wave and of the subsequent echoes of the scattered wave from the structure's geometric features. The waveform statistics are approximated as independent and stationary from point to point within each of these sections, so the likelihood function for damage at x, given a measured feature vector element vm[n], can be broken into two parts according to

L(x \mid v_m[n]) = \begin{cases} p\big(v_m[n];\, \sigma_m^{(1)}\big), & n < \eta_m(x), \\ p\big(v_m[n];\, \sigma_m^{(2)}\big), & n \ge \eta_m(x), \end{cases} \qquad (3.3)

where ηm(x) is the time history index corresponding to the shortest total time of flight from the actuating transducer in pair m, to the coordinate x, to the sensing transducer in the same pair. If the response involves multiple guided-wave modes and/or mode conversions, the time-of-flight index corresponds to the fastest travelling mode (or set of modes in the case of mode conversion) to which the transducers are sensitive. The Rayleigh probability density function is given by

p(v; \sigma) = \frac{v}{\sigma^2} \exp\!\left(-\frac{v^2}{2\sigma^2}\right), \quad v \ge 0. \qquad (3.4)
Because the feature vector elements are assumed independent, the total likelihood is the product of likelihoods across the feature vector elements and transducer pairs,

L(x \mid V) = \prod_{m=1}^{M} \prod_{n=1}^{N} L(x \mid v_m[n]). \qquad (3.5)
Note that the independence assumption neglects the correlation between neighbour feature vector points. This could be resolved by down-sampling the feature vectors, which would also provide benefits in terms of processing and storage. However, the information redundancy resulting from the correlation ultimately does not affect the localization performance, and is therefore disregarded for this study.
Environmental variability, transducer bonding inconsistencies, non-stationary noise, complex propagation paths and unknown damage characteristics can all lead to the Rayleigh parameters being difficult to estimate a priori. This expands the estimator to the simultaneous maximization over three unknown parameter vectors, instead of one,

\hat{x} = \arg\max_{x,\, \sigma^{(1)},\, \sigma^{(2)}} L\big(x, \sigma^{(1)}, \sigma^{(2)} \mid V\big). \qquad (3.6)
However, using equation (3.4), the MLE of a Rayleigh parameter can be shown to be equal to

\hat{\sigma}^2 = \frac{1}{2N} \sum_{n=1}^{N} v^2[n], \qquad (3.7)

so that the MLEs of σ_m^{(1)} and σ_m^{(2)} can be calculated as a function of the damage location, x, through the time-of-flight index, η_m(x), according to

\hat{\sigma}_m^{(1)2}(x) = \frac{1}{2\eta_m(x)} \sum_{n=1}^{\eta_m(x)} v_m^2[n] \quad\text{and}\quad \hat{\sigma}_m^{(2)2}(x) = \frac{1}{2\big(N - \eta_m(x)\big)} \sum_{n=\eta_m(x)+1}^{N} v_m^2[n]. \qquad (3.8)
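The split-sample Rayleigh MLEs can be computed directly from sums of squared envelope values. The helper below is a sketch (the name `rayleigh_mles` is ours); it applies the standard Rayleigh result, σ̂² = (1/2N)Σv², separately to each side of a candidate split index.

```python
import numpy as np

def rayleigh_mles(v, eta):
    """MLEs of the two (squared) Rayleigh parameters for a split at index eta.

    The standard Rayleigh MLE, sigma^2 = (1/2N) * sum(v^2), is applied
    separately to the samples before and after the split point.
    """
    v = np.asarray(v, float)
    s1_sq = np.sum(v[:eta] ** 2) / (2.0 * eta)             # pre-arrival estimate
    s2_sq = np.sum(v[eta:] ** 2) / (2.0 * (v.size - eta))  # post-arrival estimate
    return s1_sq, s2_sq
```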
Substituting the MLEs of the Rayleigh parameters given by equation (3.8) into the total likelihood function in equation (3.5), and taking the logarithm, results in a test statistic for damage at x,

I(x) = \sum_{m=1}^{M} w_m[\eta_m(x)], \qquad (3.9)

where

w_m[n] = \sum_{n'=1}^{n} \ln p\big(v_m[n'];\, \hat{\sigma}_m^{(1)}[n]\big) + \sum_{n'=n+1}^{N} \ln p\big(v_m[n'];\, \hat{\sigma}_m^{(2)}[n]\big). \qquad (3.10)
Grouping summation terms and recognizing from equation (3.8) that

\sum_{n'=1}^{n} \frac{v_m^2[n']}{2\hat{\sigma}_m^{(1)2}[n]} = n \quad\text{and}\quad \sum_{n'=n+1}^{N} \frac{v_m^2[n']}{2\hat{\sigma}_m^{(2)2}[n]} = N - n \qquad (3.11)

leads to the reduced form

w_m[n] = -n \ln \hat{\sigma}_m^{(1)2}[n] - (N - n) \ln \hat{\sigma}_m^{(2)2}[n] + \sum_{n'=1}^{N} \ln v_m[n'] - N. \qquad (3.12)
Note that the last two terms are constant with respect to η (and, in turn, the damage location) and can be left out of the computation since they do not affect the maximum. The MLE of location is then equal to the coordinates x for which I(x) is maximum,

\hat{x} = \arg\max_{x} I(x). \qquad (3.13)

Maximizing the test statistic can be achieved through a number of constrained optimization techniques. The simplest approach, and the one used in this study, is to divide the structure into K discrete, sufficiently small and uniformly distributed regions with centroid coordinates x_k, k = 1, …, K. The test statistic, I(x_k), is then calculated for each of these discrete points and the maximizing point is chosen as the damage location estimate.
It is common practice in GWSHM to generate images in which each pixel represents a test statistic value for damage at that location (Michaels 2008b). In order to make the interpretation of the Rayleigh maximum-likelihood estimate (RMLE) approach compatible with existing delay-and-sum imaging approaches, the values wm[n] may be thought of as the result of a nonlinear filter, termed the ‘arrival filter’, on the original waveforms. With the structure divided into discrete regions as described above, the filtered waveforms can be processed according to standard delay-and-sum, equivalent to the procedure in equation (3.9).
Each point wm[n] can be thought of as the sum of goodness-of-fits (consistency) of the stationary Rayleigh models applied to the measurements before (to the left of) and to the measurements after (to the right of) that point. The maximum of wm[n] is then expected to occur at the point when the original signal envelope jumps owing to the arrival of the first and all subsequent waveforms scattered from damage.
Figure 1 shows an example side-by-side comparison of a standardized waveform, its waveform envelope and the arrival-filter result. To interpret these graphs, imagine a cursor starting at zero and moving to the right with time. Before the cursor reaches the first arrival, the envelope data points to the left follow a consistent Rayleigh model while the points to the right do not, with a sharp transition at the first arrival. As the cursor continues to move to the right, the consistency of the points to the left remains the same, while the consistency of the points to the right gradually increases as fewer of the pre-first-arrival points are included. As such, the arrival-filter result steadily climbs. When the cursor hits the first arrival, the arrival-filter result reaches a peak, with the envelope being consistent to both the left and the right. As the cursor moves past the first arrival, the points on the left become less and less consistent, causing the arrival-filter result to decline.
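This cursor picture amounts to evaluating, at every candidate split point, the Rayleigh log-likelihood with the two MLE parameters plugged in (constants dropped), i.e. a Rayleigh changepoint statistic. A vectorized sketch, with an illustrative changepoint at sample 200; the function name and cumulative-sum formulation are ours:

```python
import numpy as np

def arrival_filter(v):
    """Arrival-filter response w[n] for one envelope v (constants dropped).

    w[n] = -n*ln(sigma1_hat^2[n]) - (N-n)*ln(sigma2_hat^2[n]),
    the profile log-likelihood of a Rayleigh-parameter jump at sample n.
    """
    v = np.asarray(v, float)
    N = v.size
    c = np.cumsum(v ** 2)                       # running sum of v^2
    n = np.arange(1, N)                         # candidate split points
    s1 = c[:-1] / (2.0 * n)                     # sigma1^2 MLE from samples 1..n
    s2 = (c[-1] - c[:-1]) / (2.0 * (N - n))     # sigma2^2 MLE from samples n+1..N
    w = np.full(N, -np.inf)                     # no valid split at n = 0
    w[1:] = -n * np.log(s1) - (N - n) * np.log(s2)
    return w
```

The peak of `w` is expected at the first-arrival sample, where the envelope statistics jump.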
Note that the shape of the arrival-filter result reflects the ambiguity of the arrival time of the first scattered waveform. This shape is largely a function of the sharpness and magnitude of the jump in the Rayleigh parameter. Defects that result in very strong primary and secondary reflections will lead to tall, narrow peaks, while weak scatterers will result in relatively low, broad peaks in the arrival-filter response. Noise, therefore, reduces the accuracy of the RMLE insofar as it diminishes the prominence of the first-arrival transition. As in most GWSHM approaches, the RMLE is particularly sensitive to highly structured noise resulting from imperfect baseline subtraction, which can lead to apparent changes in the natural reflections from the structure's geometry. Unlike most approaches, however, the RMLE is most sensitive to structured noise before the first arrival, where the noise levels are the lowest, and least sensitive to noise after the first arrival, where the levels are the highest but buried among the secondary reflections (Croxford et al. 2010). The effects of this property will be apparent in the test structure results.
4. Evaluating performance
Most studies in GWSHM demonstrate performance through qualitative image comparison. These images are generated either in the absence of noise, or in the presence of one realization of the noise. This serves as an adequate proof of concept, but fails to address the stochastic nature of the GWSHM system and ultimately provides no measure of system reliability. Reliability is arguably the most important measure of performance to a structural health monitoring (SHM) system operator, who requires quantitative, risk-informed decision-making capability from the system.
One common mistake in the GWSHM literature is to assert that the sharpness of the peak about the damage location in an image is an indicator of imaging performance, with sharper peaks equating to higher imaging ‘resolution’. The fallacy can be exposed by recognizing that, according to this belief, the performance of some unbiased imaging algorithm, I = F(V), can always be ‘improved’ by modifying the algorithm to be, for example, I' = [F(V)]^γ with γ > 1, which sharpens every peak without changing its location. In such a case, while the noise of the image may appear to be suppressed, the underlying information is not altered.
This paper proposes two new approaches to evaluating GWSHM localization performance. The first, which is more qualitative, is a kernel density estimate of the localization probability density function (LPDF), \hat{p}(\hat{x} \mid x). The estimate is obtained by taking N^{(d)} sets of data from the structure when damage is at x and processing each according to the localization algorithm, so that each set produces a location estimate \hat{x}_i. The estimate of the LPDF is formed by applying a Gaussian kernel to each localization point according to

\hat{p}(\hat{x} \mid x) = \frac{1}{2\pi h^2 N^{(d)}} \sum_{i=1}^{N^{(d)}} \exp\!\left(-\frac{\lVert \hat{x} - \hat{x}_i \rVert^2}{2h^2}\right), \qquad (4.1)

with kernel bandwidth parameter h. This produces a density map (image) of the estimated probability of localizing the damage to \hat{x} when the damage is actually at x. The more completely the N^{(d)} sets of measurements sample the true distribution of the measurement noise, the higher the accuracy of the LPDF estimate.
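In this form, the LPDF estimate is a standard two-dimensional Gaussian kernel density evaluated on an image grid. A minimal sketch, in which the function name and interface are our assumptions:

```python
import numpy as np

def lpdf_estimate(points, grid_x, grid_y, h):
    """Gaussian-kernel density estimate of the localization PDF (sketch).

    points : (N, 2) array of location estimates x_hat_i
    grid_x, grid_y : 1-D coordinate axes of the output image
    h : kernel bandwidth parameter
    """
    pts = np.asarray(points, float)
    X, Y = np.meshgrid(grid_x, grid_y)
    p = np.zeros_like(X)
    for x0, y0 in pts:
        p += np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * h ** 2))
    # normalize so the density integrates to one over the plane
    return p / (2.0 * np.pi * h ** 2 * len(pts))
```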
The second approach, which is more quantitative, is related to the receiver operating characteristic (ROC) curve commonly used when evaluating detectors for hypothesis testing (Fawcett 2006). Each point on an ROC curve gives the rate of detection (correctly flagging damage) for a given rate of false alarm (incorrectly flagging damage), which statisticians refer to as the ‘size’ of the test.
First, a set of concentric circles with increasing radii is drawn centred at the true damage location x. For each circle, the fraction of the calculated localizations, \hat{x}_i, contained within its circumference is counted. These fractions are then plotted against the fraction of the total area of the plate contained within each circle. Each point on the curve represents an estimate of the probability of the localizer choosing as the damage location a point within a given area surrounding the true location of the damage. This curve will be referred to as the localizer operating characteristic (LOC) curve.
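A minimal sketch of the LOC computation, assuming for simplicity that each circle lies entirely within the plate so that its area fraction is just πr² over the plate area; the function name and interface are illustrative:

```python
import numpy as np

def loc_curve(estimates, x_true, plate_area, radii):
    """Localizer operating characteristic: localization rate vs. area fraction.

    estimates  : (N, 2) array of damage-location estimates
    x_true     : (2,) true damage location
    plate_area : total plate area (same units as radii**2)
    radii      : increasing circle radii about x_true
    """
    d = np.linalg.norm(np.asarray(estimates, float) - np.asarray(x_true, float),
                       axis=1)
    # fraction of estimates falling inside each circle
    rate = np.array([(d <= r).mean() for r in radii])
    # fraction of plate area covered (circle assumed fully inside the plate)
    frac = np.array([min(np.pi * r ** 2 / plate_area, 1.0) for r in radii])
    return frac, rate
```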
The LOC curve can be interpreted in the same way as an ROC curve. A shift up and to the left of the curve signifies improved performance. A perfect localizer will have a box-shaped LOC curve, while the LOC curve of a completely ineffective localizer (equivalent to random guessing) will follow the y=x line. The most meaningful point on an LOC curve is application dependent. For example, if manual inspection is performed using a traditional ultrasonic scan, the point on the curve corresponding to the imaging area of the scanner might be of the most practical interest.
To study the global performance of a given localizer, the LOC curves for a set of damage modes can be combined by averaging localization rates for each localization area. From a Bayesian statistics perspective, this provides a total localization rate for each localization area, provided that the prior probabilities of each damage mode occurring are equal. The total localization rate for non-uniform damage priors could also be calculated through appropriate weighted averaging. For this study, the localization areas will always be plotted on a log scale.
5. Existing localization approaches
Seven other localization approaches taken from the GWSHM literature were implemented as a benchmark for the presented RMLE method and as a means of demonstrating the proposed performance metrics. In all of the approaches, the waveforms were first band-pass filtered about the excitation frequency and, in all but one approach, were baseline-subtracted to produce each filtered, differenced waveform ym. Two other forms of additional processing were common among some of the approaches. The first is the use of the Hilbert envelope of the signal, which leads to the same feature vector used in the RMLE,

v_m[n] = \big| y_m[n] + jH(y_m)[n] \big|, \qquad (5.1)

where H(y) is the Hilbert transform. The second is an exponential windowing according to

v_m^{(w)}[n] = v_m[n] \exp\!\left(-\frac{n - \eta_m^{(0)}}{f^{(s)} \alpha}\right), \qquad (5.2)

where η_m^{(0)} is the index corresponding to the direct time of flight between transducer pair m, f^{(s)} is the sampling rate and α is a decay coefficient. The windowing is intended to reduce the image corruption from secondary scatter reflections. Through a detailed search, a decay of α = 80 μs, which corresponds to a propagation distance of 420 mm, was found to result in the highest performance, in terms of average localization error, for the structure, transducer placement and damage modes used in this study. Note that the decay parameter was optimally chosen during post-processing in order to compare the ‘best case’ performances of the applicable algorithms against the proposed RMLE method (which is parameter free). In practice, however, the values of such parameters must be selected prior to interrogation.
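The exponential windowing of equation (5.2) can be sketched as follows. How samples before the direct arrival are treated is not specified above, so this sketch leaves them unweighted (an assumption), and the function name is ours.

```python
import numpy as np

def exp_window(v, d_m, fs, alpha):
    """Exponentially window an envelope after the direct arrival (sketch).

    v     : envelope-detected waveform for one transducer pair
    d_m   : sample index of the direct arrival for pair m
    fs    : sampling rate (Hz); alpha : decay constant (s)
    Samples before d_m are left unweighted (an assumption of this sketch).
    """
    v = np.asarray(v, float)
    n = np.arange(v.size)
    w = np.where(n >= d_m, np.exp(-(n - d_m) / (fs * alpha)), 1.0)
    return v * w
```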
(a) Time of arrival/ellipse method
This is the standard delay-and-sum imaging approach (Michaels & Michaels 2007a). The test value (image value) at each point is defined as

I(x) = \sum_{m=1}^{M} v_m[\eta_m(x)]. \qquad (5.3)
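On a discrete grid, delay-and-sum imaging amounts to indexing each envelope at the pair's time-of-flight sample and summing over pairs. A sketch, assuming precomputed time-of-flight indices (names are illustrative):

```python
import numpy as np

def toa_image(V, eta):
    """Delay-and-sum (time-of-arrival) image values on a grid (sketch).

    V   : (N, M) feature matrix, one envelope column per transducer pair
    eta : (K, M) integer time-of-flight indices eta_m(x_k) for K grid points
    Returns I with I[k] = sum_m V[eta[k, m], m].
    """
    K, M = eta.shape
    return np.array([sum(V[eta[k, m], m] for m in range(M)) for k in range(K)])
```

The damage location estimate is then the grid point maximizing the returned image.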
(b) Windowed time of arrival method
Past studies have demonstrated that the time-of-arrival (TOA) method can be improved by instead using the exponential-windowed form of the enveloped waveforms, as described in equation (5.2) (Michaels & Michaels 2007a),

I(x) = \sum_{m=1}^{M} v_m^{(w)}[\eta_m(x)]. \qquad (5.4)
(c) Time difference of arrival/hyperbola method
The time difference of arrival (TDOA) method (Croxford et al. 2007) is based on the proposition that the received waveforms at two sensing transducers, as actuated by the same actuating transducer, will be correlated according to the difference in the time of flight from a given region to each of these transducers. Owing to reciprocity, this correlation can be extended to any two transducer pairs that share exactly one common transducer (either actuator or sensor). The imaging algorithm is then

I(x) = \sum_{j=1}^{M} \sum_{m \in A_j} \rho_{mj}\big[\delta_{mj}(x)\big], \qquad (5.5)

where ρ_{mj} is the cross correlation between v_m and v_j, A_j is the set of transducer pairs that share only one common transducer with pair j, and

\delta_{mj}(x) = \eta_m(x) - \eta_j(x). \qquad (5.6)
(d) Reconstruction algorithm for the probabilistic inspection of damage
Reconstruction algorithm for the probabilistic inspection of damage (RAPID), proposed by Gao et al. (2005), is defined as

I(x) = \sum_{m=1}^{M} \big(1 - \rho_m(x)\big) \frac{\beta - R_m(x)}{\beta - 1}, \qquad (5.7)

where

R_m(x) = \min\!\left(\frac{d_m^{(a)}(x) + d_m^{(s)}(x)}{d_m}, \beta\right), \qquad (5.8)

with d_m^{(a)}(x) and d_m^{(s)}(x) the distances from the actuator and sensor of pair m to x, and d_m the separation of the pair.
The value ρ_m(x) is the zero-lag cross correlation between the subsets of the filtered baseline and test waveforms extracted about the expected first scattered arrival for damage at x,

\rho_m(x) = \frac{\sum_{n \in W_m(x)} y_m^{(B)}[n]\, y_m^{(T)}[n]}{\sqrt{\sum_{n \in W_m(x)} \big(y_m^{(B)}[n]\big)^2 \sum_{n \in W_m(x)} \big(y_m^{(T)}[n]\big)^2}}, \qquad (5.9)

where W_m(x) is the window of samples about η_m(x).
The algorithm has the effect of shading in ellipses drawn around each transducer pair according to the correlation between the baseline and test waveforms. The parameter β controls the size of the ellipse and its optimal value for this study was determined to be 1.25. Note that while RAPID could be incorporated as part of a true probabilistic analysis of damage, as with all the presented approaches (including RMLE), it provides no actual measure of probability on its own.
(e) Energy arrival method
The energy arrival (EA) method was introduced by Michaels & Michaels (2007b). The image is calculated according to

I(x) = \sum_{m=1}^{M} \frac{E_m^{(w)}(x)}{E_m^{(c)}(x)}, \qquad (5.10)

where the window and cumulative energies are

E_m^{(w)}(x) = \sum_{n=\eta_m(x)}^{\eta_m(x)+N^{(w)}} v_m^2[n] \quad\text{and}\quad E_m^{(c)}(x) = \sum_{n=1}^{\eta_m(x)} v_m^2[n], \qquad (5.11)

with N^{(w)} the length of the energy window.
This approach can be thought of as an adaptively windowed version of the TOA approach, where the contribution of a component of a waveform is inversely weighted by the wave energy that arrived before it. This has the effect of adaptively reducing the amplitude of scatter echoes. The use of the waveform energy prior to the first scatter arrival as part of the imaging process makes the EA the most similar to the proposed RMLE method. In fact, the cumulative energy is related (to within a small time offset) to the first Rayleigh parameter estimate by

E_m^{(c)}(x) = 2\,\eta_m(x)\, \hat{\sigma}_m^{(1)2}(x). \qquad (5.12)
Unlike the RMLE, however, the EA method does not make use of the waveform energy after the first scatter arrival.
(f) Total product method
Ihn & Chang (2008) proposed an unnamed alternative to the traditional TOA method that uses the product of transducer pair contributions, rather than the sum. They formed images according to

I(x) = \prod_{m=1}^{M} S_m\big(\omega^{(E)}, \eta_m(x)\big), \qquad (5.13)

where S_m is the instantaneous frequency spectrum of the raw acquired waveform from transducer pair m and ω^{(E)} is the excitation frequency. One can recognize that the instantaneous frequency spectrum at ω^{(E)} is nearly equivalent to the square of the envelope-detected, band-passed signal, i.e.

S_m\big(\omega^{(E)}, n\big) \approx v_m^2[n]. \qquad (5.14)

With this in mind, the imaging algorithm can be broken down according to

I(x) \approx \prod_{m=1}^{M} v_m^2[\eta_m(x)] = \exp\!\left(\sum_{m=1}^{M} 2 \ln v_m[\eta_m(x)]\right). \qquad (5.15)
In other words, the total product (TP) approach can be thought of as a log-weighted version of the TOA approach, with an extraneous exponential function. This exponential tends to saturate the scale of linear-mapped images and so is not included in this study's generation of images, allowing their underlying shape to be more easily observed. This does not alter the supremum of the images, and therefore it does not affect localization.
(g) Windowed total product method
While not included in the originally proposed TP algorithm, in order to improve performance, the envelope-detected waveforms were first windowed according to equation (5.2), so that

I(x) \approx \exp\!\left(\sum_{m=1}^{M} 2 \ln v_m^{(w)}[\eta_m(x)]\right). \qquad (5.16)
6. Test procedure
The test structure, shown in figure 2 and described previously in Flynn et al. (in press), is a 3 mm thick aluminium plate with two epoxied, hollow square-section aluminium stiffening stringers. The stringer cross sections had an outside length of 25.4 mm and a wall thickness of 3.5 mm. The plate was instrumented with seven 20 mm circular piezoelectric transducers designed to preferentially excite and sense the first symmetric wave mode, S0, at the chosen excitation frequency. The excitation was a five-cycle, 190 kHz, Gaussian-windowed pulse, leading to an average wavelength of 27 mm for the S0 mode. Damage was introduced by drilling through-holes at the nine locations indicated in the figure. Tests using electromagnetic acoustic transducers showed that the stringer exhibited average through-transmission and direct-reflection coefficients of 0.30 and 0.32, respectively, for the S0 mode at 190 kHz, with unmeasured omnidirectional scattering, mode conversion and damping accounting for the remainder of the energy.
Before inducing damage, a 48 h ensemble of damage-free measurements was taken with actuation and sensing across all pairs performed every 15 min. After the no-damage measurements were acquired, nine holes were drilled across the structure, one at a time, each hole starting with a 2 mm diameter and widened in 2 mm increments to 8 mm. New baselines were taken before the drilling of every new 2 mm hole, but not between the widening of each hole.
The pairs of baselines and tests for each damage case were then processed one at a time. To simulate imperfect baseline subtraction, two random sets of measurements, taken no more than 6 h apart, were pulled from the damage-free ensemble (with replacement), subtracted from one another, and the difference added to the test waveform. This was repeated to generate 500 ‘noisy’ test measurements for each damage mode. This procedure for building noisy measurements follows from the assumption that the introduction of damage has a negligible effect on the measurement noise.
The 6 h window confined the absolute temperature difference between any two baselines in a pair to between 0 °C and 2 °C. This was done to mimic (but not replicate) the anticipated effective remaining discrepancy in baseline subtraction following temperature compensation approaches such as OBS (Croxford et al. 2010). While the RMLE is entirely compatible with such compensation approaches, their incorporation is beyond the scope of this work. Under this surrogate noise procedure, the resulting root mean square errors (r.m.s.e.s) between the filtered noisy test and filtered baseline waveforms were 0.36, 0.41, 0.50 and 0.55 for the 2, 4, 6 and 8 mm hole diameters, respectively, while the r.m.s.e. for surrogate noise alone, i.e. without any damage present, was 0.30.
Each of the test measurements was paired with the most recently recorded baselines. As such, each test–baseline pair only reflects the addition of one new hole at a time. Localizations were performed on each set of measurements according to the RMLE method as well as the seven other localization alternatives. This resulted in a total of 500 damage location estimates for each of the 36 damage modes and according to each of the eight processing methods.
Here, the performance of the eight localization methods is compared using the LPDF estimate and LOC curve. The LPDFs were estimated using a Gaussian kernel with a bandwidth of 3 cm. Unless otherwise specified, the LOC curves reflect the combined LOC curves for the 4, 6 and 8 mm hole diameters.
(a) Overall performance
Figure 3 provides a summary of the images I(x) generated according to each localization method for five of the nine 6 mm diameter holes. These images were created using baselines acquired within 5 min of each test measurement and without the introduction of surrogate noise through the process described above, leading to near-ideal baseline subtraction. The hole locations are ordered from top to bottom according to their distance from the centroid of the array. These images provide a qualitative measure of the nominal, noise-free localizations as determined by the maximum in each image. Their primary intent is to give an appreciation for the signature shapes that each of the localization algorithms produces. Examples of these include the distinct ellipses of the TOA and TP methods, the hyperbolas of the TDOA methods, and the filled ellipses of the RAPID method. Notice also the similarities in the shape between the proposed RMLE method and the EA method.
A more informative evaluation of the localization performance can be achieved by studying the estimated LPDFs shown in figure 4. These images show a two-dimensional kernel density estimate of the distribution of localizations in the presence of surrogate noise for the 6 mm hole size. A uni-modal distribution about the true damage location indicates a high level of accuracy. A uni-modal distribution away from the true location implies a location estimation bias, probably a result of secondary scatter echoes leading to false peak values in the localization image. Multi-modal distributions imply high sensitivity to measurement noise.
Figure 5 shows the combined LOC curves across all locations and the 4–8 mm diameters for each of the localization methods. The shaded area reflects performance regions that are worse than that of the optimal uninformed localizer, i.e. the localizer that always chooses the centre of the plate. These figures indicate that the top performers for this test were the RMLE, windowed time of arrival (W-TOA) and EA methods. The unwindowed TOA and TP methods generally performed worse than the uninformed localizer. Figure 3 suggests that this is a result of image corruption from secondary scatter echoes, which are not reduced through processing as they are in the windowed counterparts, W-TOA and windowed total product (W-TP).
For the lowest localization areas, the performance of the RMLE dips below those of the W-TOA and EA methods. In this region of the graph, the performance is dominated by the central hole locations. The performances of the W-TOA and EA methods are particularly high for these holes since the corruption owing to secondary scatter echoes from plate boundaries is low. The RMLE approach, on the other hand, is derived based on a model that dictates that secondary scatter echoes arrive immediately after the direct scatter. Since the model does not fit as well to these central holes, the RMLE performance is slightly diminished. This implies two things. First, the relative performance of the RMLE, in its current form, has the potential to improve as the structure becomes more complicated. This is particularly significant since real-world structures are often considerably more complex, making the proposed algorithm increasingly relevant as applications move away from the laboratory. Second, in situations where the structure geometry is relatively simple, the RMLE could probably be improved by dividing the waveforms into three sections instead of two: (i) the time before the first scatter, (ii) the band of time containing the direct scatter, and (iii) the time after the arrival of the first secondary scatter echo. This, of course, would necessitate a more sophisticated model of the wave propagation process, requiring knowledge of the time of flight of both the direct scatter and the first secondary scatter from each potential damage location.
Notice the unique shape of the LOC curve for the RAPID method. RAPID is the only algorithm that does not generate images through time-of-flight. Instead, indicators of damage are averaged across local regions about the direct paths between transducers. This makes it particularly robust to noise and scatter echoes at the cost of localization resolution. As such, RAPID goes from being one of the worst localizers when trying to achieve small localization areas to one of the best when allowing for large localization areas.
(b) Effect of reduced transducer count
Figures 6 and 7 show the raw images and estimated localization density functions for a subset of the localization methods for one hole at a diameter of 6 mm when using seven, four and three transducers. Figures 8 and 9 provide the corresponding combined LOC curves across all locations and the 4–8 mm hole diameters. Figure 10 shows a summary of LOC curves for seven, five, four and three transducer arrays for only the RMLE and W-TOA methods. These figures make it clear that the performance gap between the proposed RMLE method and the existing approaches increases significantly with decreasing transducer count.
It is apparent that most of the other localization methods were originally designed without scatter echoing in mind, implementing at best an empirically derived windowing procedure for compensation. As such, they rely on averaging through the combination of many transducer pairs in order to reduce the effect of echoes. The effect of echoes is therefore magnified as transducers are removed from the array. Conversely, the RMLE method is derived from a statistical foundation that accounts for the echoes in its design and therefore exploits them in the localization procedure.
The only approach other than the RMLE to perform significantly better than the uninformed localizer for the three transducer array was the EA method. Not coincidentally, this is the only other approach that attempts to include the identification of echoes in the localization process. Interestingly enough, however, the EA method is the only approach that performed better with three transducers than with four for a significant range of localization area fractions (in this case, greater than 0.03).
(c) Effect of drill hole diameter
Finally, the LOC curves in figure 11 show the relative performance when localizing 2, 4, 6 and 8 mm holes according to the RMLE and the W-TOA methods. The curves demonstrate that the 2 mm hole is, for the most part, not localizable with either method. The greatest jump in performance is between the 2 and 4 mm sizes, with diminishing improvement from 4 to 6 mm and almost no improvement from 6 to 8 mm. This trend is shared by both localization approaches.
This study has shown the RMLE to be an effective, parameter-free GWSHM localization algorithm ideally suited for complex geometries and sparse transducer arrays. Perhaps more significant, however, is the demonstration of the use of model-based MLEs as a basis for improved algorithm development. Moreover, the two-part stochastic model used in this study is perhaps one of the simplest imaginable for the GWSHM process. Thus, as alluded to in §3, improved performance can undoubtedly be achieved through higher order modelling. In summary, the implementation of likelihood estimates directly connects SHM algorithm performance to the advancement of modelling capability.
While not considered in this study, a practical, and perhaps important, extension of the model could be to account for the potential for, and relative probability of, multiple simultaneous damage modes. For example, to localize N simultaneous damage modes, the model might consist of N Rayleigh parameter transition points, where N is treated as a random variable that must also be estimated. This, of course, will come at a price, since having to simultaneously estimate more unknown parameters (both N and the additional Rayleigh parameters) will ultimately lead to a reduction in estimator performance. This sort of drop in performance following a reduction in information is a property of all effective estimators. As such, while further study is needed, according to the same line of reasoning explaining the RMLE's superiority, it is believed that a maximum-likelihood-based estimator of multiple simultaneous damage mode locations would outperform the seven alternative localization algorithms described in this study when presented with the same task.
The performance evaluation techniques presented here are designed with practical SHM implementation in mind. Ultimately, the operators of structural systems are not interested in obscure absolute differences between feature values. Their key performance metric is reliability, i.e. how often the SHM system succeeds and how often it fails (probability of detection, false alarm rate, etc.). LOC curves provide a measure of reliability (detection rate) for any operator-chosen level of operational flexibility (localization area), or vice versa. It is believed that the adoption of these types of statistics-based practices could go a long way towards transitioning SHM from laboratory research into real-world practice.
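An empirical LOC curve of the kind described above can be sketched as follows (illustrative function names and interface; a simplified reconstruction of the idea, not the authors' implementation). For each trial, one records the smallest fraction of the structure's area, taken in decreasing image-intensity order, that must be flagged before the flagged region contains the true damage location; the LOC curve is then the empirical distribution of those fractions:

```python
import numpy as np

def required_area_fraction(image, true_idx):
    """Fraction of pixels with intensity at or above that of the true
    damage location (flat index true_idx) in a localization image."""
    flat = np.ravel(image)
    return float(np.mean(flat >= flat[true_idx]))

def loc_curve(required_fractions, grid=None):
    """Empirical localizer operating characteristic (LOC) curve.

    required_fractions[i]: area fraction needed to capture the true
    location in trial i. Returns (area_fractions, detection_rates).
    """
    req = np.sort(np.asarray(required_fractions, dtype=float))
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    rates = np.searchsorted(req, grid, side="right") / req.size
    return grid, rates
```

Reading the curve at an operator-chosen area fraction gives the corresponding detection rate across the ensemble, and vice versa.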
The authors acknowledge the joint support of the National Science Foundation Graduate Research Fellowship and the Benjamin F. Meaker Visiting Professorship to the University of Bristol (UK).
- Received February 3, 2011.
- Accepted March 7, 2011.
- This journal is © 2011 The Royal Society