

Based on the extensive literature that has developed on structural health monitoring over the last 20 years, it can be argued that this field has matured to the point where several fundamental axioms, or general principles, have emerged. The intention of this paper is to explicitly state and justify these axioms. In so doing, it is hoped that two subsequent goals are facilitated. First, the statement of such axioms will give new researchers in the field a starting point that alleviates the need to review the vast amounts of literature in this field. Second, the authors hope to stimulate discussion and thought within the community regarding these axioms.


1. Introduction

The process of implementing a damage identification strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). In the context of this paper, damage is defined as changes to the material and/or geometric properties of these systems, including changes to the boundary conditions and system connectivity, which adversely affect the current or future performance of these systems. The damage identification process is generally thought to entail establishing: (i) the existence of damage, (ii) the damage locations, (iii) the types of damage, and (iv) the damage severity.

Based on the information published in the extensive amount of literature on SHM over the last 20 years (Doebling et al. 1996; Sohn et al. 2004a), the authors feel that the field has matured to the point where several fundamental axioms, or accepted general principles, have emerged. The word ‘axiom’ is being used here in a slightly different way to that which is understood in the literature of mathematics and philosophy. In mathematics, the axioms are sufficient to generate the whole theory. To take arithmetic as an example, the axioms for the field of real numbers specify not only the properties of the numbers, but also how the usual arithmetical operators act on them. As a consequence, nothing else is needed in order to derive the whole arithmetic of the real numbers. The word axiom here is used to represent a fundamental truth at the basis of any SHM methodology. However, the axioms are not sufficient to generate a given methodology. First, the authors are not suggesting that the axioms proposed here form a complete set, and it is possible that there are other fundamental truths that have been omitted here. Second, the axioms here do not specify the ‘operators’ for SHM; in order to generate a methodology from these axioms, it is necessary to add a group of algorithms that will carry the SHM practitioner from the data to a decision. It is the belief of the authors that these algorithms should be drawn from the discipline of statistical pattern recognition, and algorithms of this type will be used in the illustrations throughout this paper. The intention of this paper is to explicitly state and justify a (possibly incomplete) set of axioms for SHM. In so doing, it is hoped that two subsequent goals are facilitated. First, the statement of such axioms will give new researchers in the field a starting point that alleviates the need to review the vast amounts of literature in this field. Second, the authors hope to stimulate discussion and thought within the community regarding these axioms. Hopefully, such technical exchanges will generate research ideas leading to examples or theorems that either prove or disprove the validity of these axioms. The axioms that will be addressed are listed here.

  • Axiom I: All materials have inherent flaws or defects.

  • Axiom II: The assessment of damage requires a comparison between two system states.

  • Axiom III: Identifying the existence and location of damage can be done in an unsupervised learning mode, but identifying the type of damage present and the damage severity can generally only be done in a supervised learning mode.

  • Axiom IVa: Sensors cannot measure damage. Feature extraction through signal processing and statistical classification is necessary to convert sensor data into damage information.

  • Axiom IVb: Without intelligent feature extraction, the more sensitive a measurement is to damage, the more sensitive it is to changing operational and environmental conditions.

  • Axiom V: The length- and time-scales associated with damage initiation and evolution dictate the required properties of the SHM sensing system.

  • Axiom VI: There is a trade-off between the sensitivity to damage of an algorithm and its noise rejection capability.

  • Axiom VII: The size of damage that can be detected from changes in system dynamics is inversely proportional to the frequency range of excitation.

The layout of the paper is simple and each of the following sections will discuss the proposed axioms in turn. The paper concludes with a brief overall discussion and synthesis.

2. Axiom I

It is easy to imagine a perfect material. Suppose one considers aluminium, for example: a perfect sample would comprise a totally regular lattice of aluminium atoms, a perfect periodic structure. It is well known that if one were able to compute the material properties of such a perfect crystal, properties such as the strength would be far higher than those observed experimentally. The reason for this discrepancy is of course that all materials, and hence all structures, contain defects at the nano/microstructural level, such as vacancies, inclusions and impurities. An example of such defects is shown in figure 1a, where inclusions are evident at the grain boundaries in an as-fabricated sample from a U–6Nb plate. Figure 1b shows cracks forming along the inclusion lines in a similar specimen after it has been subjected to shock loading.

Figure 1

(a) Inclusions at the grain boundaries in U–6Nb. (b) A micrograph of a U–6Nb plate showing crack propagation along inclusion lines after shock loading. Courtesy of Dan Thoma, Los Alamos National Laboratory.

Metals are never perfect single crystals with a perfect periodic lattice. Broberg (1999) provides an excellent account of the inception and growth of microcracks and voids in the process region of a metal. In fibre-reinforced plastics (FRPs), defects can also occur at the macrostructural level owing to voids produced in manufacturing. These defects compromise the strength of the material, as coalescence of defects in extreme loading regimes will lead to macroscopic failure at the component level and, subsequently, at the system level. However, engineers have learned to overcome the design problems imposed by the inevitability of material imperfections. For example, in any composite materials text one can find properties of specific fibre/resin systems, virtually always provided as a range of values. These values are totally dependent on the manufacturing process used and any minor variation in the process will cause a departure from nominal values. Therefore, for composites, a basic material evaluation programme is often required at the design stage. The engineer will design a structure using failure criteria based on material property values from the lowest end of the range derived from the testing programme. Depending on the design philosophy in use, whether safe-life or damage tolerant, for example, damage may or may not be expected at some point in the operational life of a structure and this is where SHM becomes essential.

In many engineering materials, the effects of nano/microstructural level defects can be subsumed into the average material properties, such as the yield stress or fatigue limit, and are not typically considered as ‘damage’. However, in other circumstances, such as in composite materials, this may not be the case and the void content of the material should be regarded as initial damage. There is no way (under dynamic load) of preventing damage evolution from voids and the associated degradation of the material properties.

As all materials contain imperfections or defects, the difficulty is to decide when a structure is ‘damaged’. Here, the definition of damage in the first introductory paragraph must be extended to include the concept that damage is present when one cannot account for the imperfections in system design and performance prediction using bulk material properties. It is important to have a clear taxonomy for further discussion. Worden & Dulieu-Barton (2004) have described a hierarchical relationship between faults, damage and defects as follows.

A fault is when the structure can no longer operate satisfactorily. If one defines the quality of a structure or system as its fitness for purpose or its ability to meet customer or user requirements, it suffices to define a fault as a change in the system that produces an unacceptable reduction in quality.

Damage is when the structure is no longer operating in its ideal condition but can still function satisfactorily, albeit in a suboptimal manner.

A defect is inherent in the material; statistically, all materials will contain a known amount of defects. This means that a structure will operate at its optimum even though its constituent materials contain defects.

The above taxonomy leads to the notion that defects (which are inevitable in real materials) lead to damage and damage leads to faults. Using this idea, it is possible to go beyond the conservative safe-life design philosophy, where the structure is designed to reach its operational lifetime without experiencing damage, to design a damage tolerant structure (Reifsnider & Case 2002; Grandt 2003). However, in order to obtain a damage-tolerant structure, it is necessary to introduce monitoring systems so that one can decide when the structure is no longer operating in a satisfactory manner. This requirement means that a fault has to have a strict definition, e.g. the stiffness of the structure has deteriorated beyond a certain level.

The strict definition of failure coupled with a monitoring system allows one to consider the concept of prognosis. Here, prognosis means the prediction of the structure's future operational life, given some assessment of its current condition and some prediction of its anticipated future operational environment.

3. Axiom II

This is possibly the most basic of the proposed axioms. It is necessary to state it explicitly as it is sometimes stated that some approach or other ‘does not require a baseline’. It is argued here that this statement is simply never true, and any misunderstanding lies with the assumed meaning of ‘baseline’. In the usual pattern recognition approaches to SHM, a training set is required. In the case of damage detection, where novelty detection approaches can be applied, the training set is composed of samples of features that are representative of the normal condition of the system or structure of interest. For higher levels of diagnosis requiring estimates of damage location or severity, the training data must not only contain samples of normal condition data, but also must be augmented with data samples corresponding to the various damage conditions. In this case, there is no argument that the normal condition data constitute the baseline. In order to illustrate how this approach is used, it is necessary to fix on a particular algorithm and so outlier analysis will be used here (Worden et al. 1999). A very brief summary of the approach will be given for completeness.

A discordant outlier in a dataset is an observation that appears inconsistent with the rest of the data and therefore is believed to be generated by an alternative mechanism to the other data. The discordance of the candidate outlier is a measure that may be compared against some objective criterion allowing the outlier to be judged to be statistically likely or unlikely to have come from the assumed generating model. The application to damage detection is clear; the original dataset is taken to describe the normal condition and any subsequent indications of discordance signal the presence of damage.

The discordance test for multivariate data used in this work is the Mahalanobis squared distance measure given by

$$D_{\zeta} = (x_{\zeta} - \bar{x})^{\mathrm{T}} S^{-1} (x_{\zeta} - \bar{x}), \qquad (3.1)$$

where $x_{\zeta}$ is the potential outlier feature vector; $\bar{x}$ is the mean vector of the normal condition features; and $S$ is the corresponding sample covariance matrix. A superscript T indicates transpose.

In order to label an observation as an outlier or part of the normal condition, there needs to be some threshold value against which the discordance value can be compared. This value is dependent on both the number of observations and the number of dimensions of the problem being studied. The value also depends upon whether an inclusive or exclusive threshold is required. A more detailed explanation is given by Worden et al. (1999), along with the means of computing the threshold to a given level of statistical confidence.
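To make the procedure concrete, a minimal sketch of the outlier analysis is given below in Python. It is not the authors' original code: the feature dimension, training-set size and the Monte Carlo recipe for the threshold (in the spirit of, but not copied from, Worden et al. (1999)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mahalanobis_sq(x, mean, cov_inv):
    """Discordance measure of equation (3.1): (x - xbar)^T S^{-1} (x - xbar)."""
    d = x - mean
    return float(d @ cov_inv @ d)

def monte_carlo_threshold(n_train, dim, n_trials=1000, confidence=0.99):
    """Exclusive threshold estimated by Monte Carlo: for each trial, draw a
    Gaussian training set, record the largest self-discordance, and take the
    desired percentile of the recorded maxima."""
    maxima = []
    for _ in range(n_trials):
        data = rng.standard_normal((n_train, dim))
        mean = data.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
        maxima.append(max(mahalanobis_sq(x, mean, cov_inv) for x in data))
    return float(np.percentile(maxima, 100 * confidence))

# Normal-condition training features (synthetic stand-ins for measured data).
train = rng.standard_normal((100, 5))
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
threshold = monte_carlo_threshold(n_train=100, dim=5)

test = rng.standard_normal(5) + 3.0  # a shifted observation, mimicking damage
print(mahalanobis_sq(test, mean, cov_inv) > threshold)  # True: flagged as discordant
```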

The approach will be illustrated here on a problem of damage detection in a carbon fibre composite (CFRP) plate. The plate was fabricated and instrumented with piezoelectric actuator/sensors by colleagues at INSA, France as part of the European Union project DAMASCOS (Pierce 2001). A schematic for the plate is shown in figure 2; the piezoelectric patches used as actuators are labelled E (for emitter) and the patches used as sensors are labelled R (for receiver). The emitters were used to launch Lamb waves that propagated across the plate and were measured at the receivers. Damage was initiated in the form of a drilled hole at the geometrical centre of the plate as shown in figure 2. This damage location means that the damage was on the axis between E1 and R1 and between E3 and R3. R2 was placed off-centre on an edge in order to establish if damage could be detected off-axis.

Figure 2

Schematic of composite plate with transducers used for Lamb-wave damage sensing.

For the situation discussed here, a Gaussian modulated sine wave was launched at E1 and received at R2 (off-axis). A window of 50 time points encompassing the arrival of the first wave packet was used as the feature for outlier analysis (figure 3a). One hundred feature vectors were used as the training set for the normal condition of the plate with no damage. Subsequently, the hole was drilled and progressively enlarged through ten diameters, from 1 to 10 mm, and for each damage severity the wave feature was measured. When the Mahalanobis distance was computed for each of the damages, the results shown in figure 3b were obtained. All damage sizes of 2 mm diameter and above were detected.

Figure 3

(a) Windowed time signal used for novelty detection. (b) Novelty index for test patterns including all damage samples. (c) Two-dimensional Sammon map of feature data used.

In order to visualize the data, a Sammon map was used. This procedure is a nonlinear generalization of principal component analysis, which was used to reduce the dimension of the data from 50 to 2 while retaining the predominant information contained in the data. This technique is described in detail by Worden & Manson (1999). The Sammon map is shown in figure 3c. The normal condition data are the 100 points represented by solid black circles, and the damage data, represented by various symbols, are clearly distinct from the normal condition. As the damage progresses, the data move further away from the normal points. This property is reflected in the increasing Mahalanobis distance. Essentially, the damage index is simply a weighted distance from the centre (the mean) of the normal cluster. The interpretation of the normal data as a baseline is obvious here. It is equally obvious that the mechanism of detection is simply a comparison between the damage features and those from the normal condition. All novelty detectors in current use, including autoassociative neural networks (Worden 1997), probability density estimators (Bishop 1994) and negative selection algorithms (Dong et al. 2006), rely on the acquisition of a normal condition training set.

The necessity of axiom II is not confined to damage detection methods based on pattern recognition, and it is also a requirement of the large class of algorithms based on linear algebraic methods. It is perhaps most obviously needed in the case of finite element (FE) updating methods (Mottershead & Friswell 1993; Friswell & Mottershead 1995), a class of algorithms that have proved successful at the higher levels of damage identification, i.e. in the location and quantification of damage (Fritzen & Bohle 2003; Goerl & Link 2003). The FE updating methodology is based on the construction of a high-fidelity FE model of the structure of interest in its normal condition. To assure that the model provides an accurate description of the virgin-state system, it is usually updated on the basis of experimental data, i.e. the parameters of the FE model (e.g. stiffness indices) are adjusted to bring it into closer correspondence with the experimental observations. The process of damage identification then proceeds by further updating the model on the basis of monitored data. Clearly, any further need for parameter adjustment will be because the system has changed and this change is assumed to be caused by the damage. The particular elements adjusted will pinpoint the location of the damage and the size of the adjustment provides an estimate of the severity of the damage.

Several approaches claim to operate without baseline data, which one might interpret as meaning that they do not require a comparison of system states. It is argued here that this is just a matter of terminology. One such method is the strain energy method of Stubbs & Kim (Stubbs et al. 1995). This approach appears only to operate on data from the damaged structure. Roughly speaking, an estimate of the modal curvature is used to locate and size the damage. In fact, one might argue that there is an implicit baseline or model for these data in the assumption that the undamaged structure behaves as an Euler–Bernoulli beam. Also, the feature computed, the curvature, cannot be used without a threshold of significance, which is computed on the understanding that most of the estimated curvature data come from the rest of the structure, which is undamaged.

Another method, based on time-reversal acoustics (Sohn et al. 2005), also makes the claim that no baseline data are needed. However, this method does make the assumption that the undamaged structure responds in a linear, elastic manner and will exhibit the time-reversal property, even though this assumption may not be experimentally verified. The baseline here is the ideal of an elastic solid. In fact, many methods that claim they do not require baseline data in actuality use a numerical model to generate the equivalent of baseline data. Stubbs & Kim (1996) and Sikorsky (2000) use the assumption that the undamaged structure exhibits linear response characteristics, or assume that the damage-sensitive features associated with the baseline structure are time-invariant and are not affected by operational and environmental variability.

Another approach that appears not to have a baseline, but explicitly requires a comparison between system states, is the nonlinear acoustic approach described by Donskoy et al. (2006) and Ryles et al. (2006). The idea is to instrument a plate, for example, with two actuators and a receiver. In a first test, actuator 1 is used to launch an ultrasonic (transient) Lamb wave at a high frequency, fh, and record the response at the receiver. This response is then converted to a spectrum, which can of course be formed by averaging. Now, if the system is linear, the spectrum contains a single line at fh as shown in figure 4a. If the system is nonlinear, as a result of damage, the spectrum as shown may still appear to have a single line, as the other response components will be at the frequencies 2fh, 3fh, …, which are located in the higher frequency portion of the spectrum. Sampling parameters and anti-aliasing filters may not allow this portion of the spectrum to be observed. If the exercise is now repeated with the second actuator exciting with a low-frequency harmonic signal at fl and the exercise of forming the spectrum is repeated, the nonlinearity will result in the appearance of sidebands at the frequencies fh±fl as shown in figure 4b. The appearance of the sidebands in the second test is the indicator of nonlinearity and hence damage. An index of damage extent can then be formed by recording the height of the sidebands (Donskoy et al. 2006) or the spread of the sidebands (Ryles et al. 2006). Since the harmonics in the test corresponding to figure 4a are not observed, the measurement is a surrogate for the linear system and hence can be regarded as the baseline. Note that, in principle, one could infer the presence of nonlinearity from the observation of harmonics in the single test (figure 4a). However, such observations are usually less practical because they would require confidence that the harmonics are not being generated by the instrumentation, which may then require higher-cost data acquisition equipment. If one intends to look for harmonics, a practical approach to addressing the issue of nonlinearity of the instrumentation can be found in Brotherhood et al. (2003).
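The sideband mechanism is easy to reproduce numerically. The sketch below idealizes the damaged response as an amplitude modulation of the high-frequency probe by the low-frequency pump (a crude stand-in for a breathing crack); the sampling rate, frequencies and modulation depth are arbitrary choices for illustration only.

```python
import numpy as np

fs = 100_000                      # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)     # 1 s of data -> 1 Hz frequency resolution
fh, fl = 20_000.0, 500.0          # high-frequency probe and low-frequency pump

# Undamaged (linear) response: the probe passes through unmodulated.
y_linear = np.sin(2 * np.pi * fh * t)

# Damaged response idealized as amplitude modulation; beta sets the strength.
beta = 0.05
y_damaged = (1 + beta * np.sin(2 * np.pi * fl * t)) * np.sin(2 * np.pi * fh * t)

def line_magnitude(y, target):
    """Magnitude of the spectral line nearest to the target frequency."""
    Y = np.abs(np.fft.rfft(y)) / len(y)
    f = np.fft.rfftfreq(len(y), 1 / fs)
    return Y[np.argmin(np.abs(f - target))]

for label, y in (("linear", y_linear), ("damaged", y_damaged)):
    sidebands = [line_magnitude(y, f0) for f0 in (fh - fl, fh, fh + fl)]
    print(label, [f"{s:.4f}" for s in sidebands])
# Only the damaged case shows non-zero lines at fh - fl and fh + fl.
```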

Figure 4

(a) Response to excitation at high frequency fh. (b) Response to simultaneous excitations at high fh and low fl frequencies.

The fact that damage detection algorithms require a comparison of system states is at the root of one of the main problems in SHM. If the normal condition or baseline state changes as a result of environmental or operational variations, then the application of a novelty detection algorithm may yield a false-positive indication of damage. This issue will be discussed in more detail later when axiom IV is considered.

4. Axiom III

The statistical model development portion of SHM is concerned with the implementation of the algorithms that operate on the extracted damage-sensitive features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories. When data are available from both the undamaged and damaged structures, the statistical pattern recognition algorithms fall into the general classification referred to as supervised learning. Group classification and regression analysis are categories of supervised learning algorithms and are generally associated with either discrete or continuous classification, respectively. Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure. Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications. All these algorithms analyse statistical distributions of the measured or derived features to enhance the damage detection process.

As previously discussed, the damage state of a system can be described as a four-step process that answers the following questions.

  1. Is there damage in the system (existence)?

  2. Where is the damage in the system (location)?

  3. What kind of damage is present (type)?

  4. How severe is the damage (severity)?

Answers to these questions in the order presented represent increasing knowledge of the damage state. When applied in an unsupervised learning mode, statistical models can typically be used to answer questions regarding the existence and location of damage. As an example, if a damage-sensitive feature extracted from measured system response data exceeds some predetermined threshold, one can conclude that damage has occurred. This conclusion must also rely on the knowledge that the change in the feature has not been caused by operational or environmental variability. Many approaches to damage detection in rotating machinery, such as those that examine changes in the kurtosis values of the acceleration amplitude response to identify bearing damage, are based on such outlier analysis (Worden et al. 1999). Similarly, changes in features derived from relative information obtained from an array of sensors can be used to locate the damage, as is done in many wave propagation approaches to SHM (Pierce 2001). In general, these statistical procedures cannot distinguish between possible damage types or the severity of damage without additional information.
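As an aside, the kurtosis-based bearing diagnostic mentioned above is simple to sketch. The fragment below is illustrative only: the 'fault' is modelled as periodic impulsive spikes added to broadband noise, and all signal parameters are invented for the example.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
n = 50_000

# Healthy condition: broadband, roughly Gaussian machine noise
# (Fisher kurtosis close to 0).
healthy = rng.standard_normal(n)

# Faulty condition: periodic impacts from a (hypothetical) spalled race
# superimposed on the same noise floor.
faulty = rng.standard_normal(n)
faulty[::1000] += 8.0

print(f"healthy kurtosis: {kurtosis(healthy):5.2f}")  # ~0
print(f"faulty  kurtosis: {kurtosis(faulty):5.2f}")   # clearly positive: outlier
```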

When applied in a supervised learning mode and coupled with analytical models or data obtained from the structure when known types and levels of damage are present, the statistical procedures can, in theory, be used to determine the type of damage and the extent of damage. The previously mentioned FE model updating approaches to SHM can provide an estimate of damage existence, location and associated stiffness reduction (extent). However, these approaches are typically limited to cases where the structure can be modelled as a linear system before and after damage. Also, these procedures do not usually identify the type of damage present, but instead just assume that the damage produces a local reduction in stiffness at the element level. In the case of rotating machinery, large databases can be obtained from nominally identical pieces of equipment that have been run to some threshold condition level or, in extreme cases, to failure, where the type and extent of damage are assessed through some type of equipment autopsy. These data can be used to build group classification and regression models that can assess the type of damage and its extent, respectively.

In the example discussed in §3 and summarized by figures 2 and 3, the novelty index was a monotonic function of the damage size and the reader might suppose, therefore, that an unsupervised approach may lead to higher levels of damage identification. Unfortunately, this is not the case and counterexamples are not difficult to obtain; the next case study, of damage detection in an aircraft wingbox, provides one.

The object of the investigation was to detect the damage in an experimentally simulated aircraft wingbox as shown schematically in figure 5. Damage was induced in the form of a sawcut in one of the stringers at the location shown. The sawcut was increased from an initial depth of 0.25 mm to 2.25 mm in steps of 0.25 mm, giving nine damage severities. The details of the experiment can be found in Worden et al. (2003). Essentially, the structure was excited with Gaussian random noise at a point on the undersurface of the top skin and the acceleration responses were recorded from accelerometers on the top surface above the two ends of the damaged stringer. These responses were used to form the transmissibility spectrum that was then used to construct a feature for novelty detection.

Figure 5

Experimental wingbox used for novelty detection study.

In order to accomplish the novelty detection, a set of 50 spectral lines spanning a single peak in the transmissibility was selected. This peak was chosen because it showed systematic variation as the damage severity increased (figure 6). The 50 lines gave a 50-dimensional feature that could be used for outlier detection as discussed earlier. When the Mahalanobis distance was computed for the undamaged condition and the nine damaged conditions, the results for 10 samples corresponding to each damage case were obtained as shown in figure 7.

Figure 6

Transmissibility peak chosen as a damage-sensitive feature.

Figure 7

Novelty index for damage detection in aircraft wingbox. Points are in groups of 10 in order of increasing damage severity.

It is clear from figure 7 that the novelty index in this case is not a monotonic function of the damage severity and therefore cannot be used to infer severity from the measured features. This result offers direct verification of this axiom. In this case, it is actually fairly simple to explain why a monotonic function is not obtained. This non-monotonic change in the feature with damage level results from the fact that at higher levels of damage the transmissibility peak actually shifts out of the pattern window. When the resonance shifts, there are two sources of novelty, one is the displaced peak and the other is the missing peak in the window corresponding to normal condition. Once the resonance has moved from the window, there is only the latter source of novelty and the Mahalanobis distance comes down as a result.
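The mechanism is readily reproduced in simulation. In the sketch below (synthetic data, not the wingbox measurements), an idealized resonance peak is shifted progressively through and then out of a 50-line feature window; the novelty index rises while both the displaced and the missing peak contribute, then falls once the peak has left the window.

```python
import numpy as np

rng = np.random.default_rng(2)
lines = np.arange(50)

def peak(centre, width=3.0):
    """Idealized resonance peak sampled on a 50-line feature window."""
    return np.exp(-0.5 * ((lines - centre) / width) ** 2)

# Normal condition: peak centred at line 25, plus small measurement noise.
train = np.array([peak(25.0) + 0.01 * rng.standard_normal(50) for _ in range(1000)])
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

# 'Damage' shifts the resonance. Once the peak passes line ~50, the only
# remaining novelty is the missing peak, so the index comes back down.
for shift in (0, 5, 10, 20, 30, 40, 60):
    x = peak(25.0 + shift) + 0.01 * rng.standard_normal(50)
    d = x - mean
    print(f"shift {shift:2d}: Mahalanobis^2 = {d @ cov_inv @ d:12.1f}")
```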

5. Axiom IVa

Sensors measure the response of a system to its operational and environmental input. Therefore, there is nothing surprising about the fact that sensors cannot directly measure damage. In a more basic context, it is similarly impossible to measure stress. The solution is to measure a quantity, the strain, from which one can infer the stress. (In fact, things are a little more indirect than this. The important point is that the sensor yields a value linearly proportional to the physical quantity of interest. Knowledge of the material properties then allows one to infer the stress from the strain measure.) In other words, the stress $\sigma$ is a known function of the strain $\epsilon$,

$$\sigma = f(\epsilon). \qquad (5.1)$$

In this case, the function f is known from observations of basic physics and is particularly simple. The situation is a little more complicated for damage. Suppose, for the sake of simplicity, that the damage state of a given system is captured by a scalar D; the first objective of damage identification is to measure some quantity, $m$, usually vectorial (or alternatively, multivariate), which is a function of the damage state,

$$m = g(D). \qquad (5.2)$$

The main difficulty for SHM is that the function g is generally not known from basic physics and must usually be learned from the data. The data in question may often be of high dimensionality, such as a computed spectrum or a sampled wave profile.

The main problem associated with the machine learning or pattern recognition techniques used to learn the function in equation (5.2) is their difficulty in dealing with data vectors of high dimensionality. This limitation is sometimes termed the curse of dimensionality (a phrase attributed to Richard Bellman). If one considers methods depending on the availability of training data, i.e. examples of the measurement vectors to be analysed or classified, then the curse is simply that, in order to obtain accurate diagnostics, the amount of training data required theoretically grows explosively with the dimension of the patterns (Silverman 1986).

From a pragmatic point of view, there are two solutions to the problem. The first is to obtain adequate training sets. Unfortunately, this will not be possible in many engineering situations, owing to the limitations on the size and expense of testing programmes. The second approach is to reduce the dimension of the data to a point where the available data are sufficient. However, there is a vital caveat that the reduction or compression of the data must not remove the influence of the damage. (Also, one should be aware that if the high-dimensional features are insensitive to damage, no amount of dimension reduction will help.) This caveat means that the feature extraction must be tailored to the problem. One example of good feature selection for damage identification in a gearbox would be to select only the lines from a spectrum that are at multiples of the meshing frequency, as it is known that these lines are the most strongly affected by damage (Mitchell 1993); a sketch of this selection is given below. This approach is feature selection on the basis of engineering judgment. More principled approaches to dimension reduction may be pursued, but care should be taken: if one uses principal component analysis (PCA), for example, one certainly obtains a reduced-dimension feature; however, this vector is obtained using a criterion that may not preserve the information from damage.
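The gearbox feature selection mentioned above can be sketched as follows; the spectrum, frequency axis and meshing frequency are hypothetical placeholders.

```python
import numpy as np

def meshing_line_feature(spectrum, freqs, mesh_freq, n_harmonics=5):
    """Engineering-judgment feature selection: keep only the spectral lines
    nearest to multiples of the gear meshing frequency, where gear damage is
    known to show most strongly."""
    idx = [np.argmin(np.abs(freqs - k * mesh_freq))
           for k in range(1, n_harmonics + 1)]
    return spectrum[idx]

# Hypothetical 2048-line spectrum up to 10 kHz; meshing frequency 780 Hz.
freqs = np.linspace(0.0, 10_000.0, 2048)
spectrum = np.abs(np.random.default_rng(3).standard_normal(2048))
feature = meshing_line_feature(spectrum, freqs, mesh_freq=780.0)
print(feature.shape)  # (5,): dimension reduced from 2048 to 5
```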

6. Axiom IVb

This section discusses what will be called intelligent feature extraction. The concern being addressed here is that the features derived from measured data will not only depend on the damage state, but may also depend on an environmental and/or operational variable. Temperature $\theta$ will be used here for illustrative purposes and thus equation (5.2) becomes

$$m = g(D, \theta), \qquad (6.1)$$

and the machine learning problem is complicated by the fact that one wants to learn the dependence on D despite the fact that some of the variation in the measurand is likely to be caused by $\theta$ varying. The problem of feature extraction is then to find a reduced-dimension quantity that depends on the damage, but not the temperature.

This section will illustrate the axiom using an example from damage detection using Lamb-wave propagation and outlier analysis once more. The sample is a 300 mm square CFRP plate and the experimental set-up is described in more detail by Manson (2002). Piezoceramic discs were bonded at the mid-points of two opposing sides, one acting as an emitter and the other as a receiver. The Lamb wave discussed here was launched by driving the emitter with a five-cycle tone burst at 300 kHz, which produced a wave that was predominantly the fundamental S0 mode. In this particular example, the feature was selected by Fourier transforming the time signal of the wave amplitude and down-sampling the resulting spectrum to 50 lines in the region of interest. The resulting pattern is shown in figure 8. This portion of the Fourier spectrum is referred to as a basic feature to signal the fact that it has been chosen simply because previous experiments had shown that such features were sensitive to damage. Environmental effects have not been considered yet.
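A minimal sketch of this style of basic feature extraction is given below (FFT of the received signal, restriction to a band of interest, down-sampling to 50 lines). The sampling rate, band edges and windowed tone-burst stand-in are assumptions for illustration; they are not the experimental values.

```python
import numpy as np

def basic_feature(signal, fs, f_lo, f_hi, n_lines=50):
    """Magnitude spectrum restricted to [f_lo, f_hi] and down-sampled to
    n_lines points, in the spirit of the 50-point feature described above."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = mag[(freqs >= f_lo) & (freqs <= f_hi)]
    idx = np.linspace(0, len(band) - 1, n_lines).astype(int)  # crude decimation
    return band[idx]

fs = 10e6                                  # assumed sampling rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)
# Windowed 300 kHz burst as a stand-in for the received Lamb-wave signal.
received = np.sin(2 * np.pi * 300e3 * t) * np.hanning(len(t))
feature = basic_feature(received, fs, f_lo=200e3, f_hi=400e3)
print(feature.shape)                       # (50,)
```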

Figure 8

Basic 50-point spectrum of the Lamb wave used for damage detection.

The next stage of the experiment was to investigate the effect of environmental variation. In order to obtain the required data, the composite plate was placed in an environmental chamber. Lamb-wave signals were recorded every minute. For the first 1355 signals (approx. 22.5 h), the chamber temperature was held at a constant 25°C. Note that these first 1355 signals, when shown in the temperature record in figure 9, show the ambient laboratory temperature, while the remaining points show the chamber temperature. This result may appear a little confusing, but it illustrates the extent of the ambient temperature variation, while the plate temperature was held constant. After the constant temperature phase, the chamber temperature was decreased to 10°C before being ramped to 30°C over a 3 h period, then back to 10°C again over a further 3 h. This cycle was repeated for three further cycles. At signal number 2483, the chamber was opened, a 10 mm hole was drilled in the plate directly between the two piezos and the chamber was closed. There are thus three phases to the test. Signals 1–1355 are from the undamaged panel held at a constant 25°C, signals 1356–2482 are from the undamaged panel with temperature cycling and signals 2483–2944 are from the damaged panel with temperature cycling. All the signals were converted to 50-point spectral features as discussed above.

Figure 9

Temperature variation over signal database for environmental chamber experiment.

In order to study the effect of the temperature variation on the novelty detection, the mean and covariance data were taken from a training set composed of every second signal from the range 1–1355, i.e. the constant temperature phase. The remaining signals from this range were held over to the testing set. The statistics on the training set were used to compute the Mahalanobis squared distance, equation (3.1), over the remaining patterns in the signal database, which constituted the testing set. The results are shown in figure 10.

Figure 10

Outlier analysis over testing set for environmental variation experiment.

Unsurprisingly, the vast majority of the first 677 patterns (the other half of the constant temperature set) give novelty indices below the threshold. However, although all the features from the damage set are substantially above the threshold, so too are all the features from the undamaged, temperature-cycled phase. This result is clearly undesirable. (If one were to include some of the temperature-cycled data as part of a validation set and set the threshold on the basis of this set, it would still be possible to distinguish the damage; however, this procedure would not be very robust.) Nor is this effect confined to laboratory specimens: it is shown by Farrar et al. (2000) that the natural frequencies of a bridge show greater variation as a result of the day–night temperature cycle than they do as a result of damage.

Note that the statement of the axiom includes the caveat ‘without intelligent feature extraction…’ It will be shown below that algorithms exist that can project out the environmentally sensitive component of the features while preserving the damage sensitivity. A few techniques have emerged for this purpose recently; the method shown here involves minor component analysis, but effective procedures exist based on univariate outlier statistics (Manson 2002) and factor analysis (Kullaa 2001).

For the basic features considered here, the principal component decomposition was computed for all the undamaged data including that from the temperature cycling, and it emerged that the great majority of the data variance corresponding to the temperature variation was projected onto the first 10 principal components. However, in order to be assured of eliminating these effects, only the last 10 components (the minor components) were extracted for use as an advanced feature. When the outlier analysis was carried out on the whole testing set, including the damage patterns, the results shown in figure 11 were obtained.
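A sketch of the minor-component construction is given below. The data are random placeholders with the shapes implied by the text; in the real analysis, the decomposition would be computed from the measured 50-point spectral features for all the undamaged signals.

```python
import numpy as np

def minor_components(features, n_minor=10):
    """PCA via the SVD of the centred data; rows of Vt are principal
    directions in decreasing order of variance, so the last n_minor rows
    span the minor subspace onto which temperature has little projection."""
    mean = features.mean(axis=0)
    _, _, Vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, Vt[-n_minor:]

rng = np.random.default_rng(4)
undamaged = rng.standard_normal((2482, 50))   # placeholder for signals 1-2482
mean, P = minor_components(undamaged)

# Advanced 10-point feature: projection onto the minor components.
advanced = (undamaged - mean) @ P.T
print(advanced.shape)                         # (2482, 10)
```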

Figure 11

Results of outlier analysis using advanced feature computed using minor PCA components.

The results are excellent; the new 10-point feature separates the undamaged and damaged data perfectly even when the temperature-cycled data are included.

An alternative approach to the problem posed in this section is to learn the dependence on both the damage and environment (D and θ). (Note that this learning problem is a mixed one; supervised learning is used to obtain the temperature dependency, while unsupervised learning is used to detect the damage.) This approach will not be considered here, but a discussion can be found in Worden et al. (2002). Finally, the simplest approach to the overall problem here would be to directly select features which are sensitive to damage but not to environmental and operational variations. This is not easy in general; an interesting example of this strategy in the context of wave propagation features can be found in Michaels & Michaels (2005).

7. Axiom V

Axiom I introduced the concept of the length-scales associated with damage and pointed out that defects are present in all materials, beginning at the atomistic length-scale and spanning scales where component- and system-level faults are present. In terms of time-scales, damage can accumulate incrementally over periods exceeding years, such as that associated with some types of fatigue or corrosion damage accumulation. Damage can also occur on fraction-of-a-second time-scales as a result of scheduled discrete events such as aircraft landing impacts, and of unscheduled discrete events such as blast loadings and natural phenomena hazards like earthquakes.

Axiom IVa states that a sensor cannot measure damage. Therefore, the goal of any SHM sensing system is to make the sensor reading as directly correlated with, and as sensitive to, damage as possible. At the same time, one also strives to make the sensors as independent as possible from all other sources of environmental and operational variability. To best meet these goals for the SHM sensor and data acquisition system, the following sensing system properties must be defined.

  (i) Types of data to be acquired.

  (ii) Sensor types, number and locations.

  (iii) Bandwidth, sensitivity and dynamic range.

  (iv) Data acquisition/telemetry/storage system.

  (v) Power requirements.

  (vi) Sampling intervals (continuous monitoring versus monitoring only after extreme events or at periodic intervals).

  (vii) Processor/memory requirements.

  (viii) Excitation source (active sensing).

Fundamentally, there are four factors that control the selection of hardware to address these sensor system design parameters.

  1. The length-scales on which damage is to be detected.

  2. The time-scale on which damage evolves.

  3. How varying and/or adverse operational and environmental conditions will affect the sensing system.

  4. Cost.

The development of a damage detection system for the composite wings of an unmanned aerial vehicle (UAV) is used as an example. In one case, damage is assumed to be initiated by a foreign object impact on the wing surface. Such damage is often very local in nature and may manifest itself as fibre breakage, matrix cracking, delamination in the wing skin or a debonding between the wing skin and spar, of the order of 10 cm² or less in area. Accurate characterization of the impact phenomena occurs on a micro- to millisecond time-scale, which requires the data acquisition system to have relatively high sampling rates (greater than 100 kHz). This damage may then grow to a fault after being subject to numerous fatigue cycles during many hours of subsequent flight. The time-scales associated with the damage initiation and evolution influence the sensing system properties (ii)–(vi).

Practically speaking, the UAV impact damage will not be considered a fault unless it significantly affects the operation of the aircraft. One manner in which this type of damage can affect aircraft operation is by changing the coupling of the bending and torsion modes of the wing, which, in turn, changes the flutter characteristics of the aircraft. Identifying and characterizing the local damage associated with foreign object impact may require a local and somewhat dense, active sensing system, while characterizing the influence of this damage on the UAV's flutter characteristics will require a more global sensing system (Sohn et al. 2004b). These length-scale considerations will influence sensing system properties (i)–(iii), (v), (vi) and (viii) directly, and properties (iv) and (vii) indirectly.

In summary, this example clearly demonstrates that the length- and time-scales associated with damage initiation and evolution drive many of the SHM sensing system design parameters. A priori quantification of these length- and time-scales will allow the sensing system to be designed in an efficient manner.

8. Axiom VI

Once again, this axiom is illustrated via a group of computer simulations involving outlier analysis. However, for the sake of variety, the features selected this time are from the low-frequency modal regime. The system of interest is the three-mass lumped-parameter system studied in more detail by Worden (1997). The problem is to detect a loss of stiffness between the centre mass and one of the end masses (the masses are arranged in a simple chain). The feature used here is a window of the transmissibility spectrum between the two masses. Figure 12 shows the transmissibility vector for the undamaged state together with the data corresponding to stiffness reductions of 10, 20, 30, 40 and 50%. Each pattern constitutes 50 spectral lines.

Figure 12

Transmissibility feature for normal condition data (solid line) and various damage states (dashed lines).

The object of this exercise is to investigate the threshold for damage detection as a function of the noise-to-signal ratio. This ratio is expressed as a percentage of the maximum of the transmissibility magnitude over the clean normal pattern vector. For a given noise ratio, the training and testing sets for the outlier analysis are generated as follows. The training set is composed of 1000 copies of the clean normal pattern, each corrupted by an independent 50-dimensional Gaussian noise vector with zero mean and a diagonal covariance matrix with all variances equal to the noise-to-signal ratio. The testing set is simply the clean normal pattern and the five damage patterns of increasing severity. These vectors are then corrupted with noise vectors with the same statistics as those used for the training set. Once the training set has been generated, it is used as the basis for the mean and covariance statistics used to compute the Mahalanobis squared distance, equation (3.1).
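The simulation recipe just described is easy to reproduce in outline. In the sketch below, the clean transmissibility patterns are replaced by placeholder Gaussian-shaped peaks, since only the structure of the calculation is of interest here.

```python
import numpy as np

rng = np.random.default_rng(5)

def outlier_stats(clean_normal, damage_patterns, noise_ratio, n_train=1000):
    """Noise std is the noise-to-signal ratio times the maximum of the clean
    normal pattern; training and testing sets are built as described above."""
    sigma = noise_ratio * np.max(np.abs(clean_normal))
    train = clean_normal + sigma * rng.standard_normal((n_train, clean_normal.size))
    mean = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
    stats = []
    for pattern in (clean_normal, *damage_patterns):
        d = pattern + sigma * rng.standard_normal(clean_normal.size) - mean
        stats.append(float(d @ cov_inv @ d))
    return stats

# Placeholder 50-line patterns: damage modelled as a progressive peak shift.
lines = np.arange(50)
normal = np.exp(-0.5 * ((lines - 25) / 4.0) ** 2)
damages = [np.exp(-0.5 * ((lines - 25 - s) / 4.0) ** 2) for s in (1, 2, 3, 4, 5)]

for nsr in (0.03, 0.06, 0.09):
    print(nsr, [f"{v:.0f}" for v in outlier_stats(normal, damages, nsr)])
# Higher noise shrinks the statistics for a given damage level, so a larger
# damage is needed before any fixed threshold is crossed.
```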

Figure 13 shows the outlier statistics computed over the testing set for noise-to-signal ratios of 0.03, 0.06 and 0.09. It is clear from the figure that higher noise leads to a higher level of damage before the outlier statistic crosses the 99% threshold for damage. A more complete study involving many noise ratios in the range 0–0.1 leads to figure 14, which shows the minimum detectable damage level as a function of the extent of the noise corruption. As the axiom asserts, the detection level increases monotonically (ignoring the local noise which is a result of generating independent random statistical samples). In fact, the function is remarkably linear in this particular case.

Figure 13

Outlier statistics as a function of damage level for various noise-to-signal (NS) ratios for the training and testing data.

Figure 14

Damage sensitivity as a function of pattern noise level.

In summary, one should attempt to reduce the level of noise in the measured data or the subsequently extracted features as much as possible. This can be accomplished by wavelet de-noising, analogue or digital filtering or even simple averaging.

9. Axiom VII

In the field of ultrasonic non-destructive testing, the diffraction limit is often associated with the minimum size of flaw that can be detected as a function of ultrasonic wavelength. This limit may suggest that flaws of a size comparable with half a wavelength are detectable. The diffraction limit is actually a limit to the resolution of nearby scatterers, i.e. if two scatterers are separated by more than a half-wavelength of the incident wave, they will be separable. In fact, a flaw will scatter an incident wave for wavelengths below this limit and this amount of scattering decreases with increasing wavelength. This result means that, if instrumentation is available to detect arbitrarily small evidence of scattering, i.e. arbitrary small reflection coefficients, then arbitrarily small flaws can be detected. However, as described above, scattering is always substantial when the size of the flaw is comparable with the wavelength and so it is advantageous to use small wavelengths in order to detect small flaws.

The wavelength $\lambda$ is related to the wave phase velocity $\nu$ and frequency $f$ by

$$\lambda = \frac{\nu}{f}, \qquad (9.1)$$

and from this simple relationship it is clear that, for constant velocity, the wavelength will decrease as the frequency increases, which in turn implies that the damage sensitivity will increase.
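For illustration, taking a nominal phase velocity of 5000 m s⁻¹ (a value representative of many engineering solids, used here purely for arithmetic), equation (9.1) gives a wavelength of 0.5 m at 10 kHz but only 10 mm at 500 kHz; the latter will clearly interact far more strongly with millimetre- to centimetre-scale flaws.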

Note that energy does not necessarily have to be input to the structure at these wavelengths. Nonlinear structures can have many frequency up-conversion (and down-conversion) properties that lead to structural response in bandwidths removed from input bandwidths.

Evidence for damage detection well below the diffraction limit for the size of the flaw can be found in numerous places in the literature. Two examples are cited here. Alleyne & Cawley (1992) investigated the interaction of Lamb waves with notches in plates from both an FE simulation viewpoint and by experiment. Among the conclusions of the paper was the fact that Lamb waves could be used to detect notches when the wavelength to notch depth ratio was of the order of 40. This result was true as long as the notch width was small compared with the wavelength. In support of the axiom, the study found that the sensitivity of given Lamb-wave modes to defects increased for incident waves at higher frequency–thickness products. However, the paper also found that the wavelength of the Lamb-wave modes was not the only factor affecting sensitivity. In some regimes, increases in frequency–thickness did not bring correspondingly large improvements in sensitivity. The authors observed that ‘appropriate mode selection can sometimes remove the need to go to higher frequencies where the waveform could be more complicated’.

Another interesting study, by Valle et al. (2001), concerned an FE model of the propagation of circumferential waves in a cylinder with a radial crack and noted that the guided waves could ‘detect cracks down to 300 μm—even though the wavelength of these signals is much greater than 300 μm’. The thesis that the damage sensitivity increases with excitation frequency is supported by both Alleyne & Cawley (1992) and Valle et al. (2001). However, both also offer the possibility that, for the detection of a flaw of a given size, frequencies much lower than those suggested by the diffraction limit may suffice.

The relationship between damage sensitivity and wavelength can be extended to more general types of vibration-based damage detection methods. In these applications, the wavelength of the elastic wave travelling through the material is replaced by the ‘wavelength’ of the standing wave pattern set up in the structure that is interpreted as a mode of vibration. The technical literature is replete with anecdotal evidence that such lower frequency modes are not good indicators of local damage (Doebling et al. 1996; Sohn et al. 2004a). The lower frequency global modes of the structure that have long characteristic wavelengths tend to be insensitive to local damage. For the case of civil engineering infrastructure such as suspension bridges, these mode-shape wavelengths can be of the order of hundreds of metres (Farrar et al. 2000), while flaws such as fatigue cracks that must be detected to assure safe operation of the structure are of the order of centimetres in length. Even if one allows for the fact that the cracks may have influence over a larger distance than the crack dimensions, this distance is still small compared with the mode-shape wavelengths.

The observations regarding the relationship between the characteristic wavelength and the flaw size have led researchers to explore other high-frequency active sensing approaches to SHM. These methods are based on Lamb-wave propagation (Sohn et al. 2004b), impedance measurements (Park et al. 2003), high-frequency response functions and time-reversal acoustics (Sohn et al. 2005). In these applications, the excitation frequency can be as high as several hundred kHz and the corresponding wavelengths are of the order of millimetres. However, as the frequency increases and the wavelength decreases, scattering effects (e.g. reflection of elastic waves off grain boundaries and other material interfaces) will eventually increase the noise in the measurements and place limits on the sensitivity of the damage detection process. Optimal frequency ranges for damage detection can be determined based on the wavelength of the standing wave pattern and the condition that the damage is located in areas of high curvature associated with the deformation of the wave pattern.

A final example is presented to illustrate how sensitivity to damage increases with increasing frequency. The structure examined was a simple moment-resisting portal-frame structure, shown in figure 15. The structure consists of aluminium members connected using steel angle brackets and bolts with a simulated rigid base; its overall dimensions are approximately 30 cm × 56 cm × 5 cm. A total of six bolts were used to assemble this structure. A piezoelectric patch was mounted on the top beam of this symmetric structure to measure the impedance signals in the frequency ranges of 2–13 and 130–150 kHz. Baseline measurements were first made under the damage-free condition. Two damage states were then sequentially introduced at two different locations by loosening the bolts from 18 Nm to hand tight; the damage condition considered here (Case II) is that in which both corner bolts have been loosened. After each damage state was introduced, the impedance signals were again recorded from the patch.

Figure 15

Moment-resisting portal-frame test structure.

The impedance measurements (before and after damage) for Case II are shown in figures 16 and 17. These figures show baseline and damaged signals from the structure. The peaks in the impedance measurements correspond to the resonances of the structure. In the frequency range of 130–150 kHz, it is easy to see qualitatively that the damaged signals are quite different, with the appearance of new peaks and shifts across the frequency range examined. With increasing levels of damage, the impedance variation also becomes more noticeable. In the frequency range of 2–13 kHz, one observes only relatively small changes, a slight shift of the resonance, seemingly within the range attributable to temperature or other variation. This simple example further demonstrates the varying sensitivity of SHM techniques in different frequency ranges.

Figure 16

High-frequency impedance measurement.

Figure 17

Low-frequency impedance measurement.

Finally, there will generally be an increased energy requirement to maintain a comparable excitation amplitude at higher frequencies, because attenuation increases with frequency. As a result, higher frequency excitation procedures are typically associated with more local damage detection procedures.

10. Summary

In this paper, the authors have attempted to coalesce information that has been reported in the literature into axioms that form a set of basic principles for SHM. It is reiterated that the term axiom is being used as a ‘fundamental truth’ as opposed to the mathematical definition where axioms are sufficient to generate the whole theory. In all cases, the stated axioms have been supported by examples reported in the technical literature. Most of these examples include experimental studies that further support the respective axioms. In cases where one could argue that there are studies that may be interpreted as contradicting a particular axiom, such as axiom II, the authors have demonstrated that often these contradictions are actually related to terminology used in a particular study. When such terminology is put on a common footing, the axiom is shown to hold true.

The first objective of this paper was to give new researchers in the field a starting point that alleviates the need to review the vast amounts of SHM literature. Although no claim is made to an exhaustive literature review, the authors have strived to find examples reported in the literature that contradict the stated axioms. To date, none has been found, including among those reported in two fairly significant reviews in which the authors have participated (Doebling et al. 1996; Sohn et al. 2004a). Therefore, the authors believe this goal has been met and that these axioms do, in fact, represent a starting point for future research in the SHM field.

Second, the authors hope to stimulate discussion and thought within the community regarding these axioms. Hopefully, such technical exchanges will generate research ideas leading to examples or theorems that either prove or disprove the validity of these axioms. At this point, the authors can only state that when the subject of these axioms has been brought up in informal discussions with their professional colleagues, lively discussions have resulted. As yet, no contradictions to these axioms have arisen from these discussions. Time will tell if this paper meets this stated objective with the SHM community as a whole.

As a final caveat, the authors make no claim that the axioms presented in this paper are complete. In particular, if SHM is regarded as a hierarchical system encompassing detection, location, severity assessment and prognosis, then only the first three levels are addressed and the difficult area of prognosis requires attention. It is hoped that other researchers in the field will periodically be adding to this set of axioms in an effort to further establish the foundations for SHM, formalize this multidisciplinary field and better define its relationship to the more traditional and well-established non-destructive evaluation field. Finally, the authors believe that the current axioms and the anticipated future additions will facilitate the transition of SHM from an engineering research topic to accepted engineering practice through the definition and subsequent adoption by the SHM community of these ‘fundamental truths’.


The authors would like to thank Dr Janice Dulieu-Barton of University of Southampton for helping with the discussion of axiom I, and Thomas Monnier and Phillipe Guy of INSA, Lyon for providing the composite specimen and transducers used to illustrate axiom II. They would also like to thank Dr Gareth Pierce of University of Strathclyde for conducting the experiment that generated the environmental variation data used to illustrate axiom IVb and Dr Paul Wilcox of University of Bristol for some helpful discussion on axiom VII.


    • Received January 2, 2007.
    • Accepted February 21, 2007.

