Exploring isospectral spring–mass systems with firefly algorithm

Rajdeep Dutta, Ranjan Ganguli, V. Mani

Abstract

This paper investigates in-line spring–mass systems (An), fixed at one end and free at the other, with n degrees of freedom (d.f.). The objective is to find feasible in-line systems (Bn) that are isospectral to a given system. The spring–mass systems An and Bn are represented by Jacobi matrices. An error function is developed from the Jacobi matrices An and Bn, and the problem of finding isospectral systems is posed as an optimization problem that minimizes this error function. The approach for creating isospectral systems uses the fact that the traces of two isospectral Jacobi matrices An and Bn must be identical; accordingly, the diagonal elements of the given Jacobi matrix An are modified to create the isospectral systems. The optimization problem is solved using the firefly algorithm augmented by a local search procedure. Numerical results and the resulting isospectral systems are shown for 4 d.f. and 10 d.f. systems.

1. Introduction

Vibrating systems that have identical natural frequencies are known as isospectral systems. In this work, we investigate chain-like spring–mass systems, as shown in figure 1. Such an in-line system may be considered either as a real system made up of rigid masses and massless springs or as the discrete approximation of a continuous system. For example, a fixed-free chain-like spring–mass system may be thought of as a discrete finite-degree-of-freedom (d.f.) approximation of a cantilever beam. Thus, isospectral spring–mass systems can yield insights into many other isospectral vibratory systems. Isospectral systems are important in mechanics because they yield alternative usable designs.

Figure 1.

Schematic of n-degree of freedom (d.f.) spring–mass system.

The existence of isospectral systems shows that a system cannot be uniquely identified from its spectrum; for a given spectrum, the inverse problem does not have a unique solution. Thus, attempts to reconstruct a unique system from the spectrum alone are not likely to succeed. Isospectral systems are also important for damage detection. In structural dynamic systems, damage causes a change in the stiffness or mass properties, and the existence of isospectral systems shows that unique identification of damage from the given spectra is difficult. Finally, isospectral systems allow the designer to create structural dynamic systems with desirable mass and stiffness distributions that nevertheless share the same spectra.

Pioneering research on isospectral vibrating systems was done by Gladwell (1995, 2002), Gladwell & Morassi (1995, 2010) and Gottlieb (1987, 1988). Gladwell (1995) described several ways to form spring–mass systems isospectral to a given system. A pair of isospectral fixed-free systems was obtained by the interchange Embedded Image. This pair was also pointed out by Ram & Elhay (1995). Gladwell (1995) also showed how the indeterminacy associated with the reduction to standard form can be used to get isospectral systems.

The construction of isospectral systems requires an understanding of inverse problems (Boley & Golub 1987; Gladwell & Zhu 1992; Nylen & Uhlig 1997; Chu 1998). Inverse problems (Gladwell 1989, 1999) concern reconstruction of a system from frequency response data and are much more complicated than the corresponding direct problem. There are several applications of inverse problems over wide areas in science and engineering including vibration theory.

The reconstruction of a spring–mass system can be performed by using the interlaced spectrum corresponding to a modified system with different end conditions (Gladwell 1989; Egana & Soto 2000). This requires knowledge of the eigenvalues of the original system and of the eigenvalues of the modified system whose last mass is fixed. It is known that a simple discrete spring–mass system can be reconstructed uniquely from the poles and zeros of the frequency response function corresponding to sinusoidal forcing at the end. Moreover, Gladwell & Willms (1988) showed that the reconstruction is unique even if the forcing is applied at an interior point (not a node of any eigenmode).

A significant step in system construction involves the reconstruction of a symmetric tridiagonal matrix, i.e. a Jacobi matrix. The reconstruction of the Jacobi matrix from spectral data needs numerical techniques or standard tridiagonalization methods for symmetric matrices. de Boor & Golub (1978) used the algorithm devised by Forsythe (1957) to reconstruct a scalar Jacobi matrix. Gladwell & Willms (1988) and Soto (1989) used the Lanczos algorithm and the Householder transformation, respectively, for tridiagonalization. Egana & Soto (2000) used a modified version of the fast orthogonal reduction algorithm to reduce the computational cost. Once the Jacobi matrix has been found, the remaining part of system construction is to extract the masses and stiffnesses from it. Reconstructing mass–spring systems from the Jacobi matrix can be advantageous (Nylen & Uhlig 1997): a single Jacobi matrix leads to a family of mass–spring reconstructions. However, the reconstructed spring–mass system is not unique, and scaling factors must be introduced to make the solution unique; Gladwell (1995), for example, used such a scaling factor.

Despite considerable research, the exploration of isospectral vibrating systems remains an open challenge. Researchers have been able to obtain some isospectral systems analytically; however, it may be possible to obtain other isospectral systems using a numerical approach. In other words, the problem can be framed as an optimization problem whose objective function may have multiple optima. Numerical approaches can thus complement linear algebraic approaches.

In this work, we pose the problem of finding isospectral in-line spring–mass systems as an optimization problem whose objective function is a non-negative error function. The problem then becomes similar to finding multiple solutions of a least-squares problem. Previously, researchers (Chu 1995; Chen & Chu 1996) proposed several numerical techniques for the least-squares solution of inverse eigenvalue problems. An iterative method, called lift and projection (LP), was proposed by Chu (1995) to construct a Hermitian matrix from its diagonal elements and eigenvalues. Chu (1995) also introduced the projected gradient method, a continuous approach to the inverse eigenvalue problem. Later, Chen & Chu (1996) proposed an efficient hybrid method, LP-Newton, which first applies the LP method to attain a low order of accuracy and then switches to a faster, locally convergent Newton method to reach a high order of accuracy.

Recently, stochastic optimization algorithms (Fouskakis & Draper 2002; Ji & Klinowski 2006) have become very popular because of their advanced search mechanisms. These algorithms search from a population of solution points instead of from a single solution point. They need fitness function information rather than gradient-based knowledge and use probabilistic transition rules in their search for the optimal solution. In this paper, we use the firefly algorithm (FA) augmented by local search (LS) to minimize the objective function. FA (Yang 2008, 2009, 2010; Lukasik & Zak 2009) is a nature-inspired computational algorithm motivated by the sociobiology of fireflies. It has been shown by Yang (2009) that FA can capture multiple optimum points of a multi-modal function.

In this paper, we numerically find spring–mass systems that are isospectral to a given system. The organization of this paper is as follows. In §2, the mathematical model of the system is presented. In §3, the optimization criterion and error function are formulated, and a 2 d.f. example is shown to illustrate the formulation. In §4, the firefly and LS algorithms are described. In §5, we present numerical results and the resulting isospectral systems for the 4 d.f. and 10 d.f. cases. Section 6 presents the conclusions.

2. Mathematical modelling of spring–mass system

The governing equation for an undamped spring–mass system with n d.f. is given as

Kn ui = λi Mn ui,   (2.1)

where the matrices Mn and Kn are the mass matrix and the stiffness matrix of order n, respectively. In the above equation, λi are the eigenvalues and ui are the eigenvectors, ∀i=1,2,…,n. The eigenvalues are related to the frequency spectrum and the eigenvectors represent the vibrating modes.

Consider a physical spring–mass system consisting of n masses mi>0 connected by n springs of stiffness ki>0 with the fixed-free condition (kn+1=0), as shown in figure 1. The mass matrix Mn and the stiffness matrix Kn of the system are given by

Mn = diag(m1, m2, …, mn)   (2.2)

and

Kn =
[ k1+k2   −k2                           ]
[ −k2     k2+k3   −k3                   ]
[           ⋱       ⋱        ⋱          ]   (2.3)
[                 −kn−1   kn−1+kn   −kn ]
[                           −kn      kn ]

Here, Mn and Kn are symmetric positive definite matrices; moreover, Mn (mass matrix) is diagonal and Kn (stiffness matrix) is tridiagonal. The positive definite mass matrix can be factored as Mn = Mn1/2 Mn1/2. From this factorization, the generalized eigenvalue equation (2.1) reduces to the standard form

An vi = λi vi,   (2.4)

where An = Mn−1/2 Kn Mn−1/2 and vi = Mn1/2 ui. The eigenvalues λi in equation (2.4) are related to the natural frequencies ωi = √λi. The matrix An is symmetric tridiagonal positive definite with the same eigenvalues as the system (Kn,Mn) in equation (2.1). An is called the Jacobi matrix and has the tridiagonal form

An =
[ a1    −b1                 ]
[ −b1    a2    −b2          ]
[         ⋱      ⋱     ⋱    ]   (2.5)
[              −bn−1    an  ]

where

ai = (ki + ki+1)/mi (i=1,2,…,n)  and  bi = ki+1/√(mi mi+1) (i=1,2,…,n−1).   (2.6)

The eigenvalues of the given matrix An are λi ∀ i=1,2,…,n. Now, let us consider another Jacobi matrix Bn of order n having the same tridiagonal structure,

Bn =
[ c1    −d1                 ]
[ −d1    c2    −d2          ]
[         ⋱      ⋱     ⋱    ]   (2.7)
[              −dn−1    cn  ]

where ci>0 (i=1,2,…,n) and di>0 (i=1,2,…,n−1). Let the eigenvalues of Bn be μi ∀ i=1,2,…,n. The Jacobi matrix Bn is called isospectral to the given Jacobi matrix An if both matrices have the same set of eigenvalues, i.e. μi=λi ∀ i=1,2,…,n. In this work, we are interested in finding the isospectral matrix (or matrices) Bn for a given An. This problem is posed as an optimization problem.
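As a concrete check of the reduction above, the Jacobi matrix of equations (2.5) and (2.6) can be assembled directly from the masses and stiffnesses. The following sketch is not from the paper; the function name and the use of NumPy are our choices.

```python
import numpy as np

def jacobi_matrix(m, k):
    """Form A_n = M^(-1/2) K M^(-1/2) for a fixed-free in-line chain.

    m[i] is the i-th mass and k[i] the stiffness of the spring between
    mass i and mass i-1 (the wall for i = 0); k_{n+1} = 0 (free end).
    """
    m, k = np.asarray(m, float), np.asarray(k, float)
    n = len(m)
    K = np.zeros((n, n))
    for i in range(n):
        # diagonal of K: k_i + k_{i+1}, with k_{n+1} = 0
        K[i, i] = k[i] + (k[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            # off-diagonal of K: -k_{i+1}
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    Minv_sqrt = np.diag(1.0 / np.sqrt(m))
    return Minv_sqrt @ K @ Minv_sqrt
```

For the 2 d.f. system of §3a (m = 6.25, 4 and k = 92.5, 20), this reproduces the Jacobi matrix and the eigenvalues 3.8678 and 19.1322 quoted there.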

3. Formulation of the optimization problem

Let pn and qn be two vectors of size (n×1) containing the eigenvalues of the matrices An and Bn, respectively. The eigenvalues are arranged in ascending order, λ1≤λ2≤⋯≤λn in pn and μ1≤μ2≤⋯≤μn in qn. We need an objective function for the optimization problem. We construct the vector en = pn − qn and let Fn = ∥en∥2² (∥⋅∥2 denotes the 2-norm). We call Fn the error function; it is always non-negative for symmetric positive definite Jacobi matrices An and Bn. The objective function in the optimization problem is the error function

Fn = Σi=1..n (λi − μi)².   (3.1)

The system Bn is isospectral to the given system An if μi=λi for all i; in other words, Bn is isospectral to An if the error function Fn=0.

Hence, in order to find the isospectral matrix (or matrices) Bn for a given An, we minimize this error function (objective function). Thus, the optimization problem is given by

minimize Fn over {ci, di}, subject to 0 < ci ≤ C (i=1,…,n) and 0 < di ≤ D (i=1,…,n−1),   (3.2)

where C and D are the upper bounds on the diagonal and super-diagonal (or sub-diagonal) elements of the isospectral matrix Bn. Once we have the values of ci and di, we can form the Bn matrix and obtain its eigenvalues. The values of C and D are related to the availability of real springs and masses. The number of decision variables in the above optimization problem is n+(n−1)=(2n−1): n diagonal elements and (n−1) super-diagonal elements.

In order to reduce the computational effort in the optimization problem, we use a result from linear algebra: Trace(Bn)=Trace(An) when matrix Bn is isospectral to matrix An. We use this result as a constraint in the optimization problem:

c1 + c2 + ⋯ + cn = a1 + a2 + ⋯ + an.   (3.3)

We now show a numerical example to illustrate this optimization problem.
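The error function (3.1) and the trace constraint (3.3) are both cheap to evaluate numerically. A minimal sketch, assuming NumPy (the function names are ours):

```python
import numpy as np

def error_function(A, B):
    """F_n = || p_n - q_n ||_2^2: squared 2-norm of the difference
    between the ascending-ordered spectra of A and B (equation (3.1))."""
    p = np.sort(np.linalg.eigvalsh(A))
    q = np.sort(np.linalg.eigvalsh(B))
    return float(np.sum((p - q) ** 2))

def trace_constraint_ok(A, B, tol=1e-9):
    """Necessary condition (3.3): isospectral matrices share their trace."""
    return abs(np.trace(A) - np.trace(B)) < tol
```

Note that the trace condition is necessary but not sufficient: two Jacobi matrices can share their trace yet have different spectra, in which case Fn > 0.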

(a) A motivating example

Let us consider a 2 d.f. system having masses m1=6.25 kg, m2=4 kg and stiffness k1=92.5 kN m−1, k2=20 kN m−1.

The mass matrix M2 of the system is given by

M2 =
[ 6.25   0 ]
[ 0      4 ]   (3.4)

and the stiffness matrix K2 is given by

K2 =
[ 112.5   −20 ]
[ −20      20 ]   (3.5)

Using the above mass and stiffness matrices, the Jacobi matrix for this system is obtained as

A2 =
[ 18   −4 ]
[ −4    5 ]

The eigenvalues of this system (A2) are λ1=3.8678 and λ2=19.1322.

Consider another Jacobi matrix B2 with the structure

B2 =
[ c1    −d1 ]
[ −d1    c2 ]

We assign trial values to the elements of B2; the resulting Jacobi matrix has eigenvalues μ1=6.1690 and μ2=17.8310. The error function value becomes F2=(λ1−μ1)²+(λ2−μ2)²=6.9886≠0. Hence, this trial Jacobi matrix is not isospectral to the given one.

For the matrix B2 to be isospectral to the given matrix A2, the eigenvalues of B2 must be the same as the eigenvalues of A2. The constraint on the B2 matrix is

  • — Sum of the diagonal elements (trace) of B2 should be equal to sum of the diagonal elements (trace) of A2; i.e. c1+c2=23.

Our interest is to find an isospectral B2 matrix. Let us consider c1=a1−ϵ and c2=a2+ϵ, i.e. c1=18−ϵ and c2=5+ϵ. This satisfies the constraint mentioned above. We still need to obtain the value of d1. Isospectral matrices also have the same determinant, which gives

c1c2 − d1² = det(A2), i.e. (18−ϵ)(5+ϵ) − d1² = 74.   (3.6)

From the above equation, we obtain

d1 = √((18−ϵ)(5+ϵ) − 74).   (3.7)

Now, we assume ϵ=2 and obtain d1=6.1644, with c1=16 and c2=7. This corresponds to an isospectral Jacobi matrix B2′ given by

B2′ =
[ 16        −6.1644 ]
[ −6.1644    7      ]

where m1′,m2′ are the masses and k1′,k2′ are the stiffnesses of this isospectral system. The eigenvalues of B2′ are μ1=3.8678 and μ2=19.1322; hence, the error (objective) function value becomes F2=0. The values of m1′,m2′,k1′ and k2′ are obtained as follows. From equation (2.6), we know that

c2 = k2′/m2′,   (3.8)

d1 = k2′/√(m1′ m2′)   (3.9)

and

c1 = (k1′ + k2′)/m1′.   (3.10)

If we choose m2′=1, then from equation (3.8) we get k2′=7. Using this m2′ and k2′ in equation (3.9), we obtain m1′=1.2895. Now we know m1′ and k2′, and from equation (3.10) we obtain k1′=13.6316. The mass matrix M2′ and stiffness matrix K2′ for this isospectral system are

M2′ = diag(1.2895, 1)  and  K2′ =
[ 20.6316   −7 ]
[ −7         7 ]   (3.11)

This system is obtained using ϵ=2 and m2′=1 kg. In table 1, we present ten 2 d.f. spring–mass systems isospectral to the given system (m1=6.25 kg, m2=4 kg, k1=92.5 kN m−1, k2=20 kN m−1), obtained for different values of ϵ with m2′=1 kg. A different value of m2′ would yield additional isospectral systems.
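The whole 2 d.f. construction above (equations (3.7)–(3.10)) can be verified numerically. A sketch, assuming NumPy; the variable names are ours:

```python
import numpy as np

eps = 2.0                        # modification parameter, as in the text
a1, a2, detA = 18.0, 5.0, 74.0   # diagonal and determinant of the given A2
c1, c2 = a1 - eps, a2 + eps      # trace-preserving modification
d1 = np.sqrt(c1 * c2 - detA)     # equation (3.7): equal determinants

B2 = np.array([[c1, -d1], [-d1, c2]])

# recover the physical system, using the scaling m2' = 1
m2p = 1.0
k2p = c2 * m2p                   # equation (3.8): c2 = k2'/m2'
m1p = (k2p / d1) ** 2 / m2p      # equation (3.9): d1 = k2'/sqrt(m1' m2')
k1p = c1 * m1p - k2p             # equation (3.10): c1 = (k1'+k2')/m1'
```

Running this reproduces the values quoted above: d1 = 6.1644, m1′ = 1.2895 and k1′ = 13.6316, and the eigenvalues of B2 match those of A2.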

Table 1.

Ten 2 d.f. spring–mass systems isospectral to a given system.

For this 2 d.f. system, the isospectral systems can be described pictorially as shown in figure 2. In this figure, the x-axis represents the value of c1; the value of c2 is obtained as (23−c1). The y-axis represents the value of d1. The figure shows the variation of the sub-diagonal element d1 with the diagonal elements (c1,c2) of the Jacobi matrix B2. Each point on the curve PQR in figure 2 refers to a 2 d.f. spring–mass system that is isospectral to the given system A2. The isospectral system corresponding to ϵ=2 and m2′=1 is shown by the point Q. The values of d1 for all values of ϵ are shown in this figure.

Figure 2.

Relation between sub-diagonal and diagonal elements.

We also see from figure 2 that d1 is real only if the values of c1,c2 lie within the boundary points P(3.8678) and R(19.1322). The boundary points P and R are the eigenvalues of the system A2. Hence, the value of ϵ should be selected so that the diagonal elements lie within the eigenvalue range (between the minimum and maximum eigenvalues). The number of decision variables in this 2 d.f. optimization problem is three: two diagonal elements c1 and c2, and one super-diagonal element d1. By selecting the value of ϵ, the number of decision variables reduces to one (d1). Owing to this fact, we are able to obtain all the isospectral systems using equation (3.7).

For a general n-d.f. optimization problem, the number of decision variables is n+(n−1)=(2n−1): n diagonal elements and n−1 super-diagonal elements. Here too, selecting the values of ϵ reduces the number of decision variables to n−1 (the sub-diagonal elements). For n>2, however, no closed-form expression like equation (3.7) is available, so we need an optimization method to obtain the isospectral system Bn for a given system An. The objective function is the error function given in equation (3.2); it measures the squared error between the eigenvalues of the given Jacobi matrix An and those of a candidate Jacobi matrix Bn. The elements of a Jacobi matrix are related to the mass and stiffness values of a spring–mass system.

For general n-d.f. systems, the objective function is related to the decision variables in a complicated way, and gradient information is hard to extract from it. Hence, we choose a gradient-free stochastic optimization technique (the FA) to solve this optimization problem. The trace constraint defined in equation (3.3) helps reduce the computational time and effort.

4. Firefly algorithm

In this section, we briefly discuss the FA developed by Yang (2008, 2009, 2010) and its application to our problem of obtaining the isospectral system Bn for a given system An. The FA is a population-based stochastic optimization technique and a recent addition to nature-inspired computational methods. It is based on the flashing behaviour of fireflies, with the following assumptions (Yang 2008, 2009, 2010).

  • — All fireflies are unisex, so one firefly will be attracted to another regardless of sex.

  • — Attractiveness of fireflies is proportional to their brightness: for any two flashing fireflies, the less bright one moves towards the brighter one. Attractiveness and brightness both decrease as the distance between the fireflies increases. If a firefly does not find one brighter than itself, it moves randomly.

  • — The brightness or light intensity of a firefly is affected or determined by the landscape of the objective function to be optimized.

The variation of brightness (or light intensity) and the formulation of attractiveness are the two important issues in the FA. The brightness is related to the objective function value, and the attractiveness of a firefly is determined by its brightness. Hence, in a maximization problem, a point with a lower objective function value is attracted to a point with a higher objective function value. A point is a solution to the optimization problem in a d-dimensional solution space (search space), where d is the number of decision variables.

Assume there are N fireflies in the search space, each associated with a coordinate xi(t) that represents the point (location) of firefly i at iteration t. The location is a possible solution to the optimization problem, and each location (solution) xi, i=1,2,…,N, is a d-dimensional vector. The brightness (light intensity) I of a firefly at location xi can be chosen as

I(xi) = f(xi),   (4.1)

where f is the objective function value at the point xi. The attractiveness β varies with the distance rij between fireflies i and j and is given by

β(rij) = β0 exp(−γ rij²),   (4.2)

where β0 is the attractiveness at distance rij=0. The light intensity decreases with the distance between two fireflies, and light is also absorbed in the medium; γ in the above equation is the absorption coefficient. The distance between any two fireflies i and j located at points xi and xj (at iteration t) is the Cartesian distance

rij = ∥xi − xj∥ = √( Σk=1..d (xi,k − xj,k)² ).   (4.3)

A firefly i at location xi is attracted to another firefly j when firefly j is brighter than firefly i. The movement of firefly i towards firefly j is given by

xi(t+1) = xi(t) + β0 exp(−γ rij²)(xj(t) − xi(t)) + α(rand − 0.5),   (4.4)

which can be written as

xi(t+1) = xi(t) + β(t)(xj(t) − xi(t)) + α(rand − 0.5).   (4.5)

In the above equation, t is the iteration number, the second term (β(t)(xj(t)−xi(t))) is due to attraction, and the third term (α(rand−0.5)) is randomization with control parameter α; ‘rand’ is a random number between 0 and 1 drawn from the uniform distribution. The third term helps explore the search space efficiently. The offset of −0.5 allows movement within the range −0.5α to +0.5α, as recommended in the earlier FA literature. A complete description of the FA is given in the literature (Yang 2008, 2009, 2010).

The control parameters of the FA are α, β0 and γ. The influence of other solutions is controlled by the attractiveness, which depends on two parameters: the initial attractiveness β0 and the absorption coefficient γ. In general, β0∈[0,1] has been suggested (Yang 2008, 2009). When β0=0, β(t)=0 and the movement of firefly i in equation (4.5) is random; this is called non-cooperative search. When β0=1, β(t) in equation (4.5) takes a finite value depending on the distance between fireflies i and j; the movement of firefly i then depends on the position of firefly j, and this is called cooperative search. The value of γ defines how the attractiveness varies with distance: γ=0 corresponds to constant attractiveness, whereas γ→∞ corresponds to a completely random search. Moreover, γ is related to the scale of the design variables; its value can be derived from a base value γ0∈[0,1] scaled by the characteristic length of the search domain. The randomness control parameter can be chosen from α∈[0,1]. The influence of the control parameters on the optimization process is discussed in detail in the previous literature (Yang 2009, 2010; Lukasik & Zak 2009).
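One sweep of the firefly moves, equations (4.2)–(4.5), can be sketched as follows (a minimal sketch assuming NumPy; the function name is ours, and brightness here is taken as low error, since our problem is a minimization):

```python
import numpy as np

def firefly_step(x, f, beta0=1.0, gamma=0.04, alpha=0.2, rng=None):
    """One sweep of firefly moves for minimization: each firefly i moves
    towards every brighter (lower-f) firefly j, equations (4.2)-(4.5)."""
    if rng is None:
        rng = np.random.default_rng()
    N, d = x.shape
    fitness = np.array([f(xi) for xi in x])
    for i in range(N):
        for j in range(N):
            if fitness[j] < fitness[i]:              # j is brighter
                r = np.linalg.norm(x[i] - x[j])      # eq. (4.3)
                beta = beta0 * np.exp(-gamma * r ** 2)   # eq. (4.2)
                x[i] = (x[i] + beta * (x[j] - x[i])
                        + alpha * (rng.random(d) - 0.5))  # eq. (4.5)
                fitness[i] = f(x[i])
    return x
```

With α=0 the move is purely attractive: the dimmer firefly steps a fraction β of the way towards the brighter one, the fraction decaying with distance through γ.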

(a) Local search

LS is a technique used in stochastic optimization algorithms to accelerate convergence to the optimal solution. We introduce the LS procedure described by Birbil & Fang (2003) into the FA. In the LS procedure, information about points in the neighbourhood of a point xi is collected; if any neighbourhood point is better than xi, that better point is stored instead of xi. A better neighbourhood point is one whose objective function value is better than that at xi, i.e. larger in a maximization problem or smaller in a minimization problem. LSITER and δ are the two parameters of the LS: LSITER is the maximum number of iterations used in the LS, and δ is the multiplier for the neighbourhood search.

We know that a solution to the optimization problem is represented as a point xi in d-dimensional space, where d is the number of decision variables. In the LS algorithm, the maximum feasible step length is computed as Length = δ·maxd(ud−ld), where u and l are the upper and lower limits on the decision variables. For a given point xi, the LS considers one dimension after another. For a given dimension k, xi(k) denotes the value of the point xi in dimension k; it is perturbed to a new position y(k) in that dimension. The new position is obtained using two random numbers, ξ1=rand(0,1) and ξ2=rand(0,1), together with the step length. If the objective function value of the perturbed point y is better than that of xi, the point xi is replaced by y. More detail on this LS algorithm is given in the literature (Birbil & Fang 2003). The pseudocode of the LS procedure and the FA are given below:

Localsearch(δ, LSITER)

  • for k=1:d all d dimensions

    • generate 1st random number ξ1=rand(0,1)

    • while (counter<LSITER)

      •   generate 2nd random number ξ2=rand(0,1)

        • if (ξ1>0.5)

        •      y ← xi;  y(k) ← xi(k) + ξ2·Length

        • else

        •      y ← xi;  y(k) ← xi(k) − ξ2·Length

        • end if

        • if (F(y) < F(xi))

        •      xi ← y

        •      counter ← LSITER − 1

        • end if

      •   increase counter

    • end while

  • end for
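The Localsearch pseudocode above translates directly into code. A sketch assuming NumPy and a minimization objective (the function and parameter names are ours):

```python
import numpy as np

def local_search(x, f, u, l, delta=0.02, lsiter=20, rng=None):
    """Coordinate-wise local search in the style of Birbil & Fang (2003):
    perturb one dimension at a time and keep only improving moves."""
    if rng is None:
        rng = np.random.default_rng()
    x = x.copy()
    fx = f(x)
    # maximum feasible step length: delta * max_d(u_d - l_d)
    length = delta * np.max(np.asarray(u) - np.asarray(l))
    for k in range(len(x)):
        xi1 = rng.random()              # 1st random number: direction
        for _ in range(lsiter):
            xi2 = rng.random()          # 2nd random number: step size
            y = x.copy()
            if xi1 > 0.5:
                y[k] += xi2 * length
            else:
                y[k] -= xi2 * length
            if f(y) < fx:               # keep the better neighbour
                x, fx = y, f(y)
    return x
```

By construction the returned point is never worse than the input point, which is why LS can only accelerate, not harm, the FA's best solution.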

The firefly algorithm

  • Initialize the population of N fireflies (solutions) xi, i=1,2,…,N

  • Evaluate the objective function value f(xi) at each solution xi in the population

  • Assign light intensity I(xi) using equation (4.1)

  • Define control parameters α,β0,γ

  • While (counter<MaxIteration)

    • for i=1:N all N fireflies

      • for j=1:N all N fireflies

        • if (Ij<Ii)

        •      Calculate the distance rij using equation (4.3)

        •      Vary the attractiveness β(t) using equation (4.2)

        •      Move firefly i towards j using equation (4.5)

        • end if

        • Evaluate new solutions and update the light intensity

        • Find the best solution

        • Call Localsearch(δ, LSITER) to update the best solution

      • end for j

    • end for i

    • increase counter

  • End While

  • Postprocess results and visualization
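The complete loop can be exercised on the 2 d.f. example of §3. The sketch below (assuming NumPy; the restriction to the single unknown d1, with ϵ=2 fixed so that c1=16 and c2=7, is ours for brevity, and the gentle damping of α is a common practical choice not stated in the pseudocode) searches for the d1 that makes B2 isospectral to A2; the true minimizer is d1 = √38 ≈ 6.1644.

```python
import numpy as np

# target spectrum of the given 2 d.f. system A2 (section 3a)
A2 = np.array([[18.0, -4.0], [-4.0, 5.0]])
target = np.sort(np.linalg.eigvalsh(A2))

def F(d1):
    """Error function (3.1) for B2 with fixed diagonal c1=16, c2=7."""
    B2 = np.array([[16.0, -d1], [-d1, 7.0]])
    return float(np.sum((np.sort(np.linalg.eigvalsh(B2)) - target) ** 2))

rng = np.random.default_rng(1)
N, max_iter = 20, 100
beta0, gamma, alpha = 1.0, 0.04, 0.2        # parameter values from section 5
x = rng.uniform(0.1, 10.0, N)               # fireflies: candidate d1 values
for t in range(max_iter):
    fit = np.array([F(xi) for xi in x])
    for i in range(N):
        for j in range(N):
            if fit[j] < fit[i]:             # j is brighter (lower error)
                r = abs(x[i] - x[j])
                beta = beta0 * np.exp(-gamma * r ** 2)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random() - 0.5)
                fit[i] = F(x[i])
    alpha *= 0.97                           # damp the random term over time
best = x[np.argmin([F(xi) for xi in x])]
```

In this one-dimensional cut the swarm clusters around the brightest firefly and its random perturbations polish the estimate, so `best` should approach √38.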

5. Computational results and discussions

In this section, we present the computational results obtained using the FA with LS procedure. Our objective is to find the isospectral system Bn for a given system An, where n is the number of d.f. of the system. The objective function is the error function given in equation (3.2). We present the results obtained for 4 d.f. and 10 d.f. systems using the FA.

In the FA with LS, initially all the fireflies (N) are placed randomly in the search space in a well-dispersed manner. The parameters used in the FA are: population size N=20; initial attractiveness β0=1.0; light absorption coefficient γ=0.04 and randomness control parameter α=0.2. In the LS procedure, we have used δ=0.02 and LSITER=20.

We know that the system Bn is isospectral to system An when the objective function value given by equation (3.2) is zero. Hence, we stop the algorithm when the objective function value is less than a given tolerance. The tolerance value used in our computation is 10−8 for 4 d.f. system and 10−6 for 10 d.f. system.

(a) Computational results for 4 d.f. system

Consider a 4 d.f. mass–spring in-line system, i.e. n=4. The given Jacobi matrix A4 is of the form

A4 =
[ a1    −b1               ]
[ −b1    a2    −b2        ]
[       −b2    a3    −b3  ]   (5.1)
[              −b3    a4  ]

Consider another Jacobi matrix B4 of the form

B4 =
[ c1    −d1               ]
[ −d1    c2    −d2        ]
[       −d2    c3    −d3  ]   (5.2)
[              −d3    c4  ]

Our interest is to find the elements of B4 such that it is isospectral to A4. We use the trace constraint defined in equation (3.3) to reduce the computational time and effort. The constraint for this 4 d.f. system is

c1 + c2 + c3 + c4 = a1 + a2 + a3 + a4.   (5.3)

We follow the same methodology used for the 2 d.f. system to modify the diagonal elements of B4. The diagonal elements can be modified by considering two variables ϵ1 and ϵ2, for example

{c1,c2,c3,c4} = {a1+ϵ1, a2−ϵ1, a3+ϵ2, a4−ϵ2}.   (5.4)

Note that many other ways of modifying the diagonal elements are possible, for example {c1,c2,c3,c4}={a1+ϵ1, a2−ϵ1, a3−ϵ2, a4+ϵ2}; two further modifications of this kind, (5.5) and (5.6), are used in our computational algorithm. The key idea is to keep the trace of matrix B4 equal to the trace of matrix A4, as in equation (5.3). Thus, for the 4 d.f. system, the optimization problem is expressed as

minimize F4 over (d1, d2, d3, ϵ1, ϵ2), subject to 0 < ϵ1 ≤ E1 and 0 < ϵ2 ≤ E2.   (5.7)

Note that ϵ1,ϵ2≠0 ensures a non-trivial solution. The upper bounds (E1,E2) on the variables ϵ1,ϵ2 can be determined with the help of Horn's majorization theorem; its applicability is explained in the appendix.

We now consider a 4 d.f. spring–mass system with mass and stiffness values m1=1 kg, m2=2 kg, m3=3 kg, m4=4 kg and k1=50 kN m−1, k2=60 kN m−1, k3=70 kN m−1, k4=80 kN m−1. The Jacobi matrix A4 with these values is given by

A4 =
[ 110       −42.4264                      ]
[ −42.4264    65      −28.5774            ]
[            −28.5774   50       −23.0940 ]
[                      −23.0940    20     ]

The eigenvalues of the given system A4 are 2.2394, 30.6069, 73.8387 and 138.3150. The corresponding natural frequencies are 1.4964, 5.5323, 8.5929 and 11.7607 rad s−1.

For the 4 d.f. spring–mass system, the optimization problem given in equation (5.7) has five decision variables: the subdiagonal elements d1,d2,d3 and the values of ϵ1,ϵ2. The bounds on the subdiagonal elements are assumed to be bi−10 ≤ di ≤ bi+10 for i=1,2,3. Using Horn's majorization theorem, the bounds on ϵ1,ϵ2 are 0<ϵ1≤28.3 and 0<ϵ2≤17.76, since we use the modification given by equation (5.4).

The FA is applied to the optimization problem given in equation (5.7) and gives an optimal B4 for certain values of ϵ1 and ϵ2. For different values of ϵ1 and ϵ2, we can get many B4 matrices that are isospectral to A4. A sample of 10 isospectral spring–mass systems obtained using the FA, for the given system A4 is presented in table 2. The mass and stiffness values of the isospectral systems are denoted as: m1′,m2′,m3′,m4′ and k1′,k2′,k3′,k4′ in table 2.

Table 2.

Ten isospectral spring–mass systems (4 d.f.) for a given A4.

Using the FA, we can obtain the elements of B4 such that it is isospectral to the given system A4. The mass and stiffness values of B4 are obtained in a manner similar to that explained for the 2 d.f. system in §3; we explain this for the 4 d.f. system. Consider one of the B4 matrices obtained by the FA. Its eigenvalues are 2.2394, 30.6069, 73.8386 and 138.3150, so the error (objective) function value becomes F4=10−8≃0. The values of m1′,m2′,m3′,m4′ and k1′,k2′,k3′,k4′ are obtained by applying equation (2.6) to B4:

c4 = k4′/m4′,  d3 = k4′/√(m3′m4′),  c3 = (k3′+k4′)/m3′,
d2 = k3′/√(m2′m3′),  c2 = (k2′+k3′)/m2′,
d1 = k2′/√(m1′m2′),  c1 = (k1′+k2′)/m1′.   (5.8)

If we choose m4′=1, then from equations (5.8) all the values of m1′,m2′,m3′ and k1′,k2′,k3′,k4′ can be determined. This spring–mass system is shown in table 2 as isospectral system (01).
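The back-substitution in equations (5.8) generalizes to any n: start from the free end with the chosen scaling and work backwards. A sketch assuming NumPy (the function name is ours):

```python
import numpy as np

def masses_stiffnesses(B, m_last=1.0):
    """Recover (m', k') from the Jacobi matrix of a fixed-free chain,
    using the scaling m_n' = m_last (the system is unique up to scale).

    Diagonal:      a_i = (k_i + k_{i+1})/m_i, with k_{n+1} = 0.
    Off-diagonal:  |b_i| = k_{i+1}/sqrt(m_i m_{i+1}).
    """
    n = B.shape[0]
    a = np.diag(B)
    b = np.abs(np.diag(B, 1))
    m, k = np.zeros(n), np.zeros(n)
    m[-1] = m_last
    k[-1] = a[-1] * m[-1]                      # last diagonal: a_n = k_n/m_n
    for i in range(n - 2, -1, -1):
        m[i] = (k[i + 1] / b[i]) ** 2 / m[i + 1]   # from the off-diagonal
        k[i] = a[i] * m[i] - k[i + 1]              # from the diagonal
    return m, k
```

Applied to the 2 d.f. Jacobi matrix of §3a with m_last = 4, this recovers the original system m = (6.25, 4) kg and k = (92.5, 20) kN m−1.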

(b) Computational results for 10 d.f. system

Now, we apply the same method to a 10-dimensional problem. We consider a 10 d.f. chain-like mass–spring system, as specified in table 3. Here, we modify only two diagonal elements of the original Jacobi matrix (A10), in one particular manner: we use a single ϵ and apply it to the 6th and 10th diagonal terms, i.e. the element a6 is modified to (a6+ϵ) and the element a10 is modified to (a10−ϵ). The number of decision variables for this 10 d.f. case is 10: the subdiagonal elements d1,…,d9 and ϵ. Bounds on the decision variables are bi−10 ≤ di ≤ bi+10 for i=1,2,…,9 and 0<ϵ≤10. The results obtained for this 10 d.f. system using the FA are shown in tables 4 and 5.

Table 3.

Specification of the 10 d.f. original spring–mass system.

Table 4.

Mass elements of 10 isospectral spring–mass systems (10 d.f.) found by stochastic optimization.

Table 5.

Stiffness elements of 10 isospectral spring–mass systems (10 d.f.) found by stochastic optimization.

6. Conclusions

The problem of finding isospectral spring–mass systems is posed as an optimization problem with the objective of minimizing an error function defined using the Jacobi matrices An and Bn. We also use the fact that the traces of two isospectral Jacobi matrices must be identical. The FA augmented with LS is used to minimize the error function, and this stochastic optimization method is applied to 4 d.f. and 10 d.f. systems. The hybrid algorithm is able to capture multiple optimum points; the (scaled) isospectral spring–mass systems are constructed from these optimum points and all resulting systems are presented. We are able to obtain several isospectral systems in this way, which shows that the methodology is useful for obtaining isospectral systems.

In this study, we have used an optimization approach to obtain isospectral spring–mass systems. This approach is easy to understand and implement. The FA starts with a population of solutions and converges to an optimal solution. The decision variables in this approach correspond to the stiffness values ki and the mass values mi for i=1,2,…,n. Once the values of ki and mi are known, we can compute the eigenvalues and obtain the objective function value. Notably, the approach requires neither gradient information nor additional spectral data to reach the optimal solution.

Appendix A. Applicability of the Horn's majorization theorem

We illustrate the applicability of Horn's majorization theorem or Schur–Horn theorem (Horn & Johnson 1985) to estimate the bounds on the modification variables.

According to the theorem, the sum of the k smallest elements of the diagonal vector (the vector containing the diagonal elements) of a Hermitian matrix is greater than or equal to the sum of the k smallest elements of the eigenvalue vector (the vector containing all the eigenvalues) of the matrix, for k=1,2,…,n−1, where n is the order of the matrix; for k=n the two sums are equal (the trace condition).

We explain the applicability of the theorem with the help of the 4 d.f. spring–mass example. A4 is the given Jacobi matrix shown in equation (5.1), with eigenvalues ordered as λ1≤λ2≤λ3≤λ4. B4 is the modified Jacobi matrix shown in equation (5.2). Without loss of generality, we assume c4<c3<c2<c1 for ease of understanding. Then Horn's majorization theorem can be expressed mathematically as

c4 ≥ λ1,  c4+c3 ≥ λ1+λ2,  c4+c3+c2 ≥ λ1+λ2+λ3  and  c4+c3+c2+c1 = λ1+λ2+λ3+λ4.   (A1)

We consider the modified Jacobi matrix B4 in a form such that diag(B4)={c1,c2,c3,c4}={a1+ϵ1, a2−ϵ1, a3+ϵ2, a4−ϵ2}. Using the inequalities (A1), we obtain the upper bounds (E1,E2) on the variables ϵ1,ϵ2:

E1 = a2 + a3 + a4 − (λ1 + λ2 + λ3)   (A2)

and

E2 = a4 − λ1.   (A3)

We also obtain the condition a3 + a4 ≥ λ1 + λ2, which must be satisfied.
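The bounds (A2) and (A3) can be checked numerically against the 4 d.f. example of §5a, whose diagonal is (110, 65, 50, 20) and whose eigenvalues are quoted there. A sketch assuming NumPy; the helper names are ours:

```python
import numpy as np

def majorization_ok(diag_elems, eigs, tol=1e-9):
    """Schur-Horn check: for k = 1, ..., n-1, the sum of the k smallest
    diagonal entries of a Hermitian matrix must be >= the sum of its
    k smallest eigenvalues (inequalities (A1) without the trace equality)."""
    d, lam = np.sort(diag_elems), np.sort(eigs)
    return all(np.sum(d[:k]) >= np.sum(lam[:k]) - tol
               for k in range(1, len(d)))

# the 4 d.f. example of section 5a
a = np.array([110.0, 65.0, 50.0, 20.0])
lam = np.array([2.2394, 30.6069, 73.8387, 138.3150])
E1 = a[1] + a[2] + a[3] - (lam[0] + lam[1] + lam[2])   # equation (A2)
E2 = a[3] - lam[0]                                     # equation (A3)

def modified_diag(e1, e2):
    """Diagonal of B4 under the modification {a1+e1, a2-e1, a3+e2, a4-e2}."""
    return np.array([a[0] + e1, a[1] - e1, a[2] + e2, a[3] - e2])
```

This reproduces the bounds quoted in §5a (E1 ≈ 28.3 and E2 ≈ 17.76): modified diagonals inside the bounds pass the majorization check, while pushing ϵ2 past E2 violates the first inequality of (A1).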

  • Received February 17, 2011.
  • Accepted May 26, 2011.

References
