## Abstract

While taboo search (TS), a method of global optimization, has successfully solved many optimization problems, little is known about its convergence properties, especially for continuous optimization tasks. We consider the global convergence of the original TS for solving continuous optimization problems, and give a condition which guarantees the convergence of the objective value sequence of the method. We also prove that the minimum objective value sequence converges to the vicinity of the global optimal value with probability 1.

## 1. Introduction

Many problems in practically all fields of physical science, as well as in technology, economics, logistics, travel scheduling and the design of microprocessor circuitry, involve global optimization: the determination of the global minimum of a function of an arbitrary number of independent variables. Global optimization is almost always very difficult because of the omnipresence of local minima, the number of which increases rapidly with the size of the problem. Any effective optimization method must be able to avoid entrapment in local minima and to continue the search towards the optimal solution whatever the initial conditions may be.

In 1995, using ideas from taboo (or tabu) search (TS), a stochastic optimization method originally developed by Glover (1989, 1990), we extended the method to continuous-valued functions (Cvijović & Klinowski 1995). TS has attracted much attention because it successfully solves many difficult problems, such as the vehicle routing problem (Alfredo *et al*. 2006), the graph colouring problem (Macready *et al*. 1996), the structure of clusters (Wales & Doye 1997), molecular docking (Westhead *et al*. 1997) and conformational energy optimization of oligopeptides (Morales *et al*. 2000). Several improved TS algorithms have been devised to make the optimization of continuous functions more efficient (Battiti & Tecchiolli 1996; Siarry & Berthiau 1997; Chelouah & Siarry 2000). We investigated the potential of TS for finding global minima of functions of many variables with many local minima (Cvijović & Klinowski 1995). The algorithm avoids entrapment in local minima and gives the optimal final solution. It is superior to other methods of global optimization, generally applicable, easy to implement, conceptually simple and open to further improvement.

However, little is known about the convergence properties of TS, a problem which is both topical and important. Hanafi (2000) and Glover (2002) studied the convergence of TS for combinatorial optimization problems. Ji & Tang (2004) proposed a class of TS based on memory for continuous optimization problems and proved its convergence. Here, we give an appropriate convergence condition and prove that the early version of TS for solving continuous optimization problems (Cvijović & Klinowski 1995) converges to the vicinity of the optimal solution with probability 1. We also prove that the minimum objective value of the function converges to the vicinity of the global optimal value with probability 1. Section 2 gives a full description and implementation of TS. The convergence of the search for solving the global problem (*P*) is proved in §3, while §4 gives the conclusions.

## 2. Taboo search for continuous global optimization

Consider the following continuous global optimization problem:

(*P*): min *f*(*x*) subject to *x* ∈ *Ω*,

where *Ω* ⊂ ℝ^{n} is a compact subset of the Lebesgue measure space (ℝ^{n}, *μ*) and *f* is a real-valued continuous function defined on *Ω*. TS for solving problem (*P*) proceeds as follows:

*Step 1*. Generate an initial solution *x*_{0} ∈ *Ω*; set *k* = 0.

*Step 2*. Stop if a prescribed termination condition is satisfied. Otherwise, generate random vectors by using the generation probability density function. After evaluating the objective values of the function for the random vectors, select the random vector which gives the lowest value as *y*.

*Step 3*. If *f*(*y*) < *f*(*x*_{k}), then *x*_{k+1} = *y* and the best value found so far is updated. Otherwise, if *y* improves on the best value found so far or *y* does not satisfy the taboo conditions, then *x*_{k+1} = *y*; else *x*_{k+1} = *x*_{k}. Set *k* = *k* + 1 and return to step 2.
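The three steps above can be sketched in Python. This is a minimal sketch, assuming a box-shaped *Ω*; for brevity, candidates are drawn uniformly from the whole domain rather than from the cell-based neighbourhood of §2*b*, and the taboo list stores sampled points rather than cell addresses. All names and default parameter values are illustrative, not from the original paper.

```python
import random

def taboo_search(f, bounds, n_samples=20, taboo_size=5, max_fail=100, seed=0):
    """Minimal sketch of steps 1-3 of TS on a box domain."""
    rng = random.Random(seed)
    dim = len(bounds)

    def sample():
        # Uniform generation probability density over the box Omega.
        return tuple(rng.uniform(lo, hi) for lo, hi in bounds)

    # Step 1: initial solution -- the best of 10*n uniform samples (see 2a).
    x = min((sample() for _ in range(10 * dim)), key=f)
    best_x, best_f = x, f(x)
    taboo = []   # recently sampled points (a stand-in for cell addresses)
    fails = 0    # successive iterations that fail to improve the best value

    while fails < max_fail:           # Step 2: termination condition
        y = min((sample() for _ in range(n_samples)), key=f)
        fy = f(y)
        if fy < best_f:               # Step 3: improving move, always accepted
            best_x, best_f, x = y, fy, y
            fails = 0
        else:
            if y not in taboo:        # non-improving move allowed if not taboo
                x = y
            fails += 1
        taboo.append(y)
        if len(taboo) > taboo_size:   # fixed-size taboo list
            taboo.pop(0)
    return best_x, best_f
```

For example, `taboo_search(lambda x: x[0] ** 2, [(-1.0, 1.0)])` returns a point close to the minimizer at the origin.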

### (a) Generation of an initial solution

An initial solution, *x*_{0}, is generated randomly. For example, we take an arbitrary but fixed number of solutions, say 10*n*, in *Ω* using a uniform distribution, and select the best of them as the initial solution.
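The initial-solution rule above can be sketched as follows; the function name and arguments are illustrative.

```python
import random

def initial_solution(f, bounds, rng=None):
    """Draw 10*n points uniformly from the box Omega and keep the best one."""
    rng = rng or random.Random()
    n = len(bounds)  # number of independent variables
    samples = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(10 * n)]
    return min(samples, key=f)
```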

### (b) Generation of random vectors

We introduce a neighbourhood structure in the continuous space *Ω*. *Ω* is partitioned into disjoint cells by dividing the coordinate intervals along the axes into equal parts. The number of divisions along each axis determines a unique partition of *Ω* into cells. At each iterative step, we select *n*_{x} sample points from a uniform distribution over each of *n*_{c} randomly chosen cells. The values of *n*_{x} and *n*_{c} depend on the nature of the function being optimized. These points form the neighbourhood of the current solution *x*_{k}. The size of the neighbourhood *N*(*x*_{k}) is *n*_{c}*n*_{x}.
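The cell-based neighbourhood can be sketched as follows, assuming *Ω* is a box and each coordinate interval is divided into the same number of parts; all names are illustrative.

```python
import random

def sample_neighbourhood(bounds, parts, n_c, n_x, rng=None):
    """Partition the box `bounds` into parts**dim cells, choose n_c cells
    at random and draw n_x uniform points in each, giving a neighbourhood
    of size n_c * n_x."""
    rng = rng or random.Random()
    dim = len(bounds)
    widths = [(hi - lo) / parts for lo, hi in bounds]

    def random_cell():
        # A cell is identified by its index along each coordinate axis.
        return tuple(rng.randrange(parts) for _ in range(dim))

    points = []
    for _ in range(n_c):
        cell = random_cell()
        for _ in range(n_x):
            points.append(tuple(
                bounds[d][0] + (cell[d] + rng.random()) * widths[d]
                for d in range(dim)))
    return points
```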

### (c) Taboo conditions

At a particular iterative step, *y* is considered taboo if it lies in the taboo region of the solution space *Ω*, which consists of the cells visited during the *l* preceding iterations; *l* is known as the size of the taboo list, and may be fixed or variable. At the first, second and finally the *l*th iteration starting from the current solution *x*_{k}, a move from one cell to another is accepted, producing a new solution *x*_{k+1}, where *x*_{k} and *x*_{k+1} belong to the old and the new cell, respectively. The address of the new cell is then added to the taboo list, in sequence from position 1 to *l*. At each subsequent iteration, the address of the current cell is added at the beginning of the list after the last element of the list has been removed and the others have been shifted along.
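A fixed-size taboo list of cell addresses can be sketched with a bounded deque, which drops the oldest address automatically when a new one is appended; the helper names are illustrative.

```python
import random
from collections import deque

def make_taboo_list(l):
    """Fixed-size FIFO of visited cell addresses (taboo list of size l)."""
    return deque(maxlen=l)

def cell_address(x, bounds, parts):
    """Map a point to the index tuple of the cell containing it."""
    return tuple(
        min(int((xi - lo) / (hi - lo) * parts), parts - 1)
        for xi, (lo, hi) in zip(x, bounds))

def is_taboo(x, taboo, bounds, parts):
    """A point is taboo if its cell was visited in the last l iterations."""
    return cell_address(x, bounds, parts) in taboo
```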

### (d) Termination condition

The termination condition is satisfied when a fixed number, *N*, of successive iterations fails to improve the best value.

In theory, the convergence of TS can be proved. In practice, the number *N* of non-improving iterations, the size of the neighbourhood and the size *l* of the taboo list must all be chosen to suit the nature of the actual optimization problem.

## 3. Convergence of the algorithm

In order to prove the convergence of TS, we introduce the following definitions (Laha & Rohatgi 1979).

**Definition 3.1.** Let {*ξ*_{k}} be a sequence of random variables defined on a probability space. We say that {*ξ*_{k}} converges in probability to the random variable *ξ* when, for any *ϵ* > 0, we have lim_{k→∞} P(|*ξ*_{k} − *ξ*| ≥ *ϵ*) = 0.

**Definition 3.2.** Let {*ξ*_{k}} be a sequence of random variables defined on a probability space. We say that {*ξ*_{k}} converges to the random variable *ξ* with probability 1 when P(lim_{k→∞} *ξ*_{k} = *ξ*) = 1 or, equivalently, when for any *ϵ* > 0 we have lim_{N→∞} P(⋃_{k≥N} {|*ξ*_{k} − *ξ*| ≥ *ϵ*}) = 0. Convergence with probability 1 is clearly stronger than convergence in probability.

**Theorem 3.3 (Borel–Cantelli).** *Let* {*A*_{k}} *be a sequence of events on a probability space, and set* *P*_{k} = P(*A*_{k}). *Then* ∑_{k=1}^{∞} *P*_{k} < ∞ *implies that* P(*A*_{k} *occurs infinitely often*) = 0. *If* ∑_{k=1}^{∞} *P*_{k} = ∞ *and the* *A*_{k} *are independent, then* P(*A*_{k} *occurs infinitely often*) = 1.

Lemma 3.4 and the theorems which follow guarantee global convergence of the objective value sequence induced by TS for solving problem (*P*). Since *f* is continuous on the compact set *Ω*, it must have a global minimum *f** = min_{x∈Ω} *f*(*x*). For any *ϵ* > 0, we denote *A*_{ϵ} = {*x* ∈ *Ω* : *f*(*x*) < *f** + *ϵ*} and its complement *Ω*\*A*_{ϵ}.

**Lemma 3.4.** *Set* *a* = *μ*(*A*_{ϵ})/*μ*(*Ω*) *and* *b* = *μ*(*Ω*\*A*_{ϵ})/*μ*(*Ω*). *If y satisfies the uniform distribution, then* P(*y* ∈ *A*_{ϵ}) = *a* > 0 *and* P(*y* ∉ *A*_{ϵ}) = *b* < 1.

Proof. Since TS generates *y* uniformly and *Ω* is a subset of a Lebesgue measure space, the generation probability density function is *g*(*x*) = 1/*μ*(*Ω*) for *x* ∈ *Ω*. Hence P(*y* ∈ *A*_{ϵ}) = ∫_{A_{ϵ}} *g*(*x*) d*x* = *μ*(*A*_{ϵ})/*μ*(*Ω*). By the continuity of *f*, *A*_{ϵ} contains a non-empty open subset of *Ω*, so *μ*(*A*_{ϵ}) > 0 and thus P(*y* ∈ *A*_{ϵ}) > 0. Another consequence is P(*y* ∉ *A*_{ϵ}) = 1 − *μ*(*A*_{ϵ})/*μ*(*Ω*) < 1. ▪
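The lemma's claim for uniform sampling can be checked numerically. The sketch below (all names illustrative, not from the paper) estimates P(*y* ∈ *A*_{ϵ}) by Monte Carlo on *Ω* = [0, 1] with *f*(*x*) = *x*, so that *f** = 0 and *A*_{ϵ} has Lebesgue measure *ϵ*:

```python
import random

# Monte Carlo check of lemma 3.4 for uniform sampling on Omega = [0, 1]
# with f(x) = x: here f* = 0, A_eps = [0, eps) has measure eps, so
# P(y in A_eps) = mu(A_eps)/mu(Omega) = eps.
rng = random.Random(42)
eps, trials = 0.2, 100_000
hits = sum(1 for _ in range(trials) if rng.random() < eps)  # y in A_eps
p_hat = hits / trials  # empirical estimate of P(y in A_eps)
```

With 10^5 draws, the estimate `p_hat` lands close to *ϵ* = 0.2, as the lemma predicts.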

With an appropriate convergence condition, we thus obtain the very interesting result that the objective value sequence converges to the global optimal value with probability 1.

**Theorem 3.5.** *If TS satisfies the following conditions*:

1. *l is non-decreasing and converges to* ∞ *when* *k* → ∞;
2. … *is the equivalent infinitesimal of* …, *i.e.*, … *and* …;
3. *y satisfies the uniform distribution*;

*then the objective value sequence* {*f*(*x*_{k})} *converges to the vicinity of the global optimal value with probability* 1.

Proof. Since …, we arrive at …. From lemma 3.4, we have … and …, leading to … and …. Therefore, …. For any *ϵ* > 0, …. Let …. We have …. Thus, … and …. From lemma 3.4, we obtain …. From conditions (1) and (2), and …, we obtain …, and thus …. From theorem 3.3, we have …, which implies that …. ▪

In practice, TS can meet conditions (1) and (3) for solving continuous optimization problems. Condition (2) is easily satisfied, because … is generally faster than …. However, because global continuous optimization algorithms are never guaranteed to find the global optimum in finite time, theorem 3.5 guarantees the convergence of TS only as *k* → ∞.

Theorem 3.6 ensures that the minimum objective value sequence of TS converges to the vicinity of the global optimal objective value.

**Theorem 3.6.** *If y satisfies the uniform distribution, then* …. *In other words, the minimum objective value sequence converges with probability* 1 *to the global optimal solution of problem* (*P*).

Proof. For any *ϵ* > 0, let …. Since …, we have … and …. Similarly to theorem 3.5, we have …. Therefore, …. From theorem 3.3, we know that …. Using definition 3.2, we obtain the desired result. ▪

## 4. Conclusions

Although TS has been successfully applied to global optimization problems, the question of its convergence for continuous optimization problems has rarely been addressed. We provide an appropriate convergence condition and prove that TS converges to the vicinity of the global optimal objective value with probability 1. In addition, we have examined the convergence of the minimum objective value of TS. This ensures successful application of the algorithm to continuous optimization problems.

## Acknowledgements

We are grateful to the Royal Society for an International Fellowship to M.-J.J. and to Prof. Imre Leader and the referees for their constructive comments.

## Footnotes

- Received November 28, 2005.
- Accepted January 16, 2006.

- © 2006 The Royal Society