A New Heuristic for the Quadratic Assignment Problem

Z. Drezner

We propose a new heuristic for the solution of the quadratic assignment problem. The heuristic combines ideas from tabu search and genetic algorithms. Its run times are very short compared with those of other heuristic procedures, and it performed very well on a set of test problems.


Introduction
The quadratic assignment problem is considered one of the most difficult optimization problems. It requires the location of n facilities on a given set of n sites, one facility at a site. A distance matrix {d_ij} between sites and a cost matrix {c_ij} between facilities are given. The cost f to be minimized over all possible permutations, calculated for an assignment of facility i to site p(i) for i = 1, ..., n, is:

f = Σ_{i=1..n} Σ_{j=1..n} c_ij d_p(i)p(j)    (1)

The first heuristic algorithm proposed for this problem was CRAFT [1]. More recent algorithms use metaheuristics such as tabu search [2], [14], [16], simulated annealing [4], [6], [19], and genetic algorithms [8]. For a complete discussion and list of references see [3], [5], [17].
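For concreteness, the QAP objective defined above can be evaluated directly as follows. This is a minimal sketch; the 3-facility instance is a made-up illustration, not one of the test problems.

```python
def qap_cost(c, d, p):
    """Objective (1): sum over all facility pairs i, j of
    c[i][j] * d[p[i]][p[j]], where facility i is placed at site p[i]."""
    n = len(p)
    return sum(c[i][j] * d[p[i]][p[j]] for i in range(n) for j in range(n))

# Tiny made-up 3-facility instance (symmetric matrices, zero diagonal).
c = [[0, 3, 1],
     [3, 0, 2],
     [1, 2, 0]]          # costs (flows) between facilities
d = [[0, 5, 4],
     [5, 0, 6],
     [4, 6, 0]]          # distances between sites
print(qap_cost(c, d, [0, 1, 2]))  # identity assignment → 62
print(qap_cost(c, d, [1, 0, 2]))  # facilities 0 and 1 exchanged → 58
```

The full evaluation costs O(n²) per permutation; the appendix describes short cuts that avoid recomputing it for every pair exchange.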

The Algorithm
A starting solution is selected as the center solution. A "distance" ∆p is defined for each solution (permutation p of the center solution). The distance ∆p is the number of facilities in p that are not in their center-solution site. The following are trivial properties of ∆p:

1. For all single pair exchanges of the center solution, ∆p = 2.
2. There are no permutations with distance ∆p = 1.
3. ∆p ≤ n, because no more than n facilities can be out of place.
4. If an additional pair exchange is applied on a permutation p, then ∆∆p (the change in ∆p) is between −2 and +2. Note that only the additional pair of exchanged facilities affects the value of ∆∆p; therefore, ∆∆p is calculated in O(1) time.

We use this distance from the center solution as a tabu mechanism. We search the solution space starting from the center solution, and at each step we search solutions with increasing distance from the center solution, until we reach a prespecified search depth d ≤ n. We do not move back towards the center solution unless a solution better than the best found one is encountered. We perform all pair-wise exchanges of the center solution and keep the best K permutations, where K is a parameter of the algorithm. All these K permutations differ from the center solution in two sites, and therefore their distance from the center solution is ∆p = 2. Pair-wise exchanges of any of these K permutations lead to permutations which differ from the center one in two or fewer sites, in three sites, or in four sites. We generate two lists: one consists of the best K permutations with distance ∆p = 3, and the other is a list of the best K permutations with ∆p = 4. Next we check all pair-wise exchanges of the best K permutations found with ∆p = 3, possibly updating the list with ∆p = 4 and creating a list of the best K permutations with distance ∆p = 5, and so on, until we scan the best K permutations which differ from the original one in d sites. If throughout the process a solution better than the best found one is encountered, the center solution is replaced by the newly found best solution and the process starts from the beginning. If no better solution is found throughout the scanning, the iteration is complete.
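The distance ∆p and its O(1) update under an additional pair exchange can be sketched as follows. The function names are ours; only the two exchanged positions can change status, which is why the update is constant time.

```python
def delta_p(p, center):
    """Distance of permutation p from the center solution:
    the number of facilities not at their center-solution site."""
    return sum(1 for i in range(len(p)) if p[i] != center[i])

def delta_p_after_swap(p, center, dp, i, j):
    """O(1) update of delta_p when facilities i and j of p exchange
    sites; only positions i and j can change their out-of-place status."""
    before = (p[i] != center[i]) + (p[j] != center[j])
    after = (p[j] != center[i]) + (p[i] != center[j])
    return dp - before + after

center = [0, 1, 2, 3]
p = [1, 0, 2, 3]                      # one pair exchange of the center
print(delta_p(p, center))             # property 1: distance is 2
```

Swapping positions 0 and 1 of p above restores the center solution, and `delta_p_after_swap(p, center, 2, 0, 1)` returns 0 without rescanning the permutation.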

Description of One Iteration
1. A center solution is selected. It is also the best found solution. Let p denote a permutation obtained from the center solution by successive pair exchanges. We keep three lists of solutions, which we term list_0, list_1, and list_2: list_0 contains solutions with distance ∆p, and list_1 and list_2 contain solutions whose distances are ∆p + 1 and ∆p + 2, respectively. The depth of the search is set to d ≤ n; it is recommended to apply d = n or a value very close to it.
2. Start with ∆p = 0. At the start, list_0 has one member (the center solution) and the other two lists are empty.
3. Go over all the solutions in list_0. For each solution p:
• Evaluate all pair exchanges for that solution.
• If an exchanged solution is better than the best found solution, update the best found solution and proceed to evaluate the rest of the exchanges. This is necessary in order to be able to apply the short cut (see the Appendix).
• If the distance of an exchanged solution is ∆p or lower, ignore the permutation and proceed with scanning the exchanges of permutation p.
• If its distance is ∆p + 1 or ∆p + 2, check it for inclusion in the appropriate list. This is performed as follows:
- If the list is shorter than K, or the solution is better than the worst list member, check whether it is identical to a list member: first compare its objective function value to those of the list members; only if it equals the value of a list member is the whole permutation compared.
- If an identical list member is found, ignore the permutation and proceed with scanning the exchanges of permutation p.
- Otherwise, if the list is of length K, the permutation replaces the worst list member; if the list is shorter than K, the permutation is added to the list. A new worst list member is then identified.
4. If a new best found solution is found by scanning all the exchanges of permutation p, set the center solution to the new best found solution and go to Step 2.
5. Once all solutions in list_0 are exhausted, move list_1 to list_0, move list_2 to list_1, and empty list_2. Increase ∆p by one; if ∆p ≤ d and list_0 is not empty, go to Step 3, otherwise the iteration is complete.
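The list-update test of Step 3 (rejecting candidates that cannot beat the worst member, and screening duplicates cheaply by objective value before comparing whole permutations) can be sketched as follows. The function name and the sorted-list representation are our own assumptions, not the paper's notation.

```python
import bisect

def try_insert(cand_perm, cand_val, lst, K):
    """Candidate-list update for one of list_0/list_1/list_2 (a sketch).
    lst holds up to K (value, permutation) pairs sorted by value, so the
    worst member is last.  Returns True if the candidate was inserted."""
    if len(lst) == K and cand_val >= lst[-1][0]:
        return False                  # not better than the worst member
    # Cheap duplicate screen: compare full permutations only on value ties.
    for val, perm in lst:
        if val == cand_val and perm == cand_perm:
            return False              # identical member already listed
    bisect.insort(lst, (cand_val, cand_perm))
    if len(lst) > K:
        lst.pop()                     # drop the new worst member
    return True
```

With K = 2, inserting candidates with values 10, 8, and 9 leaves the list holding the two best values (8 and 9), while a repeated candidate or one worse than the current worst is rejected without a full permutation comparison in most cases.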

The process of an iteration applies concepts from tabu search and genetic algorithms [10], [13]. In tabu search we disallow backward tracking, thus forcing the search away from previous solutions. In our approach we proceed farther and farther away from the center solution, because ∆p increases throughout the iteration. As in tabu search, we can go back only when going back leads to a solution better than the best found one. The process resembles genetic algorithms because a "population" of K members is maintained during the iteration, and populations of offspring that replace adults in the population are generated. However, there is no merging of parents in our procedure. Our population is unisex, as in primitive forms of life; progression and improvement occur only through mutations.
We constructed an algorithm that repeats the iterations several times. Once an iteration is completed (i.e., the scanning of all lists did not yield a solution better than the best found one), we can start a new iteration. As the new center solution we can use either the best solution found throughout the last scan (which is not better than the previous center solution), or the best solution in the last list (of distance ∆p = d). The first option is similar to a steepest descent approach, or to an iteration of a tabu search with an empty tabu list. The second is similar to the diversification idea in tabu search, i.e., moving to a "different region" of the search space. We opted to combine the two and alternate between selecting the best solution found throughout the search and the best one in the last list as the center for the next iteration.
The Algorithm

1. Select a random center solution. It is also the best found solution. Set the counter c = 0.
2. Select d randomly in [n − 4, n − 2].
3. Perform an iteration on the center solution.
4. If the iteration improved the best found solution, go to Step 2.
5. Otherwise, advance the counter c = c + 1, and:
• If c = 1 or 3, use the best solution in the list with ∆p = d as the new center solution and go to Step 3.
• If c = 2 or 4, use the best solution found throughout the scan (not including the old center solution) as the new center solution and go to Step 3.
• If c = 5, stop.
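The outer loop, with its random search depth d and the alternating restart counter c, can be sketched as follows. Here `iterate` is a stand-in for one full list-based scan; its return convention (whether the best solution improved, the best solution, the best of the scan, and the best of the deepest list) is our own assumption for the sketch.

```python
import random

def heuristic(n, iterate):
    """Outer restart loop of the heuristic (a sketch).
    iterate(center, d) must return a tuple
    (improved, best, scan_best, last_list_best)."""
    center = list(range(n))
    random.shuffle(center)                # Step 1: random center solution
    best, c = center, 0
    while True:
        d = random.randint(n - 4, n - 2)  # Step 2: random search depth
        improved, best, scan_best, last_best = iterate(center, d)  # Step 3
        if improved:                      # Step 4: redraw d and iterate again
            center = best
            continue
        c += 1                            # Step 5: alternate restart rule
        if c in (1, 3):
            center = last_best            # diversify: best of deepest list
        elif c in (2, 4):
            center = scan_best            # intensify: best of the scan
        else:
            return best                   # c == 5: stop
```

The alternation between `last_best` (c = 1, 3) and `scan_best` (c = 2, 4) mirrors the combination of diversification and steepest-descent restarts described above.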
Computational Experiments

As will be seen from the results reported below, the new algorithm is very fast, and we believe it is much faster than other algorithms reported in the literature. (Problems with n = 30 facilities were solved 120 times with K = 1 in 3-4 seconds, i.e., about 0.03 seconds per run. Problems with n = 64 facilities were solved 120 times with K = 1 in less than a minute, or less than half a second per run.) We also plan to use this algorithm as part of a genetic algorithm; these run times are the times required to generate an initial population of 120 members using K = 1, and preliminary results of such a genetic algorithm are very promising. Comparing and evaluating the advantage of our algorithm over other algorithms is very difficult. We therefore compared the performance of the algorithm using different values of K. The results reported below are of about the same quality as those reported in [17], but the run times are much faster.
We ran the algorithm for K = 1, ..., 6, 8, 10. We ran each variant 120/K times and report the best solution found in these runs, so that total run times are about the same for all variants. We chose 120 because it is divisible by every K in the selected set.
The results are summarized in Tables 1 and 2. For completeness of the presentation, Table 3 gives information about the instances in which the best known solution was not found.
By examining Tables 1-3 we conclude that:

• Run times are very similar for the various K's; generally, they decrease with K.
• In most cases the best average solution is obtained for K = 1.
• For most problems, especially the larger ones, the average solution was within 0.2% of the best known solution.This is about the quality of the solutions for such problems obtained by slower algorithms [17].
• There were 12 instances (out of 152) in which the best known solution was not identified even once (see Table 3).However, in all 12 cases the best solution found was within 0.02% of the best known solution.

Conclusions
We proposed a new heuristic algorithm for the solution of the quadratic assignment problem. The algorithm combines features of tabu search and genetic algorithms. We plan to investigate the application of this algorithm as part of a genetic algorithm; preliminary experiments with genetic algorithms incorporating the proposed algorithm are very promising. The proposed algorithm is very fast compared with other heuristics, which makes direct comparison with other algorithms difficult. However, we obtained solutions of quality similar to that of other algorithms in shorter run times, which indicates an advantage of the proposed algorithm over existing ones.

Appendix: The Short Cuts
The following short cuts are explained in [17]. Since we experimented only with symmetric problems, we present the short cuts for symmetric problems with a zero diagonal (i.e., the cost between a facility and itself, and the distance between a location and itself, are zero).
Let ∆f_rs be the change in the cost f caused by exchanging the sites of facilities r and s. This is a concept similar to the derivative of f. There are n(n−1)/2 values ∆f_rs. It can easily be verified from Equation (1) that, for a symmetric problem with zero diagonal:

∆f_rs = 2 Σ_{k≠r,s} (c_rk − c_sk) (d_p(s)p(k) − d_p(r)p(k))    (2)

Calculating ∆f_rs by Equation (2) requires only O(n) time, rather than the O(n²) time required to calculate f by Equation (1).
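A sketch of the O(n) evaluation of ∆f_rs for symmetric, zero-diagonal instances follows; the helper qap_cost recomputes f from scratch so the short cut can be checked against it.

```python
def qap_cost(c, d, p):
    """Full O(n^2) evaluation of the QAP objective."""
    n = len(p)
    return sum(c[i][j] * d[p[i]][p[j]] for i in range(n) for j in range(n))

def delta_f(c, d, p, r, s):
    """O(n) cost change of exchanging the sites of facilities r and s
    (symmetric c and d with zero diagonal assumed)."""
    return 2 * sum((c[r][k] - c[s][k]) * (d[p[s]][p[k]] - d[p[r]][p[k]])
                   for k in range(len(p)) if k != r and k != s)

# Made-up symmetric 4-facility instance for checking the short cut.
c = [[0, 3, 1, 2], [3, 0, 2, 4], [1, 2, 0, 5], [2, 4, 5, 0]]
d = [[0, 5, 4, 3], [5, 0, 6, 2], [4, 6, 0, 7], [3, 2, 7, 0]]
p = [2, 0, 3, 1]
```

For every pair r < s, `delta_f(c, d, p, r, s)` equals the difference in `qap_cost` between the exchanged permutation and p, at a fraction of the cost.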
Taillard [17] points to a faster formula for calculating ∆f_rs. Define ∆_uv f_rs as the change in the value of the objective function when the pair uv is exchanged in the permutation obtained by exchanging the pair rs. This is a concept similar to the second derivative of f. This change can be calculated in O(1) time (starting from the second iteration) if the pairs rs and uv are mutually exclusive. The formula is based on ∆f_uv (the change in the value of the objective function from the previous permutation by exchanging the pair uv). Therefore, one needs to keep the values of ∆f_ij for all i, j for each of the K members of the three lists, a total of 3K n(n−1)/2 values. By checking which terms of Equation (1) change when the pair uv is exchanged in a permutation p in which rs were exchanged, one obtains:

∆_uv f_rs = ∆f_uv + 2 (c_ur − c_vr + c_vs − c_us) (d_p(u)p(r) − d_p(v)p(r) + d_p(v)p(s) − d_p(u)p(s))    (3)

Note that only 2n − 3 pairs are not mutually exclusive, and Equation (2) can be used in these cases to evaluate ∆_uv f_rs. Evaluating the change in the value of the objective function for all n(n−1)/2 possible pair exchanges (which is required for one step of the algorithm) therefore requires O(n²) time, and one pass through all O(n) distances from the center using the short cut requires O(n³K) time.
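The O(1) second-order update can be sketched as follows. The argument dfuv must be the first-order change ∆f_uv computed on the current permutation, the pairs rs and uv must be mutually exclusive, and symmetric zero-diagonal matrices are assumed; the function names are ours.

```python
def qap_cost(c, d, p):
    n = len(p)
    return sum(c[i][j] * d[p[i]][p[j]] for i in range(n) for j in range(n))

def delta_f(c, d, p, r, s):
    """First-order change (Equation (2)): O(n)."""
    return 2 * sum((c[r][k] - c[s][k]) * (d[p[s]][p[k]] - d[p[r]][p[k]])
                   for k in range(len(p)) if k != r and k != s)

def delta2_f(c, d, p, dfuv, r, s, u, v):
    """Second-order update (Equation (3)): O(1).
    Given dfuv = delta_f(c, d, p, u, v), return the change of exchanging
    u, v in the permutation obtained from p by exchanging r, s,
    provided {r, s} and {u, v} are mutually exclusive."""
    return dfuv + 2 * (c[u][r] - c[v][r] + c[v][s] - c[u][s]) * \
        (d[p[u]][p[r]] - d[p[v]][p[r]] + d[p[v]][p[s]] - d[p[u]][p[s]])

# Made-up symmetric 5-facility instance.
c = [[0, 3, 1, 2, 4], [3, 0, 2, 4, 1], [1, 2, 0, 5, 3],
     [2, 4, 5, 0, 2], [4, 1, 3, 2, 0]]
d = [[0, 5, 4, 3, 6], [5, 0, 6, 2, 4], [4, 6, 0, 7, 1],
     [3, 2, 7, 0, 5], [6, 4, 1, 5, 0]]
p = [2, 0, 3, 1, 4]
```

For mutually exclusive pairs, `delta2_f` reproduces the value that `delta_f` would compute from scratch on the exchanged permutation, but in constant time.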



Tables 1 and 2. Computational experiments. (Legend: number of times out of 120 that the best known solution was obtained; ‡ percentage of the average solution over the best known solution; * time in seconds per run, where each run consists of 120/K individual runs.)

Table 3. Instances where the best known solution was not found. (Legend: number of times out of 120 that the minimum solution was obtained; ‡ percentage of the minimum solution over the best known solution.) Sko49 achieved the highest number for K = 6.
