The paper presents a novel hybrid evolutionary algorithm that combines Particle Swarm Optimization (PSO) and Simulated Annealing (SA). When PSO reaches a local optimum, all particles gather around it and escaping from this local optimum becomes difficult. To avoid premature convergence of PSO, we present a new hybrid evolutionary algorithm, called HPSO-SA, based on the idea that PSO ensures fast convergence, while SA drives the search out of local optima thanks to its strong local-search ability. The proposed HPSO-SA algorithm is validated on ten standard benchmark multimodal functions, for which we obtain significant improvements, and the results are compared with those obtained by existing hybrid PSO-SA algorithms. We also provide two versions of HPSO-SA (sequential and distributed) for minimizing the energy consumption of embedded-system memories. The two versions of HPSO-SA reduce the energy consumption in memories by 76% to 98% compared to Tabu Search (TS). Moreover, the distributed version of HPSO-SA provides execution-time savings of about 73% to 84% on a cluster of 4 PCs.
Several optimization algorithms have been developed over the last few decades for solving real-world optimization problems. Among them are many heuristics, such as Simulated Annealing (SA) [
Particle Swarm Optimization (PSO) is based on the social behavior of individuals living together in groups. Each individual tries to improve itself by observing other group members and imitating the better ones. In this way, the group members perform an optimization procedure, which is described in [
The rest of the paper is organized as follows. Section
SA [
The calculation of this probability relies on a parameter
Thus, at the start of SA most worsening moves may be accepted, but towards the end only improving ones are likely to be allowed, which helps the procedure jump out of local minima. The algorithm may be terminated after a prespecified number of iterations or a prespecified runtime.
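The acceptance rule described above (always accept improving moves; accept worsening moves with a probability that shrinks as the temperature drops) is the classical Metropolis criterion. A minimal sketch, with a hypothetical function name of our choosing:

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis criterion: always accept an improving move (delta <= 0);
    accept a worsening move (delta > 0) with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```

At high temperature `exp(-delta / T)` is close to 1, so almost any move is accepted; as `T` falls towards zero the probability of accepting a worsening move vanishes, which matches the behavior described in the text.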
PSO is a population-based stochastic optimization technique developed by [
The PSO algorithm is based on the idea that particles move through the search space with velocities that are dynamically adjusted according to their historical behavior. The particles therefore tend to move towards better and better search areas over the course of the search process. The algorithm starts with a group of random (or seeded) particles (solutions) and then searches for optima by updating each generation. Each particle is treated as a volume-less point in the search space and keeps track of two best values. The first is the best solution (fitness) the particle has achieved so far; this value is called the personal best. The second is the best value found so far by any particle in the population; this global best is tracked by the particle swarm optimizer.
At each iteration, these two best values are combined to adjust the velocity along each dimension, which is then used to compute a new position for the particle. A portion of the adjustment to the velocity is influenced by the individual's previous best position (
Movement of each particle.
In (
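The velocity and position updates described above can be sketched as follows. The inertia weight `w`, acceleration coefficients `c1`/`c2`, and velocity bound `vmax` are common default values assumed here for illustration, not necessarily the paper's exact settings:

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, vmax=4.0):
    """One standard PSO update: the new velocity mixes the old velocity,
    attraction to the particle's own best (pbest), and attraction to the
    swarm's best (gbest); the position then moves by the new velocity."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        vd = max(-vmax, min(vmax, vd))  # enforce velocity bounds
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When a particle already sits at both its personal best and the global best, both attraction terms vanish and the particle keeps coasting on its (damped) previous velocity.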
This section presents a new hybrid HPSO-SA algorithm, which combines the advantages of both PSO (strong global-search ability) and SA (strong local-search ability). Other applications of hybrid PSO and SA algorithms can be found in [
This hybrid approach makes full use of the exploration capability of both PSO and SA and offsets the weaknesses of each. Consequently, through application of SA to PSO, the proposed algorithm is capable of escaping from a local optimum. However, if SA is applied to PSO at each iteration, the computational cost will increase sharply and at the same time the fast convergence ability of PSO may be weakened. In order to flexibly integrate PSO with SA, SA is applied to PSO every
The hybrid HPSO-SA algorithm works as illustrated in Algorithm
iter ← 0
while stop_criterion is not met do
    for each particle do
        Evaluate the particle's fitness
        if the fitness is better than the particle's best fitness so far then
            Update current value as the new personal best
        end if
        Choose the particle with the best fitness value in the neighborhood
        Update the particle velocity according to the velocity-update equation
        Enforce velocity bounds
        Update the particle position according to the position-update equation
        Enforce particle bounds
    end for
    Update global best solution
    if SA is due at this iteration then
        iterSA ← 0
        while iterSA is below the maximum number of SA iterations do
            Generate a neighbor of the current solution
            Accept or reject it according to the SA acceptance criterion
            Update the temperature
            iterSA ← iterSA + 1
        end while
        Update (global_best_solution)
    end if
    iter ← iter + 1
end while
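The hybrid loop sketched in the algorithm listing (standard PSO updates, with a periodic SA refinement of the global best) can be expressed as follows. All numeric parameters (inertia weight, acceleration coefficients, SA period, initial temperature, cooling rate, perturbation scale) are illustrative assumptions, not the paper's settings:

```python
import math
import random

def hpso_sa(f, swarm, lo, hi, iters=200, sa_every=10, sa_iters=20, t0=1.0, alpha=0.9):
    """Sketch of HPSO-SA: PSO for fast convergence, with SA applied to the
    global best every `sa_every` iterations to escape local optima."""
    dim = len(swarm[0])
    vel = [[0.0] * dim for _ in swarm]
    pbest = [p[:] for p in swarm]          # personal bests
    gbest = min(pbest, key=f)[:]           # global best
    for it in range(1, iters + 1):
        for i, x in enumerate(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = 0.7 * vel[i][d] + 2.0 * r1 * (pbest[i][d] - x[d]) \
                            + 2.0 * r2 * (gbest[d] - x[d])
                x[d] = min(hi, max(lo, x[d] + vel[i][d]))  # enforce bounds
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
        if it % sa_every == 0:             # periodic SA refinement of gbest
            t, cur = t0, gbest[:]
            for _ in range(sa_iters):
                cand = [min(hi, max(lo, xd + random.gauss(0, 0.1))) for xd in cur]
                delta = f(cand) - f(cur)
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    cur = cand
                t *= alpha                 # geometric cooling
            if f(cur) < f(gbest):
                gbest = cur
    return gbest
```

Applying SA only every `sa_every` iterations, rather than at every step, follows the text's rationale: it keeps the extra computational cost bounded and preserves PSO's fast convergence.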
In order to compare the performance of HPSO-SA hybrid algorithm with those described in [
Standard benchmark functions adopted in this work.
Function | Optimum | Classification
---|---|---
Sphere | 0 | Unimodal
Rastrigin | 0 | Multimodal
Griewank | 0 | Multimodal
Rosenbrock | 0 | Unimodal
Quartic | 0 | Noisy
Schwefel | 0 | Multimodal
Ackley | 0 | Multimodal
Michalewicz | — | Multimodal
Himmelblau | — | Multimodal
Shubert | — | Multimodal
3D plots of the benchmark functions: Rastrigin, Sphere, Griewank, Rosenbrock, Schwefel, Ackley, Michalewicz, Himmelblau, and Shubert.
To verify the efficiency and effectiveness of HPSO-SA hybrid algorithm, the experimental results of HPSO-SA approach are compared with those obtained by [
In this section we compare HPSO-SA approach with TL-PSO method [
Comparison of hybrid HPSO-SA algorithm with TL-PSO approach [
(Columns: mean, best, and worst values for TL-PSO and HPSO-SA; the numeric entries are not recoverable.)
Performance of four Particle Swarm Optimization algorithms, namely classical PSO, Attraction-Repulsion based PSO (
In order to make a fair comparison between classical PSO, ATREPSO, QIPSO, GMPSO and the HPSO-SA approach, we fixed the parameters as indicated in [
All the algorithms outperform classical Particle Swarm Optimization. The HPSO-SA algorithm gives much better performance than PSO, QIPSO, ATREPSO, and GMPSO, except on the Sphere and Ackley functions. On the Sphere function, QIPSO obtains better results than those obtained by the HPSO-SA approach, but this changes when the maximum number of iterations is fixed to
The analysis of the results obtained for the Ackley function shows that QIPSO obtains a better mean result than the HPSO-SA algorithm; however, HPSO-SA has a much smaller standard deviation.
Comparison of mean/standard deviation of solutions obtained by using hybrid HPSO-SA algorithm and approaches described in [
(Rows: Rastrigin, Sphere, Griewank, Rosenbrock, Noisy, Schwefel, Ackley, Michalewicz, Himmelblau, and Shubert; the numeric mean/standard-deviation entries are not recoverable.)
In this section four benchmark functions are used to compare the relative performance of HPSO-SA algorithm with SUPER-SAPSO, SAPSO, and PSO algorithms described in [
For all comparisons, the number of particles was set to 30. HPSO-SA algorithm uses a linearly decreasing inertia weight
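A linearly decreasing inertia weight, as mentioned above, interpolates between a start and an end value over the run. The endpoint values 0.9 and 0.4 below are a common choice in the PSO literature, assumed here for illustration rather than taken from the paper:

```python
def inertia(it, max_it, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: w_start at iteration 0,
    w_end at the final iteration."""
    return w_start - (w_start - w_end) * it / max_it
```

A large early weight favors global exploration, while the small final weight favors local exploitation around the best solutions found.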
Performance results of HPSO-SA, SUPER-SAPSO, SAPSO, and PSO algorithms on benchmark functions.
Function | Algorithm | Number of iterations | Average Error |
---|---|---|---|
Rastrigin | PSO | 2989 | 0.097814 |
SAPSO | 2168 | 0.07877 | |
SUPER-SAPSO | 5 | 0.0 | |
HPSO-SA | 2022.43 | 0.0 | |
Sphere | PSO | 805 | 0.094367 |
SAPSO | 503 | 0.085026 | |
SUPER-SAPSO | 4 | 0.0 | |
HPSO-SA | 501.15 | ||
Griewank | PSO | 2004 | 0.082172 |
SAPSO | 1517 | 0.075321 | |
SUPER-SAPSO | 3 | 0.0 | |
HPSO-SA | 1496.35 | ||
Ackley | PSO | 4909 | 0.099742 |
SAPSO | 3041 | 0.099461 | |
SUPER-SAPSO | 5 | 0.0 | |
HPSO-SA | 3020.65 |
In all the above experiments, the HPSO-SA algorithm obtains better results than both the standard PSO and SAPSO algorithms [
SUPER-SAPSO uses an expression for the particle movements (
In this section the performance of HPSO-SA is compared with that of PSOSA [
Table
Performance comparison between PSOSA, Sid.GA, Hybrid and HPSO-SA for benchmark functions [
(Rows: Sphere, Rastrigin, Griewank, and Rosenbrock; the numeric entries are not recoverable.)
To make a fair comparison, the maximum number of function evaluations allowed was set to 20000, 30000, and 40000 for the HPSO-SA and PSOSA algorithms, with the number of particles set to 20. The HPSO-SA algorithm uses a linearly decreasing inertia weight
The numerical results are given in Table
Over the four benchmark functions, HPSO-SA and PSOSA do better than the standard GA and the hybrid algorithm [
For the Sphere, Rastrigin and Griewank functions, the HPSO-SA and PSOSA algorithms obtain optimal solutions within the specified constraints (number of objective-function evaluations). For the Rosenbrock function, PSOSA obtains better results than HPSO-SA for dimension 20, but for dimensions 10 and 30, HPSO-SA does better and has a smaller standard deviation.
According to trends in [
In order to compute the energy cost of the system, we propose an energy consumption estimation model for our memory architecture, composed of an SPM, an instruction cache, and a DRAM. Equation (
In this model, we distinguish between the two cache write policies: Write-Through (
Equations (
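The additive form of the model, i.e., total energy as the sum over each memory (SPM, instruction cache, DRAM) of per-access energy times access count, can be sketched as follows. The dictionary keys are our own naming, and the WT/WB write-policy distinction made by the full model is omitted here for brevity:

```python
def energy(acc, e):
    """Estimate total memory energy: for each memory and access type,
    multiply the access count (acc) by the per-access energy (e) and sum.
    Keys like 'spm_read' are illustrative, not the paper's notation."""
    return sum(acc[m + '_' + op] * e[m + '_' + op]
               for m in ('spm', 'icache', 'dram')
               for op in ('read', 'write'))
```

Because DRAM accesses dominate the per-access energies, shifting read/write counts from DRAM to the SPM is the main lever for reducing the total, which motivates the allocation problem addressed next.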
List of terms.
Term | Meaning |
---|---|
Energy consumed by a read from the SPM. |
Energy consumed by a write to the SPM. |
Number of read accesses to the SPM. |
Number of write accesses to the SPM. |
Energy consumed by a read from the instruction cache. |
Energy consumed by a write to the instruction cache. |
Number of read accesses to the instruction cache. |
Number of write accesses to the instruction cache. |
Energy consumed by a read from the DRAM. |
Energy consumed by a write to the DRAM. |
Number of read accesses to the DRAM. |
Number of write accesses to the DRAM. |
The cache write policy considered: WT or WB. In case of WT, |
Dirty bit used in case of WB to indicate, during the access, |
Type of the access
Since the SPM has many advantages, it is clearly preferable to place as much data as possible in it. In other words, we must maximize the terms
This section should be considered as an attempt to use hybrid evolutionary algorithms for reducing energy consumption in embedded systems. Here, the focus is on the use of the HPSO-SA algorithm designed in the previous sections. Since the problem under consideration is discrete and has specific features, HPSO-SA needs some adaptation.
A solution can be represented by an array whose size equals the number of data objects. Each element of this array denotes whether the corresponding data object is placed in the SPM ("1") or not ("0"). The HPSO-SA algorithm starts with a randomly initialized swarm of such solutions.
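The binary encoding described above can be initialized as follows; a minimal sketch with a function name of our choosing:

```python
import random

def random_solution(n_data):
    """Random binary allocation vector: element i is 1 if data object i
    is placed in the SPM, 0 if it stays in DRAM."""
    return [random.randint(0, 1) for _ in range(n_data)]
```

Each particle in the initial swarm would be one such vector, later evaluated by the energy-based fitness function.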
This is the objective function that we want to minimize. It is used to test each solution for suitability to the environment under consideration.
Each dimension
SA uses a notion of a neighborhood relation: with probability equal to 0.03, the value of each element of the current solution is flipped; the resulting candidate is then validated: while
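The neighborhood move above (flip each bit with probability 0.03, then validate the candidate) can be sketched as follows. The repair step, evicting random selected items while the SPM capacity is exceeded, is our reading of the truncated "validate solution" step, stated here as an assumption:

```python
import random

def neighbor(sol, sizes, capacity, p_flip=0.03):
    """Neighborhood move for the SPM allocation problem: flip each bit
    with probability p_flip, then repair (assumed scheme) by evicting
    random selected items until the SPM capacity constraint holds."""
    cand = [1 - b if random.random() < p_flip else b for b in sol]
    while sum(s for s, b in zip(sizes, cand) if b) > capacity:
        idx = random.choice([i for i, b in enumerate(cand) if b])
        cand[idx] = 0
    return cand
```

Keeping candidates feasible inside the move operator means the SA acceptance test only ever compares valid allocations.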
The key idea in the SA approach is the temperature function: if, for fixed
The function
For the distributed hybrid HPSO-SA (
In order to compute the energy cost of the studied memory architecture, composed of an SPM, an instruction cache, and a DRAM, we proposed an energy consumption estimation model, which is explained in [
List of Benchmarks.
Benchmark | Suite | Description |
---|---|---|
ShaCE | MiBench | The secure hash algorithm that produces a 160-bit message digest for a given input.
BitcountCE | MiBench | Tests the bit-manipulation abilities of a processor by counting the number of bits in an array of integers.
FirCE | SNU-RT | Finite impulse response filter (signal processing over a 700-item sample).
JfdctintCE | SNU-RT | Discrete-cosine transformation on
AdpcmCE | Mälardalen | Adaptive pulse code modulation algorithm.
CntCE | Mälardalen | Counts nonnegative numbers in a matrix.
CompressCE | Mälardalen | Data compression using LZW.
DjpegCE | MediaBench | JPEG decoding.
GzipCE | SPEC 2000 | Compression.
NsichneuCE | WCET Benchmarks | Simulates an extended Petri net.
StatemateCE | WCET Benchmarks | Automatically generated code.
In the experiments, 30 independent executions of each heuristic are performed, and the best and average results over these 30 executions are recorded. In this case, the best and average solutions give similar results. Figure
Energy consumed by benchmarks studied in this work.
As
Execution time used by HPSO-SA algorithms on benchmarks studied in this work.
In this paper, we have designed a hybrid algorithm (HPSO-SA) that combines the exploration ability of PSO with the exploitation ability of SA, and is capable of preventing premature convergence. Compared with QIPSO, ATREPSO and GMPSO [
In addition, we will compare the HPSO-SA algorithm with other hybrid algorithms (PSO-GA, PSO-MDP, PSO-TS) currently being designed by the authors. Comparisons will also be performed on additional benchmark functions and on more complex problems, including functions with dimensionality larger than 30.
The authors are grateful to the anonymous referees for their pertinent comments and suggestions. Dawood Khan helped the authors with the intricacies of the English language. The work of M. Idrissi Aouad is supported by the French National Research Agency (ANR) under the Future Architectures program.