Quantum-behaved particle swarm optimization (QPSO) is an improved version of particle swarm optimization (PSO) and has shown superior performance on many optimization problems. However, as problems grow larger and more complex, most serial optimization algorithms either cannot solve them or require an enormous amount of computation. Fortunately, MapReduce, an effective model for processing big data with heavy computational demands, has been widely adopted in many areas. In this paper, we implement QPSO on the MapReduce model and propose MapReduce quantum-behaved particle swarm optimization (MRQPSO), a parallel and distributed version of QPSO. MRQPSO is compared with QPSO on several test problems and nonlinear equation systems. The results show that MRQPSO completes the same computing tasks in less time and, in terms of optimization performance, outperforms QPSO in many cases.
With the development of information science, more and more data is being stored, such as web content and bioinformatics data. As a result, many basic problems have become more complex, which poses great challenges to current intelligent algorithms. Optimization, one of the most important issues in artificial intelligence, has likewise become harder and harder to solve in real-world applications.
Over the past 30 years, evolutionary algorithms have become one of the most effective intelligent optimization methods. To face this new challenge, distributed evolutionary algorithms (dEAs) have blossomed rapidly. The paper [
Quantum mechanics and trajectory analysis have recently gained extensive attention from scholars and have sparked progress in many areas, such as image segmentation [
To follow this trend and enhance the capabilities of the standard QPSO, MapReduce quantum-behaved particle swarm optimization is developed. MRQPSO transplants QPSO onto the MapReduce model and makes it parallel and distributed by partitioning the search space. Comparisons between MRQPSO and the standard QPSO show that the proposed MRQPSO reduces the time needed for the same number of function evaluations. Moreover, on some test problems MRQPSO improves solution quality and is more robust than QPSO.
The rest of this paper is organized as follows. Section
Inspired by bird flocks and fish schools, Kennedy and Eberhart proposed the PSO algorithm in 1995 [
From the above equations, it can be seen that PSO uses few parameters, which makes it easy to control and apply, and it offers good convergence behavior with fast convergence speed. These advantages have earned the PSO algorithm a great deal of research attention. However, PSO is not a global optimization algorithm [
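For reference, the two PSO update equations referred to above can be sketched in code. This is a minimal sketch of the standard formulation; the inertia weight w and acceleration coefficients c1, c2 are typical illustrative values, not values prescribed by this paper.

```python
import random

def pso_step(xs, vs, pbests, gbest, w=0.7, c1=1.5, c2=1.5):
    # one iteration of the standard PSO velocity and position update:
    #   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
    for i in range(len(xs)):
        for d in range(len(xs[i])):
            r1, r2 = random.random(), random.random()
            vs[i][d] = (w * vs[i][d]
                        + c1 * r1 * (pbests[i][d] - xs[i][d])
                        + c2 * r2 * (gbest[d] - xs[i][d]))
            xs[i][d] += vs[i][d]
    return xs, vs
```

Each particle is pulled toward its own best position and the swarm's best; the small parameter set (w, c1, c2) is what makes PSO easy to tune.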
To overcome this shortcoming of the original PSO algorithm, Sun et al. proposed quantum-behaved particle swarm optimization (QPSO) in 2004 [
According to the uncertainty principle, the velocity and position of a particle cannot be determined simultaneously. In quantum space, a probability function for the position where a particle appears can be obtained from the Schrödinger equation, and the actual position of a particle can then be measured by the Monte Carlo method. Based on these ideas, QPSO constructs a local attractor from the particles' personal best solutions and the global best solution as (
The position of the particle is updated by
In QPSO, the first step is to initialize the population randomly, which includes the position of each particle, the personal best values, and the global best value. Next, calculate the mean position of
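The iteration just described can be sketched as follows. This assumes the standard QPSO formulation (mean best position, local attractor, and the logarithmic sampling step); the contraction-expansion coefficient beta is held fixed here for simplicity, whereas implementations often anneal it from 1.0 to 0.5.

```python
import math
import random

def qpso_step(xs, pbests, gbest, beta=0.75):
    # one iteration of the standard QPSO position update
    m, dim = len(xs), len(xs[0])
    # mean best position over all personal bests
    mbest = [sum(p[d] for p in pbests) / m for d in range(dim)]
    new_xs = []
    for i in range(m):
        x = []
        for d in range(dim):
            phi = random.random()
            # local attractor between the personal best and the global best
            p = phi * pbests[i][d] + (1 - phi) * gbest[d]
            u = random.random()
            step = beta * abs(mbest[d] - xs[i][d]) * math.log(1.0 / u)
            # the particle appears on either side of the attractor with equal probability
            x.append(p + step if random.random() < 0.5 else p - step)
        new_xs.append(x)
    return new_xs
```

Because the new position is sampled from a probability distribution around the attractor rather than integrated along a velocity, QPSO needs no velocity vector at all.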
Although the QPSO algorithm is superior to PSO, it still has some disadvantages. Because the particles in QPSO move in discrete jumps, the narrow region containing the optimum may be missed. And when a problem requires a large amount of computation, QPSO may take too much time.
MapReduce [
In this model, the computation takes a set of key/value pairs. The
Because Google has not released its system to the public, Hadoop, developed by the Apache Lucene project, has been widely used instead. This Java-based open-source platform is a clone of the MapReduce infrastructure, and we can use it to design and implement our distributed computation.
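The key/value flow of the model can be illustrated with the classic word-count example. The sketch below simulates the map, shuffle, and reduce phases in plain Python rather than through the Hadoop API, so it runs without a cluster:

```python
from collections import defaultdict

def mapper(key, value):
    # key: document id, value: document text; emit (word, 1) pairs
    for word in value.split():
        yield word, 1

def reducer(key, values):
    # key: word, values: all counts emitted for that word
    yield key, sum(values)

def map_reduce(inputs):
    # shuffle phase: group the intermediate pairs by key
    groups = defaultdict(list)
    for k, v in inputs:
        for ik, iv in mapper(k, v):
            groups[ik].append(iv)
    return dict(kv for key, vals in groups.items() for kv in reducer(key, vals))

print(map_reduce([(1, "big data big model"), (2, "big cluster")]))
# prints {'big': 3, 'data': 1, 'model': 1, 'cluster': 1}
```

On Hadoop, the runtime performs the grouping and distributes mapper and reducer calls across machines; the user supplies only the two functions.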
The particle swarm optimization algorithm [
Although QPSO has made satisfying progress against premature convergence, it is not ready for problems with complex landscapes or problems that need huge computation to be solved. Because the particles of QPSO move in discrete jumps, they may miss the narrow region where the global optimum lies, and as problems grow more complex, the computational cost increases. We therefore make QPSO parallel and distributed by transplanting the algorithm onto the MapReduce model, and we name the result MRQPSO. The framework of MRQPSO is described in Algorithm
solution on this subspace;
Flowchart of MRQPSO.
The proposed MRQPSO partitions the search space into many subspaces. For
Algorithm
After being processed by the mappers, the intermediate key/value pairs change to denote the information of
function mapper (key, value)
The reduce function is in charge of merging and integrating the information emitted by the mappers. As Algorithm
function reducer (key, value)
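Putting the two functions together, the division of labour in MRQPSO can be simulated as below. This is only a sketch: a random-sampling local search stands in for the full QPSO run inside each mapper, and the two subspace bounds and the evaluation budget are illustrative values, not the paper's settings.

```python
import random

def local_search(bounds, evals, objective):
    # stand-in for a full QPSO run restricted to one subspace:
    # sample the subspace uniformly and keep the best point found
    best_x, best_f = None, float("inf")
    for _ in range(evals):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def mapper(subspace_id, bounds, objective, evals=200):
    # each mapper optimizes one subspace and emits (id, (best x, best f))
    x, f = local_search(bounds, evals, objective)
    yield subspace_id, (x, f)

def reducer(pairs):
    # merge the subspace winners into the single global best
    return min(pairs, key=lambda kv: kv[1][1])

sphere = lambda x: sum(v * v for v in x)
# partition [-10, 10] x [-10, 10] into two subspaces along the first axis
subspaces = {0: [(-10.0, 0.0), (-10.0, 10.0)], 1: [(0.0, 10.0), (-10.0, 10.0)]}
pairs = [kv for sid, b in subspaces.items() for kv in mapper(sid, b, sphere)]
best_id, (best_x, best_f) = reducer(pairs)
```

Because every mapper works on its own subspace independently, the function evaluations run concurrently on different machines, which is where the running-time savings reported later come from.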
To validate the proposed MRQPSO algorithm, we first selected 8 functions to evaluate its ability to solve complex problems. These scalable optimization problems come from the CEC 2013 Special Session on Real-Parameter Optimization [
Benchmark functions used in this paper.
Function | Description
F1 | Composition function 1
F2 | Composition function 2
F3 | Composition function 3
F4 | Composition function 4
F5 | Composition function 5
F6 | Composition function 6
F7 | Composition function 7
F8 | Composition function 8
Some parameter settings and environment are listed as follows.
We compared the proposed MRQPSO with the original QPSO algorithm to test optimization performance. Each function was run 20 independent times, and all the results are recorded in Tables
QPSO: this algorithm transforms the search space from classic space to quantum space, where particles can appear at any position, which avoids premature convergence to some degree. The population size of QPSO is 10.
MRQPSO: this algorithm is an implementation of QPSO on the MapReduce model, which yields a parallel and distributed QPSO. The population sizes of MRQPSO were 10, 20, and 30, denoted by s in Table
Performance of MRQPSO.
Fun | s | Min value | Max value | Mean function value | St. d | Mean running time (ms)
F1 | 10 | | | | | 52098
F1 | 20 | | | | | 55393
F1 | 30 | | | | | 63067
F2 | 10 | | | | | 57123
F2 | 20 | | | | | 59785
F2 | 30 | | | | | 68330
F3 | 10 | | | | | 58311
F3 | 20 | | | | | 61245
F3 | 30 | | | | | 68463
F4 | 10 | | | | | 942736
F4 | 20 | | | | | 978615
F4 | 30 | | | | | 1108999
F5 | 10 | | | | | 903781
F5 | 20 | | | | | 984774
F5 | 30 | | | | | 1134965
F6 | 10 | | | | | 982763
F6 | 20 | | | | | 1053224
F6 | 30 | | | | | 1183599
F7 | 10 | | | | | 970225
F7 | 20 | | | | | 1056708
F7 | 30 | | | | | 1239350
F8 | 10 | | | | | 76354
F8 | 20 | | | | | 81578
F8 | 30 | | | | | 89552
Comparison between MRQPSO and QPSO on the optimum.
Fun | Min value (MRQPSO) | Min value (QPSO) | Max value (MRQPSO) | Max value (QPSO) | Mean function value (MRQPSO) | Mean function value (QPSO) | St. d (MRQPSO) | St. d (QPSO)
F1 | | | | | | | |
F2 | | | | | | | |
F3 | | | | | | | |
F4 | | | | | | | |
F5 | | | | | | | |
F6 | | | | | | | |
F7 | | | | | | | |
F8 | | | | | | | |
Comparison between MRQPSO and QPSO on the running time.
Fun | Mean running time (ms), MRQPSO | Mean running time (ms), QPSO
F1 | | 60704
F2 | | 71371
F3 | | 72962
F4 | | 1811935
F5 | | 1739438
F6 | | 1855480
F7 | | 1945936
F8 | | 104103
Comparison between MRQPSO and QPSO on nonlinear equation systems.
 | Fun 1 (QPSO) | Fun 1 (MRQPSO) | Fun 2 (QPSO) | Fun 2 (MRQPSO) | Fun 3 (QPSO) | Fun 3 (MRQPSO)
Value, Mean | | | | | |
Value, Max | | | | | |
Value, Min | | | | | |
Time, Mean | 152362 | | 123579 | | 115000 |
Time, Max | 160675 | | 129658 | | 116815 |
Time, Min | 144371 | | 119379 | | 110133 |
All experiments were run on VMware Workstation 12.0.0 virtual machines, each with one processor and 1.0 GB of RAM; the host CPU is a Core i7. Hadoop 1.1.2 and Java 1.7 were used in the MapReduce experiments, with three virtual machines for MRQPSO and one for the serial algorithm. The programming language is Java.
In Table
The results of MRQPSO are compared with the QPSO algorithm in Tables
The notable advantage in running time is presented in Table
To summarize, MRQPSO delivers better solutions at a lower running time, which makes it more suitable and effective for dealing with complex problems.
Nonlinear equation systems arise in many areas, such as economics [
Generally, a nonlinear equation system can be described as [
To obtain the solutions of a nonlinear equation system, an optimization problem of the form (
or
In this article, optimization problems of the form (
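As a worked illustration of this sum-of-squares reformulation, consider a hypothetical two-equation system (for illustration only; it is not one of the three benchmark systems below):

```python
import math

# hypothetical system, for illustration only:
#   f1(x, y) = x^2 + y^2 - 1 = 0   (unit circle)
#   f2(x, y) = x - y = 0           (main diagonal)
def system(v):
    x, y = v
    return [x * x + y * y - 1.0, x - y]

def objective(v):
    # F(v) = sum_i f_i(v)^2 is nonnegative everywhere and
    # equals zero exactly at the roots of the system
    return sum(f * f for f in system(v))

root = [math.sqrt(0.5), math.sqrt(0.5)]
print(objective(root))  # ~0 (up to floating-point rounding)
```

Minimizing such an objective with QPSO or MRQPSO then amounts to searching for the roots of the system.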
Fun 1:
Fun 2:
Fun 3:
The parameters and environment used for solving the nonlinear equation systems are listed as follows. Each algorithm was run 20 times on each problem independently. All experiments are run for 2^{10}
Two aspects are considered in the comparisons: the running time of the two algorithms and the minimized objective function value obtained. The results are reported in Table
From Table
From the viewpoint of time cost, however, it is clear that MRQPSO outperformed QPSO in all cases, and the advantage is significant: because three virtual machines evaluate solutions in the feasible space at the same time, the computing task can be completed in less time.
This paper developed the MRQPSO algorithm by implementing the serial QPSO on the MapReduce model, thereby achieving the parallelization and distribution of QPSO. The proposed method was applied to composition benchmark functions and nonlinear equation systems and obtained satisfactory solutions. Moreover, the comparisons between MRQPSO and QPSO showed that the parallel algorithm outperformed the serial one in both solution quality and time cost. MRQPSO can therefore be considered a suitable algorithm for solving large-scale and complex problems. As further work, a cluster with more servers should be built to test the performance of MRQPSO on more complex practical problems.
Informed consent was obtained from all individual participants included in the study.
The authors declare that they have no conflicts of interest.
This work was supported by the National Natural Science Foundation of China (nos. 61272279, 61272282, 61371201, and 61203303), the Program for New Century Excellent Talents in University (no. NCET120920), the Program for New Scientific and Technological Star of Shaanxi Province (no. 2014KJXX45), the National Basic Research Program (973 Program) of China (no. 2013CB329402), the Program for Cheung Kong Scholars and Innovative Research Team in University (no. IRT_15R53), and the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) (no. B07048).