A Novel Teaching-Learning-Based Optimization with Laplace Distribution and Experience Exchange

The Teaching-Learning-Based Optimization (TLBO) algorithm is a powerful evolutionary algorithm with good global search capability. However, in the later period of evolution, the diversity of learners degrades as the iterations accumulate and the scope of solutions shrinks, which leads to entrapment in local optima and premature convergence. This paper presents an improved version of the TLBO algorithm based on a Laplace distribution and an experience exchange strategy (LETLBO). It uses the Laplace distribution to expand the exploration space, and a new experience exchange strategy makes good use of experience information to identify more promising solutions and accelerate convergence. The experimental results verify that the LETLBO algorithm improves solution accuracy and quality compared with the original TLBO and several TLBO variants, and that it is very competitive with other popular and powerful evolutionary algorithms. Finally, the LETLBO algorithm is applied to parameter estimation of chaotic systems, and the promising results show its applicability to practical problem-solving.

Recently, Rao et al. [14] proposed the Teaching-Learning-Based Optimization (TLBO) algorithm. In TLBO, the best individual is marked as the "teacher" and the other individuals are "students"; the algorithm simulates the two behaviors of a class, the "Teacher Stage" and the "Learner Stage". It has the advantages of simple computation and few control parameters beyond the common ones, which makes it easy to implement and fast to converge [15]. TLBO has attracted much attention and has been successfully applied to many real-world optimization problems [16][17][18][19][20][21].
However, relevant research shows that the higher the dimension of the problem to be optimized, the more prone the algorithm is to slow convergence. For this reason, many TLBO variants have been proposed in recent years. Venkata Rao et al. [22] incorporated an elitist strategy into the TLBO algorithm to identify its effect on exploration and exploitation capacity. Feng Zou [23] maintained population diversity in the teacher phase by using a ring neighborhood search. Farahani [14] presented differential and interactive mutation operations to improve exploration capability and maintain diversity. Debao [24] renews some individuals according to a random probability, while the remaining individuals renew their positions by learning from the best individual, the worst individual, and another random individual of the current generation. Feng Zou et al. [25] used differential evolution (DE) operators to increase diversity and a repulsion learning method to make learners search for knowledge from different directions. Sai H C [26] used a confined TLBO (CTLBO) that eliminates the teaching factor and introduces eight new mutation strategies in the teacher phase and four in the student phase to enhance the algorithm's exploitation and exploration capabilities. Jiang et al. [27] designed a neighborhood topology and a fitness-distance-ratio mechanism to maintain the exploration ability of the population.
Although the aforementioned TLBO variants show better performance than the original TLBO, the scope of solutions still shrinks in the later stages of evolution. In addition, the blindness of randomly self-learning from another random learner in the Learner Stage of the original TLBO weakens the exploitation ability of the individuals. To address these issues, this paper proposes a novel version of TLBO that is augmented with a Laplace distribution and an experience exchange strategy (LETLBO). The major contributions of this paper are as follows: (i) the Laplace distribution is introduced in place of the uniform random numbers, which improves the mutation ability and broadens the scope of solutions in the later stage of evolution; (ii) an experience exchange strategy is designed for the Learner Stage, which decreases the blindness of the random self-learning method and improves the exploitation ability of the individuals. The paper first introduces the background and applications of the original TLBO algorithm, then analyses the strengths and weaknesses of the original TLBO and its variants, and finally proposes a novel version of TLBO. In Section 2, the original TLBO algorithm is described. Section 3 presents TLBO with the Laplace distribution and experience exchange strategy (LETLBO). In Section 4, the results of LETLBO and related optimization algorithms are analysed via a comparative study. Section 5 applies the LETLBO algorithm to parameter estimation of chaotic systems. The paper concludes with Section 6.

Teaching-Learning-Based Optimization
This section gives a brief description of the TLBO algorithm proposed by Rao. The TLBO algorithm is a successful human-inspired method that mimics the teaching-learning interaction between a teacher and learners. The algorithm operates in two key parts: the Teacher Stage and the Learner Stage. The Teacher Stage refers to learning from the teacher, while the Learner Stage is a mutual learning process between the learners.

Teacher Stage.
In the Teacher Stage, the purpose is to increase the learners' average grades by relying on the teacher of the class, thereby enhancing the mean grade of the whole class. A fresh learner is generated as follows:

X_i,new = X_i + r × (X_teacher − TF × X_mean),

where X_i (i = 1, 2, ..., N, with N the number of learners) is a learner vector; X_i,new is the new individual generated from X_i; X_teacher is the best individual of the current population; X_mean is the mean of the learners; r is a random number uniformly distributed in [0, 1]; and the parameter TF is the teaching factor that decides how much of X_mean is changed. The value of TF is either 1 or 2, chosen at random as TF = round(1 + rand(0, 1)), indicating that the learner learns something or nothing from the teacher, respectively.

Learner Stage.
In the Learner Stage, the learners obtain knowledge by interacting with each other. A fresh learner X_i,new is generated from a randomly chosen learner X_j using the following expression:

X_i,new = X_i + r × (X_i − X_j), if f(X_i) < f(X_j),
X_i,new = X_i + r × (X_j − X_i), otherwise,

where i and j are mutually exclusive integers selected from 1 to N, N is the population size, r is again a random number uniformly distributed in [0, 1], and f is the fitness function (minimization is assumed). A learner X_i learns something new if the other learner X_j has more knowledge than him or her.
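The two stages can be sketched in Python as follows; this is a minimal NumPy illustration of a generic TLBO iteration applied to the sphere function, not the exact code used in the experiments of this paper:

```python
import numpy as np

def tlbo_step(X, f):
    """One TLBO iteration (Teacher Stage, then Learner Stage).
    X is an (N, D) population; f is the fitness function (minimized)."""
    N, D = X.shape
    fitness = np.array([f(x) for x in X])
    teacher = X[np.argmin(fitness)]        # best learner acts as teacher
    mean = X.mean(axis=0)                  # X_mean, the class average
    # Teacher Stage: move toward the teacher, away from TF times the mean
    for i in range(N):
        TF = np.random.randint(1, 3)       # teaching factor TF in {1, 2}
        r = np.random.rand(D)              # r ~ U[0, 1] per dimension
        candidate = X[i] + r * (teacher - TF * mean)
        if f(candidate) < f(X[i]):         # greedy acceptance
            X[i] = candidate
    # Learner Stage: learn from a randomly chosen classmate X_j
    for i in range(N):
        j = np.random.choice([k for k in range(N) if k != i])
        r = np.random.rand(D)
        if f(X[i]) < f(X[j]):
            candidate = X[i] + r * (X[i] - X[j])
        else:
            candidate = X[i] + r * (X[j] - X[i])
        if f(candidate) < f(X[i]):
            X[i] = candidate
    return X

# Usage: minimize the 5-dimensional sphere function with 20 learners.
sphere = lambda x: float(np.sum(x ** 2))
X = np.random.default_rng(0).uniform(-5, 5, size=(20, 5))
for _ in range(50):
    X = tlbo_step(X, sphere)
best = min(sphere(x) for x in X)
```

Because both stages accept a candidate only when it improves the fitness, the best solution in the population can never get worse from one iteration to the next.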
The Laplace distribution has the probability density function

f(x) = (1 / (2b)) × exp(−|x − μ| / b),

where μ is the position parameter and b is the scale parameter. It is the standard Laplace distribution when μ equals 0 and b equals 1. Figure 2 shows the probability density curves of the standard Gaussian, standard uniform, and standard Laplace distributions. As can be seen from Figure 2, the peak of the Laplace distribution at the origin is the highest of the three distributions, while its tails decay the most slowly. Therefore, if the variation ability of the Laplace distribution is used in the Teacher Stage and the Learner Stage, its perturbation and self-regulation ability is the strongest of the three distributions, and the basic TLBO algorithm becomes more likely to jump out of local optima. The updating equation of the Learner Phase then becomes

X_i,new = X_i + l × (X_i − X_j), if f(X_i) < f(X_j),
X_i,new = X_i + l × (X_j − X_i), otherwise,

where l is a random number drawn from the Laplace distribution.

3.3. Experience Exchange Strategy.
It can be seen from (3) that each learner only learns randomly from other learners in the Learner Phase. This leads to a certain degree of blindness in the learning direction, which degrades the algorithm's accuracy. It is well known that exchanging experience is an important way to learn. Experience exchange ensures that each learner not only incorporates his own experience but also contributes his experience to the entire group, maximizing the utility of that experience. In the updating equation of the Learner Phase, the new learner is therefore generated from the current learner, the mean experience of the group, and the learner's own experience, where ψ is the exchange learning factor, a vector whose elements are distributed randomly in the range [0, 1.5] [29]; X_mp is the mean of the current experience of all of the learners; and X_pbi is the experience of the i-th learner. X_mp is updated as

X_mp = (1 / N) × Σ_{i=1}^{N} X_pbi.

3.4. Flowchart of the LETLBO Algorithm.
As explained above, the flowchart of the novel version of TLBO with the Laplace distribution and experience exchange strategy (LETLBO) is shown in Figure 1.
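To illustrate why the heavier tails of the Laplace distribution matter, the sketch below samples Laplace(μ, b) by inverting its CDF and compares its tail mass with that of a standard Gaussian; the sample size and the tail threshold of 3.0 are illustrative choices, not values from the paper:

```python
import numpy as np

def laplace_sample(size, mu=0.0, b=1.0, rng=None):
    """Draw Laplace(mu, b) samples by inverting the CDF:
    x = mu - b * sign(u) * ln(1 - 2|u|), with u ~ U(-1/2, 1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-0.5, 0.5, size)
    return mu - b * np.sign(u) * np.log1p(-2.0 * np.abs(u))

rng = np.random.default_rng(42)
lap = laplace_sample(100_000, rng=rng)
gau = rng.standard_normal(100_000)

# Tail mass beyond |x| > 3: the standard Laplace keeps roughly
# exp(-3) ~ 5% of its samples out there, the standard Gaussian only
# ~0.27%, so Laplace-driven perturbations occasionally take large
# jumps that help learners escape local optima.
lap_tail = float(np.mean(np.abs(lap) > 3.0))
gau_tail = float(np.mean(np.abs(gau) > 3.0))
```

The inverse-CDF construction is used here only to make the sampling explicit; in practice `numpy.random.Generator.laplace` yields the same distribution.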

Analysis of Computational Complexity.
Computational complexity [30] is usually used to measure how an algorithm's running time grows with the size of its input. The LETLBO algorithm involves four kinds of operations: population initialization, the Teacher Stage, the Learner Stage, and the experience exchange strategy. Each of these operations costs at most O(n^2) per iteration, where n denotes the larger of N and D. Hence, the total time complexity of the LETLBO algorithm over T_max iterations is no more than O(n^2)·T_max, while the total time complexity of the original TLBO algorithm is also O(n^2)·T_max. This shows that the LETLBO algorithm hardly increases the time complexity of the original TLBO algorithm.

Experimental Results and Discussion
This section first investigates the influence of the population size control parameter on the proposed algorithm. Then, the performance of the LETLBO algorithm is evaluated by comparison with other TLBO variants and nine original intelligence optimization algorithms. The experimental results verify that the LETLBO algorithm is very competitive in terms of solution accuracy and quality.

Experimental Designing.
To study the performance of the proposed LETLBO algorithm, six benchmark functions [31] are numerically simulated in this experiment; they are listed in Table 1. All experiments in this paper were performed in MATLAB 7.14.0 (R2012a) on Windows XP with a Celeron 2.26 GHz CPU and 2 GB of memory, and each experiment was repeated 30 times independently.
The parameter settings of all comparative experiments are listed below. The dimension (D) of the benchmark functions is set to 30. The other parameters of the compared algorithms are set to the values recommended in their original papers. In addition, the stopping criterion is set to 300,000 function evaluations (FEs) [32].
To verify whether the overall optimization performance of the various algorithms differs significantly, the Wilcoxon rank-sum test [33] with a significance level α = 0.05 is conducted in this paper. The Wilcoxon rank-sum test assesses whether the mean values (F_mean) of the solutions from any two algorithms are statistically different from each other. The marks "-", "+", and "≈" denote that the compared algorithm performs significantly worse than, significantly better than, or similarly to LETLBO, respectively.
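As a concrete illustration of this procedure, the sketch below applies `scipy.stats.ranksums` to two synthetic sets of 30 final errors (the numbers are illustrative, not results from this paper) and derives the mark for the compared algorithm:

```python
import numpy as np
from scipy.stats import ranksums

# Synthetic final errors from 30 independent runs of two algorithms;
# these values are illustrative only, not results from the paper.
rng = np.random.default_rng(1)
errs_letlbo = rng.normal(1e-3, 2e-4, 30)   # stand-in for LETLBO
errs_other = rng.normal(5e-3, 1e-3, 30)    # stand-in for a competitor

stat, p = ranksums(errs_other, errs_letlbo)

# Assign the mark from the competitor's point of view:
# "+" better than LETLBO, "-" worse, "≈" no significant difference.
if p >= 0.05:
    mark = "≈"
elif np.median(errs_other) < np.median(errs_letlbo):
    mark = "+"
else:
    mark = "-"
```

With these synthetic samples the competitor's errors are clearly larger, so the test rejects the null hypothesis and the competitor receives the "-" mark.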

Population Size N Influence on LETLBO Performance.
The population size is now varied from 10 to 100 in increments of 10, with the other parameters as before, and its influence on the performance of the LETLBO algorithm is investigated. All experiments are conducted on test functions F1-F6. Table 2 shows the results for the different population sizes. From the statistical results in Table 2, the mean value (F_mean) of LETLBO with N = 30 is better than that of the other settings on functions F1, F2, F3, F4, and F6 (5 out of the 6 functions). On the remaining function, F5, the mean value of LETLBO with N = 30 is similar to that of the other settings. In summary, LETLBO with N = 30 ranks first among the tested settings. The reason is that a smaller population size may lead to premature convergence, while a larger population size greatly decreases the probability of finding the correct search direction. Therefore, the population size N of the LETLBO algorithm is recommended to be set to 30.

Comparison LETLBO with the Original TLBO, LTLBO, and ETLBO Algorithm.
LETLBO is compared with the original TLBO, with a variant that adds only the Laplace distribution (LTLBO), and with a variant that adds only the experience exchange strategy (ETLBO), so that the contribution of each component can be assessed separately. The parameters of LTLBO and ETLBO are the same as those of LETLBO. Each algorithm runs independently 30 times, and the statistical results for F_mean and SD are provided in Table 3, whose last three rows show the experimental results; the best results are marked in bold. The evolution plots of TLBO, LTLBO, ETLBO, and LETLBO are illustrated in Figure 3. In addition, semi-logarithmic convergence plots are used to analyse the evolution of the mean errors of the functions.
From Table 3, it can be seen that LETLBO performs much better than TLBO, LTLBO, and ETLBO in most cases. Specifically, LETLBO outperforms TLBO, LTLBO, and ETLBO on five, four, and five of the six test functions, respectively. On function F5, LETLBO performs the same as TLBO, LTLBO, and ETLBO in terms of the statistical mean value (F_mean). Furthermore, Figure 3 reveals the convergence behaviors of the TLBO, LTLBO, ETLBO, and LETLBO algorithms. As illustrated by these evolution plots, the LETLBO algorithm shows the fastest convergence rates on functions F1, F2, F3, and F5.
Therefore, referring to the statistical results for F_mean and SD, the overall performance of LETLBO is significantly better than that of the original TLBO, LTLBO, and ETLBO algorithms.
The main reason for these results is that the Laplace distribution expands the search space while the experience exchange strategy avoids detours, helping the algorithm identify more promising and more accurate solutions. Thus, exploration and exploitation are better balanced in LETLBO. It can therefore be concluded that LETLBO achieves the highest accuracy among the original TLBO, LTLBO, and ETLBO algorithms.

Comparison with Other Improved TLBO Variants.
In this section, we compare LETLBO with four other improved TLBO variants: ETLBO [22], NSTLBO [23], TLBMO [34], and TLBODE [14]. The parameters of these four variants are taken from the references listed above. Each algorithm runs independently 30 times, and the statistical results for F_mean and SD are provided in Table 4, whose last three rows show the experimental results. The best results are shown in bold.
From the statistical mean values (F_mean) in Table 4, the overall performance of LETLBO is significantly better than that of the other four algorithms. More specifically, LETLBO outperforms ETLBO, NSTLBO, TLBMO, and TLBODE on four, five, five, and four of the six test functions, respectively. LETLBO is better than the other four algorithms on functions F1, F2, F3, and F4; it performs the same as the other four algorithms on function F5 in terms of F_mean, and the same as TLBODE on function F6. Therefore, referring to the statistical results for F_mean and SD, the overall performance of the LETLBO algorithm is significantly better than that of the other improved TLBO variants: the ETLBO, NSTLBO, TLBMO, and TLBODE algorithms.

Comparison of the LETLBO Algorithm with Nine Original Intelligence Optimization Algorithms.
In this section, the LETLBO algorithm is compared with nine original intelligence optimization algorithms: PSO [2], DE [3], GSO [4], ABC [5], WCA [7], CS [8], DSA [9], BSA [11], and ISA [12]. From the statistical mean values (F_mean) in Table 5, it can be seen that the LETLBO algorithm performs better than the other nine algorithms according to the Wilcoxon rank-sum test results. More specifically, the LETLBO algorithm outperforms the PSO, DE, ABC, CS, GSO, WCA, DSA, BSA, and ISA algorithms on two, four, five, six, five, six, five, three, and six of the six test functions, respectively. On functions F2, F3, and F4 in particular, the LETLBO algorithm outperforms all nine of the other algorithms. Table 5 also reports the standard deviations (SD) of the ten algorithms, and the LETLBO algorithm is superior to all nine other algorithms on functions F1, F2, F3, and F4.
This indicates that the LETLBO algorithm is more robust than the nine original intelligence optimization algorithms on functions F1, F2, F3, and F4. Therefore, our approach is effective for solving optimization problems compared with the PSO, DE, ABC, CS, GSO, WCA, DSA, BSA, and ISA algorithms.

Application LETLBO to Parameter Estimation of Chaotic System
In this section, we use the LETLBO algorithm to estimate the unknown parameters of the well-known Lorenz chaotic system. Suppose that the chaotic system [35][36][37] is n-dimensional and described as follows:

dX/dt = F(X, X_0, θ),

where X and X_0, respectively, represent the state vector and the initial state, and θ denotes the set of real structure parameters of the chaotic system.
When estimating the parameters of a chaotic system, its structure is presumed to be known. Therefore, the estimated system can be denoted as follows:

dY/dt = F(Y, X_0, θ̃),

where Y and X_0, respectively, represent the state vector and the initial state of the estimated system, and θ̃ denotes the set of estimated parameters. Assume that the observed state of the original system and the state of the estimated system at the k-th sampling instant are X_k and Y_k, and that the total number of observations is M. The objective function is then defined as follows [35]:

J(θ̃) = (1/M) × Σ_{k=1}^{M} ||X_k − Y_k||²,

and J vanishes when all the estimated parameters are equal to their real values. This is clearly a multidimensional optimization problem: the unknown system parameters θ̃ are the decision variables and minimizing J is the optimization goal. Figure 4 represents the principle of parameter estimation for a chaotic system via an optimization algorithm. Traditional optimization methods usually incur a large computational cost and cannot obtain the global optimum or a satisfactory solution. The Lorenz system is given by the following equations:

dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz,

where σ = 10, ρ = 28, and β = 8/3 are the real values of the original system parameters that determine the system behavior. The running trajectories of the Lorenz chaotic system in each plane are shown in Figure 5.
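Under the simplifying assumption of a forward-Euler integrator with an illustrative step size and horizon (the paper does not specify its numerical scheme), the objective J for the Lorenz system can be sketched as follows:

```python
import numpy as np

def lorenz_traj(theta, x0, steps, dt=0.01):
    """Integrate the Lorenz equations with forward Euler.
    theta = (sigma, rho, beta); returns a (steps, 3) trajectory."""
    sigma, rho, beta = theta
    x = np.asarray(x0, dtype=float)
    traj = np.empty((steps, 3))
    for k in range(steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[k] = x
    return traj

def J(theta_hat, observed, x0):
    """Mean squared state error between observed and estimated systems."""
    est = lorenz_traj(theta_hat, x0, steps=len(observed))
    return float(np.mean(np.sum((observed - est) ** 2, axis=1)))

true_theta = (10.0, 28.0, 8.0 / 3.0)       # sigma, rho, beta
x0 = (1.0, 1.0, 1.0)                       # illustrative initial state
observed = lorenz_traj(true_theta, x0, steps=500)

j_true = J(true_theta, observed, x0)       # zero at the true parameters
j_off = J((9.0, 28.0, 8.0 / 3.0), observed, x0)  # grows when sigma is off
```

Any of the compared optimizers can then be run with J as the fitness function; because the system is chaotic, even a small parameter error makes the trajectories diverge and J grow quickly.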
In this section, the LETLBO, ETLBO, TLBODE, TLBMO, and NSTLBO algorithms are used to estimate the parameters, with a bounded search range set for each unknown parameter. In the experiment, the population size N of the LETLBO algorithm is set to 50, and each algorithm runs independently 30 times with 12,000 function evaluations. The other parameter settings are the same as those of the original algorithms. Table 6 lists the best, worst, and average results and the estimation results of each algorithm. Figure 6 illustrates the evolution of the estimates of each parameter of the Lorenz chaotic system.
As can be seen from Table 6, the parameter estimates of the Lorenz chaotic system obtained by the LETLBO algorithm are very close to the true values, and the estimation accuracy is high. In terms of the best, worst, and average results, the LETLBO algorithm is better than the other algorithms. Figure 6 shows that the LETLBO algorithm outperforms the four other TLBO variants in terms of search quality and convergence rate. This demonstrates the effectiveness and robustness of the LETLBO algorithm for parameter estimation of the Lorenz chaotic system.

Summary and Conclusions
In this paper, a new version of the TLBO algorithm, namely the LETLBO algorithm, is proposed to solve unconstrained optimization problems. The Laplace distribution and an experience exchange strategy are incorporated into the proposed algorithm.
The influence of the control parameters on the performance of the proposed algorithm is analysed in detail. The performance of the LETLBO algorithm is then evaluated by comparison with other TLBO variants and nine original intelligence optimization algorithms. The experimental results verify that the LETLBO algorithm is superior to the other TLBO variants in terms of solution quality in most cases. In addition, the proposed algorithm has clear and significant advantages over the nine original intelligence optimization algorithms. Finally, we applied the algorithm to parameter estimation of chaotic systems, where it can be regarded as a new, highly practical choice for solving the parameter estimation of the Lorenz chaotic system.
Our future work will focus on real-world applications of the proposed LETLBO algorithm in areas such as power systems, material structure design, unmanned aerial vehicle (UAV) route planning, and robotic path planning, all of which are of great value for future research.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Mathematical Problems in Engineering
Conflicts of Interest
The authors declare that they have no conflicts of interest.