Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented for solving global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or a new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
Many real-life optimization problems are becoming more and more complex and difficult with the development of science and technology, so solving these complex problems accurately within a reasonable time is very important. Traditional optimization algorithms have difficulty solving such complex nonlinear problems. In recent years, nature-inspired optimization algorithms, which simulate natural phenomena and have different design philosophies and characteristics, such as evolutionary algorithms [
As a stochastic search scheme, TLBO [
The remainder of this paper is organized as follows. The TLBO algorithm is introduced in Section
Rao et al. [
A good teacher is one who brings his or her learners up to his or her level in terms of knowledge. But in practice this is not possible and a teacher can only move the mean of a class up to some extent depending on the capability of the class. This follows a random process depending on many factors. Let
Learners increase their knowledge by two different means: one through input from the teacher and the other through interaction between themselves. A learner interacts randomly with other learners with the help of group discussions, presentations, formal communications, and so forth. A learner learns something new if the other learner has more knowledge than him or her. Learner modification is expressed as
As explained above, the pseudocode for the implementation of TLBO is summarized in Algorithm
(1) Begin
(2) Initialize the population size NP and the number of dimensions D
(3) Initialize learners X and evaluate all learners
(4) Denote the best learner as the teacher
(5) while the stopping criterion is not met do
(6)  for each learner X_i do  % teacher phase
(7)   TF = round(1 + rand)
(8)   X_new = X_i + rand · (X_teacher − TF · X_mean)
(9)   Evaluate X_new and accept it if it is better than X_i  % greedy selection
(10)  end for
(11)  for each learner X_i do  % learner phase
(12)   Randomly select another learner X_j (j ≠ i)
(13)   if f(X_i) < f(X_j) then X_new = X_i + rand · (X_i − X_j)
(14)   else X_new = X_i + rand · (X_j − X_i)
(15)   Evaluate X_new and accept it if it is better than X_i
(16)  end for
(17)  Update the teacher
(18) end while
(19) End
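The loop above can be sketched in Python. This is a minimal, illustrative implementation of the standard TLBO as just described; the objective function, bounds, and evaluation budget in the usage below are arbitrary choices for demonstration, not the paper's settings.

```python
import numpy as np

def tlbo(f, lb, ub, pop_size=20, max_fes=40000, rng=None):
    """Minimal sketch of the standard TLBO (teacher + learner phases)."""
    rng = rng or np.random.default_rng()
    dim = len(lb)
    X = rng.uniform(lb, ub, (pop_size, dim))        # initialize learners
    fit = np.array([f(x) for x in X])
    fes = pop_size
    while fes < max_fes:
        # --- teacher phase: move each learner toward the teacher ---
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop_size):
            TF = rng.integers(1, 3)                 # teaching factor, 1 or 2
            new = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), lb, ub)
            fnew = f(new); fes += 1
            if fnew < fit[i]:                       # greedy selection
                X[i], fit[i] = new, fnew
        # --- learner phase: learn from a randomly chosen peer ---
        for i in range(pop_size):
            j = rng.integers(pop_size)
            while j == i:
                j = rng.integers(pop_size)
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            new = np.clip(X[i] + rng.random(dim) * step, lb, ub)
            fnew = f(new); fes += 1
            if fnew < fit[i]:
                X[i], fit[i] = new, fnew
    return X[np.argmin(fit)], fit.min()
```

For example, minimizing a 5-dimensional sphere function: `best, val = tlbo(lambda x: float(np.sum(x**2)), np.full(5, -100.0), np.full(5, 100.0), max_fes=6000)`.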
In this section, we present only a brief overview of some recently proposed bare-bones algorithms.
PSO is a swarm intelligence-based algorithm, which is inspired by the behavior of birds flocking [
Based on the convergence characteristic of PSO, Kennedy [
Kennedy [
Inspired by the BBPSO and DE, Omran et al. [
Based on the idea that Gaussian sampling is a fine-tuning procedure which starts during exploration and continues into exploitation, Wang et al. [
To balance the global search ability and convergence rate, Wang et al. [
The bare-bones PSO utilizes this information by sampling candidate solutions normally distributed around the theoretically derived attractor point. That is, the new position is generated by a Gaussian distribution that samples the search space based on the
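Under that description, one bare-bones PSO position update can be sketched as follows (a minimal illustration; `pbest` and `gbest` denote a particle's personal best position and the swarm's global best position):

```python
import numpy as np

def bbpso_update(pbest, gbest, rng=None):
    """One bare-bones PSO step: the new position is sampled from a Gaussian
    centred midway between the particle's personal best and the global best,
    with per-dimension standard deviation equal to their absolute difference."""
    rng = rng or np.random.default_rng()
    mu = 0.5 * (pbest + gbest)        # attractor point
    sigma = np.abs(pbest - gbest)     # per-dimension spread
    return rng.normal(mu, sigma)
```

Note that when a particle's personal best coincides with the global best, the spread collapses to zero and the particle stays at the attractor, which is one reason hybrid variants were proposed.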
Based on the previous explanation, a new bare-bones TLBO (BBTLBO) with neighborhood search is proposed in this paper. In fact, in TLBO, if the new learner has a better function value than the old learner, it replaces the old one in the memory; otherwise, the old one is retained. In other words, a greedy selection mechanism is employed as the selection operation between the old solution and the candidate. Hence, the new teacher and the new learner are the global best
Flow chart showing the working of BBTLBO algorithm.
It is known that birds of a feather flock together and people of like mind fall into the same group. Just like evolutionary algorithms themselves, the notion of neighborhood is inspired by nature. The neighborhood technique is an efficient method to maintain the diversity of solutions. It plays an important role in evolutionary algorithms and is often introduced by researchers to allow the maintenance of a population of diverse individuals and to improve the exploration capability of population-based heuristic algorithms [
For the implementation of grouping, various types of connected distances may be used. Here we have used a ring topology [
Ring neighborhood topology with three members.
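With the population stored in an indexable array, the 3-member ring neighborhood reduces to a simple index calculation (an illustrative helper, not code from the paper):

```python
def ring_neighbors(i, pop_size):
    """Indices of the 3-member ring neighbourhood of learner i:
    the learner itself plus its immediate left and right neighbours,
    with wrap-around at both ends of the population."""
    return [(i - 1) % pop_size, i, (i + 1) % pop_size]
```

For example, with a population of 20, learner 0's neighborhood is `[19, 0, 1]`, so the first and last learners are connected and the topology forms a closed ring.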
To balance the global and local search ability, a modified interactive learning strategy is proposed in teacher phase. In this learning phase, each learner employs an interactive learning strategy (the hybridization of the learning strategy of teacher phase in the standard TLBO and Gaussian sampling learning) based on neighborhood search.
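Since the paper's exact update equations are given in formulas not reproduced here, the following is only a plausible sketch of such an interactive learning step: with probability u (the hybridization factor) the learner takes a standard TLBO teacher-phase step toward its neighborhood teacher, and otherwise it is resampled bare-bones style from a Gaussian centred between the learner and that teacher. The names `n_teacher` and `n_mean` (the neighborhood teacher and neighborhood mean) are assumptions for illustration.

```python
import numpy as np

def teacher_phase_step(x, n_teacher, n_mean, u=0.5, rng=None):
    """Hypothetical sketch of the BBTLBO teacher-phase update.
    With probability u: the standard TLBO teacher step toward the
    neighbourhood teacher. Otherwise: bare-bones Gaussian sampling
    centred between the learner and that teacher."""
    rng = rng or np.random.default_rng()
    dim = len(x)
    if rng.random() < u:
        TF = rng.integers(1, 3)                 # teaching factor, 1 or 2
        return x + rng.random(dim) * (n_teacher - TF * n_mean)
    mu = 0.5 * (x + n_teacher)                  # attractor between learner and teacher
    sigma = np.abs(x - n_teacher)               # per-dimension spread
    return rng.normal(mu, sigma)
```

Either branch yields a candidate that then competes with the old learner under the greedy selection described above.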
In BBTLBO, the updating formula of the learning for a learner
In the BBTLBO, there is a
At the same time, in the learner phase, a learner interacts randomly with other learners for enhancing his or her knowledge in the class. This learning method can be treated as the global search strategy (shown in (
In this paper, we introduce a new learning strategy in which each learner learns from the neighborhood teacher and the other learner selected randomly of his or her corresponding neighborhood in learner phase. This learning method can be treated as the neighborhood search strategy. Let
In BBTLBO, each learner probabilistically learns by means of the global search strategy or the neighborhood search strategy in the learner phase. That is, about 50% of the learners in the population execute the learning strategy of the learner phase in the standard TLBO (shown in (
Moreover, compared to the original TLBO, BBTLBO only modifies the learning strategies. Therefore, both the original TLBO and BBTLBO have the same time complexity
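The modified learner phase can be sketched as follows. The 50/50 split between the standard learner-phase step (global search) and a ring-neighborhood step is as described above; the exact neighborhood search formula follows the paper's equations and is only approximated here by learning from the neighborhood teacher and a random neighbor.

```python
import numpy as np

def learner_phase_step(X, fit, i, rng=None):
    """Sketch of the BBTLBO learner phase for learner i.
    With probability 0.5: the standard TLBO learner step against a
    random peer (global search). Otherwise: a neighbourhood search step
    built from the 3-member ring neighbourhood (an approximation of the
    paper's neighbourhood search formula)."""
    rng = rng or np.random.default_rng()
    pop_size, dim = X.shape
    if rng.random() < 0.5:
        # global search: standard TLBO learner-phase step
        j = rng.integers(pop_size)
        while j == i:
            j = rng.integers(pop_size)
        step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
        return X[i] + rng.random(dim) * step
    # neighbourhood search on the ring of three members
    hood = [(i - 1) % pop_size, i, (i + 1) % pop_size]
    n_teacher = X[min(hood, key=lambda k: fit[k])]   # best learner in the ring
    j = hood[rng.integers(3)]                        # random neighbour
    return (X[i] + rng.random(dim) * (n_teacher - X[i])
                 + rng.random(dim) * (X[i] - X[j]))
```

As in the teacher phase, the candidate replaces the old learner only if it has a better function value.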
As explained above, the pseudocode for the implementation of BBTLBO is summarized in Algorithm
(1) Begin
(2) Initialize the population size NP, the number of dimensions D, and the hybridization factor u
(3) Initialize learners X and evaluate all learners
(4) while the stopping criterion is not met do
(5)  for each learner X_i do  % modified teacher phase
(6)   Determine the neighborhood teacher and the neighborhood mean of X_i
(7)   Generate X_new by the interactive learning strategy (hybridization, controlled by u, of the standard teacher-phase step and Gaussian sampling)
(8)   Evaluate X_new and accept it if it is better than X_i
(9)  end for
(10)  for each learner X_i do  % modified learner phase
(11)   if rand < 0.5 then generate X_new by the learner-phase strategy of the standard TLBO (global search)
(12)   else generate X_new by the neighborhood search strategy
(13)   Evaluate X_new and accept it if it is better than X_i
(14)  end for
(15) end while
(16) End
In this section, to illustrate the effectiveness of the proposed method, 20 benchmark functions are used to test the efficiency of BBTLBO. To compare the search performance of BBTLBO with some other methods, other different algorithms are also simulated in the paper.
The details of 20 benchmark functions are shown in Table
Details of numerical benchmarks used.
Function | Formula | D | Range | Optimum
---|---|---|---|---
Sphere | | 30 | | 0
Sum Square | | 30 | | 0
Quadric | | 30 | | 0
Step | | 30 | | 0
Schwefel 1.2 | | 30 | | 0
Schwefel 2.21 | | 30 | | 0
Schwefel 2.22 | | 30 | | 0
Zakharov | | 30 | | 0
Rosenbrock | | 30 | | 0
Ackley | | 30 | | 0
Rastrigin | | 30 | | 0
Weierstrass | | 30 | | 0
Griewank | | 30 | | 0
Schwefel | | 30 | | 0
Bohachevsky1 | | 2 | | 0
Bohachevsky2 | | 2 | | 0
Bohachevsky3 | | 2 | | 0
Shekel5 | | 4 | | −10.1532
Shekel7 | | 4 | | −10.4029
Shekel10 | | 4 | | −10.5364

[The formula and range columns of this table were lost in extraction.]
All the experiments are carried out on the same machine with a Celeron 2.26 GHz CPU, 2 GB memory, and the Windows XP operating system, using Matlab 7.9. For the purpose of reducing statistical errors, each algorithm is independently run 50 times. For all algorithms, the population size was set to 20. All population-based stochastic algorithms use the same stopping criterion, that is, reaching a given maximum number of function evaluations (FEs).
The hybridization factor u is set to
Comparisons of mean ± std of the solutions using different hybridization factors.

[Table: BBTLBO results under seven settings of the hybridization factor; the per-function mean ± std entries were lost in extraction.]
Comparison of the performance curves using different hybridization factors.
The comparisons in Table
In this section, we compare BBTLBO with five other recently proposed bare-bones algorithms: three bare-bones DE variants and two bare-bones PSO algorithms. Our experiment includes two series of comparisons, in terms of solution accuracy and solution convergence (convergence speed and success rate). We compared the performance of BBTLBO with similar bare-bones algorithms, including BBPSO [
In our experiment, a maximal number of FEs is used as the termination condition, namely, 40,000 for all test functions. The results are shown in Table
Comparisons of mean ± std of the solutions using different algorithms.
Fun | BBPSO | BBExp | BBDE | GBDE | MGBDE | BBTLBO |
---|---|---|---|---|---|---|
[The per-function mean ± std entries of this table were lost in extraction.]
Comparison of the performance curves using different algorithms.
From Table
In order to compare the convergence speed and success rate (SR) of the different algorithms, we select a threshold value of the objective function for each test function. The threshold values are listed in Table
The mean number of FEs and SR with acceptable solutions using different algorithms.
Fun | Threshold | BBPSO (MFEs / SR) | BBExp (MFEs / SR) | BBDE (MFEs / SR) | GBDE (MFEs / SR) | MGBDE (MFEs / SR) | BBTLBO (MFEs / SR)
---|---|---|---|---|---|---|---

[Most entries of this table were garbled in extraction; the function labels and threshold values were lost, so the surviving numeric fragments cannot be reliably attributed and are omitted. Rows where no run of an algorithm reached the threshold are reported as NaN with SR = 0 in the original.]
From Table
Comparisons of mean ± std of the solutions using different algorithms.
Fun | jDE | SaDE | PSOcfLocal | PSOwFIPS | TLBO | BBTLBO |
---|---|---|---|---|---|---|
[The per-function entries of this table were lost in extraction. The surviving summary row gives BBTLBO's win/tie/loss counts against each competitor:]

w/t/l | 13/3/4 | 12/4/4 | 13/4/3 | 12/4/4 | 11/6/3 |
In this section, we compared the performance of BBTLBO with other optimization algorithms, including jDE [
The comparisons in Table
In this section, to show the effectiveness of the proposed method, the proposed BBTLBO algorithm is applied to estimate parameters of two real-world problems.
The artificial neural network trained by our BBTLBO algorithm is a three-layer feed-forward network and the basic structure of the proposed scheme is depicted in Figure
BBTLBO-based ANN.
For neural network training, the aim is to find a set of weights with the smallest error measure. Here the objective function is the mean sum of squared errors (MSE) over all training patterns which is shown as follows:
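Assuming the usual mean-squared-error definition and the 16-dimensional weight encoding used in this section (5 input-to-hidden weights, 5 hidden biases, 5 hidden-to-output weights, and 1 output bias; this decomposition of the vector is an assumption for illustration), the training objective can be sketched as:

```python
import numpy as np

def ann_mse(weights, X, y, n_hidden=5):
    """MSE objective for a 1-input, n_hidden-unit, 1-output feed-forward
    network (3*n_hidden + 1 = 16 parameters for n_hidden = 5).
    Hidden activation is sigmoid, output activation is linear."""
    w1 = weights[:n_hidden]                  # input -> hidden weights
    b1 = weights[n_hidden:2 * n_hidden]      # hidden biases
    w2 = weights[2 * n_hidden:3 * n_hidden]  # hidden -> output weights
    b2 = weights[3 * n_hidden]               # output bias
    h = 1.0 / (1.0 + np.exp(-(np.outer(X, w1) + b1)))  # sigmoid hidden layer
    out = h @ w2 + b2                                  # linear output
    return np.mean((out - y) ** 2)                     # mean squared error
```

An optimizer such as the BBTLBO sketch above would then minimize `lambda w: ann_mse(w, X_train, y_train)` over the 16-dimensional weight vector.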
In this example, a three-layer feed-forward ANN with one input unit, five hidden units, and one output unit is constructed to model the curve of a nonlinear function which is described by the following equation [
In this case, the activation function used in the hidden layer is the sigmoid function and the activation function used in the output layer is linear. The number (dimension) of the variables is 16 for the BBTLBO-based ANN. In order to train the ANN, 200 pairs of data are chosen from the real model. For each algorithm, 50 runs are performed. The other parameters are the same as those of the previous investigations. The results are shown in Table
Comparisons between BBTLBO and other algorithms on MSE.
Algorithm | Training error (Mean) | Training error (Std) | Testing error (Mean) | Testing error (Std)
---|---|---|---|---
TLBO | | | |
BBTLBO | | | |

[The numeric entries of this table were lost in extraction.]
Comparison of the performance curves using different algorithms.
Convergence curves
Approximation curves
Error curves
The continuous form of a discrete-type PID controller with a small sampling period
For an unknown plant, the goal of this problem is to minimize the integral absolute error (IAE), which is given as follows [
To avoid overshoot, a penalty value is adopted in the cost function. That is, once overshoot occurs, the value of the overshoot is added to the cost function, which is given as follows [
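Under the description above, the penalized cost can be sketched as follows. The paper's exact weighting is given in its cost-function equation; an unweighted sum of the IAE and the overshoot is assumed here, and the discrete-time error and output arrays are taken from a simulated step response.

```python
import numpy as np

def pid_cost(errors, outputs, setpoint, dt):
    """Penalised IAE cost for PID tuning: the integral of the absolute
    error over the step response, plus the overshoot magnitude added as a
    penalty whenever the output exceeds the setpoint."""
    iae = np.sum(np.abs(errors)) * dt                 # discrete-time IAE
    overshoot = max(0.0, np.max(outputs) - setpoint)  # 0 if no overshoot
    return iae + overshoot
```

The optimizer then searches the PID gain space for the parameter set minimizing this cost over the simulated closed-loop response.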
In our simulation, the formulas for the plant examined are given as follows [
The system sampling time is
In the simulations, the step response of the PID control system tuned by the proposed BBTLBO is compared with those tuned by the standard genetic algorithm (GA) and the standard particle swarm optimization (PSO). The population sizes of GA, PSO, and BBTLBO are all 50, and the maximum number of iterations is 50 for each. In addition, the crossover rate is set to 0.90 and the mutation rate to 0.10 for GA.
The optimal parameters and the corresponding performance values of the PID controllers are listed in Table
Comparisons of parameters of PID controllers using different algorithms.
Algorithm | K_p | K_i | K_d | Overshoot (%) | Peak time (s) | Rise time (s) | Cost function | CPU time (s)
---|---|---|---|---|---|---|---|---
GA | 0.11257 | 0.02710 | 0.28792 | 2.90585 | 1.65000 | 1.05000 | 16.34555 | 7.05900
PSO | 0.11772 | 0.01756 | 0.27737 | 1.04808 | 1.65000 | 0.65000 | 11.60773 | 6.91000
BBTLBO | 0.11605 | 0.01661 | 0.25803 | 0.34261 | 1.80000 | 0.70000 | 11.34300 | 7.04500
Performance curves using different methods.
Step response curves using different methods.
In this paper, TLBO has been extended to BBTLBO, which uses a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning to balance exploration and exploitation in the teacher phase, and uses a modified mutation operation to eliminate duplicate learners in the learner phase. The proposed BBTLBO algorithm is applied to 20 benchmark functions and two real-world optimization problems. The analysis and experiments show that BBTLBO significantly improves the performance of the original TLBO, although it spends more CPU time per generation than the standard TLBO. The comparisons with other algorithms on the 20 chosen test problems show that BBTLBO achieves good performance by using neighborhood search to generate better-quality solutions, although it does not always perform best in every experimental case of this paper. It can also be observed that BBTLBO gives the best performance on the two real-world optimization problems compared with the other algorithms in this paper.
Further work includes research into neighborhood search based on different topological structures. Moreover, the algorithm may be further applied to constrained, dynamic, and noisy single-objective and multiobjective optimization domain. It is expected that BBTLBO will be used to more real-world optimization problems.
The authors declare that there is no conflict of interests regarding the publication of this paper.
This research was partially supported by the National Natural Science Foundation of China (61100173, 61100009, 61272283, and 61304082). This work is partially supported by the Natural Science Foundation of Anhui Province, China (Grant no. 1308085MF82), and the Doctoral Innovation Foundation of Xi’an University of Technology (207-002J1305).