Existing artificial immune optimization algorithms suffer from shortcomings such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory holds that danger signals generated by changes in the environment guide different levels of immune response, and the areas around danger signals are called danger zones. By defining a danger zone in which a danger signal is calculated for each antibody, the algorithm adjusts each antibody's concentration according to its own danger signal and then triggers self-regulating immune responses, so that population diversity is maintained. Experimental results show that, compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the proposed algorithm achieves better solution quality and population diversity, with smaller error values and higher success rates, and can find solutions meeting the required accuracies within the specified number of function evaluations.
In engineering practice, a wide variety of complex optimization problems must be solved, such as multimodal optimization, high-dimensional optimization, and dynamic optimization with time-varying parameters. These problems take the form of minimizing energy consumption, time, or risk, or maximizing quality or efficiency, and can usually be expressed as finding the maximum or minimum of a multivariable function subject to a series of equality and/or inequality constraints. To solve such problems, optimization theories and technologies have developed rapidly, and their impact on society is also increasing.
Current research focus of optimization algorithms is evolutionary computation methods represented by genetic algorithms (GAs) [
Artificial immune system (AIS) is a bionic intelligent system inspired by the biological immune system (BIS) and is a new frontier of research in artificial intelligence. The study of AIS has five major aspects: negative selection algorithms (NSAs), artificial immune networks (AINEs), clonal selection algorithms (CLONALGs), the danger theory (DT), and dendritic cell algorithms (DCAs) [
Existing artificial immune optimization algorithms retain many merits of BIS, such as fine diversity, strong robustness, and implicit parallelism, but also exhibit a number of shortcomings, such as premature convergence and poor local search ability [
The remainder of this paper is organized as follows. The principles of artificial immune theories and influential artificial immune based optimization algorithms are described in Section
In this section, the three artificial immune theories adopted in this paper are introduced: the clonal selection theory, the immune network theory, and the danger theory. Three influential artificial-immune-based optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, which are compared with the proposed algorithm in the experiments, are also described.
From the humoral immune response in the biological immune mechanism, the main idea of the clonal selection [
The main idea of the immune network [
The danger theory [
CLONALG [
opt-aiNet [
(1) Initialize. Randomly generate the initial network population;
(2) While (termination conditions are not met) do Begin
  (2.1) While (the change of the average fitness of the population compared with that of the last generation is greater than the specified value) do Begin
    (2.1.1) Compute the fitness of every individual in the population;
    (2.1.2) Clone the same number of copies of every individual to obtain clone groups;
    (2.1.3) Mutate the clone groups to obtain mutated groups;
    (2.1.4) Compute the fitness of every clone in the mutated groups;
    (2.1.5) Select the clone with the highest fitness in every mutated group, and form a new population;
    (2.1.6) Compute the average fitness of the population;
  End;
  (2.2) Compute the distance between any two individuals; if the distance is less than the threshold, retain only one;
  (2.3) Randomly generate a certain number of antibodies;
End;
opt-aiNet contains two nested loops. The algorithm first enters the outer loop: a specific number of antibodies (real-valued vectors) are implanted in the definition domain of the objective function, constituting the artificial immune network. The algorithm then enters the inner loop: to obtain local optimal solutions, clonal selection is performed on every antibody in the network. This continues until the average fitness of the population is close to that of the previous generation, which means that the network has stabilized; the algorithm then leaves the inner loop. Antibodies in the network interact with each other, and network suppression occurs. Finally, new antibodies are introduced at random. The process repeats until the termination conditions are met. Because of the nested loops, the algorithm spends unnecessary function evaluations. It maintains the diversity of the population but suffers from slow convergence and low search accuracy [
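The two-loop structure above can be sketched in code. The following is a minimal illustration, not a faithful reimplementation: the parameter names (`beta`, `supp_thresh`, `d_new`) and the step-size rule `alpha = (1/beta)*exp(-f*)` follow the commonly published opt-aiNet description, and the inner loop is capped for safety.

```python
import numpy as np

def opt_ainet_sketch(f, dim, bounds, pop_size=20, n_clones=10, beta=100.0,
                     supp_thresh=0.2, d_new=0.4, stab_eps=1e-3,
                     max_iter=5, rng=None):
    """Minimal sketch of the opt-aiNet two-loop structure (minimizing f).
    Parameter names and values are illustrative, not from the paper."""
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))           # (1) initialize

    def fitness(xs):                                          # maximize -f
        return -np.array([f(x) for x in xs])

    for _ in range(max_iter):                                 # outer loop (2)
        prev_avg = -np.inf
        for _inner in range(200):                             # inner loop (2.1), capped
            fit = fitness(pop)                                # (2.1.1)
            span = fit.max() - fit.min() or 1.0
            norm = (fit - fit.min()) / span                   # normalized fitness
            new_pop = []
            for x, nf in zip(pop, norm):
                alpha = (1.0 / beta) * np.exp(-nf)            # step shrinks with fitness
                clones = x + alpha * rng.standard_normal((n_clones, dim))
                clones = np.clip(clones, lo, hi)              # (2.1.2)-(2.1.3)
                cand = np.vstack([x[None, :], clones])
                cfit = fitness(cand)                          # (2.1.4)
                new_pop.append(cand[cfit.argmax()])           # (2.1.5) best survives
            pop = np.array(new_pop)
            avg = fitness(pop).mean()                         # (2.1.6)
            if abs(avg - prev_avg) <= stab_eps:               # network stabilized
                break
            prev_avg = avg
        keep = []                                             # (2.2) network suppression
        for x in pop:
            if all(np.linalg.norm(x - k) >= supp_thresh for k in keep):
                keep.append(x)
        n_new = max(1, int(d_new * len(keep)))                # (2.3) random newcomers
        pop = np.vstack([keep, rng.uniform(lo, hi, size=(n_new, dim))])
    return pop
```

Running this on a simple sphere function shows both behaviors described above: the inner loop refines individuals until the average fitness stabilizes, and network suppression collapses near-duplicate solutions.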
dopt-aiNet extends opt-aiNet to deal with time-varying fitness functions [
This section describes the basic idea of dt-aiNet. The flow of the algorithm is described in Section
In this paper, the danger theory is introduced into the optimization algorithm, and the clonal selection theory and the immune network theory are integrated. All the antibodies, which interact with each other, form the immune network. First, the algorithm defines the danger zone, calculates a danger signal for each antibody, and then adjusts each antibody's concentration according to its own danger signal. Second, the algorithm performs the clonal proliferation operation, generating clone groups by duplicating a certain number of random antibodies, and then mutates each clone while keeping the parent antibody. Third, the algorithm selects the antibody with the highest fitness inside the parent antibody's danger zone, and also selects antibodies with higher fitness than the parent antibody that lie outside the parent antibody's danger zone. Fourth, the algorithm adds randomly generated antibodies to adjust the population size, recalculates danger signals for all antibodies, and then removes antibodies whose concentration equals zero. All the individuals in the population constitute the immune network, which improves the affinities of the population through constant evolution. The network eliminates antibodies with low concentration and low affinity, and the surviving antibodies are viewed as memory individuals. When the number of memory individuals no longer changes, these individuals are the optimal solutions of the multimodal function. The algorithm therefore comprises seven elements: danger signal and concentration calculation, clonal selection (
1. Initialize. Randomly generate the initial network population within the definition domain, and set initial concentrations;
2. While (termination conditions are not met) do Begin
  2.1. Compute the affinity and danger signals of each antibody in the population;
  2.2. Select better individuals to clone, and activate them. The number of clones is related to concentrations;
  2.3. Perform the mutation operation on the clones (affinity mutation). The mutation rate is related to affinities and is adaptively adjusted;
  2.4. Perform the clonal suppression, and select better individuals to add to the network;
  2.5. Update the fitness, danger signals, and concentrations of the population, and perform the network suppression;
  2.6. Randomly generate a certain number of antibodies, and add them to the network;
End;
3. Update the fitness, danger signals, and concentrations of the population, and perform the network suppression;
4. Output the population.
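The flow above can be sketched as follows. Because the exact danger-signal, concentration, and mutation formulas are defined later in the paper and are not reproduced here, the corresponding pieces below (`danger`, the concentration step, the mutation step size) are illustrative stand-ins that follow the step numbering only qualitatively.

```python
import numpy as np

def dt_ainet_sketch(f, dim, bounds, pop_size=20, max_clones=5, beta=30.0,
                    radius=0.3, con0=0.5, decay=0.3, grow=0.2,
                    n_random=5, max_iter=100, rng=None):
    """Sketch of the dt-aiNet flow above, for minimizing f. All constants
    and formulas are illustrative assumptions, not the paper's own."""
    rng = rng or np.random.default_rng(1)
    lo, hi = bounds

    def affinity(xs):                                # larger is better
        return -np.array([f(x) for x in xs])

    def danger(pop, aff):                            # better neighbours in the zone
        d = np.zeros(len(pop))
        for i in range(len(pop)):
            for j in range(len(pop)):
                if j != i and np.linalg.norm(pop[j] - pop[i]) < radius:
                    d[i] += max(aff[j] - aff[i], 0.0)
        return d

    pop = rng.uniform(lo, hi, size=(pop_size, dim))  # step 1
    con = np.full(len(pop), con0)
    for _ in range(max_iter):                        # step 2
        aff = affinity(pop)                          # 2.1
        span = aff.max() - aff.min() or 1.0
        norm = (aff - aff.min()) / span
        new_pop = []
        for i, x in enumerate(pop):
            k = 1 + int(max_clones * con[i])         # 2.2 clone count ~ concentration
            alpha = (1.0 / beta) * np.exp(-norm[i])  # 2.3 affinity-adaptive step
            clones = np.clip(x + alpha * rng.standard_normal((k, dim)), lo, hi)
            cand = np.vstack([x[None, :], clones])
            new_pop.append(cand[affinity(cand).argmax()])  # 2.4 (simplified)
        pop = np.array(new_pop)
        aff = affinity(pop)                          # 2.5 update concentrations
        con = np.where(danger(pop, aff) > 0,
                       np.maximum(con - decay, 0.0), np.minimum(con + grow, 1.0))
        alive = con > 0.0                            # network suppression
        pop, con = pop[alive], con[alive]
        pop = np.vstack([pop, rng.uniform(lo, hi, size=(n_random, dim))])  # 2.6
        con = np.concatenate([con, np.full(n_random, con0)])
    return pop, con                                  # steps 3-4
```

Note how the concentration mechanism does the pruning that opt-aiNet achieves with a distance threshold: dominated antibodies accumulate danger signal, decay to zero concentration, and die, while locally best antibodies persist as memory.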
The optimization function is expressed as
The affinity between antibody and antigen is the binding strength between antibody and antigen, which is the solution fitness to the problem. It is expressed by
The affinity between antibody and antibody represents the similarity degree between the two antibodies and is expressed by
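Since the two affinity formulas themselves are not reproduced here, the following sketch uses common stand-in definitions (assumptions, not the paper's exact expressions): for a minimization problem, a smaller objective value maps to a higher antibody-antigen affinity, and antibody-antibody similarity decreases with Euclidean distance.

```python
import numpy as np

def antigen_affinity(x, f):
    """Antibody-antigen affinity: higher for better (smaller) f(x).
    Illustrative form; assumes f(x) >= 0."""
    return 1.0 / (1.0 + f(x))

def antibody_affinity(x1, x2):
    """Antibody-antibody affinity: closer vectors are more similar.
    Illustrative form based on Euclidean distance."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(x1) - np.asarray(x2)))
```

Both functions map into (0, 1], so an exact optimum (or an identical pair of antibodies) attains affinity 1.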
This section describes some of the steps in the process of the algorithm, which are different from the influential artificial immune based optimization algorithms.
Because danger signals are associated with the environment, we use the proximity measurement to simulate the danger zone. The concentrations of antibody populations in the danger zone reflect the environment condition for the optimization problem. According to the danger theory [
Interactions between antibodies within
The antibody concentration is dynamic and is related to the antibody's danger signal and its affinity to antigens; these two factors are the main drivers of the dynamic change of antibody concentration.

When the surroundings change, the antibody concentration changes. If the danger signal of an antibody is not zero, that is, there are better solutions around the antibody, the danger signal will inhibit the antibody, and its concentration will decay as the evolution proceeds. The greater the danger signal is, the greater the impact of the environment on the antibody is. When the surroundings do not change, the antibody is in a relatively stable environment, that is, there are no better solutions around it. The antibody is then regarded as a candidate peak point, and its concentration will increase with the evolution.

The affinity between the antibody and antigens affects the antibody's concentration as well. The greater the affinity is, the better the antibody is as a solution. When the antibody is regarded as a candidate peak point, the increment of its concentration is proportional to the affinity. When the danger signal of the antibody exists, the attenuation of its concentration is inversely proportional to the affinity.
The concentration
For the initial population, each antibody is assigned an initial concentration con0. When the danger signal of an antibody exists, the antibody's concentration will gradually decrease and ultimately reach zero. When it does not exist, the antibody's concentration will gradually increase, up to a maximum of 1. Therefore,
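A qualitative sketch of one concentration step consistent with the rules above (the constants `k_up` and `k_down` and the exact functional form are illustrative assumptions; the paper's own update formula is not reproduced here):

```python
def update_concentration(con, danger, aff, k_up=0.1, k_down=0.1):
    """One illustrative concentration step: growth proportional to affinity
    when no danger signal exists; decay inversely proportional to affinity
    when it does. aff is assumed > 0; result is clamped to [0, 1]."""
    if danger == 0.0:
        con = con + k_up * aff          # candidate peak: concentration rises
    else:
        con = con - k_down / aff        # suppressed: concentration decays
    return min(max(con, 0.0), 1.0)      # keep within [0, 1]
```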
The mutation operation simulates the high-frequency variation mechanism in the immune response. This operator generates antibodies with higher affinities and enhances the diversity of the antibody population. The algorithm of opt-aiNet [
There are certain shortcomings in this method. For different functions,
Changing curves of
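For reference, a minimal sketch of the opt-aiNet-style mutation discussed above, using the widely published step-size rule alpha = (1/beta)*exp(-f*), where f* is the normalized fitness in [0, 1]; dt-aiNet's adaptive adjustment of this scheme is not reproduced here.

```python
import numpy as np

def affinity_mutation(x, norm_fit, beta=100.0, rng=None):
    """opt-aiNet-style Gaussian mutation: the step size shrinks as the
    normalized fitness norm_fit grows, so good antibodies search locally
    while poor ones take larger steps. beta is the usual scale parameter."""
    rng = rng or np.random.default_rng()
    alpha = (1.0 / beta) * np.exp(-norm_fit)      # affinity-dependent step size
    return x + alpha * rng.standard_normal(np.shape(x))
```

The shortcoming noted above is visible in the formula: for a fixed beta, the absolute step size ignores the scale of the search domain, which is what motivates an adaptive adjustment.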
In artificial immune optimization algorithms, the suppression operations are divided into two kinds, which are clonal suppression and network suppression.
Performing the clone operation on every antibody in the population produces clone groups, and mutation of the clone groups creates antibodies with higher affinity. Clonal suppression means retaining the antibodies with higher affinity from the clone groups and discarding the rest of the clones. In opt-aiNet, clonal suppression selects the antibody with the highest affinity from the temporary set composed of the parent antibody and its clone group to join the network. dt-aiNet retains this mechanism and, in addition, adds to the network those clones that have higher affinity than the parent antibody and lie outside the parent antibody's danger zone. So, clonal suppression operation
The network suppression operation simulates the immune network regulation principle, reducing redundant antibodies and eliminating similar solutions. In dt-aiNet, this operation deletes antibodies whose concentrations equal zero. A concentration of zero indicates that the antibody's danger signals have persisted and that better individuals exist around it, so the antibody is redundant. Network suppression operation
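Both suppression operations can be sketched as follows; the danger-zone radius and fitness bookkeeping are illustrative assumptions consistent with the descriptions above.

```python
import numpy as np

def clonal_suppression(parent, clones, fit, parent_fit, radius):
    """Illustrative clonal suppression: keep the best individual inside the
    parent's danger zone, plus any clone that beats the parent and lies
    outside the zone (a separate candidate peak)."""
    cand = np.vstack([parent[None, :], clones])
    cfit = np.concatenate([[parent_fit], fit])
    dist = np.linalg.norm(cand - parent, axis=1)
    inside = dist < radius                        # parent is always inside
    keep = [cand[inside][cfit[inside].argmax()]]  # best within the zone
    for c, cf, ins in zip(cand, cfit, inside):
        if not ins and cf > parent_fit:           # better, outside the zone
            keep.append(c)
    return np.array(keep)

def network_suppression(pop, con):
    """Delete antibodies whose concentration has decayed to zero."""
    alive = con > 0.0
    return pop[alive], con[alive]
```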
This section analyzes the algorithm from three aspects: the computational complexity, the convergence, and the robustness.
The computational complexity of dt-aiNet is
As shown in the algorithm flow, dt-aiNet consists of six major components: the clonal selection operation, the cloning operation, the mutation operation, the suppression operations, the population updating operation, and the danger signals and concentrations adjusting operations. In iteration
Supposing the population size is
Therefore, if the total number of iterations is
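Since the exact complexity expressions are truncated in this copy, the following is an illustrative per-generation accounting consistent with the component list above, assuming population size $N$, $N_c$ clones per antibody, and $G$ generations (the pairwise danger-signal computation dominates):

```latex
T_{\text{gen}} = O(N^2) + O(N N_c),
\qquad
T_{\text{total}} = O\!\left(G\,(N^2 + N N_c)\right).
```

The $O(N^2)$ term comes from computing pairwise distances for the danger signals and the network suppression, and the $O(N N_c)$ term from cloning, mutation, and clonal suppression.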
Similarly, the calculation complexities of CLONALG and opt-aiNet can be analyzed. Table
Calculation complexities of the algorithms.
Algorithms | Complexities |
---|---|
CLONALG | |
opt-aiNet | |
dt-aiNet | |
From the running mechanism of dt-aiNet, each generation of the population consists of two parts: the memory antibodies from the previous generation and the newly added random antibodies. Antibodies with higher affinities produced by the mutation operation lie mainly in the neighborhood of the parent antibody. After the clonal suppression operation, population affinities will be higher than those of the previous generation. Antibodies with higher affinities change the surrounding environments, making the danger signals of lower-affinity antibodies in the danger zone stronger and their concentrations lower. As the generations increase, if antibodies with lower affinities cannot escape from the danger zone under strengthened danger signals, their concentrations will decay to zero and they will die. Antibodies with high affinities will remain in the memory population because their environments do not change. Under this mechanism, antibodies in the memory population essentially have high affinities and are peak points. The algorithm ensures that new antibodies randomly added to the population in each generation are not in the danger zones of memory antibodies, so they explore new search space, and the algorithm will eventually find all the peaks as the evolution proceeds.
Same as before, we assume that
For any distribution of the initial population, dt-aiNet converges weakly in probability; that is,
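The property can be stated in the standard form used for population-based stochastic optimizers; the precise statement is not reproduced here, so the following is the likely intended form, with $X_t$ denoting the population at generation $t$ and $B^*$ the set of global optima (both assumed notations):

```latex
\lim_{t \to \infty} P\left\{ X_t \cap B^* \neq \varnothing \right\} = 1.
```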
By the law of total probability,
After the selection, cloning, mutation, and suppression operations, the affinities of population
So,
From the above equation, we have
Suppose
By induction,
So,
that is,
The algorithm contains a number of parameters. Most of them have little effect on the search performance and can be set conventionally. But the two parameters
Here are three definitions to more clearly explain the evaluation indicators [
Given the parameters and the maximum allowed number of iterations, if the function error between the optimal solution and the best solution obtained by running the algorithm is not greater than
It means the success ratio over tests of
Given the parameters and the maximum allowed number of iterations, the average number of evaluations is the average number of objective-function computations over tests of
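The indicators can be computed as follows; the success-performance formula here follows the CEC'2005 convention of Suganthan et al. (mean FEs over successful runs, scaled by total runs over successful runs), which is assumed rather than quoted from this paper.

```python
def success_metrics(run_fes, run_errors, accuracy):
    """Success rate and success performance over repeated runs.
    run_fes[i] is the number of function evaluations run i used,
    run_errors[i] its final function error against the known optimum."""
    total = len(run_fes)
    succ = [fe for fe, err in zip(run_fes, run_errors) if err <= accuracy]
    rate = len(succ) / total                      # fraction of successful runs
    # mean FEs of successful runs, penalized by the failure fraction
    perf = (sum(succ) / len(succ)) * total / len(succ) if succ else float("inf")
    return rate, perf
```

For example, four runs of which three reach the accuracy give a success rate of 0.75, and the success performance inflates the mean FEs of the successful runs by the factor 4/3.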
We choose the ninth function
Given
As can be seen from Figure
Charts of changes of parameter robustness.
This section applies the algorithm to the benchmark functions, which run in 2-dimensional spaces and 10-dimensional spaces. The selection of functions and the evaluation criteria of algorithms are described in Section
Because the performance evaluation criteria for optimization algorithms are not uniform, Suganthan et al. [
The termination conditions are that FEs reach
We select influential artificial-immune-based optimization algorithms for the comparison experiments: CLONALG, opt-aiNet, and dopt-aiNet. The accuracies of the optimization functions are shown in Table
Accuracies of functions.
Functions | Accuracies |
---|---|
|
|
|
|
|
|
|
|
The parameters of dt-aiNet are
The parameters of CLONALG are
The parameters of opt-aiNet are
The parameters of dopt-aiNet are
The algorithms run in 2-dimensional space and 10-dimensional space for the above functions in order to accurately assess the performances.
Table
Results (errors) in 2-dimensional spaces.
Function errors of the optimal solution | Number of peaks | ||
---|---|---|---|
dt-aiNet |
|
1 (0) | |
|
CLONALG |
|
1 (1.46) |
opt-aiNet |
|
5 (1.42) | |
dopt-aiNet |
|
2.41 (1.2) | |
| |||
dt-aiNet |
|
1 (0.2) | |
|
CLONALG |
|
3.6 (2.21) |
opt-aiNet |
|
5.8 (2.2) | |
dopt-aiNet |
|
3.69 (1.3) | |
| |||
dt-aiNet |
|
82.54 (8.22) | |
|
CLONALG |
|
45.6 (20.28) |
opt-aiNet |
|
60.11 (23.87) | |
dopt-aiNet |
|
32.09 (12.7) | |
| |||
dt-aiNet |
|
7.22 (0.43) | |
|
CLONALG |
|
4.6 (3.10) |
opt-aiNet |
|
5 (1.43) | |
dopt-aiNet |
|
8.67 (1.33) |
Table
Results (success rates) in 2-dimensional spaces.
dt-aiNet | CLONALG | opt-aiNet | dopt-aiNet | |||||
---|---|---|---|---|---|---|---|---|
Success rates | Success performance | Success rates | Success performance | Success rates | Success performance | Success rates | Success performance | |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
100% |
|
0% | — | 0% | — | 0% | — |
Figure
Convergence graphs in 2-dimensional spaces.
Tables
Results (errors) in 10-dimensional spaces.
Function errors of the optimal solution | Number of peaks | ||
---|---|---|---|
dt-aiNet |
|
1 (0) | |
|
CLONALG |
|
57.80 (7.26) |
opt-aiNet |
|
13.76 (6.81) | |
dopt-aiNet |
|
46.49 (1.33) | |
| |||
dt-aiNet |
|
1 (0.1) | |
|
CLONALG |
|
123.6 (11.02) |
opt-aiNet |
|
5 (1.17) | |
dopt-aiNet |
|
12.2 (2.32) | |
| |||
dt-aiNet |
|
188.33 (0.31) | |
|
CLONALG |
|
45.6 (20.28) |
opt-aiNet |
|
433.55 (3.43) | |
dopt-aiNet |
|
52.5 (4.67) | |
| |||
dt-aiNet |
|
193.5 (5.65) | |
|
CLONALG |
|
376.67 (5.19) |
opt-aiNet |
|
379.43 (0.33) | |
dopt-aiNet |
|
41.61 (2.26) |
Results (success rates) in 10-dimensional spaces.
dt-aiNet | CLONALG | opt-aiNet | dopt-aiNet | |||||
---|---|---|---|---|---|---|---|---|
Success rates | Success performance | Success rates | Success performance | Success rates | Success performance | Success rates | Success performance | |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
100% |
|
0% | — | 0% | — | 0% | — |
|
93% |
|
0% | — | 0% | — | 0% | — |
Figure
Convergence graphs in 10-dimensional spaces.
This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet, for solving multimodal optimization problems. To improve solution quality and population diversity, the proposed algorithm introduces the danger theory into the optimization algorithm and integrates the clonal selection theory and the immune network theory. It simulates danger zones and danger signals and adopts concentrations to evaluate antibodies comprehensively. Experimental results show that, compared with influential artificial-immune-based optimization algorithms, including CLONALG, opt-aiNet, and dopt-aiNet, the proposed algorithm has smaller error values and higher success rates and can find solutions meeting the required accuracies within the specified FEs. However, the algorithm is not applicable to every kind of optimization problem, and as the dimension increases, its success rates are not always 100%. The next steps will be to improve the efficiency of the algorithm in high-dimensional spaces and to extend its application scope to areas such as dynamic optimization, combinatorial optimization, and constrained optimization.
This work has been supported by the National Natural Science Foundation of China under Grants nos. 61173159 and 60873246 and the Cultivation Fund of the Key Scientific and Technical Innovation Project, Ministry of Education of China under Grant no. 708075.