Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results are available for continuous optimization methods such as evolutionary programming (EP). This paper analyzes the runtime of two EP algorithms, based on Gaussian and Cauchy mutations respectively, using an absorbing Markov chain model. Given a constant variance, we derive runtime upper bounds for Gaussian-mutation EP and Cauchy-mutation EP. Our analysis reveals that the upper bounds are affected by, among other factors, the number of individuals and the dimension of the problem

The running time to optimum is a key factor in determining the success of an evolutionary programming (EP) approach. Ideally, an implementation of an EP approach should run just long enough that the probability of reaching an optimum exceeds some desired value. However, few results on the running time of EP approaches can be found in the current literature.

Originally introduced as a technique for evolving finite state machines, EP was later applied to continuous optimization [

Arguably, the first EP algorithm that was widely considered successful was the one with Gaussian mutation, which has been termed

CEP [

The performance of EP approaches such as CEP, FEP, and LEP has often been verified experimentally rather than theoretically. The theoretical foundations of EP have been an open problem ever since it was first put forward [

Previous convergence studies considered only whether an EP algorithm is able to find an optimum within infinitely many iterations, without addressing the speed of convergence; that is, they lacked running time analysis. To date, running time analyses have mainly focused on Boolean-individual EAs such as (

As summarized above, the majority of runtime analyses are limited to discrete search spaces; analyses for continuous search spaces require more sophisticated modeling and remain relatively scarce, which is unsatisfactory from a theoretical point of view. Jägersküpper conducted a rigorous runtime analysis of (

This section introduces the two EP algorithms CEP [

(1) Generate individuals randomly.

(2) Calculate the fitness of each individual.

(3) Generate offspring by mutation.

(4) Evaluate the offspring.

(5) Select individuals by a tournament rule.

(6) If the termination condition is satisfied, output the best-so-far solution found and exit; otherwise, go back to Step (3).

In Step (1), the generated

In Step (3), a single

In Step (5), for each individual in the union of all parents and offspring,
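The generic loop above can be sketched in code. The following is an illustrative simplification only, assuming a constant mutation variance and a q-opponent tournament; the function and parameter names are our own, and the sketch omits the self-adaptation used by practical CEP variants.

```python
import random

def cep_generation(pop, fitness, sigma, q, rng):
    """One generation of a simplified CEP: constant-variance Gaussian
    mutation followed by q-opponent tournament selection.
    (Illustrative sketch; not the paper's exact formulation.)"""
    # Step (3): each parent produces one offspring by Gaussian mutation
    offspring = [[x + rng.gauss(0.0, sigma) for x in ind] for ind in pop]
    # Step (5): q-tournament over the union of parents and offspring
    union = pop + offspring
    wins = []
    for ind in union:
        opponents = [rng.choice(union) for _ in range(q)]
        wins.append(sum(fitness(ind) <= fitness(opp) for opp in opponents))
    # keep the len(pop) candidates with the most wins (ties: better fitness)
    order = sorted(range(len(union)),
                   key=lambda i: (-wins[i], fitness(union[i])))
    return [union[i] for i in order[:len(pop)]]

# usage on a 2-D sphere (minimization) problem
rng = random.Random(0)
sphere = lambda v: sum(x * x for x in v)
pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(10)]
best0 = min(sphere(ind) for ind in pop)
for _ in range(50):
    pop = cep_generation(pop, sphere, sigma=0.5, q=5, rng=rng)
best = min(sphere(ind) for ind in pop)
```

Note that the best individual of the union wins all q of its tournaments, so it always survives selection; the best-so-far fitness is therefore non-increasing across generations.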

Without loss of generality, we assume that the EP approaches analyzed in this study aim to solve a minimization problem in a continuous search space, defined as follows.

Let

Without loss of generality, we can assume that

The following properties are assumed, and we will make use of them in our analyses:

The subset of optimal solutions in

Let

The first assumption describes the existence of optimal solutions to the problem. The second assumption presents a rigorous definition of an optimal solution for continuous minimization problems. The third assumption indicates that there are always solutions whose objective values are distributed continuously and arbitrarily close to the optimum, which makes the minimization problem solvable.

Our running time analyses are based on representing the EP algorithms as Markov chains. In this section, we explain the terminology and notation used throughout the rest of this study.

The stochastic process of an evolutionary programming algorithm EP is denoted by

The stochastic status

The status space of EP is

The optimal status space of EP is the subset

Hence, all members of

A stochastic process

The stochastic process

The proof is given in the appendix.

We now show that the stochastic process

A Markov chain

The stochastic process

The proof is given in the appendix.

This property implies that once an EP algorithm attains an optimal state, it will never leave optimality.

The running time of EP algorithms is usually measured by a simpler quantity known as the first hitting time [

If an EP algorithm is modeled as an absorbing Markov chain, the running time of the EP can be measured by its first hitting time
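The first-hitting-time measure can be illustrated empirically. The sketch below estimates it for a stripped-down (1+1) elitist Gaussian-mutation EP on the sphere function; the threshold `eps`, the step size `sigma`, and the function names are our own choices, used only to make the measure concrete.

```python
import random

def first_hitting_time(dim, sigma, eps, max_gen, seed):
    """Number of generations until the best-so-far fitness of a (1+1)
    elitist Gaussian-mutation EP first drops below eps on the sphere
    function. (Simplified illustration of the first-hitting-time measure.)"""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    f = lambda v: sum(c * c for c in v)
    best = f(x)
    for t in range(1, max_gen + 1):
        y = [c + rng.gauss(0.0, sigma) for c in x]
        if f(y) < best:
            x, best = y, f(y)
        if best < eps:
            return t
    return None  # optimal region not reached within max_gen generations

t = first_hitting_time(dim=2, sigma=0.3, eps=0.5, max_gen=20000, seed=1)
```

Because selection here is elitist, the chain never leaves the near-optimal region once it enters it, mirroring the absorbing property discussed above.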

Let

If

The expected first hitting time can also be expressed as
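For a first hitting time taking nonnegative integer values, a standard identity relates the two usual ways of writing its expectation (stated here as general background; it may differ from the paper's exact expression):

```latex
\mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} t \,\Pr(T = t)
             \;=\; \sum_{t=0}^{\infty} \Pr(T > t).
```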

The proof is given in the appendix.

The conclusions of Theorem

If

The proof is given in the appendix.

Corollary

In this section, we use the framework proposed in Section

CEP [

According to the running time analysis framework, the probability

Let the stochastic process of CEP be denoted by

For fixed individual

The right-hand side of the inequality in conclusion (1) attains its maximum when

The proof is given in the appendix.

In Theorem

Theorem

Suppose the conditions of Theorem

The proof is given in the appendix.

Corollary

According to (

Moreover, we can give an approximate condition under which CEP can converge in time polynomial in

Suppose (

FEP was proposed by Yao et al. [
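The key difference between FEP and CEP is the mutation distribution: the Cauchy distribution has much heavier tails than the Gaussian, so large jumps occur far more often. The sketch below compares the two tail probabilities empirically; the sampler constructions and the threshold are our own illustrative choices.

```python
import math
import random

def tail_fraction(sample, n, threshold, rng):
    """Fraction of n samples whose magnitude exceeds threshold."""
    return sum(abs(sample(rng)) > threshold for _ in range(n)) / n

rng = random.Random(0)
gauss = lambda r: r.gauss(0.0, 1.0)
# standard Cauchy via inverse CDF: tan(pi * (U - 1/2)), U uniform on [0, 1)
cauchy = lambda r: math.tan(math.pi * (r.random() - 0.5))

g = tail_fraction(gauss, 50_000, 3.0, rng)   # theory: about 0.0027
c = tail_fraction(cauchy, 50_000, 3.0, rng)  # theory: about 0.205
```

The much larger Cauchy tail fraction is the mechanism behind FEP's long-jump behavior, which the runtime bounds in this section quantify.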

Let

The right-hand side of the inequality in conclusion (1) attains its maximum when

The proof is given in the appendix.

Similar to Theorem

Suppose the conditions of Theorem

The proof is given in the appendix.

Corollary

If

Hence, the running time of FEP is nearly

Moreover, we can give an approximate condition under which FEP can converge in polynomial time

In this paper, we propose a framework for calculating the mean running time of EP. Based on this framework, the convergence and mean running time of CEP and FEP with constant variance are studied. We also obtain some results on the worst-case running time of the considered EP algorithms, although the results show that the upper bounds can be tighter if the variance

However, the running time framework and analysis given in this study can still be improved. The derivation in the proofs of Theorems

The proposed framework and results can be considered as a first step for analyzing the running time of evolutionary programming algorithms. We hope that our results can serve as a basis for further theoretical studies.

Recall that

Observe that

Let

According to Corollary

(1) For fixed individual

Given

Note that

Noting that

(3) By the property of

(1) By the law of total probability, we have

(2) Given

(1) For fixed individual

In a manner similar to the proof of Theorem

(2) Letting

(3) Hence,

(1) According to Theorem

Thus,

(2) Since

The authors declare that there is no conflict of interest regarding the publication of this paper.

This work is supported by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (14YJCZH216) and the National Natural Science Foundation of China (61370177).