Multiobjective programming problems have been widely applied in various areas, including the optimal design of automotive engines, economics, and military strategy. In this paper, we propose a noninterior path following algorithm to solve a class of multiobjective programming problems. Under suitable conditions, a smooth path is proven to exist. This gives a constructive proof of the existence of solutions and leads to an implementable, globally convergent algorithm. Several numerical examples illustrate the results of this paper.
1. Introduction
In this paper, the following conventions will be used. If x,y∈Rn, then
x≦y if and only if xi≤yi,i=1,…,n;
x<y if and only if xi<yi,i=1,…,n;
x≤y if and only if xi≤yi,i=1,…,n, with strict inequality holding for at least one i;
x=y if and only if xi=yi,i=1,…,n.
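The componentwise orderings above are easy to misread; as an illustration (not part of the paper), the following Python sketch, assuming NumPy, distinguishes ≦, <, and ≤:

```python
import numpy as np

def leq(x, y):
    """x ≦ y: every component of x is <= the corresponding component of y."""
    return bool(np.all(x <= y))

def lt(x, y):
    """x < y: strict inequality in every component."""
    return bool(np.all(x < y))

def le(x, y):
    """x ≤ y: x ≦ y with strict inequality in at least one component."""
    return bool(np.all(x <= y) and np.any(x < y))

x = np.array([1.0, 2.0])
y = np.array([1.0, 3.0])
print(leq(x, y), lt(x, y), le(x, y))  # True False True
```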
Multiobjective programming problems have been widely applied to various engineering areas which include optimal design of an automotive engine, economics, and military strategies. Consider the following multiobjective program (MOP):
(1)minx∈Rnf(x)s.t.g(x)≦0,h(x)=0,
where f:Rn→Rp, g:Rn→Rm, and h:Rn→Rl are assumed to be three times continuously differentiable. In this paper, the nonnegative and positive orthants of Rm are denoted as R+m and R++m, respectively.
In the literature, solutions for a multiobjective programming problem are referred to variously as efficient, Paretooptimal, and nondominated solutions. In this paper we will refer to a solution of a multiobjective programming problem as an efficient solution. It is well-known that if x is an efficient solution of MOP problems, under some constraint qualifications (the Kuhn and Tucker constraint qualification [1] or the Abadie constraint qualification [2]), then the following Karush-Kuhn-Tucker (KKT) condition at x for MOP problems holds [3, 4]:
(2)∇f(x)λ+∇g(x)u+∇h(x)v=0,h(x)=0,Ug(x)=0,g(x)≦0,u≧0,
where ∇f(x)=(∇f1(x),…,∇fp(x))∈Rn×p, ∇g(x)=(∇g1(x),…,∇gm(x))∈Rn×m, ∇h(x)=(∇h1(x),…,∇hl(x))∈Rn×l, λ∈R+p∖{0}, u∈Rm, v∈Rl, and U=diag(u)∈Rm×m. We say that x is a KKT point of MOP problems if it satisfies the KKT condition.
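To make the KKT system (2) concrete, the sketch below assembles its residual for a hypothetical toy instance (two objectives, one inequality constraint, no equality constraints); the instance and all names are illustrative, not from the paper. A point is a KKT point exactly when this residual vanishes with g(x) ≦ 0 and u ≧ 0:

```python
import numpy as np

def kkt_residual(x, lam, u, v, grad_f, grad_g, grad_h, g, h):
    """Residual of the KKT system (2): stationarity, h(x) = 0, and Ug(x) = 0.

    grad_f(x): n x p matrix of objective gradients; grad_g(x): n x m;
    grad_h(x): n x l. The sign conditions g(x) <= 0, u >= 0 are checked
    separately by the caller.
    """
    stationarity = grad_f(x) @ lam + grad_g(x) @ u + grad_h(x) @ v
    complementarity = u * g(x)  # U g(x) = 0, componentwise
    return np.concatenate([stationarity, h(x), complementarity])

# Toy instance: f(x) = (x1^2 + x2^2, (x1-1)^2 + x2^2), g(x) = -x1 <= 0, no h.
grad_f = lambda x: np.array([[2*x[0], 2*(x[0]-1)], [2*x[1], 2*x[1]]])
grad_g = lambda x: np.array([[-1.0], [0.0]])
grad_h = lambda x: np.zeros((2, 0))
g = lambda x: np.array([-x[0]])
h = lambda x: np.zeros(0)

# x = (0.5, 0) with lambda = (0.5, 0.5), u = 0 satisfies the system.
r = kkt_residual(np.array([0.5, 0.0]), np.array([0.5, 0.5]),
                 np.array([0.0]), np.zeros(0), grad_f, grad_g, grad_h, g, h)
print(np.linalg.norm(r))  # 0.0
```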
To solve linear programming problems, Karmarkar [5] proposed a projective scaling algorithm in 1984, the first polynomial-time algorithm that is also efficient in practice and hence competitive with the widely used simplex algorithm, which is efficient in practice but not polynomial. It was later noted that Karmarkar’s projective scaling algorithm is equivalent to a projected Newton barrier algorithm [6]. Based on Karmarkar’s projective scaling algorithm, researchers developed various central path following algorithms (in other terms, interior point methods or homotopy methods; see [7–12], etc.), which replace the projective scaling transformation of Karmarkar’s algorithm with an affine scaling transformation. This modification relaxes the particular assumptions on the simplex structure required by Karmarkar’s algorithm. Later, the central path following algorithms were extended to convex nonlinear programming problems (see [13–16], etc.). It should be pointed out that all these central path following algorithms are globally convergent, but their global convergence results were obtained under the assumptions that the logarithmic barrier function is strictly convex and the solution set is nonempty and bounded.
Since Kellogg et al. [17] and Smale [18] proposed the notable homotopy method, it has become a powerful, globally convergent tool for finding solutions of various nonlinear problems, for example, zeros or fixed points of maps; see [19–24] and so forth. Furthermore, in [25], using the ideas of homotopy methods, Lin et al. proposed a new interior point method for solving convex programming problems, called the combined homotopy interior point (CHIP) method. In that paper, the authors removed the convexity condition on the logarithmic barrier function and the nonemptiness assumption on the solution set. In [26], by a piecewise technique, under the commonly used conditions in the literature, Yu et al. obtained the polynomiality of the CHIP method. Their results show that the CHIP method is also highly efficient. These advantages have attracted more and more researchers’ attention, and the CHIP method has been applied to various areas such as fixed point problems [27, 28], variational inequalities [29, 30], and bilevel programming problems [31]. Furthermore, in 2008, for a class of nonconvex MOP problems, Song and Yao developed a new CHIP method [32]. In that paper, the authors constructed a new combined homotopy and thus obtained the existence of an interior path from a known interior point to a KKT point of (1).
It is well-known that the choice of initial points plays an important role in the computational efficiency of the predictor-corrector algorithms resulting from the CHIP algorithm (for a good introduction and a complete survey of predictor-corrector algorithms, one can refer to the books [33, 34], etc.). It should be pointed out that the predictor-corrector algorithms have also been applied successfully to parametric programming (see [35–42] for related works and [35, 43] for applications). But in [32], initial points are confined to the interior of the feasible set, which is not easily localized in many cases; hence it is essential to enlarge the scope of choice of initial points. To this end, in this paper, we apply proper perturbations to the constraint functions and hence develop a noninterior path following algorithm. With the new approach, we can choose initial points more easily, which greatly improves the computational efficiency of the resulting predictor-corrector algorithms.
Another purpose of this paper is to solve MOP problems on a broader class of nonconvex sets than those in [32]. To this end, we introduce C2 mappings ξi(x,ui)∈Rn (i=1,…,m) and ηj(x,vj)∈Rn (j=1,…,l), which allow us to extend the results in [32] to more general nonconvex sets.
In this paper, under the commonly used conditions in the literature, a bounded smooth homotopy path from a given initial point to a KKT point of (1) can be proven to exist. This forms the theoretical base of the noninterior path following algorithm. Numerically tracing the smooth path can lead to an implementable globally convergent algorithm for MOP problems. An explicit advantage of the noninterior path following algorithm is that the induced predictor-corrector algorithm has global convergence, compared with some locally convergent algorithms, for example, the notable Newton’s algorithms [33, 34]. Although the usual continuation methods (see [44–46], etc.) are globally convergent, they require that the partial derivative of the mapping H in (9) with respect to w is nonsingular. This requirement is often not easily satisfied in practice (see [33, 34], etc.). However, by the parameterized Sard theorem, the noninterior path following algorithm only requires that the mapping H in (9) is of full row rank. This is another advantage of the algorithm presented in this paper. In addition, compared with the results in [32], we can solve MOP problems on more general nonconvex sets, and we also enlarge the scope of choice of initial points to the exterior of the feasible set.
This paper is organized as follows. In Section 2, we apply proper perturbations to the constraint functions, construct a new combined homotopy on this basis, and hence develop a noninterior path following algorithm. In Section 3, we use the predictor-corrector algorithm resulting from the noninterior path following algorithm to compute some experimental examples illustrating the results of this paper. Finally, we draw some conclusions in Section 4.
2. Theoretical Analysis of the Noninterior Path following Algorithm
In this section, let Ω={x∈Rn:g(x)≦0,h(x)=0}, Ω0={x∈Rn:g(x)<0,h(x)=0}, Λ+={λ∈R+p:∑q=1pλq=1}, Λ++={λ∈R++p:∑q=1pλq=1},B(x)={i∈{1,…,m}:gi(x)=0}, and U(0)=diag(u(0))∈Rm×m. In addition, ∥·∥ stands for the Euclidean norm.
In [32], Song and Yao developed a new CHIP method to solve the KKT point of (1) in a class of nonconvex sets; the main result of that paper is formulated as follows.
Theorem 1.
Suppose that
(A1) Ω0 is nonempty and Ω is bounded;
(A2) for any x∈Ω, the matrix {∇gi(x),∇h(x):i∈B(x)} is of full column rank;
(A3) (the normal cone condition of Ω) for any x∈Ω, the normal cone of Ω at x meets Ω only at x; that is, for any x∈Ω, one has
(3){x+∑i∈B(x)ui∇gi(x)+∇h(x)v:ui≧0fori∈B(x)}⋂Ω={x}.
Then for almost all (x(0),u(0),v(0),λ(0))∈Ω0×R++m×{0}×Λ++, there is a regular solution curve of the homotopy
(4)
(1-μ)(∇f(x)λ+∇g(x)u)+∇h(x)v+μ(x-x(0))=0,
h(x)=0,
Ug(x)-μU(0)g(x(0))=0,
(1-μ)(1-∑q=1pλq)ep-μ(λ-λ(0))=0,
where ep=(1,…,1)T∈Rp, μ∈(0,1]. When μ→0, the limit set T⊂Ω×R+m×Rl×Λ+×{0} is nonempty and the x-component of any point in T is a KKT point of (1).
However, in [32], initial points are confined to the interior of Ω. This may greatly reduce the computational efficiency of predictor-corrector algorithms. To enlarge the scope of choice of initial points, in this paper we apply proper perturbations to the constraint functions g(x), h(x) and introduce the parameters
(5)
γi = 2gi(x(0)) if gi(x(0))>0;  γi = 1 if gi(x(0))=0;  γi = 0 if gi(x(0))<0,  i=1,…,m;
θj = 1 if hj(x(0))≠0;  θj = 0 if hj(x(0))=0,  j=1,…,l.
Then let em=(1,…,1)T∈Rm, γ=(γ1,…,γm)T∈Rm, θ=(θ1,…,θl)T∈Rl, Ω(μ)={x∈Rn:g(x) − μγ(g(x(0))+em)≦0,h(x)-μθh(x(0))=0}, Ω0(μ)={x∈Rn:g(x)-μγ(g(x(0))+em)<0,h(x)-μθh(x(0))=0}, I(x,μ)={i∈{1,…,m}:gi(x)-μγi(gi(x(0))+1)=0}.
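The parameters in (5) are straightforward to compute from the signs of g(x(0)) and h(x(0)); the following sketch does so with NumPy (the sample values of g(x(0)) and h(x(0)) are made up for illustration):

```python
import numpy as np

def perturbation_parameters(g0, h0):
    """gamma_i and theta_j from (5), given g0 = g(x(0)) and h0 = h(x(0))."""
    # gamma_i = 2 g_i(x0) if positive, 1 if zero, 0 if negative
    gamma = np.where(g0 > 0, 2.0 * g0, np.where(g0 == 0, 1.0, 0.0))
    # theta_j = 1 if h_j(x0) != 0, else 0
    theta = np.where(h0 != 0, 1.0, 0.0)
    return gamma, theta

g0 = np.array([3.0, 0.0, -2.0])   # one violated, one active, one satisfied constraint
h0 = np.array([0.5, 0.0])
gamma, theta = perturbation_parameters(g0, h0)
print(gamma)  # [6. 1. 0.]
print(theta)  # [1. 0.]
```

With this choice, g(x(0))-γ(g(x(0))+em)<0 componentwise, so x(0) lies in Ω0(1) regardless of whether it is feasible for the original problem.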
At the same time, to solve MOP problems in more general nonconvex sets, we assume that there exist continuous mappings ξ(x,u)=(ξ1(x,u1),…,ξm(x,um))∈Rn×m and η(x,v)=(η1(x,v1),…, ηl(x,vl))∈Rn×l such that the following assumptions hold.
(C1) Ω0(μ) is nonempty and Ω(μ) is bounded.
(C2) ξi(x,0)=0, i=1,…,m, and ηj(x,0)=0, j=1,…,l; besides, for any x∈Ω(μ), if ∥(y,z,u,v)∥→∞, then
(6)∥∑i∈I(x,μ)(yi∇gi(x)+ξi(x,ui))+∇h(x)z+∑j=1lηj(x,vj)∥⟶∞.
(C3) For any x∈Ω(μ), if
(7)∑i∈I(x,μ)(yi∇gi(x)+ξi(x,ui))+∇h(x)z+∑j=1lηj(x,vj)=0, yi≧0, ui≧0,
then yi=0, ui=0, ∀i∈I(x,μ), z=0, vj=0, j=1,…,l.
(C4) For μ=0 or μ=1 and any x∈Ω(μ), we have
(8){x+∑i∈I(x,μ)ξi(x,ui)+∑j=1lηj(x,vj):ui≧0 for i∈I(x,μ)}⋂Ω(μ)={x}.
Next, we construct a new combined homotopy as follows:
(9)
H(w,μ)=
((1-μ)(∇f(x)λ+(1-μ)∇g(x)u)+(1-μ)∇h(x)v+∑i=1mξi(x,μ(1-μ)ui)+∑j=1lηj(x,μvj)+μ(x-x(0))+μ(1-μ)α
h(x)-μθh(x(0))
U(g(x)-μγ(g(x(0))+em))+μβ
(1-μ)(1-∑q=1pλq)ep-μ(λ-λ(0)))=0,
where w=(x,u,v,λ)∈Rn+m+l×Λ+, α∈Rn, and β∈R++m.
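To make the construction concrete, the sketch below evaluates the homotopy (9) for a hypothetical one-dimensional instance (two objectives, one inequality constraint, no equality constraints, and the normal-cone choice ξ1(x,u1)=g′(x)u1); the instance is illustrative only. It verifies numerically that the residual vanishes at μ=1 at the explicit starting point computed from the third equation of the system:

```python
import numpy as np

def H(w, mu, x0, lam0, gamma, alpha=0.0, beta=1.0):
    """Homotopy (9) for the toy instance f = (x^2, (x-1)^2), g(x) = x - 2 <= 0,
    no equality constraints, with the normal-cone choice xi(x, u) = g'(x) u."""
    x, u, lam1, lam2 = w
    lam = np.array([lam1, lam2])
    grad_f = np.array([2*x, 2*(x - 1)])  # columns of grad f(x)
    grad_g = 1.0                          # g'(x)
    xi = grad_g * (mu*(1 - mu)*u)         # xi(x, mu(1-mu)u)
    r1 = ((1 - mu)*(grad_f @ lam + (1 - mu)*grad_g*u) + xi
          + mu*(x - x0) + mu*(1 - mu)*alpha)
    g0 = x0 - 2.0
    r2 = u*((x - 2.0) - mu*gamma*(g0 + 1.0)) + mu*beta
    r3 = (1 - mu)*(1 - lam1 - lam2)*np.ones(2) - mu*(lam - lam0)
    return np.concatenate([[r1, r2], r3])

# Infeasible starting point x0 = 3 (g(x0) = 1 > 0, so gamma = 2 g(x0) = 2).
x0, gamma = 3.0, 2.0
lam0 = np.array([0.5, 0.5])
u0 = -1.0 / ((x0 - 2.0) - gamma*((x0 - 2.0) + 1.0))  # -beta / diagonal term
w0 = (x0, u0, lam0[0], lam0[1])
print(np.linalg.norm(H(w0, 1.0, x0, lam0, gamma)))   # 0.0
```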
Remark 2.
(1) In [32], the initial point x(0) is a strictly feasible point and has to satisfy the constraints g(x(0))<0 and h(x(0))=0. However, it is not easy to choose such an initial point in practice when the constraint functions g(x) and h(x) are complex. For example, when the feasible set is Ω={(x1,x2,x3,x4)∈R4:x12+x22+x32+x42+x1-x2+x3-x4-8≦0,x12+2x22+x32+2x42-x1-x4-10≦0,2x12+x22+x32+2x1-x2-x4-5≦0,x1-5≦0,x2-5≦0} (see [25, Example 4.1]), it is difficult to choose an initial point x(0) satisfying all the constraints. This difficulty may greatly reduce the computational efficiency of the algorithm in [32]. In this paper, we can choose the initial point x(0) arbitrarily in Rn by selecting the parameters γ and θ according to the signs of g(x(0)) and h(x(0)). Since the initial point for the example above can now be chosen arbitrarily in Rn, this modification can greatly improve computational efficiency. In addition, compared with some locally convergent methods, for example, the notable Newton's methods, the method proposed in this paper is globally convergent, and its initial points can be chosen more easily.
(2) In [32], the authors required that the feasible set must satisfy the so-called normal cone condition, which is a generalization of the convexity condition (Figure 1). If the feasible set is a convex set, then it satisfies the normal cone condition. On the other hand, if the feasible set satisfies the normal cone condition, then the outer normal cone of the feasible set at a boundary point x can not meet the interior of the feasible set but meets the feasible set only at x. In this paper, we extend the results in [32] to more general nonconvex sets. If the feasible set satisfies the normal cone condition, let ξi(x,ui)=∇gi(x)ui, i=1,…,m, ηj(x,vj)=∇hj(x)vj, j=1,…,l; then it necessarily satisfies assumptions (C1)–(C4). Conversely, the conclusion does not hold. This point can be illustrated by Examples 1–4 in Section 3.
Figure 1: The nonconvex set satisfying the normal cone condition.
For a given x(0), the zero-point set of H(w,μ) is denoted by
(10)H-1(0)={(w,μ)∈Rn+m+l×Λ+×(0,1]:H(w,μ)=0}.
Lemma 3.
Let H be defined as in (9) and let assumptions (C1)–(C4) hold. Then the equation H(w,1)=0 has a unique solution.
Proof.
When μ=1, the homotopy equation (9) becomes
(11)
∑j=1lηj(x,vj)+x-x(0)=0,
h(x)-θh(x(0))=0,
U(g(x)-γ(g(x(0))+em))+β=0,
λ-λ(0)=0.
From the second and third equations in (11), we get that x∈Ω0(1). Then assumption (C4), together with the first equation in (11), yields that x=x(0). By assumption (C2), we get v=0. So it follows from the third equation in (11) that u=-[diag(g(x(0))-γ(g(x(0))+em))]-1β. At last, from the fourth equation in (11), we obtain λ=λ(0). Therefore (11) has a unique solution w=w(0)=(x(0),-[diag(g(x(0))-γ(g(x(0))+em))]-1β,0,λ(0)).
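The explicit starting point w(0) from this proof can be computed componentwise; the following sketch does so with NumPy, on made-up sample data (the equality-constraint part v(0)=0 is omitted for a problem with l=0):

```python
import numpy as np

def initial_point(x0, g0, gamma, beta, lam0):
    """Unique solution w(0) of H(w,1) = 0 from Lemma 3:
    u(0) = -diag(g(x0) - gamma*(g(x0)+em))^{-1} beta, v(0) = 0."""
    denom = g0 - gamma * (g0 + 1.0)  # diagonal of the matrix in (11); < 0 by (5)
    u0 = -beta / denom
    return x0, u0, np.zeros(0), lam0

x0 = np.array([2.0, 0.0])
g0 = np.array([3.0, -1.0])           # first constraint violated at x0
gamma = np.array([6.0, 0.0])         # from (5): 2 g_i(x0) when positive, 0 when negative
beta = np.array([1.0, 1.0])
lam0 = np.array([0.5, 0.5])
_, u0, _, _ = initial_point(x0, g0, gamma, beta, lam0)
print(u0)  # componentwise positive, as required for w(0) in R++^m
```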
In the following, we recall some basic definitions and results from differential topology, which will be used in our main result of this paper.
The inverse image theorem (see [47]) tells us that if 0 is a regular value of the map H, then H-1(0) consists of some smooth curves. The regularity of H can be obtained by the following lemma.
Lemma 4 (transversality theorem, see [21]).
Let Q, N, and P be smooth manifolds with dimensions q,m, and p~, respectively. Let W⊂P be a submanifold of codimension p (i.e., p~=p+ dimension of W). Consider a smooth map Φ:Q×N→P. If Φ is transversal to W, then, for almost all a∈Q, Φa(·)=Φ(a,·):N→P is transversal to W. Recall that a smooth map h:N→P is transversal to W if
(12){Range(Dh(x))}+{TyW}=TyP, whenever y=h(x)∈W,
where Dh is the Jacobi matrix of h and TyW and TyP denote the tangent spaces of W and P at y, respectively.
In this paper, W={0}, so the transversality theorem corresponds to the parameterized Sard theorem on smooth manifolds.
Lemma 5 (parameterized Sard theorem).
Let V⊂Rn, U⊂Rm be open sets and let Φ:V×U→Rk be a Cr map, where r>max{0,m-k}. If 0∈Rk is a regular value of Φ, then, for almost all a∈V, 0 is a regular value of Φa≡Φ(a,·).
With the preparation of the previous lemmas, we can prove the following main theorem on the existence and boundedness of a smooth path from a given point x(0) in Rn to a KKT point of (1). This implies the global convergence of the noninterior path following algorithm.
Theorem 6.
Let H be defined as in (9) and let assumptions (C1)–(C4) hold. Then, for almost all w(0)∈Rn×R++m×{0}×Λ++, there exists a C1 curve (w(s),μ(s)) of dimension 1 such that
(13)H(w(s),w(0),μ(s))=0,(w(0),μ(0))=(w(0),1).
When μ(s)→0, w(s) tends to a point w*=(x*,u*,v*,λ*). The x-component of w* is a KKT point of (1).
Proof.
When x(0), λ(0), and α are considered as variables, we denote H(w,μ) by H¯(w,x(0),α,λ(0),μ) and its Jacobian matrix by DH¯(w,x(0),α,λ(0),μ). For any μ∈(0,1],
(14)
∂H¯(w,x(0),α,λ(0),μ)/∂(x(0),α,u,λ(0))=
(-μI    μ(1-μ)I    (1-μ)2∇g(x)+μ(1-μ)∇uξ(x,μ(1-μ)u)    0
-μθ∇h(x(0))T    0    0    0
-μγU∇g(x(0))T    0    diag(g(x)-μγ(g(x(0))+em))    0
0    0    0    μI),
where ∇uξ(x,μ(1-μ)u)=(∇u1ξ1(x,μ(1-μ)u1),…,∇umξm(x,μ(1-μ)um)). Since ∇h(x)T is a matrix of full row rank, ∂H¯(w,x(0),α,λ(0),μ)/∂(x(0),α,u,λ(0)) is of full row rank. Therefore DH¯(w,x(0),α,λ(0),μ) is also of full row rank, and 0 is a regular value of H¯(w,x(0),α,λ(0),μ). By Lemma 5, for almost all (x(0),α,λ(0))∈Ω0(1)×Rn×Λ++, 0 is a regular value of the map H:Rn×R+m×Rl×Rp×(0,1]→Rn+m+l+p. By the inverse image theorem, H-1(0) consists of some smooth curves. Since H(w(0),1)=0, a C1 curve (w(s),μ(s)) of dimension 1, denoted by Γw(0), starts from (w(0),1).
In the following, we further assume that ∇h(x)T∇vη(x,v) is nonsingular. By the classification theorem of one-dimensional smooth manifolds, Γw(0) is diffeomorphic either to a unit circle or to a unit interval. Since, for any w(0)∈Ω0(1)×R++m×{0}×Λ++, it is easy to show that ∂H(w(0),1)/∂w is nonsingular, Γw(0) cannot be diffeomorphic to a unit circle; that is, Γw(0) is diffeomorphic to a unit interval.
Let (w*,μ*) be a limit point of Γw(0); then the following cases may occur:
(a) (w*,μ*)∈Rn×R+m×Rl×Λ+×{0};
(b) (w*,μ*)∈Rn×R+m×Rl×Λ+×{1};
(c) ∥(w*,μ*)∥=∞, that is, Γw(0) is unbounded.
By Lemma 3, the equation H(w,w(0),1)=0 has a unique solution (w(0),1) in Ω0(1)×R++m×Rl×Λ++×{1}, so case (b) is impossible.
It follows from Theorem 3.2 in [32] that the projection of the smooth curve Γw(0) onto the λ-plane is bounded.
Let I(x*)={i∈{1,…,m}:limk→∞ui(k)=∞} and J(x*)={j∈{1,…,l}:limk→∞|vj(k)|=∞}. If J(x*)≠∅, then from the first equation in (9) we have
(15)(1-μk)(∇f(x(k))λ(k)+∑i∉I(x*)((1-μk)∇gi(x(k))ui(k)+ξi(x(k),μk(1-μk)ui(k))))+μk(x(k)-x(0))+(1-μk)μkα+(1-μk)∇h(x(k))v(k)+∑j=1lηj(x(k),μkvj(k))+∑i∈I(x*)((1-μk)∇gi(x(k))ui(k)+ξi(x(k),μk(1-μk)ui(k)))=0.
By assumptions (C2) and (C3), the fourth, fifth, and sixth parts in the left-hand side of (15) tend to infinity as k→∞, but the other three parts are bounded; this is impossible. Therefore the projection of the smooth curve Γw(0) onto the v-plane is also bounded.
If case (c) holds, then there exists a sequence of points {(w(k),μk)}⊂Γw(0) such that ∥(w(k),μk)∥→∞. Since Ω(μ), Λ+, and (0,1] are bounded, there exists a subsequence (denoted also by {(w(k),μk)}) such that x(k)→x*, ∥u(k)∥→∞, v(k)→v*, λ(k)→λ*, and μk→μ* as k→∞. From the second equation in (9), we have
(16)g(x(k))-μkγ(g(x(0))+em)=-μk(U(k))-1β.
When μ*>0, the active index set is
(17)I(x*,μ*)={i∈{1,…,m}:limk→∞ui(k)=∞}.
When μ*=0, the index set is
(18)I0(x*,0)={i∈{1,…,m}:limk→∞ui(k)=∞}⊂I(x*,0).
(1) If μ*=1, from the first equation in (9), we obtain
(19)∑i∈I(x*,1)[(1-μk)2ui(k)∇gi(x(k))+ξi(x(k),μk(1-μk)ui(k))]+(x(k)-x(0))+(1-μk)∇h(x(k))v(k)+∑j=1lηj(x(k),μkvj(k))+(1-μk)μkα=-∑i∉I(x*,1)[(1-μk)2ui(k)∇gi(x(k))+ξi(x(k),μk(1-μk)ui(k))]-(1-μk)∇f(x(k))λ(k)+(1-μk)(x(k)-x(0)).
By assumptions (C2), (C3), and (19), we have
(20)limk→∞(1-μk)ui(k)=ui*,
where ui*≧0. Therefore by (19) and (20), we get
(21)x*+∑i∈I(x*,1)ξi(x*,ui*)+∑j=1lηj(x*,vj*)=x(0),
which contradicts assumption (C4).
(2) If 0<μ*<1, from the first equation in (9), we conclude
(22)∑i∈I(x*,μ*)[(1-μk)2ui(k)∇gi(x(k))+ξi(x(k),μk(1-μk)ui(k))]=-(1-μk)∇f(x(k))λ(k)-μk(x(k)-x(0))-(1-μk)∇h(x(k))v(k)-∑j=1lηj(x(k),μkvj(k))-∑i∉I(x*,μ*)[(1-μk)2ui(k)∇gi(x(k))+ξi(x(k),μk(1-μk)ui(k))]-(1-μk)μkα.
When k→∞, since Ω(μ*) and ui(k),i∉I(x*,μ*), are bounded, the right-hand side of (22) is bounded. But by assumption (C2), if ui(k)→∞,i∈I(x*,μ*), then the left-hand side of (22) is infinite. This results in a contradiction.
(3) If μ*=0, since the nonempty index set I0(x*,0)⊂I(x*,0), the proof is similar to (2).
By the above discussion, we obtain that case (a) is the only possible case, and thus the x-component of w* is a KKT point of (1).
For almost all w(0)∈Rn×R++m×{0}×Λ++, by Theorem 6, the homotopy generates a C1 curve Γw(0); by differentiating the first equation in (13), we get the following theorem.
Theorem 7.
The homotopy path Γw(0) is determined by the following initial value problem to the ordinary differential equation:
(23)DH(w(s),μ(s))(w˙(s)μ˙(s))=0,(w(0),μ(0))=(w(0),1),
where s is the arclength of the curve Γw(0).
Based on Theorems 6 and 7, various predictor-corrector procedures for numerically tracing the smooth homotopy path Γw(0) can be given (see [33, 34] and references therein).
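One such procedure can be sketched generically: an Euler predictor along the unit tangent of the path (the null space of DH, per Theorem 7) followed by Newton corrector steps, stopped when μ is near 0. The code below is an illustrative sketch on a made-up one-dimensional Newton homotopy, not the authors' exact algorithm; it assumes, as in Theorem 6, that DH has full row rank along the path:

```python
import numpy as np

def trace_path(H, DH, y0, step=0.02, tol=1e-6, max_steps=2000):
    """Trace H(y) = 0 with y = (w, mu) from mu = 1 toward mu = 0.
    Euler predictor along the unit tangent, Gauss-Newton corrector."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_steps):
        # Tangent: unit vector spanning ker DH(y); orient so that mu decreases.
        _, _, Vt = np.linalg.svd(DH(y))
        t = Vt[-1]
        if t[-1] > 0:
            t = -t
        y = y + step * t                       # predictor step
        for _ in range(10):                    # corrector steps
            r = H(y)
            if np.linalg.norm(r) < tol:
                break
            y = y - np.linalg.pinv(DH(y)) @ r  # minimal-norm Newton update
        if y[-1] <= tol:                       # mu ~ 0: stop tracing
            break
    return y

# Toy 1-D Newton homotopy in place of (9): H(x, mu) = F(x) - mu*F(x0),
# F(x) = x^3 + x - 4, initial point x0 = 3, so H(x0, 1) = 0.
x0 = 3.0
F0 = x0**3 + x0 - 4.0
H  = lambda y: np.array([y[0]**3 + y[0] - 4.0 - y[1]*F0])
DH = lambda y: np.array([[3.0*y[0]**2 + 1.0, -F0]])
y_end = trace_path(H, DH, [x0, 1.0])
print(y_end[0])  # close to 1.3788, the root of x^3 + x = 4
```

The step length plays the role of h0 in Section 3; adaptive step control and higher-order predictors (see [33, 34]) are omitted to keep the sketch short.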
3. Numerical Results
By using the homotopy (9) and the predictor-corrector algorithm, several numerical examples are given to illustrate the work in this paper. To illustrate that our result is an extension of the work in [32], we choose examples whose feasible sets do not satisfy the normal cone condition but do satisfy assumptions (C1)–(C4). In addition, the initial points are not necessarily chosen in the interior of the feasible sets. In each example, we set ε1=10−3, ε2=10−6, and h0=0.02. The behaviors of the homotopy paths are shown in Figures 2, 3, 4, and 5. Computational results are given in Table 1, where x(0) denotes the initial point, IT the number of iterations, H the value of ∥Hw(0)(w(k),μk)∥ when the algorithm stops, and x* the KKT point.
Table 1: Numerical results of Examples 1–4.

Example     x(0)         IT   μ*         H          x*                        f(x*)
Example 1   (1.5, 0.8)   23   0.000000   0.000000   (−0.123076, 0.690768)     (−1.774478, 5.888987)
            (0.9, 1.2)   26   0.000000   0.000000   (−0.123064, 0.690781)     (−1.773676, 5.888962)
Example 2   (2, 3)       17   0.000000   0.000000   (−0.000812, −2.000000)    (0.000001, 0.003248)
            (2, −3)      21   0.000000   0.000000   (0.000861, −2.000000)     (0.000001, −0.003444)
Example 3   (5, 19)      25   0.000000   0.000000   (3.148448, 0.274224)      (13.160018, 0.987923)
            (4.5, 12)    22   0.000000   0.000000   (3.148456, 0.274232)      (13.160109, 0.987978)
Example 4   (10, 2)      19   0.000000   0.000000   (2.587977, 0.618034)      (7.079591, 15.269080)
            (20, 7)      20   0.000000   0.000000   (2.587984, 0.618042)      (7.079637, 15.305130)
Figure 2: The discrete homotopy pathways of Example 1.
Figure 3: The discrete homotopy pathways of Example 2.
Figure 4: The discrete homotopy pathways of Example 3.
Figure 5: The discrete homotopy pathways of Example 4.
Example 1 (adapted from [25, Example 3.1]).
f(x), g(x), and h(x) are defined as in problem (1). The objective functions are given by
(24)f1(x)=2(x1-3)3+(x2+7)2,f2(x)=(x1-2)2+2x2.
The feasible set is given by
(25)Ω={(x1,x2)∈R2:x12+x22-4≦0,-(x1+2)2-x22+4≦0,(x1-0.6)2+x22-1=0}.
Since ∇h(x)=(2(x1-0.6),2x2)T, assumption (A3) fails at most points of Ω0, so the method presented in [32] cannot be used for this example. However, if we introduce the C2 mappings ξ1(x,u1)=(2x1u1,2x2u1)T, ξ2(x,u2)=(-2(x1+2)u2,-2x2u2)T, and η(x,v)=((-10+2(x1-0.6))v,2x2v)T, then assumptions (C1)–(C4) can be verified for this example, and we can solve it with the algorithm presented in this paper. In addition, the two initial points are chosen arbitrarily and are not confined to the interior of the feasible set; this is another improvement over [32], which requires initial points in the interior of the feasible set.
Example 2 (adapted from [48, problem 42]).
f(x), g(x), and h(x) are defined as in problem (1). The objective functions are given by
(26)f1(x)=x12+2(x2+2)2,f2(x)=2x1x2.
The feasible set is given by
(27)Ω={(x1,x2)∈R2:x2-1.5≦0,-x2-4≦0,x12+x22-4=0}.
Since ∇h(x)=(2x1,2x2)T, assumption (A3) fails at most points of Ω0, so the method presented in [32] cannot be used for this example. However, if we introduce the C2 mappings ξ1(x,u1)=(0,u1)T, ξ2(x,u2)=(0,-u2)T, and η(x,v)=(2x1v,(8+2x2)v)T, then assumptions (C1)–(C4) can be verified for this example, and we can solve it with the algorithm presented in this paper. In addition, the two initial points are chosen arbitrarily and are not confined to the interior of the feasible set; this is another improvement over [32], which requires initial points in the interior of the feasible set.
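As a quick consistency check (not part of the original experiments), the reported solution of Example 2 from Table 1 can be substituted back into the objectives and constraints:

```python
import numpy as np

# Reported KKT point for Example 2 (first row of Example 2 in Table 1).
x = np.array([-0.000812, -2.0])

# Objectives f1 = x1^2 + 2(x2+2)^2, f2 = 2 x1 x2.
f = np.array([x[0]**2 + 2*(x[1] + 2)**2, 2*x[0]*x[1]])
print(f)                                  # ~ (0.000001, 0.003248), matching Table 1

# Constraints: x2 - 1.5 <= 0, -x2 - 4 <= 0, x1^2 + x2^2 - 4 = 0.
print(x[0]**2 + x[1]**2 - 4)              # equality constraint residual ~ 0
print(x[1] - 1.5 <= 0, -x[1] - 4 <= 0)    # both inequality constraints hold
```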
Example 3 (adapted from [48, problem 56]).
f(x), g(x), and h(x) are defined as in problem (1). The objective functions are given by
(28)f1(x)=x12+2(x2+1)2,f2(x)=x12+x22.
The feasible set is given by
(29)Ω={(x1,x2)∈R2:x1-2π≦0,π-x1≦0,x2-20sin(2x1)=0}.
Since ∇h(x)=(-40cos(2x1),1)T, assumption (A3) fails at most points of Ω0, so the method presented in [32] cannot be used for this example. However, if we introduce the C2 mappings ξ1(x,u1)=(u1,0)T, ξ2(x,u2)=(-u2,0)T, and η(x,v)=(0,10v)T, then assumptions (C1)–(C4) can be verified for this example, and we can solve it with the algorithm presented in this paper. In addition, the two initial points are chosen arbitrarily and are not confined to the interior of the feasible set; this is another improvement over [32], which requires initial points in the interior of the feasible set.
Example 4 (adapted from [48, problem 79]).
f(x), g(x), and h(x) are defined as in problem (1). The objective functions are given by
(30)f1(x)=x12+x22,f2(x)=2x12+(x2-2)2.
The feasible set is given by
(31)Ω={(x1,x2)∈R2:-x2-6≦0,x2-6≦0,3x1-2x22-7=0}.
Since ∇h(x)=(3,-4x2)T, assumption (A3) fails at most points of Ω0, so the method presented in [32] cannot be used for this example. However, if we introduce the C2 mappings ξ1(x,u1)=(0,-u1)T, ξ2(x,u2)=(0,u2)T, and η(x,v)=(10v,0)T, then assumptions (C1)–(C4) can be verified for this example, and we can solve it with the algorithm presented in this paper. In addition, the two initial points are chosen arbitrarily and are not confined to the interior of the feasible set; this is another improvement over [32], which requires initial points in the interior of the feasible set.
Although the feasible sets of Examples 1–4 do not satisfy the normal cone condition in [32] and the initial points are not chosen in the interior of the feasible sets, the numerical results show that the algorithm presented in this paper still works well.
4. Conclusions
In this paper, we apply proper perturbations to the constraint functions and hence develop a noninterior path following algorithm for solving a class of MOP problems. Our results extend those in [32] to more general nonconvex sets and allow the initial points of the algorithm to be chosen more easily than before. Since MOP problems have wide applications in engineering, management, economics, and so on, our results may provide a powerful solution tool for such practical problems. In the future, we intend to develop new techniques to extend our results to still more general nonconvex sets. In addition, we intend to present a set of suitable unboundedness conditions to remove the boundedness assumption on the feasible set, so that MOP problems on unbounded sets can also be solved.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors sincerely thank the editor and Professor Yong Li for the kind help provided. They are extremely grateful to the editor and the anonymous reviewer for their invaluable suggestions and helpful comments, which improved the paper greatly. This project was supported by the NSFC-Union Science Foundation of Henan (no. U1304103), the National Natural Science Foundation of China (no. 11201214), the Natural Science Foundation of Henan Province (no. 122300410261), and the Foundation of the Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education (no. 93K172012K07).
References

[1] H. W. Kuhn and A. W. Tucker, "Nonlinear programming," in Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability (J. Neyman, ed.), University of California Press, Berkeley, Calif, USA, 1951.
[2] M. Guignard, "Generalized Kuhn-Tucker conditions for mathematical programming problems in a Banach space."
[3] C. Y. Lin and J. L. Dong.
[4] T. Maeda, "Second-order conditions for efficiency in nonsmooth multiobjective optimization problems."
[5] N. Karmarkar, "A new polynomial-time algorithm for linear programming."
[6] P. E. Gill, W. Murray, M. A. Saunders, J. A. Tomlin, and M. H. Wright, "On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method."
[7] C. C. Gonzaga, "Path-following methods for linear programming."
[8] M. Kojima, S. Mizuno, and A. Yoshise, "A primal-dual interior point algorithm for linear programming" (N. Megiddo, ed.).
[9] N. Megiddo, "Pathways to the optimal set in linear programming."
[10] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part I: Linear programming."
[11] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part II: Convex quadratic programming."
[12] J. Renegar, "A polynomial-time algorithm, based on Newton's method, for linear programming."
[13] K. O. Kortanek, F. Potra, and Y. Ye, "On some efficient interior point methods for nonlinear convex programming."
[14] G. P. McCormick, "The projective SUMT method for convex programming."
[15] R. D. C. Monteiro and I. Adler, "An extension of Karmarkar type algorithm to a class of convex separable programming problems with global linear rate of convergence."
[16] J. Zhu, "A path following algorithm for a class of convex programming problems."
[17] R. B. Kellogg, T. Y. Li, and J. Yorke, "A constructive proof of the Brouwer fixed-point theorem and computational results."
[18] S. Smale, "A convergent process of price adjustment and global Newton methods."
[19] J. C. Alexander and J. A. Yorke, "The homotopy continuation method: numerically implementable topological procedures."
[20] C. B. Garcia and W. I. Zangwill, "An approach to homotopy and degree theory."
[21] S. N. Chow, J. Mallet-Paret, and J. A. Yorke, "Finding zeroes of maps: homotopy methods that are constructive with probability one."
[22] Y. Li and Z. H. Lin, "A constructive proof of the Poincaré-Birkhoff theorem."
[23] J. Miller, "Finding the zeros of an analytic function."
[24] L. T. Watson, "An algorithm that is globally convergent with probability one for a class of nonlinear two-point boundary value problems."
[25] Z. H. Lin, B. Yu, and G. C. Feng, "A combined homotopy interior point method for convex nonlinear programming."
[26] B. Yu, Q. Xu, and G. Feng, "On the complexity of a combined homotopy interior method for convex programming."
[27] B. Yu and Z. Lin, "Homotopy method for a class of nonconvex Brouwer fixed-point problems."
[28] Q. Xu, C. Dang, and D. Zhu, "Generalizations of fixed point theorems and computation."
[29] Z. Lin and Y. Li, "Homotopy method for solving variational inequalities."
[30] Q. Xu, B. Yu, G. Feng, and C. Dang, "Condition for global convergence of a homotopy method for variational inequality problems on unbounded sets."
[31] D. L. Zhu, Q. Xu, and Z. Lin, "A homotopy method for solving bilevel programming problem."
[32] W. Song and G. M. Yao, "Homotopy method for a general multiobjective programming problem."
[33] E. L. Allgower and K. Georg.
[34] Z. K. Wang and T. A. Gao.
[35] J. Guddat, F. G. Vazquez, and H. T. Jongen.
[36] H. T. Jongen, P. Jonker, and F. Twilt.
[37] H. T. Jongen, P. Jonker, and F. Twilt.
[38] H. T. Jongen, P. Jonker, and F. Twilt, "One-parameter families of optimization problems: equality constraints."
[39] H. Th. Jongen, P. Jonker, and F. Twilt, "Critical sets in parametric optimization."
[40] H. T. Jongen, J. Rückmann, and G. Weber, "One-parametric semi-infinite optimization: on the stability of the feasible set."
[41] H. T. Jongen, F. Twilt, and G. Weber, "Semi-infinite optimization: structure and stability of the feasible set."
[42] H. T. Jongen and G. Weber, "On parametric nonlinear programming."
[43] B. N. Lundberg and A. B. Poore, "Numerical continuation and singularity detection methods for parametric nonlinear programming."
[44] K. Georg, "Numerical integration of the Davidenko equation" (E. Allgower, K. Glashoff, and H. O. Peitgen, eds.).
[45] C. den Heijer and W. C. Rheinboldt, "On steplength algorithms for a class of continuation methods."
[46] W. C. Rheinboldt and R. G. Melhem, "A comparison of methods for determining turning points of nonlinear equations."
[47] G. L. Naber.
[48] W. Hock and K. Schittkowski.