By using slack variables and the minimum function, we first reformulate the system of equalities and inequalities as a system of nonsmooth equations and then, using a smoothing technique, construct a smooth approximation of this system. A new noninterior continuation method is proposed to solve the resulting system of smooth equations. We show that any accumulation point of the iteration sequence generated by our algorithm is a solution of the system of equalities and inequalities. Some numerical experiments show the feasibility and efficiency of the algorithm.
1. Introduction
In this paper, we consider the following system of equalities and inequalities:
(1)fI(x)≤0,fE(x)=0,
where I={1,…,m} and E={m+1,…,n}. Define f(x)∶=(f1(x),…,fn(x))T with fi:ℜn→ℜ for every i∈{1,…,n}. Throughout this paper, we assume that f is continuously differentiable.
Problems of the form (1) have been studied extensively due to their various applications in data analysis, set separation problems, computer aided design problems, and image reconstruction.
Recently, a class of popular numerical methods, the so-called noninterior continuation methods, has been studied extensively for complementarity, variational inequality, and mathematical programming problems; see, for example, [1–7]. However, to our knowledge, few noninterior continuation methods are available for the system of equalities and inequalities given by (1).
In this paper, we first reformulate (1) as a system of nonsmooth equations by using slack variables and the minimum function and, using a smoothing technique, construct a system of smooth equations. Then, a noninterior continuation method for (1) is proposed by modifying and extending the method of Huang [1]. Under suitable assumptions, we show that the proposed algorithm is globally convergent. We also report some preliminary numerical results, which demonstrate that the algorithm is effective for solving (1).
The organization of this paper is as follows. In Section 2, we reformulate (1) as a system of smooth equations. In Section 3, we propose a noninterior continuation method for solving (1). Global convergence is analyzed in Section 4. Some preliminary computational results are reported in Section 5.
We introduce some notation. All vectors are column vectors, the superscript T denotes transpose, and ℜ+n (resp., ℜ++n) denotes the nonnegative (resp., positive) orthant in ℜn. I denotes the n×n identity matrix. For x∈ℜn, ∥x∥ denotes the 2-norm of x. For a continuously differentiable function F:ℜn→ℜm, we denote the Jacobian of F at x∈ℜn by F′(x).
2. Equivalent Smoothing Reformulation of (1)
In this section, we give the equivalent smoothing reformulation of (1) and discuss some associated properties of the reformulation. Firstly, we introduce the NCP function and the smoothing function. A function ϕ:ℜ2→ℜ is called an NCP function, if it possesses the following property:
(2)ϕ(a,b)=0⟺a≥0,b≥0,ab=0.
One well-known NCP function is the minimum function [9], which is defined as follows:
(3)ϕmin(a,b)=a+b-|a-b|.
Accordingly, the smoothing function associated with ϕmin is [4]
(4) ϕmin(a,b,μ) = a + b - √((a-b)^2 + 2μ^2).
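The defining property (2) and the effect of the smoothing parameter are easy to verify numerically. The following sketch (in Python, used here purely for illustration; the paper's experiments are in MATLAB) checks that ϕmin is an NCP function on a few points and that ϕmin(a,b,μ) approaches ϕmin(a,b) as μ→0:

```python
import math

def phi_min(a, b):
    """Minimum NCP function (3): phi_min(a, b) = a + b - |a - b| = 2*min(a, b)."""
    return a + b - abs(a - b)

def phi_min_smooth(a, b, mu):
    """Smoothing function (4): |a - b| is replaced by
    sqrt((a - b)**2 + 2*mu**2), which is differentiable for mu != 0."""
    return a + b - math.sqrt((a - b) ** 2 + 2.0 * mu ** 2)

# The NCP property (2): phi_min(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0.
assert phi_min(0.0, 3.0) == 0.0 and phi_min(2.0, 0.0) == 0.0
assert phi_min(-1.0, 2.0) < 0.0       # a < 0, so phi_min must be nonzero

# As mu -> 0, the smoothed value approaches the nonsmooth one.
for mu in (1.0, 1e-2, 1e-4):
    assert abs(phi_min_smooth(1.0, 2.0, mu) - phi_min(1.0, 2.0)) <= math.sqrt(2.0) * mu
```

The deviation between the two functions is uniformly bounded by a constant multiple of μ, which is exactly what makes the continuation scheme below workable.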
For (1), we introduce a slack variable s∈ℜm. Then, (1) is equivalent to the following system of equations:
(5)fE(x)=0,fI(x)+s=0,s≥0.
Based on the minimum function, we reformulate (5) into the following equivalent system of nonlinear equations:
(6) Φ(w) ∶= ( fE(x)
              fI(x) + s
              Φmin(0,s) ) = 0,
where Φmin(0,s)=(ϕmin(0,s1),…,ϕmin(0,sm))T and w=(x,s)∈ℜn×ℜm.
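To make the reformulation concrete, the following sketch builds Φ from (6) for a hypothetical two-dimensional instance (n=2, m=1; the functions f_I and f_E below are illustrative, not from the paper) and checks that Φ(w) vanishes exactly at a solution of (5):

```python
def phi_min(a, b):
    return a + b - abs(a - b)     # minimum NCP function (3)

def Phi(x, s, f_I, f_E):
    """Nonsmooth operator (6): zero exactly when (x, s) solves (5)."""
    return (f_E(x)
            + [fi + si for fi, si in zip(f_I(x), s)]
            + [phi_min(0.0, si) for si in s])

# toy instance (n = 2, m = 1): f_1(x) = x_1 - 1 <= 0, f_2(x) = x_1 + x_2 = 0
f_I = lambda x: [x[0] - 1.0]
f_E = lambda x: [x[0] + x[1]]

# x = (0.5, -0.5) solves (1); with slack s = -f_I(x) = 0.5 >= 0, Phi vanishes
assert Phi([0.5, -0.5], [0.5], f_I, f_E) == [0.0, 0.0, 0.0]
# a wrong slack leaves a nonzero residual in the second block
assert Phi([0.5, -0.5], [2.0], f_I, f_E) != [0.0, 0.0, 0.0]
```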
Since the function in (6) is nonsmooth, the noninterior continuation method cannot be applied directly to solve (6). To make (6) amenable to such a method, we use a smoothing technique and construct a smooth approximation Φμ of Φ. Consider
(7) Φμ(w) ∶= ( fE(x) + μxE
               fI(x) + s + μxI
               Φmin(0,s,μ) + μs ),
where xI=(x1,x2,…,xm)T, xE=(xm+1,xm+2,…,xn)T, s∈ℜm, x=(xI,xE)∈ℜn, and Φmin(0,s,μ) = (ϕmin(0,s1,μ),…,ϕmin(0,sm,μ))T. Clearly, if μ=0 and Φμ(w)=0, then x solves (1). It is not difficult to see that, for any μ>0 and w∈ℜn×ℜm, the function Φμ(w) is continuously differentiable. Let Φμ′(w) denote the Jacobian of the function Φμ(w); then, for any μ>0 and w∈ℜn×ℜm,
(8) Φμ′(w) ∶= ( fE′(x) + μV    0(n-m)×m
                fI′(x) + μU    Im
                0m×n           Φmin′(0,s,μ) + μIm ),
where U∶=[Im 0m×(n-m)] and V∶=[0(n-m)×m In-m]. Here, we use 0l to denote the l-dimensional zero vector and 0l×q to denote the l×q zero matrix for any positive integers l and q. Thus, at each iteration we can approximately solve the smooth system Φμ(w)=0 by Newton’s method and then obtain a solution of Φ0(w)=0 by driving the parameter μ to zero, so that a solution of (1) can be found.
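The operator (7) and its Jacobian (8) can be assembled mechanically once fI, fE, and their Jacobians are available. The NumPy sketch below is our own illustration (using the variable ordering x=(xI,xE) from the text and reading (4) with the square root restored):

```python
import numpy as np

def Phi_mu(x, s, mu, f_I, f_E):
    """Smoothed operator (7); x is ordered as (x_I, x_E), with x_I the
    first m components, f_I: R^n -> R^m, f_E: R^n -> R^(n-m)."""
    m = s.size
    smooth = s - np.sqrt(s**2 + 2.0 * mu**2)   # phi_min(0, s_i, mu)
    return np.concatenate([
        f_E(x) + mu * x[m:],        # f_E(x) + mu * x_E
        f_I(x) + s + mu * x[:m],    # f_I(x) + s + mu * x_I
        smooth + mu * s,
    ])

def Phi_mu_jac(x, s, mu, jac_I, jac_E):
    """Jacobian (8) of Phi_mu with respect to w = (x, s)."""
    n, m = x.size, s.size
    J = np.zeros((n + m, n + m))
    J[:n - m, :n] = jac_E(x)                   # f_E'(x)
    J[:n - m, m:n] += mu * np.eye(n - m)       # + mu * V
    J[n - m:n, :n] = jac_I(x)                  # f_I'(x)
    J[n - m:n, :m] += mu * np.eye(m)           # + mu * U
    J[n - m:n, n:] = np.eye(m)                 # d/ds of f_I(x) + s
    d = 1.0 - s / np.sqrt(s**2 + 2.0 * mu**2)  # diagonal of Phi_min'(0,s,mu)
    J[n:, n:] = np.diag(d + mu)
    return J
```

A finite-difference comparison on a small instance is a cheap sanity check that the block layout of (8) has been transcribed correctly.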
3. Algorithm
In this section, we propose a noninterior continuation algorithm. Some basic properties are given. In particular, we show that the algorithm is well defined.
Algorithm 1 (a noninterior continuation algorithm).
Consider the following.
Step 0. Choose δ,γ,σ∈(0,1). Take any (x0,s0)∈ℜn×ℜm and μ0∈(0,∞); choose β≥n such that ∥Φμ0(x0,s0)∥≤βμ0. Set k∶=0.
Step 1. If μk=0, then stop.
Step 2. If Φμk(xk,sk)=0, then set (xk+1,sk+1)∶=(xk,sk) and θk∶=1, and go to Step 4; otherwise, compute (Δxk,Δsk)∈ℜn×ℜm by
(9)Φμk′(xk,sk)(Δxk,Δsk)=-Φμk(xk,sk).
Step 3. Let θk be the maximum of the values 1,δ,δ2,… such that
(10) ∥Φμk(xk + θkΔxk, sk + θkΔsk)∥ ≤ (1 - σθk)∥Φμk(xk,sk)∥.
Set (xk+1,sk+1)∶=(xk+θkΔxk,sk+θkΔsk).
Step 4. Set
(11) μ̄k ∶= (1 - σθk/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1))) μk.
Let ηk be the minimum of the values 1,γ,γ2,… such that
(12)∥Φηkμ¯k(xk+1,sk+1)∥≤βηkμ¯k,
and set μk+1∶=ηkμ¯k. Set k∶=k+1 and go to Step 1.
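As a rough illustration of Steps 0 to 4, here is a compact Python sketch of Algorithm 1 (the paper's own code is MATLAB). It simplifies two ingredients: the Jacobian of Φμ is approximated by forward differences instead of the analytic formula (8), and ηk is found by greedy backtracking rather than as the exact minimum over the trial values 1,γ,γ²,…; both simplifications are ours, not the paper's.

```python
import numpy as np

def noninterior_continuation(f_I, f_E, x0, mu0=1.0, delta=0.5, gamma=0.5,
                             sigma=0.4, tol=1e-6, max_iter=200):
    """Sketch of Algorithm 1. f_I: R^n -> R^m (inequalities),
    f_E: R^n -> R^(n-m) (equalities), with x ordered as (x_I, x_E)."""
    n, m = x0.size, f_I(x0).size
    s = np.maximum(-f_I(x0), 0.0)    # slack start (our choice: clipped at 0)
    w = np.concatenate([x0, s])
    mu = mu0

    def Phi(w, mu):                  # smoothed operator (7)
        x, s = w[:n], w[n:]
        return np.concatenate([
            f_E(x) + mu * x[m:],
            f_I(x) + s + mu * x[:m],
            s - np.sqrt(s**2 + 2.0 * mu**2) + mu * s,
        ])

    def jac(w, mu, eps=1e-7):        # forward-difference Jacobian, not (8)
        F0 = Phi(w, mu)
        J = np.empty((F0.size, w.size))
        for j in range(w.size):
            wp = w.copy()
            wp[j] += eps
            J[:, j] = (Phi(wp, mu) - F0) / eps
        return J

    beta = max(n, np.linalg.norm(Phi(w, mu)) / mu)   # initialization (13)
    for _ in range(max_iter):
        if mu <= tol:                # stopping rule used in Section 5
            break
        F = Phi(w, mu)
        theta = 1.0
        if np.linalg.norm(F) > 0.0:
            dw = np.linalg.solve(jac(w, mu), -F)     # Newton equation (9)
            while (np.linalg.norm(Phi(w + theta * dw, mu))
                   > (1.0 - sigma * theta) * np.linalg.norm(F)
                   and theta > 1e-12):
                theta *= delta       # backtracking line search (10)
            w = w + theta * dw
        c = 1.0 + 2.0 * (np.linalg.norm(w[:n]) + np.linalg.norm(w[n:]) + 1.0)
        mu_bar = (1.0 - sigma * theta / c) * mu      # update (11)
        eta = 1.0                    # greedy backtracking for (12)
        while (eta > 1e-12 and
               np.linalg.norm(Phi(w, eta * gamma * mu_bar))
               <= beta * eta * gamma * mu_bar):
            eta *= gamma
        mu = eta * mu_bar
    return w[:n], mu
```

For well-behaved problems satisfying Assumption 3, one expects μ to be driven below the tolerance within a few dozen iterations on small instances.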
Remark 2.
Algorithm 1 is a modified version of Huang’s algorithm in [1]. It is easy to see that, if Φμk(wk)=0 for some k, then Algorithm 1 does not solve the Newton equation (9) or perform the line search (10) in that iteration. Thus, Algorithm 1 needs to solve at most one linear system of equations per iteration. Algorithm 1 is also easy to start: we can choose any (μ0,x0,s0)∈ℜ++×ℜn×ℜm as the starting point of our algorithm and then set
(13)β∶=max{n,∥Φμ0(x0,s0)∥μ0}.
Define f′(x)∶=[fI′(x)T,fE′(x)T]T, that is, the Jacobian of f. We will use the following assumption.
Assumption 3.
f′(x)+μIn is invertible for any x∈ℜn and μ∈ℜ++.
The next result plays an important role in establishing the well-definedness and the convergence analysis of Algorithm 1.
Lemma 4.
(i) ϕmin(·,·,·) is continuously differentiable at any (a,b,μ)∈ℜ3 with (a-b,μ)≠(0,0); in particular, it is continuously differentiable whenever μ≠0.
(ii) For any μ1,μ2>0 and (a,b)∈ℜ×ℜ, we have
(14) |ϕmin(a,b,μ1) - ϕmin(a,b,μ2)| ≤ √2 |μ1 - μ2|.
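Reading the smoothing function as ϕmin(a,b,μ) = a + b - √((a-b)^2 + 2μ^2), the Lipschitz bound of Lemma 4(ii) (with constant √2) is easy to probe numerically; the sketch below is our own check on random points, not part of the paper:

```python
import math
import random

def phi_min_smooth(a, b, mu):
    return a + b - math.sqrt((a - b) ** 2 + 2.0 * mu ** 2)   # smoothing function (4)

# Empirically probe |phi(a,b,mu1) - phi(a,b,mu2)| <= sqrt(2) * |mu1 - mu2|.
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    mu1, mu2 = random.uniform(1e-3, 3.0), random.uniform(1e-3, 3.0)
    gap = abs(phi_min_smooth(a, b, mu1) - phi_min_smooth(a, b, mu2))
    assert gap <= math.sqrt(2.0) * abs(mu1 - mu2) + 1e-12
```

The constant √2 is tight: it is attained in the limit a=b, since the partial derivative of ϕmin with respect to μ has absolute value at most 2|μ|/√(2μ²)=√2.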
Theorem 5.
Suppose that f is a continuously differentiable function and Assumption 3 is satisfied. Then Algorithm 1 is well defined.
Proof.
For any square matrix A, we use det(A) to denote the determinant of A. It follows from (8) that, up to a sign, det(Φμ′(w))=det(f′(x)+μIn)·det(Φmin′(0,s,μ)+μIm), for any μ>0 and w∈ℜn×ℜm. Furthermore, Φmin′(0,s,μ) is a diagonal matrix with nonnegative entries, so Φmin′(0,s,μ)+μIm is nonsingular for any μ>0. Thus, by Assumption 3, we obtain that Φμ′(w) is nonsingular for any μ>0 and w∈ℜn×ℜm. Hence, Step 2 is well defined.
Now we prove that Step 3 is well defined. For any α∈(0,1], define
(15)r(α)=Φμk(wk+αΔwk)-Φμk(wk)-αΦμk′(wk)Δwk.
From μk>0 and Lemma 4(i), we know that Φμk(w) is continuously differentiable at wk. Thus, by (15), we have
(16)∥r(α)∥=o(α).
Then by (9), (15) and (16),
(17) ∥Φμk(wk+αΔwk)∥ ≤ ∥Φμk(wk) + αΦμk′(wk)Δwk∥ + ∥r(α)∥
= (1-α)∥Φμk(wk)∥ + o(α)
= (1-σα)∥Φμk(wk)∥ - (1-σ)α∥Φμk(wk)∥ + o(α).
Since σ∈(0,1) and Φμk(wk)≠0, we have (1-σ)α∥Φμk(wk)∥>0. Thus, for all sufficiently small α, ∥Φμk(wk+αΔwk)∥≤(1-σα)∥Φμk(wk)∥, which shows that Step 3 is well defined.
Next we show that Step 4 is well defined. If Φμk(wk)≠0, it follows from Lemma 4(ii) and (7) that
(18) ∥Φμ1(w) - Φμ2(w)∥ = ∥( (μ1-μ2)xE
                            (μ1-μ2)xI
                            Φmin(0,s,μ1) - Φmin(0,s,μ2) + (μ1-μ2)s )∥
≤ |μ1-μ2|∥x∥ + |μ1-μ2|∥s∥ + ∥Φmin(0,s,μ1) - Φmin(0,s,μ2)∥
≤ |μ1-μ2|(∥x∥ + ∥s∥) + √(2n)|μ1-μ2|
= |μ1-μ2|(∥x∥ + ∥s∥ + √(2n))
≤ √(2n)|μ1-μ2|(∥x∥ + ∥s∥ + 1),
where the second inequality uses Lemma 4(ii) componentwise together with m ≤ n.
Then, by β≥n, (10)–(12), and (18), we have
(19) ∥Φμ̄k(wk+1)∥ ≤ ∥Φμk(wk+1)∥ + ∥Φμk(wk+1) - Φμ̄k(wk+1)∥
≤ (1-σθk)∥Φμk(wk)∥ + √(2n)|μk-μ̄k|(∥xk+1∥ + ∥sk+1∥ + 1)
≤ (1-σθk)βμk + √(2n)(∥xk+1∥ + ∥sk+1∥ + 1)σθkμk/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1))
≤ (1-σθk)βμk + 2β(∥xk+1∥ + ∥sk+1∥ + 1)σθkμk/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1))
= (1 - [1 - 2(∥xk+1∥ + ∥sk+1∥ + 1)/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1))]σθk)βμk
= (1 - σθk/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1)))βμk
= βμ̄k,
where the third inequality uses ∥Φμk(wk)∥ ≤ βμk and |μk - μ̄k| = σθkμk/(1 + 2(∥xk+1∥ + ∥sk+1∥ + 1)) from (11), and the fourth uses √(2n) ≤ 2n ≤ 2β.
If Φμk(wk)=0, then (19) holds similarly. Thus, from (19), we know that there exists a minimal ηk∈(0,1] such that (12) holds; that is, Step 4 is well defined.
Therefore, Algorithm 1 is well defined.
4. Convergence of Algorithm 1
In this section, we analyze the global convergence properties of Algorithm 1. We show that any accumulation point of the iteration sequence {wk} is a solution of the system Φ(w)=0.
Theorem 6.
Suppose that f is a continuously differentiable function and (w*,μ*)∶=(x*,s*,μ*) is an accumulation point of the iteration sequence {(wk,μk)} generated by Algorithm 1. If Assumption 3 is satisfied, then limk→∞μk=0, and hence w* is a solution of Φ(w)=0.
Proof.
Since the sequence {μk} is monotonically decreasing and bounded from below by zero, it converges; hence μk→μ*≥0. If μ*=0, we obtain the desired result. Suppose, to the contrary, that μ*>0. Without loss of generality, we assume that limk→∞(wk,μk)=(w*,μ*). If there exists an infinite subset K1={k1,k2,…} such that Φμki(wki)=0 for all i, then, by Step 2 of Algorithm 1, we have θki=1 for every i≥1. It follows from Step 4 that
(20) μ_{k_{i+1}} ≤ μ_{k_i+1} = η_{k_i} μ̄_{k_i} ≤ (1 - σ/(1 + 2(∥x^{k_i+1}∥ + ∥s^{k_i+1}∥ + 1))) μ_{k_i}.
Let i→+∞; we have that
(21) 0 < μ* ≤ (1 - σ/(1 + 2(∥x*∥ + ∥s*∥ + 1))) μ*,
which is a contradiction. Therefore, without loss of generality, we may assume that Φμk(wk)≠0 holds for any k≥0. Since f is continuously differentiable, it follows from (7) that Φμ(w) is continuously differentiable in (w,μ) for any μ>0. Since μk→μ*>0 by assumption, we have
(22)limk→∞Φμk(wk)=Φμ*(w*),limk→∞Φμk′(wk)=Φμ*′(w*).
Since μ*>0, the update rule (11) implies that θk→0; otherwise, μk would be reduced by a factor bounded away from one infinitely often, forcing μk→0. Moreover, by (22) and the nonsingularity of Φμ*′(w*), the sequence {Δwk} converges to Δw* ∶= -Φμ*′(w*)-1Φμ*(w*). For all large k we have θk<1, so the steplength θ̄k ∶= θk/δ does not satisfy (10); that is,
(23) (∥Φμk(wk + θ̄kΔwk)∥ - ∥Φμk(wk)∥)/θ̄k > -σ∥Φμk(wk)∥.
By taking k→∞ in the above inequality, we have
(24) Φμ*(w*)TΦμ*′(w*)Δw* ≥ -σ∥Φμ*(w*)∥2.
It follows from (9) that
(25) Φμ*(w*)TΦμ*′(w*)Δw* = -∥Φμ*(w*)∥2.
By substituting (25) into (24), we obtain ∥Φμ*(w*)∥ ≤ σ∥Φμ*(w*)∥, which contradicts σ<1. This proves that μ*=0.
Next, we prove that w* is a solution of Φ(w)=0. In view of Step 4 of Algorithm 1, we have
(26)∥Φμk(wk)∥≤βμk.
Then, taking the limit on both sides of (26) and using the continuity of Φμ(w) in (w,μ), we obtain ∥Φ0(w*)∥=∥Φμ*(w*)∥≤βμ*=0. Hence, Φ(w*)=Φ0(w*)=0.
5. Numerical Experiments
In this section, we implement Algorithm 1 in MATLAB in order to examine the behavior of our noninterior continuation algorithm on systems of equalities and inequalities. All program codes were written and run in MATLAB 7.5. All numerical experiments were performed on a PC with a 1.6 GHz CPU and 512 MB of RAM.
In the numerical implementation, we adopt a strategy similar to that of [10]: the function Φμ(w) defined by (7) is replaced by
(27) Φμ(w) ∶= ( fE(x) + cμxE
                fI(x) + s + cμxI
                Φmin(0,s,μ) + cμs ),
where c is a given constant. It is easy to see that this change does not affect any of the theoretical results of this paper. In order to obtain an interior solution of (1), we solve the following system of equalities and inequalities:
(28)fI(x)+ɛe≤0,fE(x)=0,
where ɛ is a sufficiently small positive number and e is the vector of all ones. The parameters used in Algorithm 1 were as follows: σ=0.4, ɛ=0.00001, δ=γ=0.5, μ0={1,Φ0(w0)}, and β=max{n,∥Φμ0(w0)∥/μ0}; the parameter c and the starting points are chosen as listed in Tables 1, 2, 3, and 4. We set s0∶=-fI(x0) and w0∶=(x0,s0) and used μ≤10-6 as the stopping criterion.
Numerical results for Example 1′.

ST            IT (c=10^2)  SOL (c=10^2)              IT (c=10^3)  SOL (c=10^3)
(0,0,0)T      8            (0.8771,0.6720,0.5725)T   6            (0.8697,0.6680,0.5741)T
(-1,-1,-1)T   6            (0.8771,0.6721,0.5726)T   5            (0.8664,0.6665,0.5730)T
(1,1,1)T      8            (0.8771,0.6720,0.5725)T   6            (0.8697,0.6680,0.5741)T
(1,0,1)T      8            (0.8766,0.6718,0.5726)T   9            (0.8702,0.6687,0.5737)T
Numerical results for Example 2′.

ST            IT (c=10^2)  SOL (c=10^2)                IT (c=10^3)  SOL (c=10^3)
(0,0,0)T      12           (-0.8362,-0.8605,1.9566)T   10           (-0.8396,-0.8600,1.9558)T
(-1,-1,-1)T   12           (-0.8362,-0.8606,1.9565)T   11           (-0.8394,-0.8600,1.9558)T
(1,1,1)T      10           (-0.8364,-0.8605,1.9565)T   9            (-0.8446,-0.8592,1.9548)T
(0,1,0)T      11           (-0.8363,-0.8605,1.9565)T   13           (-0.8376,-0.8603,1.9562)T
Numerical results for Example 3′.

ST            IT (c=10^2)  SOL (c=10^2)               IT (c=10^3)  SOL (c=10^3)
(-1,-1,-1)T   5            (-0.0952,0.0952,0.4471)T   4            (-0.0946,0.0944,0.4474)T
(0,0,0)T      13           (-0.0953,0.0953,0.4471)T   10           (-0.0948,0.0946,0.4472)T
(1,1,1)T      5            (-0.0953,0.0952,0.4471)T   4            (-0.0947,0.0946,0.4476)T
(0,1,0)T      5            (-0.0952,0.0952,0.4471)T   5            (-0.0950,0.0949,0.4472)T
Numerical results for Example 4′.

ST            IT (c=10^2)  SOL (c=10^2)               IT (c=10^3)  SOL (c=10^3)
(0,0,0)T      18           (0.5769,0.4787,99.9981)T   22           (0.7944,0.3344,100.0022)T
(0,0,-1)T     18           (0.3516,0.7573,100.0028)T  14           (1.2045,0.0456,100.0051)T
(1,0,1)T      19           (0.8154,0.4056,99.9981)T   9            (1.2619,-1.2051,99.9994)T
(0,0,1)T      17           (0.6092,0.4122,99.9992)T   13           (1.0040,-1.0030,100.0533)T
We consider the following four examples.
Example 1.
Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(29) f1(x) ∶= (x1-0.5)^2 + (x2-1)^2 - 0.25 ≤ 0,
f2(x) ∶= -(x1-0.5)^2 - (x1-1.1)^2 + x2^2 - 0.26 ≤ 0,
f3(x) ∶= x2 + x3^2 - 1 ≤ 0.
Example 2.
Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(30) f1(x) ∶= x1 + x2 e^{0.8x3} + e^{1.6} ≤ 0,
f2(x) ∶= x1^2 + x2^2 + x3^2 - 5.2675 = 0,
f3(x) ∶= x1 + x2 + x3 - 0.2605 = 0.
Example 3.
Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(31) f1(x) ∶= 0.8 - e^{x1+x2+x3^2} ≤ 0,
f2(x) ∶= 1.21e^{x1} + e^{x2} - 2.2 = 0,
f3(x) ∶= x1^2 + x2^2 + x2 - 0.1135 = 0.
Example 4.
Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(32) f1(x) ∶= x1^2 + x2^2 + x3^2 - 10000 ≤ 0,
f2(x) ∶= x1 - 0.7 sin x1 - 0.2 cos x2 = 0,
f3(x) ∶= x2 - 0.7 cos x1 + 0.2 sin x2 = 0.
The first example contains only inequalities; the other three contain both equalities and inequalities. Instead of solving these four examples directly, we use Algorithm 1 to solve the following perturbed problems.
Example 1′. Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(33) f1(x) ∶= (x1-0.5)^2 + (x2-1)^2 - 0.25 + ɛ ≤ 0,
f2(x) ∶= -(x1-0.5)^2 - (x1-1.1)^2 + x2^2 - 0.26 + ɛ ≤ 0,
f3(x) ∶= x2 + x3^2 - 1 + ɛ ≤ 0.
Example 2′. Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(34) f1(x) ∶= x1 + x2 e^{0.8x3} + e^{1.6} + ɛ ≤ 0,
f2(x) ∶= x1^2 + x2^2 + x3^2 - 5.2675 = 0,
f3(x) ∶= x1 + x2 + x3 - 0.2605 = 0.
Example 3′. Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(35) f1(x) ∶= 0.8 - e^{x1+x2+x3^2} + ɛ ≤ 0,
f2(x) ∶= 1.21e^{x1} + e^{x2} - 2.2 = 0,
f3(x) ∶= x1^2 + x2^2 + x2 - 0.1135 = 0.
Example 4′. Consider (1), where f∶=(f1,f2,f3)T with x∈ℜ3 and
(36) f1(x) ∶= x1^2 + x2^2 + x3^2 - 10000 + ɛ ≤ 0,
f2(x) ∶= x1 - 0.7 sin x1 - 0.2 cos x2 = 0,
f3(x) ∶= x2 - 0.7 cos x1 + 0.2 sin x2 = 0.
The numerical results are listed in Tables 1, 2, 3, 4, and 5, where Example denotes the tested example, ST denotes the starting point x0, c denotes the value of the parameter c given in (27), CPU denotes the CPU time (in seconds) for solving the underlying problem, IT denotes the total number of iterations, — indicates that more than 1000 iterations were required, and SOL denotes the solution obtained by Algorithm 1.
Numerical results for Examples 1′–4′.

Example      ST            IT (ours)  CPU (ours)  IT ([8])  CPU ([8])
Example 1′   (0,0,0)T      8          0.007767    6         0.006365
Example 1′   (-1,-1,-1)T   6          0.006270    8         0.007373
Example 1′   (1,1,1)T      8          0.006578    7         0.006137
Example 2′   (0,0,0)T      12         0.016807    49        0.068499
Example 2′   (-1,-1,-1)T   12         0.017554    45        0.065420
Example 2′   (1,1,1)T      10         0.026884    54        0.187465
Example 3′   (0,0,0)T      13         0.011897    5         0.004939
Example 3′   (-1,-1,-1)T   5          0.007156    18        0.023658
Example 3′   (1,1,1)T      5          0.006765    11        0.010136
Example 4′   (0,0,0)T      18         0.030636    —         —
Example 4′   (0,0,1)T      17         0.036194    —         —
Example 4′   (1,0,1)T      19         0.043265    —         —
From Tables 1, 2, 3, and 4, it is easy to see that all the tested problems can be solved efficiently. In Table 5, we compare our proposed algorithm with the algorithm in [8]. The numerical results indicate that our algorithm is more effective on most of the tested problems.
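The reported solutions can also be verified independently against the original constraints. For instance, the following snippet (our own check, in Python) tests the Table 1 entry for Example 1′ with starting point (0,0,0)T and c=10^2 against the three inequalities of Example 1, using a tolerance of 10^-3 consistent with the stopping rule μ≤10^-6 and the four reported digits:

```python
# Reported solution from Table 1 (Example 1', ST = (0,0,0)^T, c = 10^2).
x1, x2, x3 = 0.8771, 0.6720, 0.5725

# The three inequality constraints of Example 1.
f1 = (x1 - 0.5) ** 2 + (x2 - 1.0) ** 2 - 0.25
f2 = -(x1 - 0.5) ** 2 - (x1 - 1.1) ** 2 + x2 ** 2 - 0.26
f3 = x2 + x3 ** 2 - 1.0

for f in (f1, f2, f3):
    assert f <= 1e-3      # all inequalities hold to table precision
```

All three residuals come out slightly negative (on the order of 10^-4), so the iterate sits essentially on the boundary of the feasible region, as one expects from the perturbation by ɛ.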
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grants nos. 11326186, 61101208, and 11241005), the Fundamental Research Funds for the Central Universities, and a Project of Shandong Province Higher Educational Science and Technology Program, China (no. J13LI05).
References
[1] Z.-H. Huang, "The global linear and local quadratic convergence of a non-interior continuation algorithm for the LCP."
[2] X. Chen and P. Tseng, "Non-interior continuation methods for solving semidefinite complementarity problems."
[3] C. Kanzow, "Some noninterior continuation methods for linear complementarity problems."
[4] B. Chen and P. T. Harker, "A non-interior-point continuation method for linear complementarity problems."
[5] J. Jian, "A combined feasible-infeasible point continuation method for strongly monotone variational inequality problems."
[6] L. Qi and D. Sun, "Improving the convergence of non-interior point algorithms for nonlinear complementarity problems."
[7] X. Chi and S. Liu, "A non-interior continuation method for second-order cone programming."
[8] Y. Zhang and Z.-H. Huang, "A nonmonotone smoothing-type algorithm for solving a system of equalities and inequalities."
[9] A. Fischer, "A special Newton-type optimization method."
[10] Z.-H. Huang, Y. Zhang, and W. Wu, "A smoothing-type algorithm for solving system of inequalities."