Mathematical Problems in Engineering, Volume 2010, Article ID 310391, doi:10.1155/2010/310391. Hindawi Publishing Corporation.

Research Article

A Filled Function Approach for Nonsmooth Constrained Global Optimization

Weixiang Wang (Department of Mathematics, Shanghai Second Polytechnic University, Shanghai 201209, China), Youlin Shang (Department of Mathematics, Henan University of Science and Technology, Luoyang 471003, China), and Ying Zhang (Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China)

Academic Editor: Jyh Horng Chou

Copyright © 2010 Weixiang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A novel filled function is proposed in this paper for finding a global minimizer of a nonsmooth constrained optimization problem. First, a modified definition of the filled function for nonsmooth constrained global optimization is introduced; then a filled function is proposed that combines the idea of filled functions for unconstrained optimization with that of penalty functions for constrained optimization. Next, a solution algorithm based on the proposed filled function is developed. Finally, some preliminary numerical results are reported. The results show that the proposed approach is promising.

1. Introduction

Recently, because real-world problems demand ever higher accuracy, global optimization has become a topic of intensive study, and many theories and algorithms for it have been proposed. Among these methods, the filled function method is particularly popular. It was originally introduced in [1, 2] for smooth unconstrained global optimization. Its idea is to construct a filled function by means of which the minimization process can leave the current local minimum and find a better one. The filled function method thus consists of two phases, local minimization and filling, which are performed alternately until no better minimizer can be located. The filled function method was further developed in the subsequent literature. It should be noted, however, that these filled function methods deal only with smooth unconstrained or box-constrained optimization problems, whereas many practical problems can only be modelled as nonsmooth constrained global optimization problems. To address this situation, in this paper we generalize a previously proposed filled function and establish a novel filled function approach for nonsmooth constrained global optimization. The key idea of this approach is to combine the concept of the filled function for unconstrained global optimization with that of the penalty function for constrained optimization.

In general, there are two difficulties in global optimization: the first is how to leave the current local minimizer of f(x) for a better one; the second is how to verify that the current minimizer is a global solution of the problem. Like other global optimization methods, the filled function method has some weaknesses, discussed in the literature. In particular, the filled function method cannot resolve the second issue, so this paper focuses on the first.

The rest of this paper is organized as follows. In Section 2, some preliminaries on nonsmooth optimization and filled functions are listed. In Section 3, the concept of a modified filled function for nonsmooth constrained global optimization is introduced, a novel filled function is given, and its properties are investigated. In Section 4, an efficient algorithm based on the proposed filled function is developed for solving nonsmooth constrained global optimization problems. Section 5 presents some numerical results. Finally, Section 6 concludes the paper.

2. Nonsmooth Preliminaries

Consider the following problem (P):

min_{x ∈ S} f(x),

where S = {x ∈ X : g_i(x) ≤ 0, i ∈ I}, f, g_i : X → R, i ∈ I = {1, 2, …, m}, and X ⊂ R^n is a box set.

In this section, we first list some definitions and lemmas from the literature on nonsmooth analysis; then we make some assumptions on f(x) and g_i(x), i ∈ I; and finally we define the filled function for problem (P).

Definition 2.1.

Let f(x) be Lipschitz with constant L > 0 at the point x. The generalized gradient of f at x is defined as ∂f(x) = {ξ ∈ R^n : ⟨ξ, d⟩ ≤ f⁰(x; d), ∀d ∈ R^n}, where f⁰(x; d) = limsup_{y → x, t ↓ 0} (f(y + td) − f(y))/t is the generalized directional derivative of f(x) in the direction d at x.
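The limsup in Definition 2.1 can be illustrated numerically. The sketch below approximates f⁰(x; d) by sampling points y near x and small steps t > 0 (a rough illustration, not part of the paper). For f(x) = |x| at x = 0 the generalized directional derivative is f⁰(0; d) = |d| in every direction, even though the ordinary one-sided derivative at 0 in the direction d = −1 equals −1.

```python
import itertools

def clarke_dd(f, x, d, eps=1e-4):
    """Numerically approximate the generalized directional derivative
    f0(x; d) = limsup_{y -> x, t -> 0+} (f(y + t*d) - f(y)) / t
    by sampling y near x and small t > 0."""
    best = float("-inf")
    ys = [x + k * eps / 10.0 for k in range(-10, 11)]   # points y near x
    ts = [eps / 2**j for j in range(1, 12)]             # step sizes t -> 0+
    for y, t in itertools.product(ys, ts):
        best = max(best, (f(y + t * d) - f(y)) / t)
    return best

# For f(x) = |x| at x = 0, f0(0; d) = |d| in both directions.
print(clarke_dd(abs, 0.0, 1.0))   # ~ 1.0
print(clarke_dd(abs, 0.0, -1.0))  # ~ 1.0
```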

Lemma 2.2.

Let f be Lipschitz with constant L at the point x. Then

(i) f⁰(x; d) is finite, sublinear in d, and satisfies |f⁰(x; d)| ≤ L‖d‖;

(ii) for all d ∈ R^n, f⁰(x; d) = max{⟨ξ, d⟩ : ξ ∈ ∂f(x)}, and for any ξ ∈ ∂f(x) one has ‖ξ‖ ≤ L;

(iii) ∂(Σ_i s_i f_i)(x) ⊆ Σ_i s_i ∂f_i(x) for all s_i ∈ R.

Considering problem (P), throughout the paper we need the following assumptions.

(1) f(x) and g_i(x), i ∈ I, are Lipschitz continuous with a common constant L > 0.

(2) The number of distinct local minimum values of (P) is finite.

(3) int S ≠ ∅ and cl(int S) = cl S, where int S denotes the interior of S and cl S denotes the closure of S.

Now, we give the definition of filled function for problem (P) below.

Definition 2.3.

A function P(x, x*) is called a filled function of (P) at a local minimizer x* if all of the following conditions are met:

(1) x* is a strict local maximizer of P(x, x*) on X;

(2) P(x, x*) has no stationary point in the set (S_1∖{x*}) ∪ (X∖S), that is, 0 ∉ ∂P(x, x*) there, where S_1 = {x ∈ S : f(x) ≥ f(x*)};

(3) if x* is not a global minimizer of problem (P), then there exists a point x_1* ∈ S such that x_1* is a local minimizer of P(x, x*) on X with f(x_1*) < f(x*), that is, x_1* ∈ S_2 = {x ∈ S : f(x) < f(x*)}.

3. A New Filled Function and Its Properties

Consider the problem (P).

Define

F(x, x*, r) = η(‖x − x*‖) + r / (1 + [min(0, max(f(x) − f(x*), g_i(x), i = 1, …, m))]²),

where r > 0 is a parameter, η(·) is a differentiable function such that η(0) = 1 and η′(t) < 0 for any t > 0, and ‖·‖ denotes the Euclidean vector norm.
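The definition above can be transcribed directly. The sketch below (with η(t) = exp(−t), the choice used in Section 5) evaluates F at a point; the toy objective and constraint are hypothetical, not from the paper.

```python
import math

def filled_function(f, gs, x_star, r, x):
    """F(x, x*, r) = eta(||x - x*||) + r / (1 + [min(0, max(f(x) - f(x*),
    g_i(x), i = 1..m))]^2), with eta(t) = exp(-t)."""
    eta = math.exp(-math.dist(x, x_star))
    t = min(0.0, max([f(x) - f(x_star)] + [g(x) for g in gs]))
    return eta + r / (1.0 + t * t)

# Hypothetical toy instance: f(x) = |x1| + |x2|, one constraint x1 + x2 - 4 <= 0,
# current local minimizer x* = (0, 0).
f = lambda x: abs(x[0]) + abs(x[1])
gs = [lambda x: x[0] + x[1] - 4.0]
print(filled_function(f, gs, (0.0, 0.0), 1.0, (0.0, 0.0)))  # 2.0 = eta(0) + r
```

At x = x* the min term vanishes, so F(x*, x*, r) = η(0) + r, consistent with the proof of Theorem 3.1 below.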

Next, we will prove that F(x,x*,r) is a filled function, where x* is the current local minimizer of problem (P).

Theorem 3.1.

x* is a strict local maximizer of F(x,x*,r) on X.

Proof.

Since x* is a local minimizer of (P), there exists a neighborhood N(x*, σ*) of x* with σ* > 0 such that f(x) ≥ f(x*) for any x ∈ S ∩ N(x*, σ*). We consider the following two cases.

Case 1 (x ∈ N(x*, σ*) ∩ S and x ≠ x*). In this case, note that min[0, max(f(x) − f(x*), g_i(x), i = 1, …, m)] = 0; then F(x, x*, r) = η(‖x − x*‖) + r = η(‖x − x*‖) − 1 + F(x*, x*, r) < F(x*, x*, r), since η(‖x − x*‖) < η(0) = 1.

Case 2 (x ∈ N(x*, σ*) ∩ (X∖S)). In this case, x ≠ x*; moreover, there exists at least one index i_0 ∈ {1, …, m} such that g_{i_0}(x) > 0. It follows that min[0, max(f(x) − f(x*), g_i(x), i = 1, …, m)] = 0, so F(x, x*, r) = η(‖x − x*‖) + r < 1 + r = F(x*, x*, r).

Therefore, x* is a strict local maximizer of F(x, x*, r).
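Theorem 3.1 can be spot-checked numerically. The sketch below uses a hypothetical one-dimensional instance, f(x) = x² with the single constraint x − 1 ≤ 0 and local minimizer x* = 0, and verifies F(x, x*, r) < F(x*, x*, r) at nearby points.

```python
import math

def F(x, x_star, r, f, gs):
    # the filled function of this section in one dimension, with eta(t) = exp(-t)
    t = min(0.0, max([f(x) - f(x_star)] + [g(x) for g in gs]))
    return math.exp(-abs(x - x_star)) + r / (1.0 + t * t)

f = lambda x: x * x          # hypothetical objective; constrained minimizer x* = 0
gs = [lambda x: x - 1.0]     # feasible set S = {x : x <= 1}
x_star, r = 0.0, 5.0
center = F(x_star, x_star, r, f, gs)       # F(x*, x*, r) = 1 + r
for x in (-0.2, -0.1, -0.01, 0.01, 0.1, 0.2):
    assert F(x, x_star, r, f, gs) < center  # x* is a strict local maximizer
```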

Theorem 3.2.

For any x ∈ (S_1∖{x*}) ∪ (X∖S), one has 0 ∉ ∂F(x, x*, r).

Proof.

For any x ∈ (S_1∖{x*}) ∪ (X∖S), similar to the proof of Theorem 3.1, we have h(x) := min[0, max(f(x) − f(x*), g_i(x), i = 1, …, m)] = 0. By the product rule for generalized gradients, ∂(h²)(x) ⊆ 2h(x)∂h(x) = {0}, so the second term of F contributes nothing to the generalized gradient, and since x ≠ x*,

∂F(x, x*, r) = {η′(‖x − x*‖)(x − x*)/‖x − x*‖}.

Therefore ⟨∂F(x, x*, r), (x − x*)/‖x − x*‖⟩ = η′(‖x − x*‖) < 0; that is, for any ξ ∈ ∂F(x, x*, r), one has ξᵀ(x − x*)/‖x − x*‖ < 0. Then 0 ∉ ∂F(x, x*, r).

Theorem 3.3.

Suppose that Assumptions (1)–(3) are satisfied. If x* is not a global minimizer and r > 0 is sufficiently large, then there exists a point x_r* ∈ S_2 such that x_r* is a minimizer of F(x, x*, r).

Proof.

Since x* is not a global minimizer, there exists another local minimizer x_1* of (P) such that f(x_1*) < f(x*) and g_i(x_1*) ≤ 0, i ∈ I. By Assumption (3), there exists a point x_2* ∈ int S such that f(x_2*) < f(x*) and g_i(x_2*) < 0, i ∈ I. Thus, writing c = max(f(x_2*) − f(x*), g_i(x_2*), i = 1, …, m) < 0, we have

F(x_2*, x*, r) = η(‖x_2* − x*‖) + r/(1 + c²).

On the other hand, for any x ∈ ∂S, where ∂S denotes the boundary of the set S, there exists at least one index i_0 ∈ {1, …, m} such that g_{i_0}(x) = 0, which yields F(x, x*, r) = η(‖x − x*‖) + r. Let M = max_{x_1, x_2 ∈ X} ‖x_2 − x_1‖ > 0 and N_0 = η(‖x_2* − x*‖) − η(M) ≥ 0. If r > 0 is chosen sufficiently large that

r > (1 + c²)/c² · N_0,

then for any x ∈ ∂S we have

F(x_2*, x*, r) − F(x, x*, r) = η(‖x_2* − x*‖) − η(‖x − x*‖) − r c²/(1 + c²) ≤ η(‖x_2* − x*‖) − η(M) − r c²/(1 + c²) = N_0 − r c²/(1 + c²) < 0.

Denote x_r* = arg min_{x ∈ cl S} F(x, x*, r). Then, if r > 0 is sufficiently large that the above bound on r holds, one has F(x_r*, x*, r) = min_{x ∈ cl S} F(x, x*, r) = min_{x ∈ cl S∖∂S} F(x, x*, r) ≤ F(x_2*, x*, r). Note that cl S∖∂S is an open bounded set; thus x_r* ∈ cl S∖∂S and g_i(x_r*) < 0 for i = 1, …, m. Moreover, we can easily show that f(x_r*) < f(x*). Indeed, if this were not true, then F(x_r*, x*, r) = η(‖x_r* − x*‖) + r > F(x_2*, x*, r), which contradicts the inequality above.

Therefore, one has x_r* ∈ S_2. This completes the proof.

4. Solution Algorithm

In the previous section, several properties of the proposed filled function are discussed. Now a solution algorithm based on these properties is described as follows.

Initialization Step

(1) Choose a disturbance constant δ > 0; for example, set δ := 0.1.

(2) Choose an upper bound rU > 0 of r; for example, set rU := 10^8.

(3) Choose a constant r̂ > 1; for example, r̂ := 10.

(4) Choose directions e_k, k = 1, 2, …, k_0, with integer k_0 ≥ 2n, where n is the number of variables.

(5) Set k := 1.

Main Step

(1) Start from an initial point x; minimize the primal problem (P) by a nonsmooth local search procedure and obtain the first local minimizer x_1* of f(x).

(2) Let r := 1.

(3) Construct the filled function

F(x, x_1*, r) = η(‖x − x_1*‖) + r/(1 + [min(0, max(f(x) − f(x_1*), g_i(x), i = 1, …, m))]²).

(4) If k > k_0, then go to (7). Otherwise, set x := x_1* + δe_k as an initial point, minimize the filled function problem by a nonsmooth local search procedure, and obtain a local minimizer, denoted by x_k.

(5) If x_k ∉ X, then set k := k + 1 and go to (4). Otherwise, go to the next step.

(6) If x_k satisfies f(x_k) < f(x_1*), then set x := x_k and k := 1, start from x as a new initial point, minimize the primal problem (P) by a local search procedure, and obtain another local minimizer x_2* of f(x) such that f(x_2*) < f(x_1*); set x_1* := x_2* and go to (2). Otherwise, go to the next step.

(7) Increase r by setting r := r̂r.

(8) If r ≤ rU, then set k := 1 and go to (3). Otherwise, the algorithm is incapable of finding a better local minimizer; it stops, and x_1* is taken as a global minimizer.

The motivation and mechanism behind the algorithm are explained below.

A set of k_0 = 2n initial points is chosen in Step (4) of the Initialization step for minimizing the filled function. We set the initial points symmetric about the current local minimizer. For example, when n = 2, the directions can be chosen as (1, 0), (0, 1), (−1, 0), (0, −1).

In Steps (1) and (6) of the Main step, we minimize the primal problem (P) by nonsmooth constrained local optimization algorithms such as the penalty function method, the bundle method, the quasi-Newton method, and the composite optimal method. In Step (4) of the Main step, we minimize the filled function problem by nonsmooth unconstrained local optimization algorithms such as the cutting-plane method, the Powell method, and the Hooke–Jeeves method. These are all effective methods.

Recall from Theorem 3.3 that the parameter r should be selected large enough; otherwise, F(x, x_1*, r) may have no minimizer in the set S_2. Thus, r is increased successively in Step (7) of the solution process whenever no better solution is found by minimizing the filled function. If all the initial points have been used and r reaches its upper bound rU without a better solution being found, then the current local minimizer is taken as a global one.
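The Main step above can be sketched end to end. The code below is a simplified illustration, not the authors' Fortran 95 implementation: a small pattern search stands in for the nonsmooth local search procedures, a crude penalty keeps the local search of (P) inside S and X, and the one-dimensional test problem (two local minima of a piecewise-linear function) is hypothetical.

```python
import math

def hooke_jeeves(func, x0, step=0.25, tol=1e-6):
    """Tiny derivative-free pattern search, standing in for the nonsmooth
    local search procedures (e.g., Hooke-Jeeves) named in the text."""
    x, fx = list(x0), func(list(x0))
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = func(y)
                if fy < fx - 1e-12:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def filled_function_method(f, gs, bounds, x0, delta=0.1, r_hat=10.0, r_up=1e8):
    """Sketch of the Main step: alternate local minimization of (P) with
    minimization of F(x, x1*, r), escalating r when no better point appears."""
    def penalized(x):  # crude penalty so the local search respects S and X
        p = sum(max(0.0, g(x)) for g in gs)
        p += sum(max(0.0, lo - xi) + max(0.0, xi - hi)
                 for xi, (lo, hi) in zip(x, bounds))
        return f(x) + 1e6 * p
    def in_X(x):
        return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))
    n = len(x0)
    dirs = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    dirs += [[-d for d in e] for e in dirs]              # the 2n directions e_k
    xs, fs = hooke_jeeves(penalized, x0)                 # Step (1)
    while True:
        r, escaped = 1.0, False                          # Step (2)
        while r <= r_up and not escaped:
            def F(x):                                    # Step (3)
                t = min(0.0, max([f(x) - fs] + [g(x) for g in gs]))
                return math.exp(-math.dist(x, xs)) + r / (1.0 + t * t)
            for ek in dirs:                              # Steps (4)-(6)
                y0 = [xi + delta * di for xi, di in zip(xs, ek)]
                y, _ = hooke_jeeves(F, y0)
                if in_X(y) and f(y) < fs:
                    xs, fs = hooke_jeeves(penalized, y)
                    escaped = True
                    break
            else:
                r *= r_hat                               # Step (7)
        if not escaped:
            return xs, fs    # Step (8): r exceeded r_up; accept xs as global

# Hypothetical test: local minima near x = -1 (f = 0.5) and x = 2 (f = 0).
f = lambda x: min(abs(x[0] - 2.0), abs(x[0] + 1.0) + 0.5)
xs, fs = filled_function_method(f, gs=[], bounds=[(-3.0, 3.0)], x0=[-1.2])
print(xs, fs)   # a point near x = 2 with value near 0
```

Starting near the worse minimizer x = −1, the filled function pulls the search across the hill into the basin of x = 2, after which the local search of (P) restarts there, exactly as in Steps (4)–(6).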

The proposed filled function method can also be applied to smooth constrained global optimization.

5. Numerical Experiment

In this section, we perform numerical tests to give an initial indication of the potential of the proposed filled function approach in real-world problems. In our programs, the filled function takes the form

F(x, x*, r) = exp(−‖x − x*‖) + r/(1 + [min(0, max(f(x) − f(x*), g_i(x), i = 1, …, m))]²),

that is, η(t) = exp(−t). The proposed algorithm is programmed in Fortran 95. The composite optimal method is used to find local minimizers of the original constrained problem, and the Hooke–Jeeves method is used to search for local minimizers of the filled function problems.

The main iterative results of Algorithm NFFA applied to four test examples are listed in Tables 1–4. The symbols used in the tables are as follows:

k: the iteration number in finding the kth local minimizer;
r: the parameter value used to find the (k + 1)th local minimizer;
x_k: the kth initial point used to find the kth local minimizer;
x_k*: the kth local minimizer;
f(x_k): the function value at the kth initial point;
f(x_k*): the function value at the kth local minimizer.

Numerical results for Problem 1.

k | r  | x_k               | f(x_k)  | x_k*               | f(x_k*)
1 |    | (-15, -2)         | 6.1184  | (-15.0000, 0.0000) | 5.7164
2 | 1  | (-1.0585, 0.5165) | 2.1433  | (0.0001, -0.2094)  | -0.3690
3 | 10 | (0.0007, -0.0435) | -0.7470 | (0.0000, 0.0000)   | -2.7183

Numerical results for Problem 2.

k | r | x_k                       | f(x_k)  | x_k*                        | f(x_k*)
1 |   | (-1.5, 1.0, -0.75)        | 0.8125  | (-1.9802, -0.0130, -0.0006) | -1.9410
2 | 1 | (1.1931, 0.6332, -1.1931) | -3.9140 | (1.9889, -0.0001, -0.0111)  | -5.9446

Numerical results for Problem 3.

k | r   | x_k                              | f(x_k)   | x_k*                             | f(x_k*)
1 |     | (2, 2, 2, 2)                     | 42.0000  | (0.0000, 1.0000, 0.0000, 2.0000) | -6.0000
2 | 1   | (0.6078, 2.0003, 0.0003, 0.0319) | -22.9117 | (0.9289, 0.8620, 0.2453, 0.0803) | -35.9939
3 | 100 | (0.4012, 0.2524, 0.2288, 0.0000) | -49.4733 | (0.0000, 1.0000, 1.0000, 1.0000) | -65.0000

Numerical results for Problem 4.

k | r    | x_k                      | f(x_k)    | x_k*               | f(x_k*)
1 |      | (2, 2, 2, 2, 2, 2)       | -10.0000  | (5, 1, 5, 6, 5, 4) | -262.0000
2 | 1    | (5, 1, 5, 1.7581, 5, 4)  | -263.0269 | (5, 1, 5, 0, 5, 4) | -274.0000
3 | 1000 | (5, 1, 5, 1.7579, 5, 10) | -299.0286 | (5, 1, 5, 0, 5, 10)| -310.0000
Problem 1.

We have

min f(x) = −20 exp(−0.2 √((|x_1| + |x_2|)/2)) − exp((cos(2πx_1) + cos(2πx_2))/2) + 20
s.t.  x_1² + x_2² ≤ 300,
      2x_1 + x_2 ≤ 4,
      −30 ≤ x_i ≤ 30, i = 1, 2.

Algorithm NFFA succeeds in finding a global minimizer x*=(0,0)T with f(x*)=-2.7183. The numerical results are listed in Table 1.
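With the objective read as above (the exact formula is an assumption, recovered so that it matches the reported function values), a direct evaluation reproduces Table 1. A minimal check:

```python
import math

def f1(x1, x2):
    # Problem 1 objective: a nonsmooth Ackley-type function
    return (-20.0 * math.exp(-0.2 * math.sqrt((abs(x1) + abs(x2)) / 2.0))
            - math.exp((math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)) / 2.0)
            + 20.0)

print(round(f1(0.0, 0.0), 4))   # -2.7183, i.e., -e: the reported global minimum
print(f1(-15.0, -2.0))          # ~ 6.1184, the starting value in Table 1
print(f1(-15.0, 0.0))           # ~ 5.7164, the first local minimum in Table 1
```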

Problem 2.

We have

min f(x) = −x_1² + x_2² + x_3² − x_1
s.t.  x_1² + x_2² + x_3² − 4 ≤ 0,
      min{x_2 − x_3, x_3} ≤ 0.

Algorithm NFFA successfully finds an approximate global solution x*=(1.9889,-0.0001,-0.0111)T with f(x*)=-5.9446. Table 2 records the numerical results of Problem 2.
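Evaluating the Problem 2 objective and constraints at the points of Table 2 confirms the reported values and the feasibility of the approximate solution:

```python
def f2(x1, x2, x3):
    # Problem 2 objective
    return -x1 * x1 + x2 * x2 + x3 * x3 - x1

g = [lambda x1, x2, x3: x1**2 + x2**2 + x3**2 - 4.0,   # g_1(x) <= 0
     lambda x1, x2, x3: min(x2 - x3, x3)]              # g_2(x) <= 0

print(f2(-1.5, 1.0, -0.75))  # 0.8125, the starting value in Table 2
x = (1.9889, -0.0001, -0.0111)
print(f2(*x))                          # ~ -5.9445, matching Table 2 to 4 decimals
print(all(gi(*x) <= 0.0 for gi in g))  # True: the reported solution is feasible
```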

Problem 3.

We have

min f(x) = max{f_1(x), f_2(x), f_3(x)}
s.t.  x_1² − x_2 − x_4² ≤ 0,
      0 ≤ x_i ≤ 3, i = 1, …, 4,

where f_i(x) = f_0(x) + 10·g_i(x), i = 1, 2, 3, and

f_0(x) = x_1² + x_2² + 2x_3² + x_4² − 5x_1 − 5x_2 − 21x_3 + 7x_4,
g_1(x) = x_1² + x_2² + x_3² + x_4² + x_1 − x_2 + x_3 − x_4 − 8,
g_2(x) = x_1² + 2x_2² + x_3² + 2x_4² − x_1 − x_4 − 10,
g_3(x) = x_1² + x_2² + x_3² + 2x_1 − x_2 − x_4 − 5.

Algorithm NFFA successfully finds a global solution x*=(0,1,1,1)T with f(x*)=-65. The computational results are listed in Table 3.
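A direct evaluation of the minimax objective (with the constraint functions taken in the standard Rosen–Suzuki form, an assumption in this transcription) reproduces both minimum values of Table 3 exactly:

```python
def f3(x):
    # Problem 3: a minimax (hence nonsmooth) form of the Rosen-Suzuki problem
    x1, x2, x3, x4 = x
    f0 = x1**2 + x2**2 + 2*x3**2 + x4**2 - 5*x1 - 5*x2 - 21*x3 + 7*x4
    g1 = x1**2 + x2**2 + x3**2 + x4**2 + x1 - x2 + x3 - x4 - 8
    g2 = x1**2 + 2*x2**2 + x3**2 + 2*x4**2 - x1 - x4 - 10
    g3 = x1**2 + x2**2 + x3**2 + 2*x1 - x2 - x4 - 5
    return max(f0 + 10 * gi for gi in (g1, g2, g3))

print(f3((0, 1, 1, 1)))  # -65, the reported global minimum
print(f3((0, 1, 0, 2)))  # -6, the first local minimum in Table 3
```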

Problem 4.

We have

min f(x) = −25(x_1 − 2)² − (x_2 − 2)² − (x_3 − 1)² − (x_4 − 4)² − (x_5 − 1)² − (x_6 − 4)²
s.t.  (x_3 − 3)² + x_4 ≥ 4,
      (x_5 − 3)² + x_6 ≥ 4,
      x_1 − 3x_2 ≤ 2,
      −x_1 + x_2 ≤ 2,
      2 ≤ x_1 + x_2 ≤ 6,
      0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 8, 1 ≤ x_3 ≤ 5, 0 ≤ x_4 ≤ 6, 1 ≤ x_5 ≤ 5, 0 ≤ x_6 ≤ 10.

The proposed algorithm successfully finds a global solution x*=(5,1,5,0,5,10)T with f(x*)=-310. The main iterative results are listed in Table 4.
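The Problem 4 objective, as written above, agrees exactly with every value in Table 4; a quick check:

```python
def f4(x):
    # Problem 4 objective
    x1, x2, x3, x4, x5, x6 = x
    return (-25 * (x1 - 2)**2 - (x2 - 2)**2 - (x3 - 1)**2
            - (x4 - 4)**2 - (x5 - 1)**2 - (x6 - 4)**2)

# Each pair below is a (point, value) taken from a row of Table 4.
for point, value in [((2, 2, 2, 2, 2, 2), -10),
                     ((5, 1, 5, 6, 5, 4), -262),
                     ((5, 1, 5, 0, 5, 10), -310)]:
    assert f4(point) == value
```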

6. Conclusions

In this paper, we extend the concept of the filled function for unconstrained global optimization to nonsmooth constrained global optimization. Firstly, we give the definition of the filled function for constrained optimization and construct a new filled function with one parameter. Then, we design a solution algorithm based on this filled function. Finally, we perform some numerical experiments. The preliminary numerical results show that the new algorithm is promising.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant nos. 10971053 and 11001248.

References

[1] R. P. Ge, "A filled function method for finding a global minimizer of a function of several variables," Mathematical Programming, vol. 46, no. 2, pp. 191–204, 1990.
[2] R. P. Ge and Y. F. Qin, "A class of filled functions for finding global minimizers of a function of several variables," Journal of Optimization Theory and Applications, vol. 54, no. 2, pp. 241–252, 1987.
[3] R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization, vol. 3 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
[4] Z. Xu, H.-X. Huang, P. M. Pardalos, and C.-X. Xu, "Filled functions for unconstrained global optimization," Journal of Global Optimization, vol. 20, no. 1, pp. 49–65, 2001.
[5] X. Liu and W. Xu, "A new filled function applied to global optimization," Computers & Operations Research, vol. 31, no. 1, pp. 61–80, 2004.
[6] W. Wang, Y. Shang, and L. Zhang, "A filled function method with one parameter for box constrained global optimization," Applied Mathematics and Computation, vol. 194, no. 1, pp. 54–66, 2007.
[7] L.-S. Zhang, C.-K. Ng, D. Li, and W.-W. Tian, "A new filled function method for global optimization," Journal of Global Optimization, vol. 28, no. 1, pp. 17–43, 2004.
[8] W. Wang and Y. Xu, "Simple transformation functions for finding better minima," Applied Mathematics Letters, vol. 21, no. 5, pp. 502–509, 2008.
[9] Y.-L. Shang, D.-G. Pu, and A.-P. Jiang, "Finding global minimizer with one-parameter filled function on unconstrained global optimization," Applied Mathematics and Computation, vol. 191, no. 1, pp. 176–182, 2007.
[10] W. Wang, Y. Shang, and Y. Zhang, "Finding global minima with a filled function approach for non-smooth global optimization," Discrete Dynamics in Nature and Society, vol. 2010, Article ID 843609, 10 pages, 2010.
[11] A. Törn and A. Žilinskas, Global Optimization, vol. 350 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 1989.
[12] F. H. Clarke, Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons, New York, NY, USA, 1983.