Abstract and Applied Analysis, vol. 2014, Article ID 408918. Hindawi Publishing Corporation. ISSN 1085-3375 (print), 1687-0409 (online). doi:10.1155/2014/408918.

Research Article

Global Optimization for the Sum of Certain Nonlinear Functions

Mio Horai¹ (http://orcid.org/0000-0002-9799-2151), Hideo Kobayashi¹, and Takashi G. Nitta²

¹Faculty of Engineering, Graduate School of Engineering, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan
²Department of Education, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan

Academic Editor: Julio D. Rossi

Received 13 April 2014; Revised 19 August 2014; Accepted 20 August 2014; Published 10 November 2014

Copyright © 2014 Mio Horai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We extend the work of Pei-Ping and Gui-Xia (2007) to a global optimization problem for more general functions. Pei-Ping and Gui-Xia treat the optimization problem for a linear sum of polynomial fractional functions, using a branch and bound approach. We prove that this extension makes it possible to solve, by a branch and bound algorithm, nonconvex optimization problems that the method of Pei-Ping and Gui-Xia (2007) cannot handle: problems in which the objective is a sum of functions with positive (or negative) first and second derivatives, each applied to a ratio of generalized polynomials.

1. Introduction

The optimization problem is widely used in the sciences, especially in engineering and economics. In 2007, Pei-Ping and Gui-Xia considered the following global optimization problem:
\[
(\mathrm{P})\quad \min\ \omega(x)=\sum_{j=1}^{P}c_j\,\frac{b_j(x)}{a_j(x)}\quad\text{s.t.}\quad g_k(x)\le 0\ (k=1,\dots,M),\quad x\in X,
\]
where \(X:=\{x\in\mathbb{R}^N \mid 0<\underline{x}_i\le x_i\le\overline{x}_i<\infty\ (i=1,2,\dots,N)\}\) and \(a_j(x)\), \(b_j(x)\), \(g_k(x)\) are given generalized polynomials:
\[
(1)\quad a_j(x)=\sum_{t=1}^{T_j^a}\beta_{jt}^{a}\prod_{i=1}^{N}x_i^{\gamma_{jti}^{a}},\qquad
b_j(x)=\sum_{t=1}^{T_j^b}\beta_{jt}^{b}\prod_{i=1}^{N}x_i^{\gamma_{jti}^{b}},\qquad
g_k(x):=\sum_{t=1}^{T_k^g}\beta_{kt}^{g}\prod_{i=1}^{N}x_i^{\gamma_{kti}^{g}}.
\]

Sum of ratios problems like (P) attract a lot of attention because they arise in various economic problems.

Pei-Ping and Gui-Xia proposed a method to solve these problems globally by a branch and bound algorithm. In the above problem, the objective function and the constraint functions are sums of generalized polynomial fractional functions. We extend these functions to more general ones as follows:
\[
(\mathrm{P_0})\quad \min\ w(x)=\sum_{j=1}^{P}h_j\!\left(\frac{b_j(x)}{a_j(x)}\right)
\quad\text{s.t.}\quad g_k(x)=\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{d_{k\acute{j}}(x)}{c_{k\acute{j}}(x)}\right)\le 0
\quad(\acute{j}=1,\dots,P_k,\ k=1,\dots,M),\quad x\in X,
\]
where \(a_j(x)>0\), \(b_j(x)>0\), \(c_{k\acute{j}}(x)>0\), \(d_{k\acute{j}}(x)>0\) for all \(x\in X\); that is,
\[
(2)\quad a_j(x)=\sum_{t=1}^{T_j^a}\beta_{jt}^{a}\prod_{i=1}^{N}x_i^{\gamma_{jti}^{a}},\qquad
b_j(x):=\sum_{t=1}^{T_j^b}\beta_{jt}^{b}\prod_{i=1}^{N}x_i^{\gamma_{jti}^{b}},\qquad
c_{k\acute{j}}(x)=\sum_{t=1}^{T_{k\acute{j}}^c}\beta_{k\acute{j}t}^{c}\prod_{i=1}^{N}x_i^{\gamma_{k\acute{j}ti}^{c}},\qquad
d_{k\acute{j}}(x)=\sum_{t=1}^{T_{k\acute{j}}^d}\beta_{k\acute{j}t}^{d}\prod_{i=1}^{N}x_i^{\gamma_{k\acute{j}ti}^{d}}
\]
\((j=1,2,\dots,P,\ \acute{j}=1,2,\dots,P_k,\ k=1,2,\dots,M)\), where \(T_j^a\), \(T_j^b\), \(T_{k\acute{j}}^c\), \(T_{k\acute{j}}^d\) are natural numbers, \(\beta_{jt}^{a}\), \(\beta_{jt}^{b}\), \(\beta_{k\acute{j}t}^{c}\), \(\beta_{k\acute{j}t}^{d}\) are nonzero real constants, and \(\gamma_{jti}^{a}\), \(\gamma_{jti}^{b}\), \(\gamma_{k\acute{j}ti}^{c}\), \(\gamma_{k\acute{j}ti}^{d}\) are real constants.
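As a concrete illustration (the data below are hypothetical, not from the paper), a generalized polynomial of the form (2) can be evaluated directly from its coefficient array beta and exponent array gamma; a minimal Python sketch:

```python
import math

def generalized_polynomial(beta, gamma):
    """Build x -> sum_t beta[t] * prod_i x[i]**gamma[t][i],
    the generalized-polynomial form of a_j, b_j, c_kj', d_kj'."""
    def value(x):
        return sum(b * math.prod(xi ** g for xi, g in zip(x, gs))
                   for b, gs in zip(beta, gamma))
    return value

# Illustrative data (not from the paper): a(x) = 2*x1^0.5*x2 - x1*x2^-1
a = generalized_polynomial([2.0, -1.0], [[0.5, 1.0], [1.0, -1.0]])
```

At x = (4, 2) this gives 2·2·2 − 4·(1/2) = 6. Note that positivity of a_j, b_j, c, d on X is an assumption of (P0), not a consequence of the form.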

We assume that the functions \(h_j(y_j),h_{k\acute{j}}(y_{k\acute{j}}):\mathbb{R}\to\mathbb{R}\) are twice differentiable and monotone (increasing or decreasing). We separate them into monotone increasing and monotone decreasing ones as follows:
\[
(3)\quad h_j'>0\ (j=1,\dots,K),\qquad h_j'<0\ (j=K+1,\dots,P),\qquad
h_{k\acute{j}}'>0\ (\acute{j}=1,\dots,K_k),\qquad h_{k\acute{j}}'<0\ (\acute{j}=K_k+1,\dots,P_k).
\]

Furthermore, we assume the following condition on the second derivatives: each composition with the exponential is either strictly convex or strictly concave, that is,
\[
(4)\quad \bigl(h_j(\exp z_j)\bigr)''>0\ \text{or}\ \bigl(h_j(\exp z_j)\bigr)''<0,\qquad
\bigl(h_{k\acute{j}}(\exp z_{k\acute{j}})\bigr)''>0\ \text{or}\ \bigl(h_{k\acute{j}}(\exp z_{k\acute{j}})\bigr)''<0
\]
\((j=1,\dots,P,\ \acute{j}=1,\dots,P_k,\ k=1,\dots,M)\). To solve the above problem (P0), we transform it into the equivalent problems (P1) and (P2), and then transform (P2) into a linear relaxation problem. We prove the equivalence of the problems under the above assumptions, and we solve the equivalent problem by a branch and bound algorithm corresponding to that of Pei-Ping and Gui-Xia.

For example, with this extension of the approach, we can solve the following global optimization problem:
\[
(5)\quad \min\ \sin\!\left(\frac{x_1^2+3x_2-2x_2^2+1}{x_1^2+x_2+2}\right)+\cos\!\left(\frac{-x_2^2+2x_1+2x_2}{x_1+2.5}\right)
\quad\text{s.t.}\quad x_1^2-x_1x_2-1\le 0,\qquad \frac{x_1+3x_2}{x_1}-5\le 0,
\]
\(X=\{x: 1\le x_1\le 3,\ 1\le x_2\le 3\}\).

In this paper, Section 2 explains how to construct the equivalent linear relaxation problem from the original problem. In Section 3, we present the branch and bound algorithm and its convergence. In Section 4, we present numerical experimental results.

2. Equivalence Transformation and Linear Relaxation

In this section we first transform the problem (P0) into the equivalent problem (P1), and secondly transform (P1) into (P2). Thirdly, we linearize the problem (P2).

2.1. Translation of the Problem (P0) into (P1)

For the problem (P0), we introduce new variables \(m_j\), \(l_j\), \(t_{k\acute{j}}\), \(s_{k\acute{j}}\), and the functions \(\rho(l,m)\) and \(\xi_k(s,t)\) depending on the \(h_j\), \(h_{k\acute{j}}\) of the original problem (P0):
\[
(6)\quad \rho(l,m)=\sum_{j=1}^{P}h_j\!\left(\frac{m_j}{l_j}\right)\ (j=1,2,\dots,P),\qquad
\xi_k(s,t)=\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}}{s_{k\acute{j}}}\right)\ (\acute{j}=1,\dots,P_k,\ k=1,\dots,M).
\]
Since \(a_j(x)\), \(b_j(x)\), \(c_{k\acute{j}}(x)\), \(d_{k\acute{j}}(x)\) are polynomials on the closed box \(X\), it is easy to calculate their minimums and maximums on \(X\); we denote them by \(\underline{a}_j,\overline{a}_j\), \(\underline{b}_j,\overline{b}_j\), \(\underline{c}_{k\acute{j}},\overline{c}_{k\acute{j}}\), \(\underline{d}_{k\acute{j}},\overline{d}_{k\acute{j}}\).
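The claim that these extrema are computable can be made concrete termwise: on a positive box every monomial is monotone in each coordinate, so its range is attained at corners, and summing the term ranges gives valid (though not always tight, for multi-term polynomials) bounds. A Python sketch under that assumption:

```python
import math

def monomial_range(gamma, lo, hi):
    """Exact range of prod_i x_i**gamma_i over the positive box [lo, hi]:
    the monomial is monotone in each x_i, so its extremes sit at corners."""
    mn = math.prod((l if g > 0 else h) ** g for g, l, h in zip(gamma, lo, hi))
    mx = math.prod((h if g > 0 else l) ** g for g, l, h in zip(gamma, lo, hi))
    return mn, mx

def poly_bounds(beta, gamma, lo, hi):
    """Valid lower/upper bounds for sum_t beta[t]*monomial_t on the box."""
    lb = ub = 0.0
    for b, gs in zip(beta, gamma):
        mn, mx = monomial_range(gs, lo, hi)
        lb += b * (mn if b > 0 else mx)
        ub += b * (mx if b > 0 else mn)
    return lb, ub
```

For a(x) = x1 + x2 on [1,2]×[1,2] this returns (2.0, 4.0), which here happens to be exact.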

Let \(H\) be the closed box
\[
(7)\quad H=\bigl\{(l,m,s,t)\in\mathbb{R}^{P_{\mathrm{sum}}} \mid \underline{a}_j\le l_j\le\overline{a}_j,\ \underline{b}_j\le m_j\le\overline{b}_j,\ \underline{c}_{k\acute{j}}\le s_{k\acute{j}}\le\overline{c}_{k\acute{j}},\ \underline{d}_{k\acute{j}}\le t_{k\acute{j}}\le\overline{d}_{k\acute{j}}\bigr\},
\]
where \(P_{\mathrm{sum}}=2P+\sum_{k=1}^{M}P_k\).

Let \(Z_H\) be the following closed domain in \(X\times H\):
\[
(8)\quad Z_H=\Bigl\{(x,l,m,s,t)\in X\times H \ \Bigm|\ 
l_j-a_j(x)\le 0,\ b_j(x)-m_j\le 0\ (j=1,\dots,K);\ \ 
a_j(x)-l_j\le 0,\ m_j-b_j(x)\le 0\ (j=K+1,\dots,P);\ \ 
s_{k\acute{j}}-c_{k\acute{j}}(x)\le 0,\ d_{k\acute{j}}(x)-t_{k\acute{j}}\le 0\ (\acute{j}=1,\dots,K_k,\ k=1,\dots,M);\ \ 
c_{k\acute{j}}(x)-s_{k\acute{j}}\le 0,\ t_{k\acute{j}}-d_{k\acute{j}}(x)\le 0\ (\acute{j}=K_k+1,\dots,P_k,\ k=1,\dots,M)\Bigr\}.
\]
We pose the following problem (P1) on \(Z_H\):
\[
(\mathrm{P_1})\quad \min\ \rho(l,m)\quad\text{s.t.}\quad \xi_k(s,t)\le 0\ (k=1,\dots,M),\qquad (x,l,m,s,t)\in Z_H.
\]
Theorem 1 below proves the equivalence of (P0) and (P1).

Theorem 1.

The problem (P0) on X is equivalent to the problem (P1) on ZH.

Proof.

Let \(x^*\) be an optimal solution of (P0); we define
\[
(9)\quad l_j^*:=a_j(x^*),\quad m_j^*:=b_j(x^*),\quad s_{k\acute{j}}^*:=c_{k\acute{j}}(x^*),\quad t_{k\acute{j}}^*:=d_{k\acute{j}}(x^*),
\]
and then
\[
(10)\quad \sum_{j=1}^{P}h_j\!\left(\frac{b_j(x^*)}{a_j(x^*)}\right)=\sum_{j=1}^{P}h_j\!\left(\frac{m_j^*}{l_j^*}\right),\qquad
\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{d_{k\acute{j}}(x^*)}{c_{k\acute{j}}(x^*)}\right)=\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}^*}{s_{k\acute{j}}^*}\right).
\]

Furthermore, let \((x,l,m,s,t)\) be an optimal solution of (P1). Then the constraints defining \(Z_H\) give the following:

for \(j=1,\dots,K\): \(0<l_j\le a_j(x)\) and \(0<b_j(x)\le m_j\); that is, \(0<b_j(x)/a_j(x)\le m_j/l_j\);

for \(j=K+1,\dots,P\): \(0<a_j(x)\le l_j\) and \(0<m_j\le b_j(x)\); that is, \(0<m_j/l_j\le b_j(x)/a_j(x)\);

for \(\acute{j}=1,\dots,K_k\) and \(k=1,\dots,M\): \(0<s_{k\acute{j}}\le c_{k\acute{j}}(x)\) and \(0<d_{k\acute{j}}(x)\le t_{k\acute{j}}\); that is, \(0<d_{k\acute{j}}(x)/c_{k\acute{j}}(x)\le t_{k\acute{j}}/s_{k\acute{j}}\);

for \(\acute{j}=K_k+1,\dots,P_k\) and \(k=1,\dots,M\): \(0<c_{k\acute{j}}(x)\le s_{k\acute{j}}\) and \(0<t_{k\acute{j}}\le d_{k\acute{j}}(x)\); that is, \(0<t_{k\acute{j}}/s_{k\acute{j}}\le d_{k\acute{j}}(x)/c_{k\acute{j}}(x)\).

The conditions \(h_j'>0\ (j=1,\dots,K)\) or \(h_j'<0\ (j=K+1,\dots,P)\), and \(h_{k\acute{j}}'>0\ (\acute{j}=1,\dots,K_k)\) or \(h_{k\acute{j}}'<0\ (\acute{j}=K_k+1,\dots,P_k)\), lead to
\[
(11)\quad h_j\!\left(\frac{b_j(x)}{a_j(x)}\right)\le h_j\!\left(\frac{m_j}{l_j}\right)\ (j=1,\dots,P),\qquad
h_{k\acute{j}}\!\left(\frac{d_{k\acute{j}}(x)}{c_{k\acute{j}}(x)}\right)\le h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}}{s_{k\acute{j}}}\right)\ (\acute{j}=1,\dots,P_k,\ k=1,\dots,M).
\]
Therefore we obtain
\[
(12)\quad \sum_{j=1}^{P}h_j\!\left(\frac{b_j(x)}{a_j(x)}\right)\le\sum_{j=1}^{P}h_j\!\left(\frac{m_j}{l_j}\right),\qquad
\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{d_{k\acute{j}}(x)}{c_{k\acute{j}}(x)}\right)\le\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}}{s_{k\acute{j}}}\right).
\]
Now,
\[
(13)\quad \sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{d_{k\acute{j}}(x)}{c_{k\acute{j}}(x)}\right)\le\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}}{s_{k\acute{j}}}\right)\le 0;
\]
that is, \(x\) satisfies the constraints of (P0).

Since \(x^*\) is an optimal solution of (P0) and \(x\) is feasible for (P0), we obtain
\[
(14)\quad \sum_{j=1}^{P}h_j\!\left(\frac{b_j(x^*)}{a_j(x^*)}\right)\le\sum_{j=1}^{P}h_j\!\left(\frac{b_j(x)}{a_j(x)}\right),
\]
so
\[
(15)\quad \sum_{j=1}^{P}h_j\!\left(\frac{m_j^*}{l_j^*}\right)=\sum_{j=1}^{P}h_j\!\left(\frac{b_j(x^*)}{a_j(x^*)}\right)\le\sum_{j=1}^{P}h_j\!\left(\frac{b_j(x)}{a_j(x)}\right)\le\sum_{j=1}^{P}h_j\!\left(\frac{m_j}{l_j}\right).
\]
For the optimal solution \(x^*\) of (P0), we again set
\[
(16)\quad l_j^*:=a_j(x^*),\quad m_j^*:=b_j(x^*),\quad s_{k\acute{j}}^*:=c_{k\acute{j}}(x^*),\quad t_{k\acute{j}}^*:=d_{k\acute{j}}(x^*);
\]
then
\[
(17)\quad \rho(l^*,m^*)=w(x^*),\qquad \xi_k(s^*,t^*)=g_k(x^*).
\]
The element \((x^*,l^*,m^*,s^*,t^*)\) satisfies the conditions defining \(Z_H\). Since \((x,l,m,s,t)\) is an optimal solution of (P1), it satisfies \(\sum_{j=1}^{P}h_j(m_j/l_j)\le\sum_{j=1}^{P}h_j(m_j^*/l_j^*)\).

Hence \(\sum_{j=1}^{P}h_j(m_j/l_j)=\sum_{j=1}^{P}h_j(m_j^*/l_j^*)\); that is, the two problems are equivalent.

2.2. Translation of the Problem (P1) into (P2)

We change the variables by the logarithm. Since \(x_i\), \(l_j\), \(m_j\), \(s_{k\acute{j}}\), \(t_{k\acute{j}}\) are positive, we can write each of them as \(\exp(y_n)\) using new variables \(y_n\ (n=1,\dots,N+P_{\mathrm{sum}})\); that is, \(y_i:=\ln x_i\), \(y_{N+j}:=\ln l_j\), \(y_{N+P+j}:=\ln m_j\), \(y_{N+2P+(k-1)P_k+\acute{j}}:=\ln s_{k\acute{j}}\), and \(y_{N+2P+(M+k-1)P_k+\acute{j}}:=\ln t_{k\acute{j}}\).
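The point of the substitution \(x_i=\exp(y_i)\) is that every monomial becomes the exponential of a linear form in \(y\), which is what the later linearization exploits. A small numerical check (illustrative values, not from the paper):

```python
import math

def monomial_x(x, beta, gamma):
    """beta * prod_i x_i**gamma_i in the original variables."""
    return beta * math.prod(xi ** g for xi, g in zip(x, gamma))

def monomial_y(y, beta, gamma):
    """The same monomial after x_i = exp(y_i): beta * exp(sum_i gamma_i*y_i)."""
    return beta * math.exp(sum(g * yi for g, yi in zip(gamma, y)))

x = (2.0, 3.0)
y = [math.log(xi) for xi in x]
```

Both forms agree up to floating-point rounding for any positive x.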

The closed domain \(Z_H\) corresponds to the following box \(S^0\):
\[
(18)\quad S^0=\bigl\{y\in\mathbb{R}^{N+P_{\mathrm{sum}}} \mid \ln\underline{x}_i\le y_i\le\ln\overline{x}_i,\ \ln\underline{a}_j\le y_{N+j}\le\ln\overline{a}_j,\ \ln\underline{b}_j\le y_{N+P+j}\le\ln\overline{b}_j,\ \ln\underline{c}_{k\acute{j}}\le y_{N+2P+(k-1)P_k+\acute{j}}\le\ln\overline{c}_{k\acute{j}},\ \ln\underline{d}_{k\acute{j}}\le y_{N+2P+(M+k-1)P_k+\acute{j}}\le\ln\overline{d}_{k\acute{j}}\bigr\}.
\]
Under this change of variables, the objective and constraint functions of (P1) become
\[
(19)\quad \sum_{j=1}^{P}h_j\!\left(\frac{m_j}{l_j}\right)=\sum_{j=1}^{P}h_j\bigl(\exp(y_{N+P+j}-y_{N+j})\bigr),\qquad
\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\!\left(\frac{t_{k\acute{j}}}{s_{k\acute{j}}}\right)=\sum_{\acute{j}=1}^{P_k}h_{k\acute{j}}\bigl(\exp(y_{N+2P+(M+k-1)P_k+\acute{j}}-y_{N+2P+(k-1)P_k+\acute{j}})\bigr),
\]
\[
l_j-a_j(x)=\exp(y_{N+j})-\sum_{t=1}^{T_j^a}\beta_{jt}^{a}\exp\Bigl(\sum_{i=1}^{N}\gamma_{jti}^{a}y_i\Bigr),\qquad
b_j(x)-m_j=\sum_{t=1}^{T_j^b}\beta_{jt}^{b}\exp\Bigl(\sum_{i=1}^{N}\gamma_{jti}^{b}y_i\Bigr)-\exp(y_{N+P+j}),
\]
\[
s_{k\acute{j}}-c_{k\acute{j}}(x)=\exp(y_{N+2P+(k-1)P_k+\acute{j}})-\sum_{t=1}^{T_{k\acute{j}}^c}\beta_{k\acute{j}t}^{c}\exp\Bigl(\sum_{i=1}^{N}\gamma_{k\acute{j}ti}^{c}y_i\Bigr),\qquad
d_{k\acute{j}}(x)-t_{k\acute{j}}=\sum_{t=1}^{T_{k\acute{j}}^d}\beta_{k\acute{j}t}^{d}\exp\Bigl(\sum_{i=1}^{N}\gamma_{k\acute{j}ti}^{d}y_i\Bigr)-\exp(y_{N+2P+(M+k-1)P_k+\acute{j}}).
\]

Now \(\rho(l,m)\), \(\xi_k(s,t)\), \(l_j-a_j(x)\), \(b_j(x)-m_j\), \(s_{k\acute{j}}-c_{k\acute{j}}(x)\), and \(d_{k\acute{j}}(x)-t_{k\acute{j}}\) are all represented in the form
\[
(20)\quad \sum_{t=1}^{T_m}\Psi_{mt}\Bigl(\exp\Bigl(\sum_{i=1}^{N+P_{\mathrm{sum}}}\lambda_{mti}y_i\Bigr)\Bigr),
\]
where each \(\lambda_{mti}\) is a real number and each \(\Psi_{mt}\) satisfies \(\Psi_{mt}'>0\) or \(\Psi_{mt}'<0\), and \((\Psi_{mt}(\exp y))''>0\) or \((\Psi_{mt}(\exp y))''<0\).

Let \(f_{mt}(y):=\Psi_{mt}(\exp y)\) and \(\mu_m(y):=\sum_{t=1}^{T_m}f_{mt}\bigl(\sum_{i=1}^{N+P_{\mathrm{sum}}}\lambda_{mti}y_i\bigr)\).

Then the objective function \(\rho(l,m)\) and the constraint functions are transformed into \(\mu_0(y)\) and \(\mu_m(y)\ \bigl(m=1,\dots,M+2P+\sum_{k=1}^{M}P_k\bigr)\).

Now we put
\[
(21)\quad S_{\mu_0}=\Bigl\{y\in S^0 \ \Bigm|\ \mu_m(y)\le 0\ \bigl(m=1,2,\dots,M+2P+\textstyle\sum_{k=1}^{M}P_k\bigr)\Bigr\}.
\]
Then the problem (P1) is transformed naturally into the following problem (P2):
\[
(\mathrm{P_2})\quad \min\ \mu_0(y)\quad\text{s.t.}\quad y\in S_{\mu_0}.
\]

2.3. Linearization of the Problem (P2)

The objective and constraint functions of (P2) are nonlinear. On \(S_{\mu_0}\) we bound each \(\mu_m(y)\) from below by a linear function, which transforms (P2) into a linear optimization problem whose optimal value is a lower bound for the optimal value of (P2). We denote by \(\underline{y}_i,\overline{y}_i\), \(\underline{y}_{N+j},\overline{y}_{N+j}\), \(\underline{y}_{N+P+j},\overline{y}_{N+P+j}\), \(\underline{y}_{N+2P+(k-1)P_k+\acute{j}},\overline{y}_{N+2P+(k-1)P_k+\acute{j}}\), \(\underline{y}_{N+2P+(M+k-1)P_k+\acute{j}},\overline{y}_{N+2P+(M+k-1)P_k+\acute{j}}\) the bounds \(\ln\underline{x}_i,\ln\overline{x}_i\), \(\ln\underline{a}_j,\ln\overline{a}_j\), \(\ln\underline{b}_j,\ln\overline{b}_j\), \(\ln\underline{c}_{k\acute{j}},\ln\overline{c}_{k\acute{j}}\), \(\ln\underline{d}_{k\acute{j}},\ln\overline{d}_{k\acute{j}}\) \((j=1,\dots,P,\ \acute{j}=1,\dots,P_k,\ k=1,\dots,M)\).

For a subbox \(S^q\subseteq S_{\mu_0}\) we write
\[
(22)\quad S^q=\bigl\{y\in\mathbb{R}^{N+P_{\mathrm{sum}}} \mid \underline{y}_i\le\underline{y}_i^{q}\le y_i\le\overline{y}_i^{q}\le\overline{y}_i\ (i=1,\dots,N+P_{\mathrm{sum}})\bigr\},
\]
\[
Y_{mt}(S^q)=\sum_{i=1}^{N+P_{\mathrm{sum}}}\lambda_{mti}y_i,\qquad
\underline{Y}_{mt}(S^q)=\sum_{i=1}^{N+P_{\mathrm{sum}}}\min\bigl(\lambda_{mti}\underline{y}_i^{q},\lambda_{mti}\overline{y}_i^{q}\bigr),\qquad
\overline{Y}_{mt}(S^q)=\sum_{i=1}^{N+P_{\mathrm{sum}}}\max\bigl(\lambda_{mti}\underline{y}_i^{q},\lambda_{mti}\overline{y}_i^{q}\bigr)
\]
\(\bigl(m=0,1,2,\dots,M+2P+\sum_{k=1}^{M}P_k,\ t=1,\dots,T_m\bigr)\).
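The interval \([\underline{Y}_{mt}(S^q),\overline{Y}_{mt}(S^q)]\) in (22) is just the range of the linear form \(\sum_i\lambda_{mti}y_i\) over the box, computed coordinatewise; a minimal sketch:

```python
def linear_form_range(lam, y_lo, y_hi):
    """Range of Y = sum_i lam[i]*y_i over the box [y_lo, y_hi], as in (22):
    each coordinate contributes min/max of lam_i*y_lo_i and lam_i*y_hi_i."""
    Y_lo = sum(min(l * a, l * b) for l, a, b in zip(lam, y_lo, y_hi))
    Y_hi = sum(max(l * a, l * b) for l, a, b in zip(lam, y_lo, y_hi))
    return Y_lo, Y_hi
```

For example, with lam = (2, −1) on the unit box [0,1]² the range is [−1, 2].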

Now \(f_{mt}'(y)>0\) or \(f_{mt}'(y)<0\), and \(f_{mt}''(y)>0\) or \(f_{mt}''(y)<0\), so \(f_{mt}(Y_{mt}(S^q))\) is monotone and either convex or concave on \([\underline{Y}_{mt}(S^q),\overline{Y}_{mt}(S^q)]\). Hence there exist upper and lower bounding linear functions \(F_{mt}^{S^q}(Y_{mt}(S^q))\) and \(G_{mt}^{S^q}(Y_{mt}(S^q))\) of \(f_{mt}(Y_{mt}(S^q))\).

We define the secant function
\[
(23)\quad F_{mt}^{S^q}(Y_{mt}(S^q)):=\frac{f_{mt}(\overline{Y}_{mt}(S^q))-f_{mt}(\underline{Y}_{mt}(S^q))}{\overline{Y}_{mt}(S^q)-\underline{Y}_{mt}(S^q)}\bigl(Y_{mt}(S^q)-\underline{Y}_{mt}(S^q)\bigr)+f_{mt}(\underline{Y}_{mt}(S^q)).
\]
As \(f_{mt}\) is continuous on \([\underline{Y}_{mt}(S^q),\overline{Y}_{mt}(S^q)]\) and differentiable on \((\underline{Y}_{mt}(S^q),\overline{Y}_{mt}(S^q))\), by the mean value theorem there exists \(c_{mt}^{S^q}\in(\underline{Y}_{mt}(S^q),\overline{Y}_{mt}(S^q))\) such that
\[
(24)\quad f_{mt}'(c_{mt}^{S^q})=\frac{f_{mt}(\overline{Y}_{mt}(S^q))-f_{mt}(\underline{Y}_{mt}(S^q))}{\overline{Y}_{mt}(S^q)-\underline{Y}_{mt}(S^q)}.
\]

Since \(f_{mt}''(y)>0\) or \(f_{mt}''(y)<0\), the derivative \(f_{mt}'\) is strictly monotone, so it has an inverse function \((f_{mt}')^{-1}\) and \(c_{mt}^{S^q}\) is uniquely determined:
\[
(25)\quad c_{mt}^{S^q}=(f_{mt}')^{-1}\Bigl(\frac{f_{mt}(\overline{Y}_{mt}(S^q))-f_{mt}(\underline{Y}_{mt}(S^q))}{\overline{Y}_{mt}(S^q)-\underline{Y}_{mt}(S^q)}\Bigr),
\]
and we define
\[
(26)\quad G_{mt}^{S^q}(Y_{mt}(S^q))=\frac{f_{mt}(\overline{Y}_{mt}(S^q))-f_{mt}(\underline{Y}_{mt}(S^q))}{\overline{Y}_{mt}(S^q)-\underline{Y}_{mt}(S^q)}\bigl(Y_{mt}(S^q)-c_{mt}^{S^q}\bigr)+f_{mt}(c_{mt}^{S^q}),\qquad
L_{mt}^{S^q}(Y_{mt}(S^q))=\begin{cases}G_{mt}^{S^q}(Y_{mt}(S^q)) & (f_{mt}''(y)>0),\\[2pt] F_{mt}^{S^q}(Y_{mt}(S^q)) & (f_{mt}''(y)<0).\end{cases}
\]
By the definition, \(f_{mt}(Y_{mt}(S^q))\ge L_{mt}^{S^q}(Y_{mt}(S^q))\).
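For a concrete instance of (23)–(26), take \(f(Y)=\exp Y\) (so \(f'=\exp\) and \((f')^{-1}=\log\)): the secant over \([\underline{Y},\overline{Y}]\) overestimates the convex \(f\), and the parallel tangent at the mean-value point \(c\) underestimates it. A minimal sketch (for \(f''<0\) the two roles swap, as in the definition of \(L_{mt}\)):

```python
import math

def secant_and_tangent(f, f_prime_inv, Y_lo, Y_hi):
    """Secant overestimator F and parallel tangent underestimator G of a
    monotone convex f on [Y_lo, Y_hi]; c solves f'(c) = secant slope."""
    slope = (f(Y_hi) - f(Y_lo)) / (Y_hi - Y_lo)
    c = f_prime_inv(slope)                       # mean-value point, (24)-(25)
    F = lambda Y: slope * (Y - Y_lo) + f(Y_lo)   # secant (23): F >= f
    G = lambda Y: slope * (Y - c) + f(c)         # tangent (26): G <= f
    return F, G

F, G = secant_and_tangent(math.exp, math.log, 0.0, 1.0)
```

On [0, 1] one can verify pointwise that G(Y) ≤ exp(Y) ≤ F(Y), with equality of F at the endpoints and of G at c.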

For all \(y\in S^q\), \(\mu_m(y)=\sum_{t=1}^{T_m}f_{mt}(y)\ge\sum_{t=1}^{T_m}L_{mt}^{S^q}(y)\).

Let \(L_m^{S^q}(y):=\sum_{t=1}^{T_m}L_{mt}^{S^q}(y)\ \bigl(0\le m\le M+2P+\sum_{k=1}^{M}P_k\bigr)\). Then \(L_0^{S^q}(y)\) is a linear function lying below \(\mu_0(y)\) on the rectangle.

LRP(\(S^q\)) is the linear relaxation of (P2) built from the lower bounding functions of the \(\mu_m(y)\):
\[
(\mathrm{LRP}(S^q))\quad \min\ L_0^{S^q}(y)\quad\text{s.t.}\quad L_m^{S^q}(y)\le 0\ \bigl(m=1,2,\dots,M+2P+\textstyle\sum_{k=1}^{M}P_k\bigr),\quad y\in S^q.
\]
By the definition of LRP(\(S^q\)), any \(y\in S^q\) satisfying the constraints of (P2) satisfies the constraints of LRP(\(S^q\)).
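Each LRP(\(S^q\)) is an ordinary linear program; in the experiments of Section 4 it is solved with MATLAB's linprog. For illustration only, here is a toy LP of the same shape (hypothetical coefficients, not an actual \(L_m\)), solved by brute-force vertex enumeration, which is valid because an LP optimum lies at a vertex of the feasible polytope:

```python
from itertools import combinations

# Toy LP: min  -y1 - y2   s.t.  y1 + 2*y2 <= 4,  0 <= y1, y2 <= 3.
# All constraints written as a_row . y <= b_row (box bounds included).
A = [(1.0, 2.0), (1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [4.0, 3.0, 0.0, 3.0, 0.0]
cost = (-1.0, -1.0)

def solve_2d_lp(A, b, cost):
    """Intersect every pair of constraint lines; keep feasible vertices."""
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraint lines
        y1 = (b1 * a2[1] - a1[1] * b2) / det
        y2 = (a1[0] * b2 - b1 * a2[0]) / det
        if all(r[0] * y1 + r[1] * y2 <= rb + 1e-9 for r, rb in zip(A, b)):
            val = cost[0] * y1 + cost[1] * y2
            if best is None or val < best[0]:
                best = (val, (y1, y2))
    return best

best_val, best_y = solve_2d_lp(A, b, cost)
```

Here best_val plays the role of the lower bound contributed by the relaxation, and best_y the role of \(\hat{y}(S^q)\); the optimum is −3.5 at (3, 0.5).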

Lemma 2.

The optimal value of LRP(\(S^q\)) is less than or equal to the optimal value of the problem (P2) on \(S^q\).

Proof.

The definition of LRP(\(S^q\)) implies the statement directly.

Lemma 3.

Assume that \(S^{q+1}\subseteq S^{q}\subseteq S^{0}\subseteq\mathbb{R}^{N+P_{\mathrm{sum}}}\) and \(\bigcap_{q=0}^{\infty}S^q=\{y^*\}\). Then for each \(m=0,1,2,\dots,M+2P+\sum_{k=1}^{M}P_k\) and \(t=1,2,\dots,T_m\),
\[
\lim_{q\to\infty}\max_{y\in S^q}\bigl|F_{mt}^{S^q}(y)-f_{mt}(y)\bigr|=0
\quad\text{and}\quad
\lim_{q\to\infty}\max_{y\in S^q}\bigl|G_{mt}^{S^q}(y)-f_{mt}(y)\bigr|=0.
\]

Proof.

For each coordinate \(i\), write \(\underline{y}_i^{q}\) and \(\overline{y}_i^{q}\) for the bounds of \(S^q\), and let \(\underline{y}^{q},\overline{y}^{q}\in S^q\) be the corners at which \(\min(\lambda_{mti}\underline{y}_i^{q},\lambda_{mti}\overline{y}_i^{q})\) and \(\max(\lambda_{mti}\underline{y}_i^{q},\lambda_{mti}\overline{y}_i^{q})\) are attained \((i=1,\dots,N+P_{\mathrm{sum}})\).

Since \(\bigcap_{q=0}^{\infty}S^q=\{y^*\}\), the points \(\underline{y}^{q}\) and \(\overline{y}^{q}\) satisfy \(\lim_{q\to\infty}\underline{y}^{q}=\lim_{q\to\infty}\overline{y}^{q}=y^*\).

Hence \(\overline{Y}_{mt}(S^q)-\underline{Y}_{mt}(S^q)=\sum_{i=1}^{N+P_{\mathrm{sum}}}|\lambda_{mti}|\bigl(\overline{y}_i^{q}-\underline{y}_i^{q}\bigr)\to 0\) as \(q\to\infty\).

Now, abbreviating \(Y=Y_{mt}(S^q)\), \(\underline{Y}=\underline{Y}_{mt}(S^q)\), \(\overline{Y}=\overline{Y}_{mt}(S^q)\), \(c=c_{mt}^{S^q}\), and \(f=f_{mt}\),
\[
(27)\quad F_{mt}^{S^q}(y)-f_{mt}(y)=\frac{f(\overline{Y})-f(\underline{Y})}{\overline{Y}-\underline{Y}}\,\bigl(Y-\underline{Y}\bigr)+f(\underline{Y})-f(Y).
\]
The function \(F_{mt}^{S^q}(y)-f_{mt}(y)\) is concave on \([\underline{Y},\overline{Y}]\) and vanishes at the endpoints, so it attains its maximum at \(c\):
\[
(28)\quad \max_{y\in S^q}\bigl(F_{mt}^{S^q}(y)-f_{mt}(y)\bigr)=F_{mt}^{S^q}(c)-f(c)=\frac{f(\overline{Y})-f(\underline{Y})}{\overline{Y}-\underline{Y}}\,\bigl(c-\underline{Y}\bigr)+f(\underline{Y})-f(c).
\]

We denote
\[
(29)\quad I_{mt}^{S^q}=\overline{Y}-\underline{Y},\qquad c=\underline{Y}+\theta_{mt}^{S^q}I_{mt}^{S^q}\quad\bigl(0<\theta_{mt}^{S^q}<1\bigr).
\]
Since \(I_{mt}^{S^q}\to 0\) as \(q\to\infty\),
\[
(30)\quad \frac{f(\underline{Y}+I_{mt}^{S^q})-f(\underline{Y})}{I_{mt}^{S^q}}\,\theta_{mt}^{S^q}I_{mt}^{S^q}+f(\underline{Y})-f\bigl(\underline{Y}+\theta_{mt}^{S^q}I_{mt}^{S^q}\bigr)
=\theta_{mt}^{S^q}\bigl(f(\underline{Y}+I_{mt}^{S^q})-f(\underline{Y})\bigr)+f(\underline{Y})-f\bigl(\underline{Y}+\theta_{mt}^{S^q}I_{mt}^{S^q}\bigr)\longrightarrow 0.
\]
Thus \(\max_{y\in S^q}|F_{mt}^{S^q}(y)-f_{mt}(y)|\to 0\) as \(q\to\infty\). Similarly,
\[
(31)\quad G_{mt}^{S^q}(y)-f_{mt}(y)=\frac{f(\overline{Y})-f(\underline{Y})}{\overline{Y}-\underline{Y}}\,\bigl(Y-c\bigr)+f(c)-f(Y)
\]
is a convex function, and by the same argument we obtain \(\max_{y\in S^q}|G_{mt}^{S^q}(y)-f_{mt}(y)|\to 0\) as \(q\to\infty\).

Lemma 4.

Under the same assumptions as Lemma 3, \(\max_{y\in S^q}\bigl|L_m^{S^q}(y)-\mu_m(y)\bigr|\to 0\) as \(q\to\infty\) for each \(m=0,1,2,\dots,M+2P+\sum_{k=1}^{M}P_k\).

Proof.

Lemma 3 and the definitions of \(L_m^{S^q}(y)\) and \(\mu_m(y)\) imply Lemma 4 in the standard way.

3. Branch and Bound Algorithm and Its Convergence

In Section 2, we transformed the initial problem (P0) into the equivalent problem (P2) and constructed the linear relaxation problem (LRP) of (P2), from which an approximate value of (P2) can be found easily. We now compute it by a branch and bound algorithm.

3.1. Branch and Bound Algorithm

We solve the linear relaxation problem on the initial domain \(S^0\) to get the linear optimal value as a lower bound of (P2), together with an upper bound of (P2). To prepare for subdividing the active domains, let \(Q_q\) be the set of active domains and \(S^{q(k)}\subseteq S^0\) an active domain, where \(q\) counts the subdivision stages and \(k\) indexes the active domains at stage \(q\). If \(S^{q(k)}\) is active, we divide it into the half domains \(S^{q(k)\cdot 1}\) and \(S^{q(k)\cdot 2}\), linearize (P2) on each domain, and solve the resulting linear problems. From these calculations we update the lower and upper bounds of (P2). Repeating this, the sequences of lower and upper bounds converge, and we obtain the optimal value and solution.

3.1.1. Branching Rule

We write \(S^{q(k)}=\{y \mid \underline{y}_n^{qk}\le y_n\le\overline{y}_n^{qk},\ n=1,\dots,N+P_{\mathrm{sum}}\}\subseteq S^0\). We select the branching variable \(i=\arg\max_n\{\overline{y}_n^{qk}-\underline{y}_n^{qk} : n=1,2,\dots,N+P_{\mathrm{sum}}\}\) and divide the interval \([\underline{y}_i^{qk},\overline{y}_i^{qk}]\) into the two half intervals \([\underline{y}_i^{qk},(\underline{y}_i^{qk}+\overline{y}_i^{qk})/2]\) and \([(\underline{y}_i^{qk}+\overline{y}_i^{qk})/2,\overline{y}_i^{qk}]\).
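This longest-edge bisection rule can be sketched on a box stored as (lower, upper) coordinate lists:

```python
def branch(lo, hi):
    """Bisect the box [lo, hi] along its longest edge (the branching rule)."""
    i = max(range(len(lo)), key=lambda n: hi[n] - lo[n])
    mid = 0.5 * (lo[i] + hi[i])
    left = (lo[:], hi[:i] + [mid] + hi[i + 1:])
    right = (lo[:i] + [mid] + lo[i + 1:], hi[:])
    return left, right
```

For example, branching the box [0,4]×[0,1] splits its first (longest) edge at 2.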

3.1.2. Algorithm Statement

Step 0. First we set \(q=0\) and \(k=1\), choose an appropriate \(\epsilon>0\) as a convergence tolerance, and set the initial upper bound \(V^*=\infty\) and \(Q_0=\{S^{0(1)}\}\). We solve LRP(\(S^{0(1)}\)) and denote the linear optimal solution and optimal value by \(\hat{y}(S^{0(1)})\) and \(\mathrm{LB}_{0(1)}\). If \(\hat{y}(S^{0(1)})\) is feasible for (P2), we update \(V^*=\mu_0(\hat{y}(S^{0(1)}))\), and we set the initial lower bound \(\mathrm{LB}=\mathrm{LB}_{0(1)}\). If \(V^*-\mathrm{LB}\le\epsilon\), then we have the \(\epsilon\)-approximate optimal value \(\mu_0(\hat{y}(S^{0(1)}))\) and optimal solution \(\hat{y}(S^{0(1)})\) of (P2), and we stop. Otherwise, we proceed to Step 1.

Step 1. For all k, we divide Sq(k) to get two half domains, Sq(k)·1 and Sq(k)·2, according to the above branching rule.

Step 2. For all \(k\) and each domain \(S^{q(k)\cdot v}\ (v=1,2)\), we calculate
\[
(32)\quad \underline{\mu}_m^{(v)}=\sum_{t=1,\,c_{mt}>0}^{\Gamma_m}c_{mt}\exp\bigl(\underline{Y}_{mt}(S^{q(k)\cdot v})\bigr)+\sum_{t=1,\,c_{mt}<0}^{\Gamma_m}c_{mt}\exp\bigl(\overline{Y}_{mt}(S^{q(k)\cdot v})\bigr)
\quad\bigl(m=1,\dots,M+2P+\textstyle\sum_{k=1}^{M}P_k\bigr),
\]
where \(c_{mt}\), \(\underline{Y}_{mt}(S^{q(k)\cdot v})\), and \(\overline{Y}_{mt}(S^{q(k)\cdot v})\) are defined in Section 2.3.

If \(\underline{\mu}_m^{(v)}>0\) for some \(m\in\{1,2,\dots,M+2P+\sum_{k=1}^{M}P_k\}\), then \(S^{q(k)\cdot v}\) is an infeasible domain for (P2), and we delete it from \(Q_q\). If all \(S^{q(k)\cdot v}\ (v=1,2)\) are deleted for all \(k\), the problem has no feasible solution.

Step 3. For the remaining domains, we compute \(A_{mt}(S^{q(k)\cdot v})\), \(B_{mt}(S^{q(k)\cdot v})\), \(\underline{Y}_{mt}(S^{q(k)\cdot v})\), and \(\overline{Y}_{mt}(S^{q(k)\cdot v})\) as defined in Sections 2.2 and 2.3. We solve LRP(\(S^{q(k)\cdot v}\)) by the simplex algorithm and denote the obtained linear optimal solution and value by \((\hat{y}(S^{q(k)\cdot v}),\mathrm{LB}_{q(k)\cdot v})\). If \(\hat{y}(S^{q(k)\cdot v})\) is feasible for (P2), we update \(V^*=\min\{V^*,\mu_0(\hat{y}(S^{q(k)\cdot v}))\}\). If \(\mathrm{LB}_{q(k)\cdot v}>V^*\), we delete the corresponding domain from \(Q_q\). If \(V^*-\mathrm{LB}_{q(k)\cdot v}\le\epsilon\), then we have the \(\epsilon\)-approximate optimal value \(\mu_0(\hat{y}(S^{q(k)\cdot v}))\) and optimal solution \(\hat{y}(S^{q(k)\cdot v})\) of (P2), and we stop. Otherwise, we proceed to Step 4.

Step 4. We re-index the remaining domains \(S^{q(k)\cdot v}\) as \(S^{q+1(k)}\) and reset \(k\); we let \(Q_{q+1}\) be the set of the \(S^{q+1(k)}\), and go to Step 1.
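The overall loop (bound, branch, prune, stop at an ε-gap) can be sketched generically. The toy below replaces the LRP lower bound with a simple Lipschitz underestimator on a 1-D box; this is not the paper's relaxation, but it exercises the same Steps 0–4 control flow: an incumbent V*, bisection of the most promising domain, pruning of domains whose lower bound exceeds V* − ε, and the ε-gap stopping test.

```python
import heapq
import math

def branch_and_bound(f, lo, hi, lip, eps=1e-4, max_iter=100000):
    """eps-globally minimize a lip-Lipschitz function f on [lo, hi]."""
    def lower_bound(l, u):
        # Valid underestimator: f(x) >= (f(l)+f(u))/2 - lip*(u-l)/2 on [l, u].
        return 0.5 * (f(l) + f(u)) - 0.5 * lip * (u - l)
    best = min(f(lo), f(hi))                    # incumbent upper bound V*
    heap = [(lower_bound(lo, hi), lo, hi)]      # active domains, best-first
    for _ in range(max_iter):
        if not heap:
            break                               # everything pruned: gap <= eps
        lb, l, u = heapq.heappop(heap)
        if best - lb <= eps:                    # Step 3 stopping test
            break
        m = 0.5 * (l + u)                       # Step 1 branching (bisection)
        best = min(best, f(m))                  # Step 3 incumbent update
        for a, b in ((l, m), (m, u)):
            child_lb = lower_bound(a, b)
            if child_lb <= best - eps:          # Steps 2-3 pruning
                heapq.heappush(heap, (child_lb, a, b))
    return best
```

For example, branch_and_bound(math.sin, 0.0, 2*math.pi, 1.0) returns a value within ε of the global minimum −1 attained at 3π/2.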

3.2. The Convergence of the Algorithm

Corresponding to the argument of Pei-Ping and Gui-Xia, we obtain the convergence of the algorithm.

Theorem 5.

Suppose that problem (P2) has a global optimal solution, and let μ0* be the global optimal value of (P2). Then one has the following:

for the case \(\epsilon>0\), the algorithm always terminates after finitely many iterations, yielding a global \(\epsilon\)-optimal solution \(y^*\) and a global \(\epsilon\)-optimal value \(V^*\) for problem (P2) in the sense that
\[
(33)\quad y^*\in S,\qquad V^*-\epsilon\le\mu_0^*,\quad\text{with}\ V^*=\mu_0(y^*);
\]

for the case \(\epsilon\to 0\), we take a sequence of convergence tolerances \(\epsilon_n\) with \(\epsilon_1>\epsilon_2>\dots>\epsilon_n>\epsilon_{n+1}>\dots>0\), that is, \(\lim_{n\to\infty}\epsilon_n=0\), and let \(y_n^*\) be the \(\epsilon_n\)-optimal solution of (P2) produced by the algorithm. Then every accumulation point of \(\{y_n^*\}\) is a global optimal solution of (P2).

Proof.

(i) This is immediate from the algorithm statement.

(ii) Let \(V_n^*\) be the upper bound corresponding to \(\epsilon_n\):
\[
(34)\quad V_n^*-\epsilon_n\le\mu_0(y_n^*)\le V_n^*.
\]
The sequence \(\{y_n^*\}\) lies in a bounded closed set, so it has a convergent subsequence \(\{y_{n_i}^*\}\). Assume \(\lim_{i\to\infty}y_{n_i}^*=y^*\); then
\[
(35)\quad V_{n_i}^*-\epsilon_{n_i}\le\mu_0(y_{n_i}^*)\le V_{n_i}^*,\qquad n_i\to\infty\ \text{as}\ i\to\infty,\ \text{so}\ \lim_{i\to\infty}\epsilon_{n_i}=0.
\]

\(\{V_n^*\}\) is a monotone decreasing sequence, so it converges; assume \(\lim_{n\to\infty}V_n^*=\mu_0^*\). Then
\[
(36)\quad \lim_{i\to\infty}\bigl(V_{n_i}^*-\epsilon_{n_i}\bigr)\le\lim_{i\to\infty}\mu_0(y_{n_i}^*)\le\lim_{i\to\infty}V_{n_i}^*.
\]
Since \(\mu_0(y)\) is continuous, \(\lim_{i\to\infty}\mu_0(y_{n_i}^*)=\mu_0(y^*)\), and \(\mu_0^*\le\mu_0(y^*)\le\mu_0^*\); that is, \(\mu_0(y^*)=\mu_0^*\). For each \(m\), \(\mu_m(y_n^*)\le 0\); as \(\mu_m\) is continuous, \(\lim_{n\to\infty}\mu_m(y_n^*)=\mu_m(y^*)\le 0\).

4. Numerical Experiment

In this section, we report numerical experiments on these optimization problems following the rules above. The algorithms are coded in MATLAB; in these codes we use MATLAB's built-in function linprog to solve the linear optimization problems.

Example 1.

Consider
\[
(37)\quad \min\ h(x)=\sin\!\left(\frac{x_1^2+3x_2-2x_2^2+1}{x_1^2+x_2+2}\right)+\cos\!\left(\frac{-x_2^2+2x_1+2x_2}{x_1+2.5}\right)
\]
\[
\text{s.t.}\quad x_1^2-x_1x_2-1\le 0,\qquad \frac{x_1+3x_2}{x_1}-5\le 0,\qquad X=\{x: 1\le x_1\le 3,\ 1\le x_2\le 3\}.
\]
We set \(\epsilon=0.0001\). The algorithm found a global \(\epsilon\)-optimal value \(V^*=1.0748\) at the global \(\epsilon\)-optimal solution \((x_1,x_2)^T=(1.34977,1.64232)\).
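As a sanity check (not part of the paper's method), the reported solution can be substituted back into our reading of (37): with the coordinates rounded to the printed precision, the objective reproduces V* to about 10⁻³ and both constraints hold.

```python
import math

def objective(x1, x2):
    """Objective of Example 1 as reconstructed in (37)."""
    return (math.sin((x1**2 + 3*x2 - 2*x2**2 + 1) / (x1**2 + x2 + 2))
            + math.cos((-x2**2 + 2*x1 + 2*x2) / (x1 + 2.5)))

x1, x2 = 1.34977, 1.64232                 # reported eps-optimal solution
feasible = (x1**2 - x1*x2 - 1 <= 0) and ((x1 + 3*x2) / x1 - 5 <= 0)
```

At this point the objective evaluates to about 1.0747, consistent with the reported V* = 1.0748.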

Example 2.

Consider
\[
(38)\quad \min\ \exp\!\left(\frac{-x_1^2+3x_1+2x_2^2+3x_2+3.5}{x_1+1}\right)-\exp\!\left(\frac{x_2}{x_1^2-2x_1+x_2^2-8x_2+20}\right)
\]
\[
\text{s.t.}\quad \frac{x_1-x_2}{x_1}\le 1,\qquad 2x_1\le x_2+6,\qquad 2x_1+x_2\le 8,\qquad X=\{x: 1\le x_1\le 3,\ 1\le x_2\le 3\}.
\]
We set \(\epsilon=0.0001\). The algorithm found a global \(\epsilon\)-optimal value \(V^*=58.2723\) at the global \(\epsilon\)-optimal solution \((x_1,x_2)^T=(1,1.6180)\).

Example 3.

Consider
\[
(39)\quad \min\ \sin\!\left(\frac{x_1^2+2x_2-2x_1+x_2^2+1}{x_1+x_2^2+2}\right)+\cos\!\left(\frac{3x_1^2-3x_2+2x_1+x_2^2+5}{x_1^2+2x_2^2+10}\right)
\]
\[
\text{s.t.}\quad \sin\!\left(\frac{x_1^2+3x_2-2x_2^2+2}{x_1^2+x_2+2}\right)+\cos\!\left(\frac{-x_2^2+2x_1+2x_2}{x_1+2.5}\right)\le 2,\qquad X=\{x: 1\le x_1\le 2,\ 1\le x_2\le 2\}.
\]
We set \(\epsilon=0.0001\). The algorithm found a global \(\epsilon\)-optimal value \(V^*=1.09133\) at the global \(\epsilon\)-optimal solution \((x_1,x_2)^T=(2,1)\).

Example 4.

Consider
\[
(40)\quad \min\ \exp\!\left(\frac{x_1^2-2x_2^2+8}{2x_1^2+x_2+1}\right)+\exp\!\left(\frac{3x_1-x_2^2+5}{x_1^2-x_1+x_2^2-3x_2+10}\right)
\]
\[
\text{s.t.}\quad x_1^2-2x_2\le 1,\qquad \frac{x_1-x_2}{x_1}\le 1,\qquad 2x_1+x_2^2\le 6,\qquad X=\{x: 1.5\le x_1\le 2,\ 1.5\le x_2\le 2\}.
\]
We set \(\epsilon=0.0001\). The algorithm found a global \(\epsilon\)-optimal value \(V^*=3.9378\) at the global \(\epsilon\)-optimal solution \((x_1,x_2)^T=(1.5,1.7321)\).
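The same substitution check works for Example 4 (again, a sanity check on our reading of (40), not part of the method): at the reported point the constraint 2x₁ + x₂² ≤ 6 is active up to the printed rounding of √3 ≈ 1.7321, and the objective matches V* to about 10⁻³.

```python
import math

def objective4(x1, x2):
    """Objective of Example 4 as reconstructed in (40)."""
    return (math.exp((x1**2 - 2*x2**2 + 8) / (2*x1**2 + x2 + 1))
            + math.exp((3*x1 - x2**2 + 5) / (x1**2 - x1 + x2**2 - 3*x2 + 10)))

x1, x2 = 1.5, 1.7321                      # reported eps-optimal solution
active = abs(2*x1 + x2**2 - 6.0) < 1e-3   # constraint active up to rounding
```

At this point the objective evaluates to about 3.9377, consistent with the reported V* = 3.9378.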

5. Concluding Remarks

In this paper, we proved that a branch and bound algorithm can solve nonconvex optimization problems which the method of Pei-Ping and Gui-Xia cannot: problems whose objective is a sum of functions with positive (or negative) first and second derivatives, each applied to a ratio of generalized polynomials.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank S. Pei-Ping, H. Yaguchi, and S. Tsuyumine for their helpful suggestions and encouragement.

References

1. D. Cai and T. G. Nitta, "Limit of the solutions for the finite horizon problems as the optimal solution to the infinite horizon optimization problems," Journal of Difference Equations and Applications, vol. 17, no. 3, pp. 359–373, 2011. doi:10.1080/10236190902953763.
2. D. Cai and T. G. Nitta, "Optimal solutions to the infinite horizon problems: constructing the optimum as the limit of the solutions for the finite horizon problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e2103–e2108, 2009. doi:10.1016/j.na.2009.03.066.
3. R. Okumura, D. Cai, and T. G. Nitta, "Transversality conditions for infinite horizon optimality: higher order differential problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e1980–e1984, 2009. doi:10.1016/j.na.2009.02.110.
4. S. Pei-Ping and Y. Gui-Xia, "Global optimization for the sum of generalized polynomial fractional functions," Mathematical Methods of Operations Research, vol. 65, no. 3, pp. 445–459, 2007. doi:10.1007/s00186-006-0130-0.
5. H. Jiao, Z. Wang, and Y. Chen, "Global optimization algorithm for sum of generalized polynomial ratios problem," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 187–197, 2013. doi:10.1016/j.apm.2012.02.023.
6. Y. J. Wang and K. C. Zhang, "Global optimization of nonlinear sum of ratios problem," Applied Mathematics and Computation, vol. 158, no. 2, pp. 319–330, 2004. doi:10.1016/j.amc.2003.08.113.