We present a branch and bound algorithm for globally solving the sum of ratios problem in which each term of the objective function is a ratio of two functions, each a weighted sum of absolute values of affine functions. This problem has an important application in financial optimization, yet global optimization algorithms for it remain rare in the literature. In the proposed algorithm, the branch and bound search uses rectangular partitioning and takes place in a space which typically has a much smaller dimension than the space of the decision variables. Convergence of the algorithm is shown. Finally, numerical examples are given to demonstrate the effectiveness of the algorithm.
1. Introduction
The sum of ratios problem has attracted considerable attention in the literature because of its many practical applications in various fields, including transportation planning, government contracting, economics, and finance [1–6]. From a research point of view, the sum of ratios problem also poses significant theoretical and computational challenges, mainly because it generally possesses multiple local optima that are not globally optimal.
Many algorithms have been proposed for globally solving the sum of linear ratios problem with linear constraints (see, e.g., [7–11]). More recently, several algorithms have been developed for globally solving nonlinear sum of ratios problems: Freund and Jarre [12] proposed an interior-point approach for sums of convex-concave ratios over convex constraints; Dai et al. [13] and Pei and Zhu [14] presented algorithms for sums of d.c. ratios; Benson [15, 16] gave two branch and bound algorithms for sums of concave-convex ratios; Yamamoto and Konno [17] proposed an algorithm for sums of convex-convex ratios; and Shen and Jin [18] and Jiao and Shen [19] developed global optimization algorithms for two other classes of nonlinear sums of ratios.
In this paper, we are concerned with the following nonlinear sum of ratios problem:
$$(\mathrm{P})\qquad v=\max\ h(x)=\sum_{i=1}^{p}\frac{n_i(x)}{d_i(x)}\qquad \text{s.t.}\quad x\in X,$$
where $p\ge 1$, $X$ is a compact, convex set in $\mathbb{R}^{n}$, and
$$n_i(x)=\sum_{s=1}^{S_i}\alpha_s^i\left|\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\right|,\qquad d_i(x)=\sum_{t=1}^{T_i}\beta_t^i\left|\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\right|,\qquad i=1,2,\dots,p.$$
In addition, we assume that $\alpha_s^i,\beta_t^i\in\mathbb{R}$ and that $0<l_i\le n_i(x)\le u_i$, $0<L_i\le d_i(x)\le U_i$ for all $x\in X$, $i=1,2,\dots,p$.
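To make the problem data concrete, the objective $h(x)=\sum_i n_i(x)/d_i(x)$ can be evaluated directly once the coefficients are given. The sketch below is in Python for illustration only (the authors' implementation was in Fortran 95); the names `make_instance` and `h`, and the convention that the last entry of each coefficient row is the constant term, are ours.

```python
import random

def make_instance(n, p, S, T, rng):
    """Random coefficients for the terms n_i and d_i (illustrative data only)."""
    inst = []
    for _ in range(p):
        alpha = [rng.uniform(0.1, 1.0) for _ in range(S)]  # weights in n_i
        beta = [rng.uniform(0.1, 1.0) for _ in range(T)]   # weights in d_i
        N = [[rng.uniform(-1.0, 1.0) for _ in range(n + 1)] for _ in range(S)]
        D = [[rng.uniform(-1.0, 1.0) for _ in range(n + 1)] for _ in range(T)]
        inst.append((alpha, beta, N, D))
    return inst

def h(x, inst):
    """Objective of (P): sum over i of n_i(x)/d_i(x)."""
    total = 0.0
    for alpha, beta, N, D in inst:
        ni = sum(a * abs(sum(row[j] * x[j] for j in range(len(x))) + row[-1])
                 for a, row in zip(alpha, N))
        di = sum(b * abs(sum(row[j] * x[j] for j in range(len(x))) + row[-1])
                 for b, row in zip(beta, D))
        total += ni / di
    return total

# Tiny deterministic instance: n_1(x) = 2|x|, d_1(x) = |1| = 1, so h(3) = 6.
inst = [([2.0], [1.0], [[1.0, 0.0]], [[0.0, 1.0]])]
print(h([3.0], inst))  # prints 6.0
```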
Problem (P) arises, for example, when the variance is replaced by the absolute deviation as the measure of the variation of a portfolio. Since global optimization algorithms for this problem remain rare in the literature, we believe that this paper is of interest to researchers in both portfolio optimization and fractional programming.
The purpose of this paper is to present a branch and bound algorithm for globally solving problem (P). We believe that the proposed algorithm has four potential practical and computational advantages. First, upper bounds are obtained by maximizing the concave envelope of the objective function of problem (P) over rectangles. Second, the proposed algorithm uses rectangles rather than simplices as partition elements, so branching takes place only in a space of dimension p rather than n or 2p, although the search is carried out mainly in a space of dimension 2p. Third, we choose a simple and standard bisection rule, which is sufficient to ensure convergence because the partition rule is exhaustive. Finally, the upper bounding subproblems are convex programs that differ from one another only in the coefficients of certain linear constraints and in the bounds describing their associated rectangles.
The remainder of this paper is organized as follows. In Section 2, an equivalent problem of problem (P) is given. Next, in Section 3, we construct the function overestimating the value of the sum of ratios. In Section 4, the proposed branch and bound algorithm is described, and the convergence of the algorithm is established. Some numerical results are reported in Section 5. A summary is proposed in the last section.
2. Equivalent Problem
In order to globally solve problem (P), we first convert it into the following equivalent nonconvex programming problem (P1):
$$(\mathrm{P1})\qquad v=\max\ \sum_{i=1}^{p}\frac{t_i}{s_i}\qquad\text{s.t.}\quad n_i(x)-t_i\ge 0,\quad -d_i(x)+s_i\ge 0,\quad i=1,\dots,p,$$
$$x\in X,\qquad l_i\le t_i\le u_i,\quad L_i\le s_i\le U_i,\quad i=1,\dots,p.$$
Theorem 1.
If (x*,t*,s*) is a global optimal solution for problem (P1), then ti*=ni(x*), si*=di(x*), i=1,2,…,p, and x* is a global optimal solution for problem (P). Conversely, if x* is a global optimal solution for problem (P), then (x*,t*,s*) is a global optimal solution for problem (P1), where ti*=ni(x*), si*=di(x*), i=1,2,…,p.
Proof.
The proof follows easily from the definitions of problems (P) and (P1) and is therefore omitted.
Without loss of generality (reordering terms if necessary), we assume that $\alpha_s^i>0$ for $s=1,\dots,\bar S_i$ and $\beta_t^i<0$ for $t=1,\dots,\bar T_i$, $i=1,\dots,p$, where $\bar S_i\le S_i$ and $\bar T_i\le T_i$; only these terms destroy the convexity of the corresponding constraints of (P1) and need to be split below.
Let us define
(1)
$$u_s^i-v_s^i=\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i,\quad u_s^i v_s^i=0,\quad u_s^i\ge 0,\ v_s^i\ge 0,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$\xi_t^i-\eta_t^i=\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i,\quad \xi_t^i\eta_t^i=0,\quad \xi_t^i\ge 0,\ \eta_t^i\ge 0,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p.$$
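The variables in (1) are the usual positive/negative parts of an affine expression: taking $u=\max(y,0)$ and $v=\max(-y,0)$ gives $u-v=y$, $uv=0$, and $u+v=|y|$, which is what lets the absolute values be written linearly. A minimal Python check (function name ours):

```python
def split(y):
    """Decompose y = u - v with u, v >= 0 and u*v = 0, so that u + v = |y|."""
    u = max(y, 0.0)
    v = max(-y, 0.0)
    return u, v

for y in (-2.5, 0.0, 3.75):
    u, v = split(y)
    assert u - v == y and u * v == 0.0 and u + v == abs(y)
```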
Then problem (P1) can be reformulated as follows:
$$(\mathrm{P2})\qquad \max\ \sum_{i=1}^{p}\frac{t_i}{s_i}$$
$$\text{s.t.}\quad \sum_{s=1}^{\bar S_i}\alpha_s^i(u_s^i+v_s^i)+\sum_{s=\bar S_i+1}^{S_i}\alpha_s^i\left|\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\right|-t_i\ge 0,\quad i=1,\dots,p,$$
$$-\sum_{t=1}^{\bar T_i}\beta_t^i(\xi_t^i+\eta_t^i)-\sum_{t=\bar T_i+1}^{T_i}\beta_t^i\left|\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\right|+s_i\ge 0,\quad i=1,\dots,p,$$
$$u_s^i-v_s^i=\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i,\quad u_s^i v_s^i=0,\quad u_s^i\ge 0,\ v_s^i\ge 0,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$\xi_t^i-\eta_t^i=\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i,\quad \xi_t^i\eta_t^i=0,\quad \xi_t^i\ge 0,\ \eta_t^i\ge 0,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$x\in X,\qquad l_i\le t_i\le u_i,\quad L_i\le s_i\le U_i,\quad i=1,\dots,p.$$
As is well known, the complementarity conditions $u_s^i v_s^i=0$ can be represented as a system of linear inequalities by introducing zero-one integer variables [20]:
(2)
$$u_s^i\le a_s^i z_s^i,\qquad v_s^i\le b_s^i(1-z_s^i),$$
where $z_s^i\in\{0,1\}$ and $a_s^i$, $b_s^i$ are defined as follows:
(3)
$$a_s^i=\max\left\{\max\left\{\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\;\middle|\;x\in X\right\},\,0\right\},\qquad b_s^i=-\min\left\{\min\left\{\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\;\middle|\;x\in X\right\},\,0\right\}.$$
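The constants in (3) require the maximum and minimum of each affine function over X; for a general compact convex X these are two convex programs, but when X happens to be a box they have a closed form via the signs of the coefficients. A Python sketch of the box case (function names are ours, and the box assumption is ours, not the paper's):

```python
def linear_bounds_on_box(c, c0, lo, hi):
    """Min and max of sum_j c[j]*x[j] + c0 over the box lo <= x <= hi."""
    mx = c0 + sum(cj * (hi[j] if cj > 0 else lo[j]) for j, cj in enumerate(c))
    mn = c0 + sum(cj * (lo[j] if cj > 0 else hi[j]) for j, cj in enumerate(c))
    return mn, mx

def big_m_constants(c, c0, lo, hi):
    """a = max(max over X, 0) and b = -min(min over X, 0), as in (3)."""
    mn, mx = linear_bounds_on_box(c, c0, lo, hi)
    return max(mx, 0.0), -min(mn, 0.0)

# x_1 - 2*x_2 + 1 over [0,1]^2 ranges over [-1, 2], so a = 2 and b = 1.
print(big_m_constants([1.0, -2.0], 1.0, [0.0, 0.0], [1.0, 1.0]))  # prints (2.0, 1.0)
```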
Then $z_s^i\in\{0,1\}$ can be transformed into
(4)
$$0\le z_s^i\le 1,\qquad z_s^i(1-z_s^i)\le 0.$$
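That the big-M constraints (2) together with a binary z enforce the complementarity u·v = 0 is easy to verify by enumeration: z = 1 allows only u to be positive, z = 0 allows only v. A quick Python check (illustrative only, names ours):

```python
def complementarity_feasible(u, v, a, b, z, tol=1e-9):
    """Check u <= a*z and v <= b*(1-z) for z in {0,1}; together they force u*v = 0."""
    return u <= a * z + tol and v <= b * (1 - z) + tol

a, b = 2.0, 3.0
assert complementarity_feasible(1.5, 0.0, a, b, 1)      # z = 1: u may be positive
assert complementarity_feasible(0.0, 2.0, a, b, 0)      # z = 0: v may be positive
assert not complementarity_feasible(1.0, 1.0, a, b, 1)  # both positive: infeasible
```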
For the conditions $\xi_t^i\eta_t^i=0$ we proceed similarly. Let

(5)
$$\xi_t^i\le c_t^i w_t^i,\quad \eta_t^i\le d_t^i(1-w_t^i),\quad 0\le w_t^i\le 1,\quad w_t^i(1-w_t^i)\le 0,$$
where
(6)
$$c_t^i=\max\left\{\max\left\{\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\;\middle|\;x\in X\right\},\,0\right\},\qquad d_t^i=-\min\left\{\min\left\{\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\;\middle|\;x\in X\right\},\,0\right\}.$$
And let
(7)
$$H^0=\left\{(t,s)\in\mathbb{R}^{2p}\mid l_i\le t_i\le u_i,\ L_i\le s_i\le U_i,\ i=1,\dots,p\right\}.$$
Thus problem (P2) is equivalent to the following problem:
$$(\mathrm{P}(H^0))\qquad \max\ \sum_{i=1}^{p}\frac{t_i}{s_i}$$
$$\text{s.t.}\quad \sum_{s=1}^{\bar S_i}\alpha_s^i(u_s^i+v_s^i)+\sum_{s=\bar S_i+1}^{S_i}\alpha_s^i\left|\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\right|-t_i\ge 0,\quad i=1,\dots,p,$$
$$-\sum_{t=1}^{\bar T_i}\beta_t^i(\xi_t^i+\eta_t^i)-\sum_{t=\bar T_i+1}^{T_i}\beta_t^i\left|\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\right|+s_i\ge 0,\quad i=1,\dots,p,$$
$$u_s^i-v_s^i=\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$\xi_t^i-\eta_t^i=\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$u_s^i\le a_s^i z_s^i,\quad v_s^i\le b_s^i(1-z_s^i),\quad z_s^i(1-z_s^i)\le 0,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$\xi_t^i\le c_t^i w_t^i,\quad \eta_t^i\le d_t^i(1-w_t^i),\quad w_t^i(1-w_t^i)\le 0,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$0\le z_s^i\le 1,\quad 0\le w_t^i\le 1,\quad s=1,\dots,\bar S_i,\ t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$x\in X,\qquad (t,s)\in H^0.$$
3. Convex Relaxation Programming
The principal construct in the development of a solution procedure for $(\mathrm{P}(H^0))$ is a convex relaxation that yields an upper bound for this problem and for its partitioned subproblems. Such a relaxation can be obtained by using the concave envelope of the objective function of $(\mathrm{P}(H^0))$ over the associated rectangle.
To help obtain convex relaxations, the concept of a concave envelope may be defined as follows.
Definition 2 (see [21]).
Let $M\subseteq\mathbb{R}^q$ be a compact, convex set, and let $f:M\to\mathbb{R}$ be upper semicontinuous on $M$. Then $f^M:M\to\mathbb{R}$ is called the concave envelope of $f$ on $M$ if
(i) $f^M(x)$ is a concave function on $M$;
(ii) $f^M(x)\ge f(x)$ for all $x\in M$;
(iii) there is no function $\omega(x)$ satisfying (i) and (ii) such that $\omega(\bar x)<f^M(\bar x)$ for some point $\bar x\in M$.
The following theorem is obtained from the definition above.
Theorem 3.
Consider a rectangle $M=\{(x_1,x_2)\in\mathbb{R}^2\mid l\le x_1\le u,\ L\le x_2\le U\}$, where $l,u,L,U$ satisfy $0<l<u$ and $0<L<U$. For any $(x_1,x_2)\in\mathbb{R}^2$ with $x_2\ne 0$, define $f(x_1,x_2)=x_1/x_2$. Then the concave envelope $f^M$ of $f$ on $M$ is given by

(8)
$$f^M(x_1,x_2)=\min\left\{\frac{1}{L}\,x_1-\frac{l}{LU}\,x_2+\frac{l}{U},\ \frac{1}{U}\,x_1-\frac{u}{LU}\,x_2+\frac{u}{L}\right\}.$$
Proof.
This result is essentially shown in [15] and is therefore omitted.
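Theorem 3 is easy to check numerically. The Python sketch below (illustrative only, names ours) verifies on a sample rectangle that the envelope from (8) overestimates $x_1/x_2$ everywhere on a grid and agrees with it at the four vertices, where the envelope is tight:

```python
def f(x1, x2):
    return x1 / x2

def f_env(x1, x2, l, u, L, U):
    """Concave envelope of x1/x2 over [l,u] x [L,U], as in Theorem 3."""
    return min(x1 / L - l * x2 / (L * U) + l / U,
               x1 / U - u * x2 / (L * U) + u / L)

l, u, L, U = 1.0, 2.0, 1.0, 3.0
# Overestimation on a grid of sample points.
for i in range(21):
    for j in range(21):
        x1 = l + (u - l) * i / 20
        x2 = L + (U - L) * j / 20
        assert f_env(x1, x2, l, u, L, U) >= f(x1, x2) - 1e-12
# Tightness at the four vertices of the rectangle.
for x1 in (l, u):
    for x2 in (L, U):
        assert abs(f_env(x1, x2, l, u, L, U) - f(x1, x2)) < 1e-9
```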
In order to obtain an upper bound on the optimal value of $(\mathrm{P}(H^0))$ by solving a convex program, we apply Theorem 3 to the objective and convexify the reverse convex constraints of problem $(\mathrm{P}(H^0))$, which yields the convex program
$$(\mathrm{RCP}(H^0))\qquad \max\ \sum_{i=1}^{p}r_i$$
$$\text{s.t.}\quad r_i\le \frac{1}{L_i}\,t_i-\frac{l_i}{L_iU_i}\,s_i+\frac{l_i}{U_i},\qquad r_i\le \frac{1}{U_i}\,t_i-\frac{u_i}{L_iU_i}\,s_i+\frac{u_i}{L_i},\quad i=1,\dots,p,$$
$$\sum_{s=1}^{\bar S_i}\alpha_s^i(u_s^i+v_s^i)+\sum_{s=\bar S_i+1}^{S_i}\alpha_s^i\left|\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\right|-t_i\ge 0,\quad i=1,\dots,p,$$
$$-\sum_{t=1}^{\bar T_i}\beta_t^i(\xi_t^i+\eta_t^i)-\sum_{t=\bar T_i+1}^{T_i}\beta_t^i\left|\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\right|+s_i\ge 0,\quad i=1,\dots,p,$$
$$u_s^i-v_s^i=\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$\xi_t^i-\eta_t^i=\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$u_s^i\le a_s^i z_s^i,\quad v_s^i\le b_s^i(1-z_s^i),\qquad \xi_t^i\le c_t^i w_t^i,\quad \eta_t^i\le d_t^i(1-w_t^i),$$
$$z_s^i-\left(z_s^{i,l}\right)^2-\left(z_s^{i,u}+z_s^{i,l}\right)\left(z_s^i-z_s^{i,l}\right)\le 0,\quad s=1,\dots,\bar S_i,\ i=1,\dots,p,$$
$$w_t^i-\left(w_t^{i,l}\right)^2-\left(w_t^{i,u}+w_t^{i,l}\right)\left(w_t^i-w_t^{i,l}\right)\le 0,\quad t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$0\le z_s^i\le 1,\quad 0\le w_t^i\le 1,\quad s=1,\dots,\bar S_i,\ t=1,\dots,\bar T_i,\ i=1,\dots,p,$$
$$x\in X,\qquad (t,s)\in H^0,$$
where $z_s^{i,l},z_s^{i,u}$ and $w_t^{i,l},w_t^{i,u}$ denote the current lower and upper bounds on $z_s^i$ and $w_t^i$, respectively (initially $0$ and $1$), so that each reverse convex constraint $z(1-z)\le 0$ is replaced by its linear secant relaxation over $[z^l,z^u]$.
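The last two constraint families relax each reverse convex constraint $z(1-z)\le 0$ by replacing $z^2$ with its linear secant over the current bounds $[z^l,z^u]$: since $z^2$ is convex, the secant overestimates it on that interval, so the relaxed left-hand side underestimates $z-z^2$ and the feasible set is enlarged. A quick Python check of this one-dimensional fact (illustrative only, names ours):

```python
def secant_lhs(z, zl, zu):
    """Relaxed left-hand side: z - [zl^2 + (zu+zl)(z-zl)] replaces z - z^2."""
    return z - (zl * zl + (zu + zl) * (z - zl))

zl, zu = 0.2, 0.9
for k in range(101):
    z = zl + (zu - zl) * k / 100
    # the secant of z^2 overestimates z^2 on [zl, zu] by convexity,
    # hence the relaxed LHS underestimates z - z^2 there
    assert secant_lhs(z, zl, zu) <= z - z * z + 1e-12
```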
4. Branch and Bound Algorithm
In this section, a branch and bound algorithm is developed to solve $(\mathrm{P}(H^0))$ based on the preceding convex relaxation. The algorithm solves a sequence of convex relaxation programming problems over the initial rectangle $H^0$ or subrectangles $H$ of $H^0$ in order to find a global solution.
4.1. Rectangular Partition Rule
The critical element in guaranteeing convergence to a global maximum of $(\mathrm{P}(H^0))$ is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule, which is sufficient to ensure convergence since it shrinks the selected intervals to singletons along any infinite branch of the branch and bound tree. At each stage of the branch and bound algorithm, $H^0$ or a subrectangle of $H^0$ is subdivided into two rectangles by the branching process. Assume without loss of generality that the rectangle to be divided is $H=\{(t,s)\in\mathbb{R}^{2p}\mid l_i\le t_i\le u_i,\ L_i\le s_i\le U_i,\ i=1,\dots,p\}$. The branching rule is as follows.
Let $q\in\arg\max\{U_i-L_i\mid i=1,\dots,p\}$.
Let $\gamma=(L_q+U_q)/2$.
Let

(9)
$$H^1=\left\{(t,s)\in\mathbb{R}^{2p}\;\middle|\;l_i\le t_i\le u_i,\ i=1,\dots,p;\ L_i\le s_i\le U_i,\ i\ne q;\ L_q\le s_q\le\gamma\right\},$$
$$H^2=\left\{(t,s)\in\mathbb{R}^{2p}\;\middle|\;l_i\le t_i\le u_i,\ i=1,\dots,p;\ L_i\le s_i\le U_i,\ i\ne q;\ \gamma\le s_q\le U_q\right\}.$$
It follows easily that this branching process is exhaustive.
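Following the stated rule, only the longest s-edge is bisected at each step. The Python sketch below (illustrative only, names ours) implements this rule and demonstrates exhaustiveness on a small example: repeated bisection drives the edge widths toward zero.

```python
def bisect(t_lo, t_hi, s_lo, s_hi):
    """Bisect the rectangle along its longest s-edge (the rule of Section 4.1)."""
    q = max(range(len(s_lo)), key=lambda i: s_hi[i] - s_lo[i])
    gamma = 0.5 * (s_lo[q] + s_hi[q])
    hi1 = s_hi[:]; hi1[q] = gamma   # H1: L_q <= s_q <= gamma
    lo2 = s_lo[:]; lo2[q] = gamma   # H2: gamma <= s_q <= U_q
    return (t_lo, t_hi, s_lo, hi1), (t_lo, t_hi, lo2, s_hi)

# Always descend into H1: the widest s-edge shrinks geometrically.
H = ([0.0], [1.0], [1.0, 2.0], [3.0, 8.0])
for _ in range(40):
    H, _ = bisect(*H)
width = max(b - a for a, b in zip(H[2], H[3]))
assert width < 1e-5
```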
We are now ready to formally state the overall algorithm for globally solving problem (P). The basic steps of the algorithm are summarized in the following statement.
4.2. Algorithm Statement
Step 1 (initialization).
Given a convergence tolerance ε>0. Set the iteration counter k=0, the set of all active nodes Ω0={H0}, the lower bound LB=-∞, and the set of feasible points F=∅.
Solve the convex relaxation programming problem (RCP(H0)) and obtain the optimal value μ(H0) and an optimal solution x(H0). Set UB0=μ(H0), xc=x(H0), and LB=h(xc). Update F=F⋃{xc}.
If UB0-LB≤ε, then stop with xc which is the globally ε-optimal solution and LB is the optimal value to problem (P). Otherwise, proceed to Step 2.
Step 2 (branching).
According to the above selected branching rule, partition Hk into two new rectangles. Call the set of new partition rectangles Θk.
For each H∈Θk, solve the convex programming problem (RCP(H)) to obtain the optimal value μ(H) and an optimal solution x(H). If μ(H)<LB, then remove the corresponding subrectangle H from Θk, that is, set Θk=Θk∖{H}, and skip to the next element of Θk.
If Θk=∅, go to Step 3. Otherwise, update $F=F\cup\{x(H)\mid H\in\Theta_k\}$ and set $LB=\max_{x\in F}h(x)$; denote the best known feasible point by $x^c=\arg\max_{x\in F}h(x)$.
Step 3 (updating upper bound).
Denote the remaining partition set by

(10)
$$\Omega_{k+1}=(\Omega_k\setminus H_k)\cup\Theta_k,$$

giving a new upper bound $UB_k=\max_{H\in\Omega_{k+1}}\mu(H)$.
Step 4 (convergence check).
Fathom any nonimproving nodes by setting $\Omega_{k+1}=\Omega_{k+1}\setminus\{H:\mu(H)-LB\le\varepsilon,\ H\in\Omega_{k+1}\}$. If $\Omega_{k+1}=\emptyset$, then stop: $LB$ is the optimal value and $x^c$ is a global $\varepsilon$-optimal solution for problem (P). Otherwise, set $k=k+1$ and return to Step 2.
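Steps 1-4 follow the generic best-first branch and bound template. As a runnable illustration only, the Python sketch below maximizes a univariate function, with a simple Lipschitz-based upper bound standing in for the convex relaxation (RCP(H)); the function, the bound, and all names are our assumptions, not the authors' implementation (which was in Fortran 95).

```python
import math

def branch_and_bound(f, lo, hi, upper, eps=1e-4, max_iter=100000):
    """Best-first interval branch and bound mirroring Steps 1-4 of Section 4.2."""
    xc = 0.5 * (lo + hi)                 # Step 1: initial incumbent
    LB = f(xc)                           # best known lower bound
    nodes = [(lo, hi)]                   # active node list (Omega_k)
    for _ in range(max_iter):
        # Step 3: current upper bound over all active nodes
        UB, k = max((upper(a, b), i) for i, (a, b) in enumerate(nodes))
        if UB - LB <= eps:               # Step 4: convergence check
            return xc, LB, UB
        a, b = nodes.pop(k)              # Step 2: branch the node attaining UB
        m = 0.5 * (a + b)
        for c, d in ((a, m), (m, b)):
            x = 0.5 * (c + d)
            if f(x) > LB:                # improve the incumbent
                LB, xc = f(x), x
            if upper(c, d) - LB > eps:   # fathom nonimproving children
                nodes.append((c, d))
        if not nodes:                    # everything fathomed: incumbent is eps-optimal
            return xc, LB, LB + eps
    return xc, LB, UB

f = lambda x: math.sin(3.0 * x) + math.cos(5.0 * x)
lip = 8.0                                # |f'(x)| <= 3 + 5, a crude Lipschitz constant
upper = lambda a, b: f(0.5 * (a + b)) + lip * 0.5 * (b - a)
x_best, LB, UB = branch_and_bound(f, 0.0, 2.0, upper, eps=1e-3)
assert UB - LB <= 1e-3                   # terminates with a certified eps-optimal value
```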
4.3. Convergence Analysis
Next, we will give the convergence properties of the algorithm.
Theorem 4.
(a) If the algorithm is finite, then, upon termination, xk is a global ε-optimal solution to problem (P).
(b) If the algorithm is infinite, then every accumulation point x* of the infinite sequence {xk} of feasible solutions to problem (P) generated by the algorithm is a global optimal solution to problem (P).
Proof.
(a) If the algorithm is finite, then it terminates at some iteration k, k≥1. Upon termination, since xk is found by solving the relaxation (RCP(H)) for some H⊆H0 and satisfies xk∈X, xk is a feasible solution to problem (P). Upon termination of the algorithm,
(11)
$$UB_k-\sum_{i=1}^{p}\frac{n_i(x^k)}{d_i(x^k)}\le\varepsilon$$

is satisfied. It is easy to show by standard arguments for branch and bound algorithms that

(12)
$$UB_k\ge v.$$

Since $x^k$ is a feasible solution for problem (P), we have

(13)
$$\sum_{i=1}^{p}\frac{n_i(x^k)}{d_i(x^k)}\le v.$$

Taken together, the three previous statements imply that

(14)
$$v\le UB_k\le\sum_{i=1}^{p}\frac{n_i(x^k)}{d_i(x^k)}+\varepsilon\le v+\varepsilon.$$

Therefore,

(15)
$$v-\varepsilon\le\sum_{i=1}^{p}\frac{n_i(x^k)}{d_i(x^k)}\le v,$$

and the proof of part (a) is complete.
(b) Assume that the algorithm is infinite. By [21], a sufficient condition for a global optimization algorithm to converge to the global maximum is that the bounding operation be consistent and the selection operation be bound improving.
A bounding operation is called consistent if at every step any unfathomed partition can be further refined and if any infinitely decreasing sequence of successively refined partition elements satisfies
(16)
$$\lim_{k\to\infty}(UB_k-LB)=0,$$
where $UB_k$ is the computed upper bound at iteration $k$ and $LB$ is the best lower bound at iteration $k$; the two need not be attained in the same subrectangle. We now show that (16) holds.
Since the employed subdivision process is rectangular bisection, the process is exhaustive. Consequently, from the relation $v(\mathrm{RCP}(H))\ge v(\mathrm{P}(H))$, where $v(\mathrm{RCP}(H))$ and $v(\mathrm{P}(H))$ denote the optimal values of problems (RCP(H)) and (P) over the rectangle $H$, respectively, together with the exhaustiveness of the subdivision, relation (16) holds; this implies that the employed bounding operation is consistent.
A selection operation is called bound improving if at least one partition element where the actual upper bound is attained is selected for further partition after a finite number of refinements. Clearly, the employed selection operation is bound improving because the partition element where the actual upper bound is attained is selected for further partition in the immediately following iteration.
From the above discussion, the branch and bound algorithm proposed in this paper is convergent to the global maximum of (P).
5. Computational Results
The branch and bound algorithm was coded in Fortran 95, and the numerical experiments were conducted on a Pentium IV microcomputer. Although the test problems have a relatively small number of variables, they are quite challenging. For all test problems, the numerical results show that the proposed global optimization algorithm solves them efficiently. The computational results are summarized in Tables 1 and 2.
Table 1: Computational results for Problem 5.

| n   | p | Iter | Node | Time | p | Iter | Node | Time | p | Iter | Node | Time |
|-----|---|------|------|------|---|------|------|------|---|------|------|------|
| 50  | 2 | 11   | 3    | 0    | 4 | 59   | 10   | 0    | 6 | 103  | 17   | 0    |
| 100 | 2 | 13   | 5    | 0    | 4 | 67   | 11   | 0    | 6 | 145  | 21   | 0    |
| 150 | 2 | 16   | 3    | 0    | 4 | 69   | 11   | 0    | 6 | 180  | 19   | 0    |
| 200 | 2 | 26   | 7    | 0    | 4 | 77   | 13   | 0    | 6 | 185  | 23   | 0    |
Table 2: Computational results for Problem 6.

| n   | p | Iter | Node | Time | p | Iter | Node | Time | p | Iter | Node | Time |
|-----|---|------|------|------|---|------|------|------|---|------|------|------|
| 50  | 2 | 14   | 2    | 0    | 4 | 91   | 13   | 0    | 6 | 117  | 21   | 0    |
| 100 | 2 | 17   | 3    | 0    | 4 | 75   | 15   | 0    | 6 | 136  | 22   | 0    |
| 150 | 2 | 26   | 4    | 0    | 4 | 79   | 17   | 0    | 6 | 150  | 29   | 0    |
| 200 | 2 | 29   | 7    | 0    | 4 | 87   | 23   | 0    | 6 | 212  | 33   | 0    |
In Tables 1 and 2, the following notation is used in the column headers: Iter: the number of algorithm iterations; Node: the maximal number of active nodes needed; Time: the execution time in seconds (a very short time, e.g., less than 0.1 second, is recorded as 0).
We test our algorithm on the following two types of sum of ratios problems, whose data are generated randomly.
Problem 5.
Consider
(17)
$$\max\ \sum_{i=1}^{p}\frac{(1/T_i)\sum_{t=1}^{T_i}\left|\sum_{j=1}^{n}\left(r_{jt}^i-r_j^i\right)x_j\right|}{(1/T_i)\sum_{t=1}^{T_i}\left|\sum_{j=1}^{n}\left(\tilde r_{jt}^i-\tilde r_j^i\right)x_j\right|}\qquad\text{s.t.}\quad \sum_{j=1}^{n}x_j=1,\quad 0\le x_j\le 1,\ j=1,\dots,n,$$
where $T_i$ is an integer (e.g., $T_i$ is taken to be $n$), $r_{jt}^i$ is generated randomly in the interval $[0,1]$, and $r_j^i=\sum_{t=1}^{T_i}r_{jt}^i/T_i$, while $\tilde r_{jt}^i$, $\tilde r_j^i$ are the corresponding values calculated by an appropriate factor model [22].
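The scenario part of this instance generation can be sketched in a few lines of Python (illustrative only; the factor-model returns $\tilde r$ of [22] are omitted, and the names `problem5_scenarios` and `abs_deviation` are ours):

```python
import random

def problem5_scenarios(n, T, rng):
    """Scenario returns r[j][t], uniform on [0,1], and their per-asset means."""
    r = [[rng.random() for _ in range(T)] for _ in range(n)]
    rbar = [sum(row) / T for row in r]
    return r, rbar

def abs_deviation(x, r, rbar):
    """(1/T) * sum_t | sum_j (r[j][t] - rbar[j]) * x[j] |, one ratio's numerator."""
    n, T = len(r), len(r[0])
    return sum(abs(sum((r[j][t] - rbar[j]) * x[j] for j in range(n)))
               for t in range(T)) / T

rng = random.Random(42)
r, rbar = problem5_scenarios(n=4, T=6, rng=rng)
x = [0.25] * 4                       # feasible portfolio: weights sum to 1
dev = abs_deviation(x, r, rbar)
assert dev >= 0.0
```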
Problem 6.
Consider
(18)
$$\max\ \sum_{i=1}^{p}\frac{\sum_{s=1}^{S_i}\alpha_s^i\left|\sum_{j=1}^{n}n_{js}^i x_j+n_{0s}^i\right|}{\sum_{t=1}^{T_i}\beta_t^i\left|\sum_{j=1}^{n}d_{jt}^i x_j+d_{0t}^i\right|}\qquad\text{s.t.}\quad \sum_{i=1}^{j}x_i\le j,\ j=1,2,\dots,n;\quad x_i\ge 0,\ i=1,\dots,n,$$
where $T_i$ and $S_i$ are integers (e.g., both are taken to be $n$) and $\alpha_s^i$, $\beta_t^i$, $n_{0s}^i$, and $d_{0t}^i$ are generated randomly in the intervals $[0,0.1]$, $[0,0.1]$, $[0,1]$, and $[0,1]$, respectively; $n_{js}^i$ and $d_{jt}^i$ are generated randomly according to the normal distribution $N(0,1)$.
We solved test Problems 5 and 6 with the proposed algorithm, using convergence tolerance ε=0.01; the corresponding numerical results are listed in Tables 1 and 2, respectively. Each entry is an average over 10 randomly generated test problems. Tables 1 and 2 show how the average results vary as n ranges over {50,100,150,200} and p over {2,4,6}. The tables indicate that the algorithm performs better for smaller p, so the size of p is the main factor affecting performance; this is mainly because branching takes place in a space whose dimension is proportional to p. The time also increases with n, but far less sharply than with p.
6. Conclusion
We have presented and validated a branch and bound algorithm for globally solving the sum of ratios problem (P), in which each term of the objective function is a ratio of two functions, each a weighted sum of absolute values of affine functions. The algorithm computes upper bounds by solving convex programming problems derived from the concave envelope of the objective function. Convergence of the algorithm is proved, and computational results for several test problems demonstrate the feasibility and efficiency of the proposed algorithm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.
References
[1] H. Konno, P. T. Thach, and H. Tuy.
[2] Y. Almogy and O. Levin, "Parametric analysis of a multistage stochastic shipping problem."
[3] J. Majhi, R. Janardan, M. Smid, and P. Gupta, "On some geometric optimization problems in layered manufacturing."
[4] H. Konno and H. Watanabe, "Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach."
[5] H. Konno and M. Inori, "Bond portfolio optimization by bilinear fractional programming."
[6] I. M. Stancu-Minasian.
[7] T. Kuno, "A branch-and-bound algorithm for maximizing the sum of several linear ratios."
[8] T. Kuno, "A revision of the trapezoidal branch-and-bound algorithm for linear sum-of-ratios problems."
[9] H. P. Benson, "A simplicial branch and bound duality-bounds algorithm for the linear sum-of-ratios problem."
[10] H. P. Benson, "Branch-and-bound outer approximation algorithm for sum-of-ratios fractional programs."
[11] P.-P. Shen and C.-F. Wang, "Global optimization for sum of linear ratios problem with coefficients."
[12] R. W. Freund and F. Jarre, "Solving the sum-of-ratios problem by an interior-point method."
[13] Y. Dai, J. Shi, and S. Wang, "Conical partition algorithm for maximizing the sum of dc ratios."
[14] Y. Pei and D. Zhu, "Global optimization method for maximizing the sum of difference of convex functions ratios over nonconvex region."
[15] H. P. Benson, "Using concave envelopes to globally solve the nonlinear sum of ratios problem."
[16] H. P. Benson, "Global optimization algorithm for the nonlinear sum of ratios problem."
[17] R. Yamamoto and H. Konno, "An efficient algorithm for solving convex-convex quadratic fractional problems."
[18] P. Shen and L. Jin, "Using conical partition to globally maximizing the nonlinear sum of ratios."
[19] H. Jiao and P. Shen, "A note on the paper global optimization of nonlinear sum of ratios."
[20] L. A. Wolsey.
[21] R. Horst and H. Tuy.
[22] A. W. Lo and A. C. MacKinlay, "Maximizing predictability in the stock and bond markets."