Global Optimization for a Class of Nonlinear Sum of Ratios Problem

We present a branch and bound algorithm for globally solving the sum of ratios problem, in which each term of the objective function is a ratio of two functions, each a sum of absolute values of affine functions. The problem has important applications in financial optimization, yet global optimization algorithms for it remain rare in the literature. In our algorithm, the branch and bound search uses rectangular partitioning and takes place in a space whose dimension is typically much smaller than that of the decision variables. Convergence of the algorithm is shown, and numerical examples are given to illustrate its performance.


Introduction
The sum of ratios problem has attracted considerable attention in the literature because of its many practical applications in fields such as transportation planning, government contracting, economics, and finance [1][2][3][4][5][6]. From a research point of view, the sum of ratios problem also poses significant theoretical and computational challenges, mainly because it generally possesses multiple local optima that are not globally optimal.
Many solution algorithms have been proposed for globally solving the sum of linear ratios problem with linear constraints (see, e.g., [7][8][9][10][11]). Recently, some algorithms have been developed for globally solving nonlinear sum of ratios problems; for instance, Freund and Jarre [12] proposed an interior-point approach for convex-concave ratios with convex constraints; Dai et al. [13] and Pei and Zhu [14] presented two algorithms for sums of d.c. ratios; Benson [15, 16] gave two branch and bound algorithms for concave-convex ratios; and Yamamoto and Konno [17] also proposed an algorithm for this class of problems. Compared with these methods, the algorithm presented here has several features. It uses rectangles rather than simplices as partition elements, so that branching takes place only in a space whose dimension is typically much smaller than that of the decision variables, although the search is carried out mainly in a higher-dimensional space. It employs a simple and standard bisection rule, which is sufficient to ensure convergence since the partition rule is exhaustive. Finally, the upper bounding subproblems are convex programming problems that differ from each other only in the coefficients of certain linear constraints and in the bounds that describe their associated rectangles.
The remainder of this paper is organized as follows. In Section 2, an equivalent problem of problem (P) is given. Next, in Section 3, we construct a function overestimating the value of the sum of ratios. In Section 4, the proposed branch and bound algorithm is described, and the convergence of the algorithm is established. Numerical results are reported in Section 5, and a summary is given in the last section.

Equivalent Problem
In order to globally solve problem (P), it can first be converted into an equivalent nonconvex programming problem (P1) as follows.

Proof. The proof of this result follows easily from the definitions of problems (P) and (P1) and is therefore omitted.
As is well known, a complementarity condition, requiring the product of two nonnegative bounded variables to vanish, can be represented as a system of linear inequalities by introducing a zero-one integer variable [20]; the binary requirement itself can then be rewritten as a pair of continuous constraints. The remaining complementarity conditions are treated in the same way. With these definitions, problem (P2) is equivalent to the following problem:
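The explicit inequalities of the transformation did not survive extraction; as a hedged sketch, the standard big-M linearization [20] of a complementarity condition introduces a binary variable and replaces the product condition u·v = 0 (with 0 ≤ u, v ≤ M) by u ≤ Mω, v ≤ M(1 − ω). The names `complementarity_feasible`, `u`, `v`, `M`, and `w` below are illustrative, not taken from the original:

```python
def complementarity_feasible(u, v, M):
    """Check whether (u, v) satisfies u <= M*w and v <= M*(1 - w)
    for some binary w -- the standard big-M encoding of u*v = 0,
    assuming 0 <= u, v <= M."""
    return any(u <= M * w and v <= M * (1 - w) for w in (0, 1))

# The encoding admits a binary w exactly when u = 0 or v = 0:
assert complementarity_feasible(0.0, 3.0, M=10.0)
assert complementarity_feasible(3.0, 0.0, M=10.0)
assert not complementarity_feasible(2.0, 3.0, M=10.0)
```

The equivalence holds only when M is a valid upper bound on both variables, which is why the transformation above requires bounded variables.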

Convex Relaxation Programming
The principal construct in the development of a solution procedure for (P (H 0 )) is a convex relaxation of (P (H 0 )) that yields an upper bound for this problem, as well as for its partitioned subproblems. Such a convex relaxation can be realized by using the concave envelope of the objective function of (P (H 0 )) over an associated rectangle.
To help obtain convex relaxations, the concept of a concave envelope may be defined as follows.
The following theorem is obtained from the definition above.

Theorem 3. Consider a rectangle M of R 2 on which the second coordinate is positive, and define the function f(y 1 , y 2 ) = y 1 /y 2 ; then the concave envelope f M of f : M → R admits an explicit formula.

Proof. This result is essentially shown in [15] and is therefore omitted.
In order to obtain an upper bound on the optimal value of (P (H 0 )) by solving a convex program, we can utilize the convex relaxation (RCP (H 0 )).
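The envelope formula itself did not survive extraction; as a simpler, hedged illustration of how rectangle bounds yield valid upper bounds, note that y 1 /y 2 is monotone in each argument on a rectangle with positive second coordinates, so its maximum is attained at a vertex. The constant vertex bound below is weaker than the concave envelope of [15], but it shares the key property of overestimating the ratio over the rectangle; the function name and variables are our own:

```python
def ratio_upper_bound(l1, u1, l2, u2):
    """Valid upper bound for f(y1, y2) = y1 / y2 over the rectangle
    [l1, u1] x [l2, u2] with l2 > 0.  On such a rectangle f is
    monotone in each argument separately, so its maximum is attained
    at one of the four vertices."""
    assert l2 > 0
    return max(l1 / l2, l1 / u2, u1 / l2, u1 / u2)

# Sampled values of the ratio never exceed the bound:
bound = ratio_upper_bound(-1.0, 2.0, 0.5, 3.0)
for y1 in [-1.0, 0.0, 1.3, 2.0]:
    for y2 in [0.5, 1.0, 2.2, 3.0]:
        assert y1 / y2 <= bound + 1e-12
```

A constant over-estimator like this is enough to drive a convergent branch and bound scheme, though the tighter concave envelope of [15] yields far better bounds per node.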

Branch and Bound Algorithm
In this section, a branch and bound algorithm is developed to solve (P (H 0 )) based on the convex relaxation method above. The algorithm solves a sequence of convex relaxation programming problems over the initial rectangle H 0 or its subrectangles in order to find a global solution.
4.1. Rectangular Partition Rule. The critical element in guaranteeing convergence to a global maximum of (P (H 0 )) is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule, which is sufficient to ensure convergence since it forces all intervals to shrink to singletons along any infinite branch of the branch and bound tree. At each stage of the algorithm, H 0 or one of its subrectangles is subdivided into two rectangles by the branching process. Assume without loss of generality that the rectangle to be divided is described by lower and upper bounds on each of its coordinates. The branching rule is as follows.
(i) Select the coordinate along which the rectangle has its longest edge.

(ii) Bisect that edge at its midpoint.

(iii) Replace the rectangle by the two resulting subrectangles.

It follows easily that this branching process is exhaustive.
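A minimal sketch of longest-edge bisection, one common form of the standard rule described above (the function name and the list-of-intervals rectangle representation are our own, not from the original):

```python
def bisect(rect):
    """Split a rectangle (a list of (low, high) intervals, one per
    coordinate) into two by halving its longest edge.  Repeated
    application is exhaustive: along any infinite branch of the
    branch and bound tree, every edge length shrinks to zero."""
    j = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    lo, hi = rect[j]
    mid = (lo + hi) / 2.0
    left, right = list(rect), list(rect)
    left[j] = (lo, mid)
    right[j] = (mid, hi)
    return left, right

r1, r2 = bisect([(0.0, 4.0), (0.0, 1.0)])
assert r1 == [(0.0, 2.0), (0.0, 1.0)]
assert r2 == [(2.0, 4.0), (0.0, 1.0)]
```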
We are now ready to formally state the overall algorithm for globally solving problem (P).The basic steps of the algorithm are summarized in the following statement.
If UB 0 − LB ≤ ε, then stop; the incumbent solution is a globally ε-optimal solution and LB is the optimal value of problem (P). Otherwise, proceed to Step 2.
Step 2 (branching). According to the branching rule above, partition the selected rectangle into two new rectangles, and let Θ k denote the set of new partition rectangles.

4.2. Convergence Analysis. Next, we give the convergence properties of the algorithm.

Theorem 4. (a) If the algorithm is finite, then, upon termination, the incumbent solution is a global ε-optimal solution to problem (P).

(b) If the algorithm is infinite, then every accumulation point of the infinite sequence of feasible solutions to problem (P) generated by the algorithm is a globally optimal solution to problem (P).
Proof. (a) If the algorithm is finite, then it terminates in some Step k, k ≥ 1. Upon termination, since the incumbent solution is found by solving problem (P (H)) for some rectangle H ⊆ H 0 , it is a feasible solution to problem (P). Upon termination of the algorithm, the stopping condition UB − LB ≤ ε is satisfied. It is easy to show by standard arguments for branch and bound algorithms that UB overestimates the optimal value of problem (P). Since the incumbent is feasible for problem (P), its objective value is at most the optimal value. Taken together, these statements imply that the incumbent value lies within ε of the optimal value. Therefore the incumbent is a global ε-optimal solution, and the proof of part (a) is complete.

(b) Assume that the algorithm is infinite. By [21], a sufficient condition for a global optimization algorithm to converge to the global maximum is that the bounding operation be consistent and the selection operation be bound improving.
A bounding operation is called consistent if, at every step, any unfathomed partition element can be further refined, and if the gap UB k − LB k vanishes in the limit along any infinitely decreasing sequence of successively refined partition elements, where UB k is the upper bound computed in Step k and LB k is the best lower bound at iteration k, not necessarily occurring inside the same subrectangle as UB k . We now show that this condition holds. Since the employed subdivision process is rectangle bisection, the process is exhaustive. Consequently, from the relation v(P(H)) ≤ v(RCP(H)), where v(P(H)) and v(RCP(H)) denote the optimal values of problems (P) and (RCP (H 0 )) over the rectangle H, respectively, the limit condition holds, and this implies that the employed bounding operation is consistent.
A selection operation is called bound improving if at least one partition element where the actual upper bound is attained is selected for further partition after a finite number of refinements.Clearly, the employed selection operation is bound improving because the partition element where the actual upper bound is attained is selected for further partition in the immediately following iteration.
From the above discussion, the branch and bound algorithm proposed in this paper converges to the global maximum of (P).
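To make the overall scheme concrete, the following hedged toy implementation maximizes a single ratio y 1 /y 2 over a rectangle, combining a vertex-based upper bound, midpoint lower bounds, best-first (bound-improving) node selection, and exhaustive longest-edge bisection. It is a simplified stand-in for the algorithm of this paper, not a reimplementation of it:

```python
import heapq

def solve_max_ratio(l1, u1, l2, u2, eps=1e-6):
    """Toy branch and bound maximizing y1/y2 on [l1,u1] x [l2,u2], l2 > 0.
    Upper bound: vertex maximum (valid since y1/y2 is monotone in each
    argument on the box); lower bound: value at the rectangle midpoint;
    best-first selection; longest-edge bisection."""
    def ub(r):
        (a, b), (c, d) = r
        return max(a / c, a / d, b / c, b / d)
    def mid_val(r):
        (a, b), (c, d) = r
        return ((a + b) / 2) / ((c + d) / 2)

    root = ((l1, u1), (l2, u2))
    best = mid_val(root)                     # incumbent lower bound LB
    heap = [(-ub(root), root)]               # max-heap via negation
    while heap:
        neg_ub, r = heapq.heappop(heap)
        if -neg_ub - best <= eps:            # UB - LB <= eps: stop
            return best
        # bisect the longest edge of r
        j = 0 if r[0][1] - r[0][0] >= r[1][1] - r[1][0] else 1
        lo, hi = r[j]
        for part in ((lo, (lo + hi) / 2), ((lo + hi) / 2, hi)):
            child = list(r)
            child[j] = part
            child = tuple(child)
            best = max(best, mid_val(child))
            if ub(child) > best + eps:       # fathom hopeless children
                heapq.heappush(heap, (-ub(child), child))
    return best
```

For example, on [1, 2] × [1, 3] the maximum of y 1 /y 2 is 2/1 = 2, and `solve_max_ratio(1.0, 2.0, 1.0, 3.0)` returns a value within the tolerance of 2. The best-first selection rule makes the scheme bound improving, and the bisection rule makes it exhaustive, mirroring the two conditions used in the convergence proof above.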

Computational Results
We conducted numerical experiments with the branch and bound algorithm on a Pentium IV microcomputer; the algorithm was coded in Fortran 95. Although the test problems have a relatively small number of variables, they are quite challenging. For all test problems, the numerical results show that the proposed global optimization algorithm solves them efficiently. Computational results are reported in Tables 1 and 2.
In Tables 1 and 2, the following notation is used for the column headers: Iter: the number of algorithm iterations; Max-node: the maximal number of active nodes required; Time: the execution time in seconds, recorded as 0 when it is very short (e.g., below 0.1 second).
We test our algorithm on the following two types of randomly generated sum of ratios problems.
To solve the above test Problems 5 and 6, we applied the proposed algorithm with convergence tolerance ε = 0.01; the corresponding numerical results are listed in Tables 1 and 2, respectively. Averages are obtained by running the algorithm on 10 test problems. Tables 1 and 2 show the variation of the average computational results as the number of variables was varied over {50, 100, 150, 200} and the number of ratios over {2, 4, 6}. From Tables 1 and 2 we see that the algorithm works better for a smaller number of ratios, which is the main factor affecting its performance; this is mainly because branching takes place in a space whose dimension is proportional to the number of ratios. The running time also increases with the number of variables, but not as sharply as with the number of ratios.

Conclusion
We have presented and validated a branch and bound algorithm for globally solving the sum of ratios problem (P), in which each term of the objective function is a ratio of two functions, each a sum of absolute values of affine functions. The algorithm computes upper bounds by solving convex programming problems derived from the concave envelope of the objective function. Convergence of the algorithm is proved, and computational results for several test problems demonstrate its feasibility and efficiency.

Table 1 :
Computational results for Problem 5.