An Efficient Algorithm for Solving a Class of Fractional Programming Problems

This paper presents an efficient branch-and-bound algorithm for globally solving a class of fractional programming problems, which are widely used in communication engineering, financial engineering, portfolio optimization, and other fields. Because this class of fractional programming problems is nonconvex and generally possesses multiple locally optimal solutions that are not globally optimal, it poses significant theoretical and computational difficulties. In this paper, we first propose a novel linearizing method by which the initial fractional programming problem can be converted into a linear relaxation programming problem. Secondly, based on the linear relaxation programming problem, a novel branch-and-bound algorithm is designed for this class of fractional programming problems, its global convergence is proved, and its computational complexity is analysed. Finally, numerical results are reported to demonstrate the feasibility and effectiveness of the algorithm.


Introduction
In this paper, we consider the following class of fractional programming problems (FP): where p and m are arbitrary natural numbers, x is an N-dimensional variable, and for any x ∈ D ⊂ R^N, f_jt(x) and h_jt(x) (j = 1, . . . , p and t = 1, . . . , m) are affine functions such that f_jt(x) > 0 and h_jt(x) > 0. It should be pointed out that, in portfolio optimization, the variable x refers to the amount of investment; in computer vision, the variable x refers to the mapping space; in communication engineering, the variable x refers to the input signal. During the past decades, as special cases of the problem (FP), the linear sum-of-ratios problem and the linear multiplicative programming problem have attracted considerable attention from practitioners and researchers. This is because these two problems arise in very important applications such as chance optimization, portfolio optimization, engineering optimization, and data envelopment analysis [1]. In addition, they generally possess multiple local optima that are not globally optimal. The problem (FP) investigated in this paper can be regarded as an extension of the linear sum-of-ratios problem and the linear multiplicative programming problem, so it has a broader range of applications than either of them, and it poses more complex theoretical and computational difficulties.
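Consistent with the stated special cases, the objective of (FP) can be read as a sum of products of affine ratios, g(x) = Σ_{j=1,...,p} Π_{t=1,...,m} f_jt(x)/h_jt(x). The following sketch evaluates such an objective; the coefficient layout (arrays C, d, E, q) is a hypothetical convention for illustration, not the paper's notation:

```python
import numpy as np

# Hypothetical layout: C[j, t] and d[j, t] give f_jt(x) = C[j, t] @ x + d[j, t];
# E[j, t] and q[j, t] give h_jt(x) in the same way.
def fp_objective(x, C, d, E, q):
    """Evaluate g(x) = sum_j prod_t f_jt(x) / h_jt(x)."""
    f = np.einsum('jtn,n->jt', C, x) + d   # numerator values, shape (p, m)
    h = np.einsum('jtn,n->jt', E, x) + q   # denominator values, shape (p, m)
    assert (f > 0).all() and (h > 0).all(), "positivity is assumed on D"
    return float(np.prod(f / h, axis=1).sum())
```

With m = 1 this reduces to the linear sum-of-ratios problem, and with all h_jt ≡ 1 to the linear multiplicative programming problem, matching the two special cases named above.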
Over the past two decades, a variety of algorithms have been developed for solving special forms of the problem (FP). For example, for the linear sum-of-ratios problem, several algorithms are available, such as simplex and parametric simplex methods [2,3], an image space approach [4], branch-and-bound methods [5][6][7][8][9][10][11][12], trapezoidal algorithms [13,14], and a monotonic optimization algorithm [15]; for the linear multiplicative programming problem, some algorithms can also be found in the literature, such as branch-and-bound algorithms [16][17][18], a polynomial time approximation algorithm [19], outcome space algorithms [20,21], a level set algorithm [22], a heuristic method [23], and a monotonic optimization algorithm [15]. Furthermore, Jiao et al. [24,25] and Chen and Jiao [26] presented three different algorithms for solving the linear multiplicative programming problem. Recently, for several special forms of the problem (FP), Huang et al. [27], Jiao et al. [12], and Wang and Zhang [28] presented three different global optimization algorithms for the sum of linear ratios' problem; Jiao et al. [29] and Yin et al. [30] proposed two different outer space branch-and-bound algorithms for the generalized linear multiplicative programming problem; Jiao and Chen [31] and Jiao and Liu [32,33] gave three branch-reduction-bound algorithms for solving the quadratically constrained quadratic programming problem and the quadratically constrained sum of quadratic ratios' problem; Jiao et al. [34,35], Ghazi and Roubi [36], and Bennani et al. [37] presented four different algorithms for solving the generalized polynomial optimization problem and the generalized linear fractional programming problem.
In addition, several differential evolution algorithms [38][39][40], a novel gate resource allocation method using an improved PSO-based QEA [41], and an enhanced MSIQDE algorithm with novel multiple strategies [42] have also been proposed for solving global optimization problems including the problem (FP). To date, although some researchers have proposed algorithms for solving the linear sum-of-ratios problem, the linear multiplicative programming problem, or special forms of the problem (FP), to our knowledge little work has been done on globally solving the general form of the problem (FP) considered in this paper. The purpose of this paper is to develop an effective algorithm for globally solving all variants of the problem (FP). First of all, based on an equivalent transformation and the characteristics of the exponential and logarithmic functions, a novel linearizing method is proposed. By utilizing this method, we can convert the initial problem (FP) or any of its subproblems into a linear relaxation problem (LRP), whose solution can be brought arbitrarily close to the optimal solution of the problem (FP) by successive refinement of the partition. Secondly, based on the branch-and-bound framework, a novel branch-and-bound algorithm is constructed for globally solving all variants of the problem (FP), and the global convergence of the proposed algorithm is proved. In addition, the computational complexity of the algorithm is analysed and the maximum number of iterations is estimated, both for the first time. Finally, numerical experimental results demonstrate the feasibility and effectiveness of the proposed algorithm. The remaining sections of this paper are organized as follows. A new linearizing method is constructed for deriving the problem (LRP) of the problem (FP) in Section 2. Based on the branch-and-bound scheme and the constructed problem (LRP), a global optimization algorithm is established in Section 3, and its convergence is proved.
In Section 4, the computational complexity of the algorithm is analysed. In Section 5, numerical experiments are given to verify the feasibility and effectiveness of the proposed algorithm. Finally, some conclusions are given in Section 6.

Novel Linearizing Technique
In this section, we present a new linearizing method for constructing the problem (LRP). For each i = 1, . . . , N, we first solve the two linear programming problems min{x_i : x ∈ D} and max{x_i : x ∈ D}; collecting their optimal values yields an initial rectangle X^0, which contains the feasible region of the problem (FP).
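The 2N bounding LPs can be sketched as follows, assuming a polyhedral, bounded feasible region D = {x : Ax ≤ b} and using SciPy's LP solver (the function name `initial_rectangle` is illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(A, b, n):
    """Compute X^0 = [lo, hi] by minimizing and maximizing each coordinate
    x_i over D = {x : A x <= b} (assumed nonempty and bounded)."""
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        # min x_i over D gives the lower edge; max x_i (= -min -x_i) the upper
        lo[i] = linprog(e, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
        hi[i] = -linprog(-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
    return lo, hi
```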
Next, since f_jt(x) > 0 and h_jt(x) > 0 for all x ∈ X^0, the logarithms ln(f_jt(x)) and ln(h_jt(x)) are well defined. For any X = {x | x̲ ≤ x ≤ x̄} ⊆ X^0 and for each j = 1, . . . , p and t = 1, . . . , m, by the properties of the logarithmic functions ln(f_jt(x)) and ln(h_jt(x)) over X, we can derive the affine bounds stated in equations (3)-(7). Then, by the properties of the exponential function and combining equations (8)-(12), we can establish the corresponding linear relaxation problem (LRP) of the problem (FP) over X, where φ_j(x), φ_j^l, and K_j, j = 1, 2, . . . , p, are given by (9). Based on this construction, for any X ⊆ X^0, the global optimal value of the problem (LRP) provides a reliable upper bound for the global optimal value of the problem (FP) over X. Theorem 1 guarantees that Ug(x) approaches g(x) arbitrarily closely as the size of the rectangle X tends to zero.
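The key one-dimensional ingredient of such a relaxation is bounding ln(y) on an interval [l, u] by affine functions: since ln is concave, its chord is a valid underestimator and any tangent is a valid overestimator. A minimal sketch follows; the choice of the midpoint as tangent point is ours for illustration, not necessarily the paper's:

```python
import math

def log_affine_bounds(l, u):
    """Affine under/over-estimators of ln(y) on [l, u] with 0 < l < u.
    Returns (a_lo, b_lo, a_hi, b_hi) such that, for all y in [l, u],
    a_lo*y + b_lo <= ln(y) <= a_hi*y + b_hi."""
    k = (math.log(u) - math.log(l)) / (u - l)  # chord slope (underestimator)
    a_lo, b_lo = k, math.log(l) - k * l
    c = 0.5 * (l + u)                          # tangent point (a choice)
    a_hi, b_hi = 1.0 / c, math.log(c) - 1.0    # tangent line (overestimator)
    return a_lo, b_lo, a_hi, b_hi
```

Both bounds collapse onto ln(y) as u − l → 0, which is exactly the behavior the successive refinement of rectangles exploits.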
Proof. For any x ∈ X, without loss of generality, consider the jth term. Through computing, and using (17) and (20), we obtain (21). For each j ∈ {1, . . . , T}, since the function Δ²_{jt,1} is convex in f_jt(x), it attains its maximum value Δ²ᵐᵃˣ_{jt,1} at the endpoint f^l_jt or f^u_jt. Then, through computing, we get (28); similarly, we can prove (29). By (28) and (29), it follows that the corresponding differences tend to zero as the size of X tends to zero. Since exp(θ_j) is a continuous and bounded function of x, there exists some M > 0 such that exp(θ_j) ≤ M. Therefore, by (23) and (30), we obtain (31); by (30) and (31), we obtain (32); and by (21) and (32), we obtain (33). From the above discussion, the conclusion follows. □

Algorithm and Its Global Convergence
In this section, based on the above linear relaxation problem, a branch-and-bound algorithm is proposed for globally solving the problem (FP). We first adopt a maximum-edge rectangle bisection technique, which ensures the global convergence of the present algorithm. The detailed branching technique is described as follows. Suppose that X^k ⊆ X^0 is the rectangle selected for partitioning; we bisect X^k at the midpoint of its longest edge, so that X^k is subdivided into two subrectangles. Next, by solving a sequence of linear relaxation problems (LRP) of the problem (FP), the upper bound of the optimal value of the problem (FP) can be updated. Moreover, by detecting feasible points and computing their objective function values, the lower bound of the optimal value of the problem (FP) can be updated.
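The maximum-edge bisection rule above can be sketched as:

```python
import numpy as np

def bisect_longest_edge(lo, hi):
    """Split the rectangle [lo, hi] into two subrectangles at the midpoint
    of its longest edge."""
    i = int(np.argmax(hi - lo))        # coordinate of the longest edge
    mid = 0.5 * (lo[i] + hi[i])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[i], lo_right[i] = mid, mid
    return (lo, hi_left), (lo_right, hi)
```

Always splitting the longest edge guarantees that every nested sequence of subrectangles shrinks to a point, which is what the convergence proof relies on.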

The Proposed Branch-and-Bound Algorithm.
Let UB(X^k) and x^k = x(X^k) be the optimal value and the optimal solution of the problem (LRP) over X^k, respectively. The corresponding branch-and-bound algorithm is given as follows.
Step 2. Let LB_k = LB_{k−1}. Use the selected branching method to subdivide the rectangle X^{k−1} into two new subrectangles X^{k,1} and X^{k,2}, and let Λ = Λ ∪ {X^{k−1}}.
Step 3. For each subrectangle X^{k,t}, where t = 1 and 2, solve the problem (LRP) to get x^{k,t} and UB(X^{k,t}), and let LB_k = max{LB_k, g(x^{k,t})}.
If the midpoint x^{mid} of X^{k,t} (t ∈ {1, 2}) is feasible for the problem (FP), let LB_k = max{LB_k, g(x^{mid})}, and let x^k be the best feasible solution satisfying LB_k = g(x^k).
Step 5. Let UB_k = max{UB(X) | X ∈ Ω_k}, and let X^k ∈ Ω_k satisfy UB_k = UB(X^k). If UB_k − LB_k ≤ ε, then the algorithm terminates and x^k is a global ε-optimal solution of the problem (FP). Otherwise, let k = k + 1, and return to Step 2.
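Steps 2-5 can be sketched as the following loop for a maximization problem. Here `solve_lrp` stands in for the linear relaxation solver and `g` for the true objective; both are placeholders, and for brevity the sketch evaluates `g` at box midpoints without the feasibility check that Step 4 of the paper performs:

```python
import numpy as np

def branch_and_bound(lo0, hi0, solve_lrp, g, eps=1e-2, max_iter=10000):
    """Skeleton of the branch-and-bound loop: keep active boxes with their
    relaxation upper bounds, split the box with the largest bound, and
    update the incumbent lower bound from box midpoints."""
    ub0, _ = solve_lrp(lo0, hi0)
    active = [(ub0, lo0, hi0)]
    best = 0.5 * (lo0 + hi0)
    LB, UB = g(best), ub0
    for _ in range(max_iter):
        if not active:
            break
        active.sort(key=lambda box: box[0])
        UB, lo, hi = active.pop()               # Step 5: largest upper bound
        if UB - LB <= eps:                      # termination test
            break
        i = int(np.argmax(hi - lo))             # Step 2: bisect longest edge
        mid = 0.5 * (lo[i] + hi[i])
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[i], lo_right[i] = mid, mid
        for lo_c, hi_c in ((lo, hi_left), (lo_right, hi)):
            ub_c, _ = solve_lrp(lo_c, hi_c)     # Step 3: bound each child
            val = g(0.5 * (lo_c + hi_c))
            if val > LB:                        # Steps 3-4: improve incumbent
                LB, best = val, 0.5 * (lo_c + hi_c)
            if ub_c > LB + eps:                 # keep only promising boxes
                active.append((ub_c, lo_c, hi_c))
    return best, LB, UB
```

Boxes whose relaxation bound cannot improve the incumbent by more than ε are discarded, which is the pruning that makes the method finite in practice.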

Convergence Analysis.
In this section, the global convergence of the proposed algorithm is given as follows.

Theorem 2.
If the proposed algorithm is finite, then it terminates after k iterations, and x^k is a global ε-optimal solution of the problem (FP). Otherwise, the proposed algorithm produces an infinite sequence of iterations such that any accumulation point of the sequence {x^k} is a global optimal solution of the problem (FP).
Proof. If the proposed algorithm terminates after k iterations, then by its termination condition, we get UB_k − LB_k ≤ ε. By the updating method of the lower bound, we have LB_k = g(x^k). By the structure of the branch-and-bound algorithm, we have v ≤ UB_k and LB_k ≤ v, where v is the global optimal value. Combining these inequalities, we have v − ε ≤ g(x^k) ≤ v. Thus, x^k is a global ε-optimal solution of the problem (FP).
If the present algorithm does not terminate after finitely many iterations, then by its branch-and-bound structure, it generates a nonincreasing sequence {UB_k} of upper bounds, while the lower bound is max_{x∈Π} g(x), where Π is the set of the known feasible points. Thus, we get lim_{k→∞} UB_k ≥ max_{x∈Π} g(x).
Because x^k ∈ X^0 and X^0 is a bounded closed set, there must exist a convergent subsequence {x^μ} ⊆ {x^k} satisfying lim_{μ→∞} x^μ = x*; obviously, x* ∈ X^0. By the branch-and-bound framework of the algorithm, there must exist a corresponding subsequence of rectangles {X^μ} with x^μ ∈ X^μ and lim_{μ→∞} X^μ = {x*}. By Theorem 1 and the continuity of the function g(x), it follows that lim_{μ→∞} UB(X^μ) = g(x*). From the steps of the algorithm, we know that x^μ is always a feasible solution to the problem (FP); thus x* is a global optimal solution of the problem (FP), and the proof of the theorem is completed. □

Computational Complexity of the Algorithm
In this section, we first derive a bound on the difference between Ug(x) and g(x) in terms of (x̄ − x̲), which is then used to analyse the computational complexity of the present algorithm.
For any x ∈ X ⊆ X^0 and for each j = 1, . . . , p and t = 1, . . . , m, without loss of generality, we adopt the assumptions and definitions introduced in the proof of Theorem 1; the corresponding bounds then hold for any x ∈ X ⊆ X^0 and j = 1, . . . , p.

Theorem 3. For any x ∈ X ⊆ X^0, the difference between Ug(x) and g(x) satisfies an inequality expressed in terms of (x̄ − x̲).

Proof. By Theorem 1, and repeating the arguments of its proof for any x ∈ X ⊆ X^0, we obtain the intermediate bounds on each term of the difference; combining these conclusions yields the claimed inequality. □

Without loss of generality, we define the size Θ(X) of a rectangle X = {x ∈ R^N | x̲_i ≤ x_i ≤ x̄_i, i = 1, 2, . . . , N} ⊆ X^0 as the length of its longest edge, Θ(X) = max_{1≤i≤N}(x̄_i − x̲_i). Besides, for convenience, we introduce the constant β.

Theorem 4. For any given convergence error ε > 0, if there exists a rectangle X^k at the kth iteration satisfying Θ(X^k) ≤ ε/(Nβ), then UB(X^k) − LB_k ≤ ε, where UB(X^k) is the optimal value of the problem (LRP) over X^k and LB_k is the best known lower bound of the global optimal value of the problem (FP).
Proof. Assume that x^k is the optimal solution of the problem (LRP) over X^k; obviously, x^k is also feasible for the problem (FP) over X^k, so LB_k ≥ g(x^k). By the conclusion of Theorem 3 together with Θ(X^k) ≤ ε/(Nβ), we can get that UB(X^k) − LB_k ≤ ε, and the proof of the theorem is completed. □

From the steps of the algorithm and the conclusions of Theorem 4, when Θ(X^k) ≤ ε/(Nβ), the investigated rectangle X^k can be deleted from the set of active nodes. Therefore, once the sizes of all investigated rectangles X satisfy Θ(X) ≤ ε/(Nβ), the present algorithm terminates. From the conclusions of Theorem 4, we can estimate the maximum number of iterations of the present algorithm as follows.

Theorem 5. For any given convergence error ε ∈ (0, 1), the present algorithm obtains a global ε-optimal solution to the problem (FP) in at most a finite, explicitly bounded number of iterations.
Proof. Without loss of generality, assume that the subrectangle X^k ⊆ X^0 is the one selected for subdivision in Step 2 of the present algorithm at every iteration. After k · N iterations, each edge of the selected rectangle has been bisected at least k times, so that its size is at most Θ(X^0)/2^k. From the conclusions of Theorem 4, when Θ(X^k) ≤ ε/(Nβ), we can obtain that UB(X^k) − LB_k ≤ ε. Therefore, after at most the resulting number of iterations, we can obtain a global ε-optimal solution to the problem (FP), and the proof of the theorem is completed. □
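The halving argument above can be checked numerically: each full pass of N maximum-edge bisections halves every edge, so the deletion test of Theorem 4 is reached once Θ(X^0)/2^k ≤ ε/(Nβ). A small sketch, treating β and Θ(X^0) as given inputs:

```python
import math

def depth_needed(theta0, n, beta, eps):
    """Smallest k with theta0 / 2**k <= eps / (n * beta), i.e. the number
    of halvings per edge before Theorem 4's deletion test applies."""
    k = max(0, math.ceil(math.log2(n * beta * theta0 / eps)))
    assert theta0 / 2 ** k <= eps / (n * beta)  # sanity check of the bound
    return k
```

The total iteration count is then at most N times this depth per fully refined rectangle, which matches the ε ∈ (0, 1) dependence stated in Theorem 5.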

Numerical Experiments
To verify the feasibility of the proposed algorithm, the algorithm is coded in C++, and the simplex method is used to solve the problem (LRP). The numerical examples are solved on a microcomputer with an Intel(R) Core(TM) i5-4590S CPU @ 3.0 GHz, and the computational comparison of numerical results is listed in Tables 1 and 2. Although these numerical examples have relatively few variables, they are still challenging. In Table 1, "Iter" denotes the number of iterations of the present algorithm.
In the following, first of all, some small-size deterministic examples in the Appendix are tested with the present algorithm; a detailed numerical comparison with the current state-of-the-art solver BARON is given in Table 1. Next, to verify the robustness and reliability of the proposed algorithm, with the convergence error ε = 10^{-2}, we have also solved the following randomly generated test Problem 1; numerical comparisons with BARON are listed in Table 2.
where p is a natural number, A ∈ R M×N , b ∈ R M , and c 1 ji , e 1 ji , d 1 j , f 1 j , c 2 ji , e 2 ji , d 2 j , and f 2 j , j � 1, . . . , p and i � 1, . . . , N, are all randomly generated between 0 and 1; all elements of the matrix A are randomly generated between 0 and 1; all elements of vector b are randomly generated between 0 and 16; the number of the constraints M � 5.
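The random test instances described above can be generated as in the following sketch; the tuple layout and function name are our own convention for illustration:

```python
import numpy as np

def random_instance(p, n, m_constraints=5, seed=0):
    """Generate one random test instance as described in the text:
    ratio coefficients uniform on [0, 1], constraint matrix A uniform on
    [0, 1], and right-hand side b uniform on [0, 16]."""
    rng = np.random.default_rng(seed)
    c1, e1 = rng.random((p, n)), rng.random((p, n))   # numerator coefficients
    d1, f1 = rng.random(p), rng.random(p)             # numerator constants
    c2, e2 = rng.random((p, n)), rng.random((p, n))   # denominator coefficients
    d2, f2 = rng.random(p), rng.random(p)             # denominator constants
    A = rng.random((m_constraints, n))                # constraint matrix
    b = rng.random(m_constraints) * 16                # right-hand side
    return c1, d1, e1, f1, c2, d2, e2, f2, A, b
```

Fixing the seed makes each of the ten random tests reproducible, which is useful when averaging iteration counts and running times as in Table 2.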
When N ≥ 4, the software BARON failed to solve any of the ten random tests within 4800 s. Thus, we only report the numerical results of our algorithm in Table 2.
In Table 2, the following notations are used: Avg.NT denotes the average number of iterations of the algorithm; Std.NT denotes the standard deviation of the number of iterations; Avg.Time denotes the average running time of the algorithm in seconds; Std.Time denotes the standard deviation of the running time.
From Table 1, it is observed that our algorithm requires less running time than the software BARON.
Thus, our algorithm is the most efficient for solving the problem (FP) with a small number of variables. From Table 2, we can observe that, when N ≥ 4, the software BARON failed to solve any of the ten random tests within 4800 s, whereas our algorithm obtains the global optimal solution of the problem (FP) for all ten tests in a short time. This demonstrates that our algorithm has stronger robustness and stability than the software BARON.

Conclusions
In this paper, based on the branch-and-bound framework, we present an efficient global optimization algorithm for solving the problem (FP). In this algorithm, a novel linearizing technique is proposed for deriving the linear relaxation problem of the problem (FP), which provides a reliable upper bound in the branch-and-bound algorithm. By successively partitioning the initial rectangle and solving a series of linear relaxation problems, the proposed algorithm converges to the global optimal solution of the problem (FP). Furthermore, based on the steps of the branch-and-bound algorithm, the computational complexity of the algorithm is analysed. Finally, numerical experiments verify the feasibility and effectiveness of the proposed algorithm. Future work is to extend our new algorithm to the min-max affine fractional programming problem and generalized linear fractional programming problems.

Appendix
Some small-size deterministic examples are given in the following.
x_1 + x_2 ≤ 1.5,

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.