
The special importance of difference-of-convex (DC) programming has been recognized in recent studies on nonconvex optimization. In this work, a class of DC programs derived from portfolio selection problems is studied. The most popular method for solving such problems is the Branch-and-Bound (B&B) algorithm; however, "the curse of dimensionality" degrades its performance. The DC Algorithm (DCA) is an efficient method for finding a local optimal solution and has been applied to many practical problems, especially large-scale ones. By embedding DCA into the B&B algorithm, we propose a B&B-DCA algorithm that improves computational performance while still obtaining a global optimal solution. Computational results show that the proposed B&B-DCA algorithm outperforms the general B&B algorithm in both branch number and computational time. The nice features of DCA (inexpensiveness, reliability, robustness, globality of computed solutions, etc.) provide crucial support to the combined B&B-DCA for accelerating the convergence of B&B.

DC programming is an important subject in optimization. This paper studies one class of DC programs, originally derived from portfolio investment problems.

Consider the following problem:

Falk and Soland proposed a B&B algorithm for separable nonconvex programming problems in [

DCA is an effective local optimization method for DC programming, based on local optimality conditions and DC duality, and is especially well suited to large-scale problems. DCA was first proposed by Tao and An [

DCA is an efficient method for DC programming that makes it possible to solve large-scale DC programs. In this paper, we obtain a local optimal solution by the DCA method, whose objective value is also an upper bound for the optimal value of the problem (

The rest of the paper is organized as follows. Local optimization method DCA for the problem (

Consider the following general DC programming:

Let

The necessary local optimality condition [

Let

Based on local optimality conditions and duality in DC programming, DCA consists in the construction of two sequences

Then, the basic scheme of DCA can be expressed as follows:
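Writing the DC program generically as minimizing $f(x) = g(x) - h(x)$ with $g$ and $h$ convex, the standard DCA iteration of Tao and An can be stated as follows (a generic restatement; the paper's own notation may differ):

```latex
y^{k} \in \partial h(x^{k}), \qquad
x^{k+1} \in \operatorname*{argmin}_{x} \bigl\{ g(x) - \langle x, y^{k} \rangle \bigr\}.
```

Each iteration thus replaces the concave part $-h$ by its affine majorization at $x^{k}$ and solves a convex subproblem, which is what makes DCA inexpensive.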

In the following, we show the main convergence properties of DCA, which have been proposed and proven in [

The sequences

If

If the optimal value

DCA has a linear convergence rate for general DC programming.

Problem (

As can be seen, if and only if

Let

Set

If

We can obtain a local optimal solution by Algorithm
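As a small illustration (not the paper's algorithm), consider the toy DC function $f(x) = x^4 - 2x^2$ with the hypothetical decomposition $g(x) = x^4$, $h(x) = 2x^2$; here the convex DCA subproblem has a closed-form solution, so the whole iteration is one line:

```python
# Minimal DCA sketch for f(x) = x**4 - 2*x**2 = g(x) - h(x),
# with g(x) = x**4 (convex) and h(x) = 2*x**2 (convex).
# Each step linearizes h at x_k and minimizes the convex model:
#   y_k = h'(x_k) = 4*x_k
#   x_{k+1} = argmin_x { x**4 - y_k * x }  =>  4*x**3 = y_k  =>  x = cbrt(y_k / 4)
import math

def cbrt(t):
    # real cube root, valid for negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def dca(x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        y = 4.0 * x               # subgradient of h at the current iterate
        x_new = cbrt(y / 4.0)     # exact minimizer of the convex model
        if abs(x_new - x) < tol:  # stop when the iterates stabilize
            return x_new
        x = x_new
    return x

x_star = dca(2.0)                       # converges to the local minimizer x = 1
f_star = x_star**4 - 2.0 * x_star**2    # local (here also global) value
```

The objective value at the DCA output then serves as the upper bound fed to the B&B procedure.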

In most cases, B&B [

In this subsection, we present the B&B-DCA method for the problem (

Let

Then, Algorithm

Let

The following relationship holds true:

Before continuing to describe the algorithm, we need to know the “Rectangle Subdivision Process”, that is, divide the set
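A common subdivision rule is bisection along a longest edge of the current rectangle; a minimal sketch (a hypothetical helper, not necessarily the paper's exact rule):

```python
def bisect_rectangle(lower, upper):
    """Split the box [lower, upper] into two halves along a longest edge."""
    # index of a longest edge
    j = max(range(len(lower)), key=lambda i: upper[i] - lower[i])
    mid = 0.5 * (lower[j] + upper[j])
    left_upper = list(upper)
    left_upper[j] = mid
    right_lower = list(lower)
    right_lower[j] = mid
    return (list(lower), left_upper), (right_lower, list(upper))

# splitting [0,1] x [0,4] halves the longest (second) edge at 2.0
b1, b2 = bisect_rectangle([0.0, 0.0], [1.0, 4.0])
```

Bisection of a longest edge is exhaustive (edge lengths shrink to zero along any nested sequence), which is the property the convergence analysis below relies on.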

Assume that

If

Similar to the problem (

If either of them is infeasible, then the corresponding subproblem (

If at least one subproblem is feasible, we can get the optimal solution

Remarkably, if

We delete those subproblems whose lower bounds are larger than

In the following, we give a detailed description of the B&B-DCA algorithm.

Set

Delete all

Select a problem (

For the subproblem (

For

Let

Since the DCA method is an efficient local optimization method for DC programming, combining DCA with the B&B algorithm guarantees global optimality and accelerates the convergence of the general B&B algorithm (see [
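The interplay of the two components can be sketched on a one-dimensional toy problem (an illustration under stated assumptions, not the paper's implementation): DCA supplies incumbent upper bounds, while on each interval the convex part $h$ is replaced by its chord to obtain a convex underestimator, whose minimum gives the lower bound used for pruning.

```python
# One-dimensional B&B-DCA sketch on the toy DC function
#   f(x) = x**4 - 2*x**2 = g(x) - h(x),  g(x) = x**4,  h(x) = 2*x**2,
# minimized over [-2, 2].  On [l, u] the chord of h,
#   c(x) = 2*(l + u)*x - 2*l*u  >= h(x),
# yields the convex underestimator phi(x) = g(x) - c(x) <= f(x).
import math

def cbrt(t):
    # real cube root, valid for negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def f(x):
    return x**4 - 2.0 * x**2

def dca(x0, iters=60):
    # DCA for this decomposition reduces to x_{k+1} = cbrt(h'(x_k) / 4)
    x = x0
    for _ in range(iters):
        x = cbrt(x)
    return x

def lower_bound(l, u):
    # minimize phi(x) = x**4 - 2*(l + u)*x + 2*l*u over [l, u]
    x = min(max(cbrt((l + u) / 2.0), l), u)   # stationary point, clipped
    return x**4 - 2.0 * (l + u) * x + 2.0 * l * u, x

def bb_dca(l0, u0, eps=1e-6):
    x_best = dca(u0)                 # DCA provides the first incumbent
    ub = f(x_best)
    stack = [(l0, u0)]
    while stack:
        l, u = stack.pop()
        lb, x_lb = lower_bound(l, u)
        if lb >= ub - eps:           # prune: box cannot improve the incumbent
            continue
        x_loc = dca(x_lb)            # restart DCA to tighten the upper bound
        if f(x_loc) < ub:
            ub, x_best = f(x_loc), x_loc
        if u - l > eps:              # branch by bisection (NRS)
            m = 0.5 * (l + u)
            stack += [(l, m), (m, u)]
    return x_best, ub

x_opt, f_opt = bb_dca(-2.0, 2.0)     # global minimum value is -1, at x = ±1
```

Because the DCA incumbent is (near-)optimal from the start, most boxes are pruned immediately, which is exactly the acceleration effect the paper reports.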

The sequence

If the algorithm terminates at finite iterations

If the algorithm does not stop after finitely many iterations, it must generate an infinite nested sequence

We can see that the NRS process plays an important role in the convergence of the B&B-DCA algorithm.

In this section, we test the performance of the proposed B&B-DCA algorithm on randomly generated datasets and compare the results with those of the general B&B algorithm (see [

Datasets with different dimensions will be generated to test the performance of the B&B-DCA and general B&B algorithms. We will conduct numerical experiments of the proposed algorithms with dimensions from 50 to 400 for the problem (

For the objective function

For the convex part

The feasible solution sets in the problem (

The tolerance

B&B-DCA and general B&B algorithms [

The NRS process has an important effect on the convergence of the B&B-DCA and general B&B algorithms. To our knowledge, the NRS process includes the exhaustive bisection,

We find that the optimal values computed by the two algorithms are equal. In the following, we show the average branch number (Avg Bran) and mean CPU time (Time) for each dimension for the B&B-DCA and general B&B algorithms in Table

Computational results with the B&B, B&B-DCA, and DCA methods.

| Dim | B&B: Avg Bran | B&B: Time(s) | B&B-DCA: Avg Bran | B&B-DCA: Time(s) | Num DCA | DCA: Time(s) | Num Glob |
| --- | --- | --- | --- | --- | --- | --- | --- |
|  | 56.4 | 7.15 | 47.6 | 6.29 | 1 | 0.46 | 5 |
|  | 195.8 | 40.70 | 145.4 | 30.10 | 1 | 0.58 | 5 |
|  | 309 | 83.00 | 174.2 | 47.35 | 1 | 0.70 | 5 |
|  | 327.6 | 182.95 | 310.2 | 174.28 | 1 | 1.26 | 5 |
|  | 486.4 | 382.98 | 399.6 | 315.46 | 1 | 2.02 | 5 |
|  | 428.2 | 422.59 | 341.2 | 333.98 | 1 | 2.09 | 5 |
|  | 951.8 | 1231.52 | 756 | 969.82 | 1.2 | 3.51 | 4 |
|  | 988 | 1753.09 | 666.6 | 1183.18 | 1.2 | 4.28 | 4 |

From the results and Table

General B&B and the proposed B&B-DCA algorithms can efficiently solve the problem (

The DCA method always gives a good approximation of the optimal solution of problem (

In this subsection, the proposed B&B-DCA and general B&B algorithms are applied to solve the portfolio selection problem with concave transaction costs. It has been pointed out that a concave transaction cost function is more reliable [
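For illustration, one commonly used concave cost is a power function of the traded amount (a hypothetical instance, not necessarily the paper's specification). Any concave cost $c$ is trivially DC, since $c = 0 - (-c)$ with $-c$ convex, so portfolio models with such costs fit the DC programming framework studied here:

```python
# Hypothetical concave transaction cost: c(x) = k * x**p with 0 < p < 1,
# for a traded amount x >= 0.  The coefficients k and p are illustrative.
def transaction_cost(x, k=0.01, p=0.7):
    assert x >= 0.0
    return k * x ** p

# Concavity means the marginal cost decreases with trade size:
small_trade_increment = transaction_cost(2.0) - transaction_cost(1.0)
large_trade_increment = transaction_cost(11.0) - transaction_cost(10.0)
# large_trade_increment < small_trade_increment
```

This decreasing marginal cost (economies of scale in trading) is what makes the concave model more realistic than a linear one, at the price of nonconvexity.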

Concave transaction costs function

The sum of the investment weights over all assets should be one, that is,

In general, the covariance matrix

Tests will be performed on five datasets from the OR-Library (see [

Computational results for portfolio selection problem.

| Dim | B&B: Avg Bran | B&B: Time(s) | B&B-DCA: Avg Bran | B&B-DCA: Time(s) | Num DCA | DCA: Time(s) | Num Glob |
| --- | --- | --- | --- | --- | --- | --- | --- |
| data 1 | 72.95 | 18.03 | 64.10 | 16.10 | 2.00 | 0.29 | 14 |
| data 2 | 117.22 | 34.21 | 110.14 | 28.79 | 1.78 | 0.32 | 15 |
| data 3 | 194.95 | 40.70 | 175.36 | 32.86 | 1.42 | 0.25 | 17 |
| data 4 | 292.53 | 86.96 | 257.19 | 77.58 | 2.11 | 0.30 | 14 |
| data 5 | 113.11 | 76.77 | 100.34 | 72.55 | 1.16 | 0.62 | 18 |

As for the randomly generated datasets, we show the average branch number (Avg Bran), average CPU time (Time), and average number of DCA calls (Num DCA) for B&B and B&B-DCA, as well as the total number of instances (Num Glob) in which a global optimal solution is obtained after a single run of DCA, in Table

As can be seen from Table

Additionally, Figure

Concave transaction costs function.

In this paper, a class of DC programming is studied. The general B&B algorithm is usually adopted to solve such problems. Based on the existing local optimization method for DC programming, we have proposed a new global method, B&B-DCA, to solve the problem. DCA is an efficient local optimization method based on local optimality conditions and duality for solving DC programming, especially for large-scale problems.

Numerical tests on randomly generated datasets show that the proposed B&B-DCA algorithm has a clear advantage over the general B&B algorithm in both branch number and computational time across different dimensions. In addition, the portfolio selection problem with transaction costs can be solved efficiently. The proposed B&B-DCA can be applied to solve other practical problems that can be modeled by this class of DC programming.

We find that the DCA method almost always provides a global optimal solution, but the lower bound for the optimal value cannot guarantee a fast convergence rate of B&B. If a new method could be devised to obtain a tighter lower bound, the proposed B&B-DCA algorithm could solve the problem in much shorter time. This seems significant for solving practical problems. Furthermore, other global optimization methods, such as filled function methods, can be combined with DCA to solve DC programming. Some of these are under our current consideration.

This work is supported by the National Natural Science Foundation of China (Grants 10971162, 11101325, and 71171158).