
Solving large-scale problems with semidefinite programming (SDP) constraints is of great importance in the modeling and model reduction of complex systems, dynamical systems, optimal control, computer vision, and machine learning. However, existing SDP solvers have high computational complexity and thus cannot handle large-scale problems. In this paper, we solve SDP using matrix generation, an extension of classical column generation. The exponentiated gradient algorithm is used to solve the specially structured subproblem arising in matrix generation. Numerical experiments show that our approach is efficient and scales very well with the problem dimension. Furthermore, the proposed algorithm is applied to a clustering problem. Experimental results on real datasets imply that the proposed approach outperforms traditional interior-point SDP solvers in terms of efficiency and scalability.

Semidefinite programming (SDP) is a technique widely used in the modeling of complex systems and in important problems in computer vision and machine learning. Example applications include model reduction [

Inspired by column generation [

In this paper, we propose a matrix-generation-based iterative approach to solve general SDP optimization problems. At each iteration, the exponentiated gradient (EG) algorithm [

The method proposed here can be seen as an extension of column generation to SDP problems. The proposed matrix generation method also inherits a drawback of column generation: it converges slowly if one wants to achieve high accuracy, the so-called "tailing effect." In practice, for many applications, we do not need a very accurate solution; typically, a moderately accurate solution suffices. On the other hand, the proposed method can also be considered a generalization of EG to the matrix case. EG solves problems whose optimization variable is a vector constrained to a simplex. Here, we generalize EG in the sense that the proposed method solves optimization problems whose variable is a semidefinite matrix whose eigenvalues lie on a simplex.
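To make the vector case concrete: the EG update multiplies each coordinate by the exponential of the negative scaled gradient and then renormalizes, so every iterate stays on the probability simplex. The following is a minimal sketch on a made-up linear objective; the function name `eg_step`, the step size, and the cost vector `c` are illustrative and not taken from the paper.

```python
import numpy as np

def eg_step(x, grad, eta):
    """One exponentiated-gradient step: multiplicative update
    followed by renormalization, so the iterate stays on the simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

# Minimize f(x) = <c, x> over the probability simplex.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3.0            # start at the simplex barycenter
for _ in range(200):
    x = eg_step(x, c, eta=0.5)  # gradient of a linear objective is c
# The mass concentrates on the coordinate with the smallest cost.
```

For a linear objective the iterates converge to the vertex of the simplex minimizing the cost, which matches the multiplicative-weights intuition behind EG.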

We present our main results in the next section.

The following notations will be used throughout the paper:

a bold lower-case letter

an upper-case letter

We are interested in the following general convex problem:

We need to extend the Fenchel conjugate [

the gradient

In other words,

We are ready to reformulate our problem (

The above problem is still very hard to solve, since it has nonconvex rank constraints, and the variable

We apply matrix generation to solve the problem. Matrix generation is an extension of column generation to nonpolyhedral semidefinite constraints for solving difficult large-scale optimization problems [
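The paper's exact pricing subproblem is elided above, so the following is only a generic sketch of the matrix-generation idea: over unit-norm vectors v, the linear functional ⟨vvᵀ, −G⟩ is maximized by the leading eigenvector of −G, so a natural new rank-one "column" (atom) to add to the restricted problem is vvᵀ. The function name `price_new_atom` and the matrix `G` are illustrative assumptions.

```python
import numpy as np

def price_new_atom(G):
    """Pricing-step sketch: over unit vectors v, <v v^T, -G> is maximized
    by the leading eigenvector of -G, so the new rank-one atom is v v^T."""
    w, V = np.linalg.eigh(-G)   # eigenvalues returned in ascending order
    v = V[:, -1]                # eigenvector of the largest eigenvalue
    return np.outer(v, v), w[-1]

G = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])
atom, val = price_new_atom(G)   # atom is symmetric, PSD, rank one, trace 1
```

In a full matrix-generation loop, `val` would play the role of the reduced cost: if it does not improve the restricted master problem, the current solution is already optimal for the full problem.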

The Lagrangian is

We now only consider a small subset of the variables in the primal; that is, only a subset of

At optimality, because of (

We now present the implementation of the algorithm for the general convex problem (

The detailed implementation of the algorithm is given in Algorithm

(i) The maximum number of iterations

(ii) The pre-set tolerance value

(iii)

(1) Randomly select

(2) then compute:

(1) If

(2) Find a new

(3) Find a new

spectral-structure subproblem using the EG algorithm as described in Section

where

(4) Compute

In [

(i)

(ii) The maximum number of iterations

(1) Generate the sequence

where

(2) Stop if some stopping criteria are met.

In this section, we evaluate the convergence, running speed, and memory consumption of the algorithm by solving

The above problem is an approximation of

Before reporting the results, we compare the proposed algorithm with a standard convex optimization solver, namely SDPT3 [

We randomly generate a

The convergence of the proposed algorithm. The blue curve shows the optimal value obtained by the proposed algorithm at each iteration, and the dashed line shows the ground truth obtained by directly solving the original problem in (

We solve the problem in (

Running time of the proposed algorithm. The blue curve shows the running time of the proposed algorithm versus the matrix dimension, and the red curve shows the running time of CVX versus the matrix dimension.

Figure

Memory consumption of the proposed algorithm. The blue curve shows the approximate memory consumption of the proposed algorithm versus the matrix dimension, and the red curve shows the approximate memory consumption of CVX versus the matrix dimension.

In summary, the proposed algorithm converges quickly to near-optimal solutions and scales very well with the problem dimension, both of which are important for solving large-scale SDP problems. As a result, the proposed method has the potential to be applied in computer vision to deal with large matrices of 3D visual data [

The problem of partitioning data points into a number of distinct sets, known as the clustering problem, is one of the most fundamental and significant topics in data analysis and machine learning. From an optimization viewpoint, clustering aims at the minimum sum-of-squares clustering (MSSC) based on some definition of similarity or distance. However, the variable is a 0-1 discrete indicator matrix

In this section, we apply the proposed algorithm to solve the SDP relaxed clustering problem (0-1 SDP for clustering) [
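The 0-1 SDP formulation itself is cited rather than reproduced above. As a hedged sketch of the kind of feasible points such relaxations work with: a hard clustering into k groups induces a normalized indicator matrix Z = HHᵀ that is symmetric, entrywise nonnegative, satisfies Z𝟙 = 𝟙 and trace(Z) = k, and is positive semidefinite; the SDP relaxation keeps these convex properties and drops the 0-1 structure. The helper name `normalized_indicator` is ours.

```python
import numpy as np

def normalized_indicator(labels, k):
    """Build Z = H H^T, where H is the cluster-indicator matrix with
    each column normalized to unit length. Such Z are the feasible
    0-1 points that the SDP relaxation of clustering relaxes."""
    n = len(labels)
    H = np.zeros((n, k))
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        H[idx, j] = 1.0 / np.sqrt(len(idx))
    return H @ H.T

labels = np.array([0, 0, 1, 1, 1])
Z = normalized_indicator(labels, k=2)
# Z is symmetric, entrywise nonnegative, Z @ 1 = 1, trace(Z) = k, and PSD.
```

These constraint checks are exactly the properties a relaxed solution returned by the proposed solver should satisfy (approximately) before rounding it back to a discrete clustering.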

We solve the problem in (

(i) the proposed algorithm performs better than K-means;

(ii) the proposed algorithm achieves higher accuracy than spectral clustering; this is because it solves an SDP relaxation of the clustering problem, which is tighter than the spectral relaxation;

(iii) the proposed algorithm is as good as CVX, meaning it solves the SDP problem as accurately as the CVX toolbox.
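The accuracy values reported below presumably score predicted clusters against ground-truth classes under the best one-to-one matching of cluster labels to classes. A common way to compute such an accuracy, using SciPy's Hungarian-algorithm routine (the helper name `clustering_accuracy` is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true, pred, k):
    """Accuracy under the best one-to-one matching of predicted cluster
    ids to ground-truth classes (solved with the Hungarian algorithm)."""
    C = np.zeros((k, k), dtype=int)
    for t, p in zip(true, pred):
        C[t, p] += 1                        # contingency table
    rows, cols = linear_sum_assignment(-C)  # maximize matched counts
    return C[rows, cols].sum() / len(true)

true = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([1, 1, 0, 0, 2, 2])  # same partition, permuted labels
acc = clustering_accuracy(true, pred, k=3)  # -> 1.0
```

Because the matching is optimized over label permutations, a clustering that recovers the true partition scores 1.0 regardless of how its clusters happen to be numbered.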

UCI datasets used, together with some of their characteristics and the clustering accuracies achieved by the different methods.

Dataset | Classes | Size | Dims. | K-means | Spectral clustering | SDP relaxation (ours) | SDP relaxation (CVX)
---|---|---|---|---|---|---|---
Soybean | 4 | 47 | 36 | 0.723 | 0.745 | 0.766 | 0.766
Iris | 3 | 150 | 5 | 0.893 | 0.933 | 0.947 | 0.953
Wine | 3 | 178 | 13 | 0.702 | 0.725 | 0.730 | 0.730
SPECTF heart | 2 | 267 | 44 | 0.625 | 0.802 | 0.816 | 0.816

We also compare the speed of the proposed algorithm with CVX, and the results are shown in Table

Running times (seconds) of the proposed algorithm and CVX for clustering the selected UCI datasets.

Method | Soybean | Iris | Wine | SPECTF heart
---|---|---|---|---
Ours | 1.34 | 2.879 | 4.22 | 21.29
CVX | 1.21 | 4.629 | 6.95 | 28.00

The experiments show that the proposed algorithm performs very well on these datasets and can efficiently solve the 0-1 SDP problem for clustering. Both speed and accuracy improve compared with traditional interior-point SDP solvers. We plan to further extend the approach to more general problems and to look for other applications such as modeling [

This paper proposed an approach to solving optimization problems with SDP constraints using matrix generation and exponentiated gradient. The method is faster and more scalable than traditional SDP approaches. Numerical experiments show that the method scales very well with the problem dimension. We also applied the approach to the SDP-relaxed clustering problem. Experimental results on real datasets show that the algorithm substantially outperforms interior-point SDP solvers in both speed and accuracy.