1. Introduction
Semidefinite linear programming can be regarded as an extension of linear programming; it solves the following problem:
(1)min〈C,X〉F, 〈Aj,X〉F⩽bj, j=1,2,…,s, X≽0,
where X∈ℝn×n is a matrix of variables and Aj∈ℝn×n, j=1,2,…,s. X≽0 is notation for “X is positive semidefinite”. 〈·,·〉F denotes the Frobenius scalar product, and the associated Frobenius norm is ∥X∥F=√〈X,X〉F.
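The objects in (1) are straightforward to realize numerically. The following Python sketch (our own illustration, not part of the original text) implements the Frobenius scalar product, the norm identity ∥X∥F2=〈X,X〉F, and a positive-semidefiniteness test via eigenvalues:

```python
import numpy as np

def frob_inner(X, Y):
    # Frobenius scalar product <X, Y>_F = sum_ij x_ij * y_ij
    return float(np.sum(X * Y))

def is_psd(X, tol=1e-10):
    # X >= 0 (positive semidefinite): symmetric with nonnegative eigenvalues
    return bool(np.allclose(X, X.T)
                and np.min(np.linalg.eigvalsh((X + X.T) / 2.0)) >= -tol)

X = np.array([[2.0, 1.0], [1.0, 2.0]])

# ||X||_F^2 = <X, X>_F
assert abs(frob_inner(X, X) - np.linalg.norm(X, 'fro') ** 2) < 1e-12
assert is_psd(X)   # eigenvalues 1 and 3
```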

Semidefinite programming finds many applications in engineering and optimization [1]. Most interior-point methods for linear programming have been generalized to semidefinite convex programming [1–3]. Many works are devoted to the semidefinite convex programming problem, but so far less attention has been paid to its quasiconvex counterpart, the semidefinite quasiconvex minimization problem.

The aim of this paper is to develop theory and algorithms for semidefinite quasiconvex programming. The paper is organized as follows. Section 2 is devoted to the formulation of semidefinite quasiconvex programming and its global optimality conditions. In Section 3, we consider an approximation of the level set of the objective function and its properties.

2. Problem Definition and Optimality Conditions
Let X be a matrix in ℝn×n, and define a scalar matrix function as follows:
(2)f:ℝn×n⟶ℝ.

Definition 1.
Let f(X) be a differentiable function of the matrix X. Then its derivative with respect to X is defined entrywise by
(3)f′(X)=(∂f(X)/∂xij)n×n.

Introduce the Frobenius scalar product as follows:
(4)〈X,Y〉F=∑i=1n∑j=1nxijyij, ∀X,Y∈ℝn×n.
If f(·) is differentiable, then it can be checked that
(5)f(X+H)-f(X)=〈f′(X),H〉F+o(∥H∥F).
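Expansion (5) can be confirmed numerically. In the sketch below (an illustration of ours, not from the text), f(X)=∥X∥F2, so f′(X)=2X and the remainder term is exactly t2∥H∥F2:

```python
import numpy as np

# f(X) = ||X||_F^2 has entrywise-derivative matrix f'(X) = 2X.
f = lambda X: float(np.sum(X * X))
grad = lambda X: 2.0 * X

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
H = rng.standard_normal((3, 3))

t = 1e-6
lhs = f(X + t * H) - f(X)                 # f(X + H') - f(X) with H' = tH
rhs = float(np.sum(grad(X) * (t * H)))    # <f'(X), H'>_F
# The remainder is o(||H'||_F); for this particular f it equals t^2 ||H||_F^2.
assert abs(lhs - rhs) <= t ** 2 * float(np.sum(H * H)) + 1e-12
```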

Definition 2.
A set 𝔻⊂ℝn×n is convex if αX+(1-α)Y∈𝔻 for all X,Y∈𝔻 and α∈[0,1].

Definition 3.
The function f:𝔻→ℝ is said to be quasiconvex on 𝔻 if
(6)f(αX+(1-α)Y)⩽max{f(X),f(Y)} ∀X,Y∈𝔻, α∈[0,1].

The well-known property of a convex function [3] can be easily generalized as follows.

Lemma 4.
A function f:Rn×n→R is quasiconvex if and only if the set
(7)Lc(f)={X∈Rn×n∣f(X)⩽c}
is convex for all c∈R.

Proof
Necessity. Suppose that c∈R is an arbitrary number and X,Y∈Lc(f). By the definition of quasiconvexity, we have
(8)f(αX+(1-α)Y)⩽max{f(X),f(Y)}⩽c ∀α∈[0,1],
which means that the set Lc(f) is convex.

Sufficiency. Let Lc(f) be a convex set for all c∈R. For arbitrary X,Y∈Rn×n, define co=max{f(X),f(Y)}. Then X∈Lco(f) and Y∈Lco(f). Consequently, αX+(1-α)Y∈Lco(f) for any α∈[0,1]; that is, f(αX+(1-α)Y)⩽co=max{f(X),f(Y)}. This completes the proof.

Lemma 5.
Let f:ℝn×n→ℝ be a quasiconvex and differentiable function. Then the inequality f(X)⩽f(Y) for X,Y∈ℝn×n implies that
(9)〈f′(Y),X-Y〉F⩽0,
where f′(X)=(∂f(X)/∂xij)n×n and 〈·,·〉F denotes the Frobenius scalar product of two matrices.

Proof.
Since f is quasiconvex,
(10)f(αX+(1-α)Y)⩽max{f(X),f(Y)}=f(Y)
for all α∈[0,1] and X,Y∈Rn×n such that f(X)⩽f(Y). By Taylor's formula, for sufficiently small α>0 we have
(11)f(Y+α(X-Y))-f(Y)=α(〈f′(Y),X-Y〉F+o(α∥X-Y∥F)/α)⩽0.
Since o(α∥X-Y∥F)/α→0 as α→0+, we obtain 〈f′(Y),X-Y〉F⩽0, which completes the proof.
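Lemma 5 can be sanity-checked by random sampling for a concrete quasiconvex function. The sketch below (our own illustration) uses the convex, hence quasiconvex, function f(X)=∥X∥F2 with f′(X)=2X:

```python
import numpy as np

# Random-sampling check of Lemma 5: whenever f(X) <= f(Y), the inequality
# <f'(Y), X - Y>_F <= 0 must hold. Here f(X) = ||X||_F^2, f'(X) = 2X.
f = lambda X: float(np.sum(X * X))
grad = lambda X: 2.0 * X

rng = np.random.default_rng(1)
for _ in range(1000):
    X = rng.standard_normal((2, 2))
    Y = rng.standard_normal((2, 2))
    if f(X) <= f(Y):
        assert float(np.sum(grad(Y) * (X - Y))) <= 1e-10
```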

Consider the problem of minimizing a differentiable quasiconvex matrix function subject to constraints
(12) min f(X)
(13) subject to gj(X)⩽bj, j=1,2,…,s,
(14) X≽0,
where gj:ℝn×n→ℝ, j=1,2,…,s, are scalar functions, X≽0 means that X is positive semidefinite, and bj∈ℝ.

We call problem (12)–(14) the semidefinite quasiconvex minimization problem.

Denote by 𝔻 a constraint set of the problem as follows:
(15)𝔻={X∈ℝn×n∣gj(X)⩽bj, j=1,2,…,s; X≽0}.
Then problem (12)–(14) reduces to
(16)minX∈𝔻f(X).
In general, the set 𝔻 is nonconvex. Problem (16) is therefore nonconvex and belongs to a class of global optimization problems in a Banach space.
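Checking membership in a set of the form (15) reduces to evaluating the gj and testing positive semidefiniteness. A minimal Python sketch, with the constraint functions g1(X)=tr(X), g2(X)=∥X∥F2 and the bounds chosen by us purely for illustration:

```python
import numpy as np

# Illustrative constraint data (our own assumptions, not from the text).
gs = [lambda X: float(np.trace(X)), lambda X: float(np.sum(X * X))]
bs = [3.0, 5.0]

def in_D(X, tol=1e-10):
    # X must be symmetric positive semidefinite and satisfy g_j(X) <= b_j.
    sym = bool(np.allclose(X, X.T))
    psd = sym and float(np.min(np.linalg.eigvalsh(X))) >= -tol
    return psd and all(g(X) <= b + tol for g, b in zip(gs, bs))

assert in_D(np.eye(2))        # tr = 2 <= 3, ||I||_F^2 = 2 <= 5, and I is PSD
assert not in_D(-np.eye(2))   # fails positive semidefiniteness
```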

We formulate a new global optimality condition for problem (16) in the following. For this purpose, we introduce the level set Ef(Z)(f) of the function f:ℝn×n→ℝ at a point Z∈ℝn×n:
(17)Ef(Z)(f)={Y∈ℝn×n∣f(Y)=f(Z)}.
Then global optimality conditions for problem (16) can be formulated as follows.

Theorem 6.
Let Z be a solution of problem (16). Then
(18)〈f′(X),X-Y〉F⩾0 ∀Y∈Ef(Z)(f),X∈D,
where Ec(f)={Y∈Rn×n∣f(Y)=c}. If, in addition,
(19)lim∥X∥→∞f(X)=+∞, f′(X+αf′(X))≠0
holds for all X∈D and α≥0, then condition (18) becomes sufficient.

Proof
Necessity. Assume that Z is a solution of problem (16). Let X∈D and Y∈Ef(Z)(f). Then we have 0⩾f(Z)-f(X)=f(Y)-f(X), and Lemma 5 implies 〈f′(X),X-Y〉F⩾0.

Sufficiency. Suppose, on the contrary, that Z is not a solution of (16). Then there exists a U∈D such that f(U)<f(Z). Construct a ray Yα for α>0 defined by
(20)Yα=U+αf′(U).

We claim that f(Yα)>f(U) holds for all positive α. By Taylor's formula, we have
(21)f(U+αf′(U))-f(U)=α(∥f′(U)∥2+o(α∥f′(U)∥)/α)
for small α>0, where limα→0+o(α∥f′(U)∥)/α=0. Therefore, there exists αo>0 such that f(Yα)-f(U)>0 holds for all α∈(0,αo). Hence, by Lemma 5, we have 〈f′(U+αof′(U)),f′(U)〉F⩾0, since f′(U)≠0 and f′(U+αof′(U))≠0 by assumption. Note that for all γ>1 we also have f(U+γαof′(U))>f(U+αof′(U)); otherwise we would have f(U+γαof′(U))⩽f(U+αof′(U)) and consequently, by Lemma 5, 〈f′(U+αof′(U)),αo(γ-1)f′(U)〉F⩽0, which would imply γ⩽1, contradicting the assumption that γ>1. Moreover, f(U+γαof′(U)) is increasing in γ>0: if f(U+γ′αof′(U))<f(U+γαof′(U)) held for some γ′>γ, then Lemma 5 would give αo(γ′-γ)〈f′(U+γαof′(U)),f′(U)〉F⩽0, contradicting γ′>γ. This proves the claim that f(Yα)>f(U) for all α>0.

Now it is obvious that the function φ:R+→R defined as
(22)φ(α)=f(Yα)
is continuous on [0,∞). Also, assumption (19) implies limα→∞φ(α)=+∞, and therefore there exists an α^ such that φ(α^)>f(Z). By the continuity of φ and the inequalities φ(α^)>f(Z)>f(U), there exists an α¯ such that
(23)f(U+α¯f′(U))=f(Z),
which means that Yα¯∈Ef(Z)(f). On the other hand, we have f′(U)=(1/α¯)(Yα¯-U). Thus we get
(24)〈f′(U),U-Yα¯〉F=(1/α¯)〈Yα¯-U,U-Yα¯〉F=-(1/α¯)∥Yα¯-U∥F2<0,
which contradicts (18). This means that Z must be a solution of (16).

Example 7.
Consider the following problem:
(25) minX∈𝔻(f(X)=∥X∥F2),
(26) subject to 𝔻={X∈ℝ2×2∣X_⩽X⩽X¯ (entrywise), X≽0}, where X_=[2 3; 4 1] and X¯=[7 5; 8 9].
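Example 7 can be probed numerically for n=2. The check below (our own addition) shows that the entrywise lower bound X_, which would minimize ∥X∥F2 over the box constraints alone (all bounds being positive), is not positive semidefinite, so the constraint X≽0 is essential in this example:

```python
import numpy as np

# Entrywise lower bound of the box in (26).
X_low = np.array([[2.0, 3.0], [4.0, 1.0]])

# X_low is not symmetric, and its symmetric part is indefinite,
# so X_low violates the constraint X >= 0.
S = (X_low + X_low.T) / 2.0
min_eig = float(np.min(np.linalg.eigvalsh(S)))

assert not np.allclose(X_low, X_low.T)
assert min_eig < 0
```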

Example 8.
Consider the fractional programming problem
(27)minX∈D(f(X)=f1(X)/f2(X)),
where f1 is convex and differentiable on Rn×n and f2 is concave and differentiable on Rn×n. Suppose that f1 and f2 are positive on a ball B containing a subset D⊂Rn×n; that is,
(28)f1(X)>0, f2(X)>0 ∀X∈D⊂B.
We call this problem the mixed fractional minimization problem. By Lemma 4, one can easily show that f(X) is quasiconvex. Hence, the optimality condition (18) at a solution Z of (27) reads as follows:
(29)∑i=1n∑j=1n((∂f1(X)/∂xij)f2(X)-(∂f2(X)/∂xij)f1(X))(xij-yij)/f22(X)⩾0 ∀Y∈Ef(Z)(f), X∈D.
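The gradient in (29) is the matrix quotient rule f′=(f1′f2-f2′f1)/f22, which can be validated by finite differences. A sketch with illustrative choices of ours (not from the text): f1(X)=∥X∥F2+1 (convex, positive) and f2(X)=4-〈A,X〉F (concave, positive near X=0):

```python
import numpy as np

# Illustrative data (our own assumptions).
A = np.array([[1.0, 0.5], [0.5, 1.0]])
f1 = lambda X: float(np.sum(X * X)) + 1.0
f2 = lambda X: 4.0 - float(np.sum(A * X))
g1 = lambda X: 2.0 * X                 # f1'
g2 = lambda X: -A                      # f2'
f  = lambda X: f1(X) / f2(X)
gf = lambda X: (g1(X) * f2(X) - g2(X) * f1(X)) / f2(X) ** 2   # formula (29)

rng = np.random.default_rng(2)
X = 0.1 * rng.standard_normal((2, 2))  # keep f2(X) > 0
H = rng.standard_normal((2, 2))
t = 1e-6
fd = (f(X + t * H) - f(X - t * H)) / (2 * t)   # central difference in direction H
assert abs(fd - float(np.sum(gf(X) * H))) < 1e-6
```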

3. An Algorithm for the Convex Minimization Problem
We consider the convex minimization problem as a special case of problem (16):
(30)minX∈Df(X),
where f:Rn×n→R is strongly convex and continuously differentiable and D is an arbitrary compact set in Rn×n. In this case, we can weaken condition (19), as shown in the next theorem.

Theorem 9.
Let Z be a solution of problem (30). Then
(31)〈f′(X),X-Y〉F⩾0 ∀Y∈Ef(Z)(f), X∈D.
If, in addition,
(32)minX∈D∥f′(X)∥F>0
holds, then condition (31) is also sufficient.

Proof
Necessity. Assume that Z is a solution of problem (30). Consider X∈D and Y∈Ef(Z)(f). Then, by the convexity of f, we have
(33)0⩾f(Z)-f(X)=f(Y)-f(X)⩾〈f′(X),Y-X〉F.

Sufficiency. Let us prove the assertion by contradiction. Assume that (31) holds and there exists a point U∈D such that
(34)f(U)<f(Z).
Clearly, f′(U)≠0 by assumption (32). Now define Uα as follows for α>0:
(35)Uα=U+αf′(U).
Then, by the convexity of f, we have
(36)f(Uα)-f(U)⩾〈f′(U),Uα-U〉F=α∥f′(U)∥F2,
which implies
(37)f(Uα)⩾f(U)+α∥f′(U)∥F2>f(U).
Then find α=α¯ such that
(38)f(U)+α¯∥f′(U)∥F2=f(Z);
that is,
(39)α¯=(f(Z)-f(U))/∥f′(U)∥F2>0.
Thus we get
(40)f(Uα¯)⩾f(U)+α¯∥f′(U)∥F2=f(Z)>f(U).
Define a function h:R+→R as
(41)h(α)=f(U+αf′(U))-f(Z),
where R+={α∈R∣α⩾0}. It is clear that h is continuous on [0,+∞). Note that h(α¯)⩾0 and h(0)<0. There are two cases to consider with respect to the value of h(α¯).

Case a. h(α¯)=0 (or f(U+α¯f′(U))=f(Z)), then
(42)〈f′(U),U-Uα¯〉F=-〈f′(U),α¯f′(U)〉F=-α¯∥f′(U)∥F2<0,
contradicting condition (31).

Case b. h(α¯)>0 and h(0)<0. Since h is continuous, there exists a point αo∈(0,α¯) such that h(αo)=0 (or f(U+αof′(U))=f(Z)). Then we have
(43)〈f′(U),U-Uαo〉F=-αo∥f′(U)∥F2<0,
again contradicting (31).

Thus, in both cases, we find contradictions, proving the theorem.
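Inequality (37), the key estimate in the sufficiency argument, is easy to confirm numerically for a concrete convex function; the sketch below uses f(X)=∥X∥F2 with f′(X)=2X (our own illustrative choice):

```python
import numpy as np

# Check of inequality (37): for convex f, the ray U_alpha = U + alpha f'(U)
# satisfies f(U_alpha) >= f(U) + alpha ||f'(U)||_F^2 for alpha > 0.
f = lambda X: float(np.sum(X * X))
grad = lambda X: 2.0 * X

rng = np.random.default_rng(4)
U = rng.standard_normal((2, 2))
for alpha in (0.1, 1.0, 10.0):
    Ua = U + alpha * grad(U)
    assert f(Ua) >= f(U) + alpha * float(np.sum(grad(U) ** 2)) - 1e-9
```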

Now, using the function P(Y)=minX∈D〈f′(X),X-Y〉F, Y∈Rn×n, we reformulate Theorem 9 in terms of the function ψ(Z) defined as follows:
(44)ψ(Z)=minY∈Ef(Z)(f)P(Y), Z∈D.

Theorem 10.
Assume that f:Rn×n→R is strongly convex and continuously differentiable and D is a compact set in Rn×n. Let minX∈D∥f′(X)∥F>0. If ψ(Z)=0, then the point Z is a solution to problem (30).

Proof.
This is an obvious consequence of the following relations:
(45)0=ψ(Z)⩽P(Y)⩽〈f′(X),X-Y〉F,
which are fulfilled for all Y∈Ef(Z)(f) and X∈D.

Now we are ready to present an algorithm for solving problem (30). We also suppose that one can efficiently solve the problem of computing minX∈D〈f′(X),X-Y〉F for any given Y∈Rn×n.

Algorithm MIN

Input. A strongly convex function f and a compact set D.

Output. A solution X to the minimization problem (30).

Step 1.
Choose a feasible point X0∈D. Set k:=0.

Step 2.
Solve the following problem:
(46)minY∈Ef(Xk)(f)P(Y).
Let Yk be a solution of this problem (i.e., P(Yk)=minX∈D〈f′(X),X-Yk〉F=minY∈Ef(Xk)(f)P(Y)), and let Xk+1 realize P(Yk) (i.e., ψ(Xk)=P(Yk)=〈f′(Xk+1),Xk+1-Yk〉F).

Step 3.
If ψ(Xk)=0, then output X=Xk and terminate. Otherwise, set k:=k+1 and return to Step 2.
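A compact sketch of Algorithm MIN on a toy instance (entirely our own construction): f(X)=∥X∥F2 over a finite set D. For this f, the level set Ec(f) is the sphere ∥Y∥F=√c, so after exchanging the two minima in (46) the inner minimum over Y is attained in closed form at Y=√c·X/∥X∥F, giving ψ(Xk)=minX∈D(2∥X∥F2-2∥Xk∥F∥X∥F):

```python
import numpy as np

# Toy instance: f(X) = ||X||_F^2 over a finite set D (our own assumption).
rng = np.random.default_rng(3)
D = [rng.standard_normal((2, 2)) for _ in range(20)]
f = lambda X: float(np.sum(X * X))

def step(Xk):
    # <f'(X), X - Y>_F = 2||X||^2 - 2<X, Y>; minimized over Y on the sphere
    # ||Y||_F = ||Xk||_F at Y = ||Xk|| X / ||X||, which yields
    # psi(Xk) = min over X in D of (2||X||^2 - 2||Xk|| ||X||).
    # Return (psi(Xk), X_{k+1}), where X_{k+1} realizes the inner minimum.
    r = float(np.linalg.norm(Xk))
    vals = [2.0 * np.linalg.norm(X) ** 2 - 2.0 * r * np.linalg.norm(X) for X in D]
    j = int(np.argmin(vals))
    return float(vals[j]), D[j]

Xk = D[0]
for _ in range(100):
    psi, Xnext = step(Xk)
    if psi >= -1e-12:    # Step 3: psi(Xk) = 0, so Xk solves the problem
        break
    Xk = Xnext           # otherwise continue from X_{k+1}

assert f(Xk) <= min(f(X) for X in D) + 1e-9
```

On this instance, Step 3 terminates precisely when Xk has minimal Frobenius norm in D, that is, when Xk is the global minimizer; the strict decrease of f(Xk) guaranteed by (51) makes the loop finite.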

The convergence of this algorithm is based on the following theorem.

Theorem 11.
Assume that f:Rn×n→R is strongly convex and continuously differentiable and D is a compact set in Rn×n. Let minX∈D∥f′(X)∥F>0. Then the sequence {Xk, k=0,1,…} generated by Algorithm MIN is a minimizing sequence for problem (30); that is,
(47)limk→∞f(Xk)=minX∈Df(X),
and every accumulation point of the sequence {Xk} is a global minimizer of (30).

Proof.
From the construction of {Xk}, we have Xk∈D and f(Xk)≥f* for all k, where f*=f(X*)=minX∈Df(X). Clearly, f′(X*)≠0 by assumption. Also, note that for all Y∈Ef(Xk)(f) and X∈D, we have
(48)ψ(Xk)=minY∈Ef(Xk)(f)minX∈D〈f′(X),X-Y〉F⩽〈f′(X),X-Y〉F⩽0.
If there exists a k such that ψ(Xk)=0, then, by Theorem 10, Xk is a solution to problem (30), and in that case the proof is complete. Therefore, without loss of generality, we can assume that ψ(Xk)<0 for all k and prove the theorem by contradiction. If the assertion is false, that is, {Xk} is not a minimizing sequence for problem (30), then the following inequality holds:
(49)limk→∞inff(Xk)>f*.
By the definition of ψ(Xk) and Algorithm MIN, we have
(50)P(Yk)=ψ(Xk)=minY∈Ef(Xk)(f)minX∈D〈f′(X),X-Y〉F=〈f′(Xk+1),Xk+1-Yk〉F
and f(Yk)=f(Xk). The convexity of f implies that
(51)f(Xk)-f(Xk+1)=f(Yk)-f(Xk+1)⩾〈f′(Xk+1),Yk-Xk+1〉F=-ψ(Xk)>0.
Hence, we obtain f(Xk+1)<f(Xk) for all k, and the sequence {f(Xk)} is strictly decreasing. Since the sequence is bounded from below by f*, it has a limit and satisfies
(52)limk→∞(f(Xk+1)-f(Xk))=0.
Then, from (51) and (52), we obtain
(53)limk→∞ψ(Xk)=0.
Since {f(Xk)} is strictly decreasing, (49) implies f(Xk)>f(X*) for all k. Now define Vα as follows:
(54)Vα=X*+αf′(X*), α>0.
Then, by the convexity of f, we have
(55)f(Vα)-f(X*)⩾〈f′(X*),Vα-X*〉F=α∥f′(X*)∥2,
which implies
(56)f(Vα)≥f(X*)+α∥f′(X*)∥2>f(X*), α>0.
Choose α=αk such that
(57)f(X*)+αk∥f′(X*)∥2>f(Xk);
that is,
(58)αk>(f(Xk)-f(X*))/∥f′(X*)∥2>0.
Define a function hk:R+→R as
(59)hk(α)=f(X*+αf′(X*))-f(Xk),
where R+={α∈R∣α≥0}. It is clear that hk is continuous on [0,+∞). Note that hk(αk)>0 and hk(0)<0. Since hk is continuous, there exists a point αk¯∈(0,αk) such that hk(αk¯)=0; that is, f(Vαk¯)=f(Xk), where Vαk¯=X*+αk¯f′(X*). Also, note that
(60)ψ(Xk)=minY∈Ef(Xk)(f)minX∈D〈f′(X),X-Y〉F⩽〈f′(X*),X*-Vαk¯〉F.
Taking into account Vαk¯=X*+αk¯f′(X*), we have
(61)-ψ(Xk)⩾〈f′(X*),Vαk¯-X*〉F=∥f′(X*)∥∥Vαk¯-X*∥⩾minX∈D∥f′(X)∥∥Vαk¯-X*∥>0.
Since limk→∞ψ(Xk)=0, this implies
(62)limk→∞Vαk¯=X*.
The continuity of f on Rn×n yields
(63)limk→∞f(Xk)=limk→∞f(Vαk¯)=f(X*),
which contradicts (49).

Consequently, {Xk} is a minimizing sequence for problem (30). Since D is compact, we can always select the convergent subsequences {Xkl} from {Xk} such that
(64)liml→∞Xkl=X¯∈D.
Then together with (63), we obtain
(65)liml→∞f(Xkl)=f(X¯)=f*,
which completes the proof.