Complexity Analysis of Primal-Dual Interior-Point Methods for Linear Optimization Based on a New Parametric Kernel Function with a Trigonometric Barrier Term

Abstract. We introduce a new parametric kernel function, which combines the classic kernel function with a trigonometric barrier term, and present various properties of this new kernel function. A class of large- and small-update primal-dual interior-point methods for linear optimization based on this parametric kernel function is proposed. By exploiting the features of the parametric kernel function, we derive the iteration bound O(n^{2/3} log(n/ε)) for large-update methods and O(√n log(n/ε)) for small-update methods. These results match the currently best known iteration bounds for large- and small-update methods based on trigonometric kernel functions.

The dual problem of (P) is given by

(D)  max {b^T y : A^T y + s = c, s ≥ 0}.

For years, LO has been one of the most active research areas in mathematical programming. There are many solution approaches for LO; among them, the interior-point methods (IPMs) have gained the most attention. Several efficient IPMs for LO and a large number of results have been proposed. For an overview of the relevant results, see a recent book on this subject [1] and the references cited therein.
In the literature, two types of primal-dual IPMs are distinguished according to the value of the barrier-update parameter θ: large-update methods and small-update methods. However, there is still a gap between the practical behavior of these algorithms and their theoretical performance results. The so-called large-update IPMs show superior practical performance but have relatively weak theoretical results, while the so-called small-update IPMs enjoy the best known worst-case iteration bounds but perform poorly in computational practice.
Recently, this gap was reduced by Peng et al. [2], who introduced the so-called self-regular kernel functions and designed primal-dual IPMs based on self-regular proximities for LO. They improved the iteration bound for large-update methods from O(n log(n/ε)) to O(√n log n log(n/ε)), which almost closes the gap between the iteration bounds for large- and small-update methods. Later, Bai et al. [3] presented a large class of eligible kernel functions, which is fairly general and includes the classical logarithmic function and the self-regular functions, as well as many non-self-regular functions, as special cases. The best known iteration bounds for LO obtained this way are as good as the ones in [2] for appropriate choices of the eligible kernel functions. Some well-known eligible kernel functions and the corresponding iteration bounds for large- and small-update methods are collected in Table 1. For some other related kernel-function-based IPMs, we refer to the recent books on this subject [4, 5]. In particular, El Ghami et al. [6] first introduced a trigonometric kernel function for primal-dual IPMs in LO. They established the worst-case iteration bounds for large- and small-update methods, namely, O(n^{3/4} log(n/ε)) and O(√n log(n/ε)), respectively. Peyghami et al. [7] considered a new kernel function with a trigonometric barrier term. Based on this kernel function, they proved that the large-update method for solving LO has the worst-case iteration bound O(n^{2/3} log(n/ε)), which improves the iteration bound obtained so far for large-update methods based on the trigonometric kernel function proposed in [6]. Recently, Peyghami and Hafshejani [8] established the better iteration bound O(√n (log n)^2 log(n/ε)) for large-update methods based on a new kernel function consisting of a trigonometric function in its barrier term.
Motivated by their work, the purpose of this paper is to deal with the so-called primal-dual IPMs for LO based on a new kind of parametric kernel function, given in (1), where 0 < p ≤ 8/25 (the bound on the parameter p is due to the proof of Lemma 3) and h(t) = π(1 − t)/(3t + 2). We develop some new properties of the parametric kernel function, as well as of the corresponding barrier function.
Compared to the existing ones, the proposed function has a parameter p; hence, our kernel function actually defines a class of kernel functions. Furthermore, we present a class of primal-dual IPMs for LO based on this new parametric kernel function. The obtained iteration bound for large-update methods, namely, O(n^{2/3} log(n/ε)), improves the classical iteration complexity by a factor of n^{1/3}; for small-update methods, we derive the iteration bound O(√n log(n/ε)), which matches the currently best known iteration bound for small-update methods.

The paper is organized as follows. In Section 2, we present the framework of kernel-based IPMs for LO. In Section 3, we introduce the new parametric kernel function with a trigonometric barrier term and develop some useful properties of the new kernel function, as well as of the corresponding barrier function. The analysis and complexity of the algorithms for large- and small-update methods are presented in Section 4. Finally, Section 5 contains some conclusions and remarks.

Some notations used throughout the paper are as follows. R^n, R^n_+, and R^n_{++} denote the set of vectors with n components, the set of nonnegative vectors, and the set of positive vectors, respectively. ‖x‖ denotes the 2-norm of the vector x, and e denotes the identity (all-one) vector. For any x ∈ R^n, x_min (or x_max) denotes the smallest (or largest) component of x. Finally, if f(x) ≥ 0 is a real-valued function of a real nonnegative variable, the notation f(x) = O(x) means that f(x) ≤ c̄x for some positive constant c̄, and f(x) = Θ(x) means that c₁x ≤ f(x) ≤ c₂x for two positive constants c₁ and c₂.

Framework of Kernel-Based IPMs for LO
In this section, we briefly recall the framework of kernel-based IPMs for LO, which includes the central path, the new search directions, and the generic primal-dual interior-point algorithm for LO.

Central Path for LO.
Throughout the paper, we assume that both (P) and (D) satisfy the interior-point condition (IPC); that is, there exists (x^0, y^0, s^0) such that

Ax^0 = b, x^0 > 0, A^T y^0 + s^0 = c, s^0 > 0. (2)

The Karush-Kuhn-Tucker (KKT) conditions for (P) and (D) are given by

Ax = b, x ≥ 0, A^T y + s = c, s ≥ 0, xs = 0, (3)

where xs denotes the component-wise product of the vectors x and s. The standard approach is to replace the third equation in (3), the so-called complementarity condition for (P) and (D), by the parameterized equation xs = μe, with μ > 0. This yields the following system:

Ax = b, x ≥ 0, A^T y + s = c, s ≥ 0, xs = μe. (4)

Since rank(A) = m and the IPC holds, the parameterized system (4) has a unique solution for each μ > 0. This solution is denoted by (x(μ), y(μ), s(μ)); we call x(μ) the μ-center of (P) and (y(μ), s(μ)) the μ-center of (D). The set of μ-centers (with μ running through all positive real numbers) forms a homotopy path, which is called the central path of (P) and (D). If μ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition in (3), the limit yields optimal solutions for (P) and (D) (see, e.g., [1]).

New Search Directions.
IPMs follow the central path approximately and approach the optimal set of LO by letting μ go to zero. Applying Newton's method to the system (4), we have

AΔx = 0, A^T Δy + Δs = 0, sΔx + xΔs = μe − xs. (5)

This system has a unique solution [2, 3]. Defining the vector

v := √(xs/μ), (6)

note that the triple (x, y, s) coincides with the μ-center (x(μ), y(μ), s(μ)) if and only if v = e. For further use, we introduce the scaled search directions d_x and d_s according to

d_x := vΔx/x, d_s := vΔs/s. (7)

By using (6) and (7), after some elementary reductions, we have

Ā d_x = 0, (1/μ)Ā^T Δy + d_s = 0, d_x + d_s = v^{-1} − v, (8)

where Ā = AV^{-1}X, with X = diag(x) and V = diag(v). It is obvious that the right-hand side v^{-1} − v in the third equation of the system (8) equals minus the derivative of the classic barrier function; that is,

v^{-1} − v = −∇Ψ_c(v), Ψ_c(v) := Σ_{i=1}^n ψ_c(v_i), (9)

where

ψ_c(t) := (t² − 1)/2 − log t (10)

is the kernel function of the classic barrier function. Thus, the system (8) can be rewritten as the following system:

Ā d_x = 0, (1/μ)Ā^T Δy + d_s = 0, d_x + d_s = −∇Ψ_c(v). (11)

Corresponding to the parametric kernel function (1), we define the barrier function Ψ(v) : R^n_{++} → R_+ as follows:

Ψ(v) := Σ_{i=1}^n ψ(v_i). (12)

Due to the properties of the parametric kernel function ψ(t) (see, e.g., Section 3), we can conclude that Ψ(v) is a strictly convex function that attains its minimal value at v = e with Ψ(e) = 0; that is,

Ψ(v) ≥ 0 for all v ∈ R^n_{++}, and Ψ(v) = 0 if and only if v = e. (13)

Hence, the value of Ψ(v) can be considered as a measure of the distance between the given iterate and the μ-center of the algorithms.
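As a numerical illustration of the relations above, the following is a minimal sketch (hypothetical toy data; the classic-kernel right-hand side, i.e., system (5)). It solves the Newton system directly and checks the scaled identities d_x + d_s = v^{-1} − v and the orthogonality of d_x and d_s:

```python
import numpy as np

# Toy strictly feasible primal-dual pair (hypothetical data, chosen by hand):
# min c^T x  s.t.  Ax = b, x >= 0, together with its dual.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
x = np.array([1.0, 1.0, 1.0])      # strictly feasible primal point
y = np.array([0.0, 0.0])
c = np.array([2.0, 3.0, 1.0])
s = c - A.T @ y                    # dual slack s = c - A^T y > 0
b = A @ x
mu = 1.0

# Newton system (5):  A dx = 0,  A^T dy + ds = 0,  s*dx + x*ds = mu*e - x*s.
n, m = x.size, b.size
e = np.ones(n)
K = np.block([[A,                np.zeros((m, m)), np.zeros((m, n))],
              [np.zeros((n, n)), A.T,              np.eye(n)],
              [np.diag(s),       np.zeros((n, m)), np.diag(x)]])
rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * e - x * s])
dx, dy, ds = np.split(np.linalg.solve(K, rhs), [n, n + m])

# Scaled quantities: v = sqrt(xs/mu), d_x = v*dx/x, d_s = v*ds/s, as in (6)-(7).
v = np.sqrt(x * s / mu)
d_x = v * dx / x
d_s = v * ds / s

# Third equation of (8): d_x + d_s = v^{-1} - v; d_x and d_s are orthogonal
# (null space vs. row space of the scaled matrix).
print(np.allclose(d_x + d_s, 1.0 / v - v))
print(abs(d_x @ d_s) < 1e-8)
```

The two printed checks reflect exactly the algebra behind (8): the first is the scaled centering equation, the second the orthogonality used later in the complexity analysis.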
The approach in this paper differs in only one detail: we replace the right-hand side of the third equation in (8) by −∇Ψ(v). This yields the following system:

Ā d_x = 0, (1/μ)Ā^T Δy + d_s = 0, d_x + d_s = −∇Ψ(v). (14)

The scaled search directions d_x and d_s are orthogonal, since d_x belongs to the null space and d_s to the row space of the matrix Ā. From (13), one may easily verify that the right-hand side in the system (14) vanishes if and only if v = e. Thus, we conclude that Δx, Δy, and Δs all vanish if and only if v = e, that is, if and only if x = x(μ), y = y(μ), and s = s(μ). Otherwise, we use (Δx, Δy, Δs) as the new search direction.
For the analysis of the interior-point algorithm, we define the norm-based proximity measure δ(v) as follows:

δ(v) := (1/2)‖∇Ψ(v)‖ = (1/2)‖d_x + d_s‖. (15)

One can easily verify that

δ(v) = 0 ⟺ v = e ⟺ Ψ(v) = 0. (16)
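The proximity machinery can be sketched in a few lines. Since formula (1) for the parametric kernel is not reproduced here, the classic kernel ψ_c(t) = (t² − 1)/2 − log t is used as a stand-in; the definitions of Ψ and δ are the same for any kernel:

```python
import numpy as np

# Classic kernel as a stand-in for the parametric kernel of (1).
psi  = lambda t: (t**2 - 1) / 2 - np.log(t)
dpsi = lambda t: t - 1 / t                     # psi'(t)

def Psi(v):
    """Barrier function Psi(v) = sum_i psi(v_i)."""
    return float(np.sum(psi(v)))

def delta(v):
    """Norm-based proximity measure delta(v) = (1/2) ||grad Psi(v)||."""
    return 0.5 * float(np.linalg.norm(dpsi(v)))

e = np.ones(4)
v = np.array([0.5, 1.0, 2.0, 1.5])
print(Psi(e), delta(e))            # both vanish at the mu-center v = e
print(Psi(v) > 0, delta(v) > 0)    # both positive away from it
```

This mirrors (15)-(16): both measures vanish exactly at v = e and are positive elsewhere.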

Generic Primal-Dual Algorithm for LO.
In general, each kernel function gives rise to a primal-dual interior-point algorithm. Without loss of generality, we assume that (x(μ), y(μ), s(μ)) is known for some positive μ. For example, due to the above assumption, we may assume this for μ = 1, with x(1) = s(1) = e. Then, we decrease μ to μ := (1 − θ)μ for some θ ∈ (0, 1). We solve the scaled Newton system (14) and use (7) to get the new search direction (Δx, Δy, Δs). The new triple (x_+, y_+, s_+) is given by

x_+ = x + αΔx, y_+ = y + αΔy, s_+ = s + αΔs, (17)

where α ∈ (0, 1] denotes the default step size, which has to be chosen appropriately. If necessary, we repeat the procedure until we find iterates that are in the neighborhood of (x(μ), y(μ), s(μ)). Then μ is again reduced by the factor 1 − θ, and we apply Newton's method targeting the new μ-centers, and so on. This process is repeated until μ is small enough, say until nμ < ε; at this stage, we have found an ε-solution of (P) and (D). The generic form of this algorithm is shown in Algorithm 1.
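The outer/inner structure of the generic algorithm can be sketched as follows. This is an illustrative implementation only, with hypothetical data and the classic kernel (so the proximity is δ(v)² = (1/4)‖v − v^{-1}‖²), not the parametric kernel analyzed in this paper:

```python
import numpy as np

def newton_step(A, x, s, mu):
    """Solve the Newton system (5): A dx = 0, A^T dy + ds = 0, s dx + x ds = mu e - xs."""
    m, n = A.shape
    K = np.block([[A,                np.zeros((m, m)), np.zeros((m, n))],
                  [np.zeros((n, n)), A.T,              np.eye(n)],
                  [np.diag(s),       np.zeros((n, m)), np.diag(x)]])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * np.ones(n) - x * s])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]

def proximity2(x, s, mu):
    """delta(v)^2 for the classic kernel: (1/4)||v - v^{-1}||^2 with v = sqrt(xs/mu)."""
    v = np.sqrt(x * s / mu)
    return 0.25 * np.sum((v - 1.0 / v) ** 2)

def generic_ipm(A, b, c, x, y, s, theta=0.5, eps=1e-8, tau=1.0):
    """Generic primal-dual IPM: shrink mu by (1 - theta), then re-center."""
    n = x.size
    mu = x @ s / n
    while n * mu >= eps:                       # stop once n*mu < eps
        mu *= 1.0 - theta                      # outer iteration: mu := (1 - theta) mu
        while proximity2(x, s, mu) > tau:      # inner iterations: damped Newton steps
            dx, dy, ds = newton_step(A, x, s, mu)
            alpha = 1.0                        # damp to stay strictly interior
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
                alpha *= 0.5
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Toy strictly feasible instance (hypothetical data): min c^T x s.t. Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
x0, y0 = np.ones(3), np.zeros(2)
c = np.array([2.0, 3.0, 1.0])
s0 = c - A.T @ y0
b = A @ x0
x, y, s = generic_ipm(A, b, c, x0, y0, s0)
print(x @ s)                                   # duality gap, driven below eps-level
```

The step-size rule here is only a crude interiority safeguard; the paper's analysis instead uses the default step size derived in Section 4.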

New Parametric Kernel Function and Its Properties
In this section, we introduce the new parametric kernel function with a trigonometric barrier term and develop some useful properties of the new kernel function, as well as of the corresponding barrier function, that are needed in the analysis of the algorithms.
For ease of reference, we give the first three derivatives ψ'(t), ψ''(t), and ψ'''(t) of ψ(t), given by (1), with respect to t. One can easily verify that

ψ(1) = ψ'(1) = 0.

This implies that the kernel function ψ(t) is completely defined by its second derivative as follows:

ψ(t) = ∫₁ᵗ ∫₁^ξ ψ''(ζ) dζ dξ.

In what follows, we develop some technical lemmas on the parametric kernel function.
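The double-integral representation can be checked numerically. A small sketch with the classic kernel as a stand-in (its second derivative is ψ_c''(t) = 1 + 1/t², and ψ_c(1) = ψ_c'(1) = 0):

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]) / 2.0))

ddpsi = lambda t: 1.0 + 1.0 / t**2          # classic kernel: psi_c''(t)

def psi_from_second(t, k=2000):
    """Recover psi(t) = int_1^t int_1^xi psi''(zeta) dzeta dxi numerically."""
    xi = np.linspace(1.0, t, k + 1)
    inner = np.array([trap(ddpsi(np.linspace(1.0, u, k + 1)),
                           np.linspace(1.0, u, k + 1)) for u in xi])
    return trap(inner, xi)

t = 2.0
closed = (t**2 - 1.0) / 2.0 - np.log(t)      # closed form psi_c(t)
print(abs(psi_from_second(t) - closed))      # small discretization error
```

The inner integral reproduces ψ'(ξ) (since ψ'(1) = 0), and the outer one reproduces ψ(t) (since ψ(1) = 0), matching the representation above.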
From the above discussions, the proof of the lemma is completed.
The property described below is exponential convexity, which has been proven to be very useful in the analysis of primal-dual interior-point algorithms based on the eligible kernel functions [2,3].
Proof. The result of the lemma follows immediately from Lemma 1 in [2], which states that the above inequality holds if and only if tψ''(t) + ψ'(t) > 0 for all t > 0. Hence, by (29) in Lemma 3, the proof of the lemma is completed.
From (28) of Lemma 3 (i.e., ψ''(t) > 1), we say that ψ(t) is strongly convex. The following lemma provides an important consequence of this property. These results can be obtained directly from the corresponding results in [3].
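Both properties are easy to probe numerically. The sketch below uses the classic kernel as a stand-in (for it, tψ''(t) + ψ'(t) = 2t > 0 and ψ''(t) = 1 + 1/t² > 1, so it satisfies the same exponential-convexity and strong-convexity conditions):

```python
import numpy as np

psi   = lambda t: (t**2 - 1) / 2 - np.log(t)   # classic kernel psi_c
dpsi  = lambda t: t - 1 / t                    # psi_c'
ddpsi = lambda t: 1 + 1 / t**2                 # psi_c''

ts = np.linspace(0.1, 5.0, 50)

# Condition of Lemma 1 in [2]: t*psi''(t) + psi'(t) > 0 ...
cond = all(t * ddpsi(t) + dpsi(t) > 0 for t in ts)

# ... which is equivalent to exponential convexity:
# psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2.
expo = all(psi(np.sqrt(t1 * t2)) <= (psi(t1) + psi(t2)) / 2 + 1e-12
           for t1 in ts for t2 in ts)

# Strong convexity: psi''(t) > 1.
strong = all(ddpsi(t) > 1 for t in ts)
print(cond, expo, strong)
```

For ψ_c the exponential-convexity inequality reduces to (t₁ − t₂)² ≥ 0, so all three checks hold on any positive grid.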
As the consequences of Lemma 5, one has the following two corollaries.
Theorem 9. Let 0 < θ < 1.

Proof. It follows from Lemma 8 with β = 1/√(1 − θ) that the stated estimate holds. Thus, we have, by Corollary 7, the desired bound. The proof of the theorem is completed.

Analysis and Complexity of the Algorithms
In this section, we first choose a default step size.Then, we derive an upper bound for the decrease of the barrier function during an inner iteration.Finally, the iteration bounds for large-and small-update methods are established.

Computation of the Default Step Size.
In each inner iteration, we first compute the scaled search direction (d_x, Δy, d_s) from the system (14). Then, through (7), we obtain the search direction (Δx, Δy, Δs). Note that during an inner iteration the parameter μ is fixed. Hence, after the step the new v-vector is given by

v_+ = √(x_+ s_+ / μ).

It follows from (17) that

x_+ = x(e + α d_x/v), s_+ = s(e + α d_s/v).

Also using xs = μv², we obtain

v_+ = √((v + α d_x)(v + α d_s)).

We consider the decrease in Ψ as a function of α and define

f(α) := Ψ(v_+) − Ψ(v). (65)

However, working with f(α) may not be easy because, in general, f(α) is not convex. Thus, we look for a convex function f₁(α) that is an upper bound of f(α) and whose derivatives are easier to calculate than those of f(α).
By the exponential convexity of ψ, f(α) ≤ f₁(α), where

f₁(α) := (1/2)[Ψ(v + α d_x) + Ψ(v + α d_s)] − Ψ(v).

Taking the derivative with respect to α, we have

f₁'(α) = (1/2) Σ_{i=1}^n [ψ'(v_i + α d_{x,i}) d_{x,i} + ψ'(v_i + α d_{s,i}) d_{s,i}].

This gives, also using the third equation of the system (14),

f₁'(0) = (1/2) ∇Ψ(v)^T (d_x + d_s) = −2δ(v)².

Differentiating once again, we get

f₁''(α) = (1/2) Σ_{i=1}^n [ψ''(v_i + α d_{x,i}) d_{x,i}² + ψ''(v_i + α d_{s,i}) d_{s,i}²].

Below we use the shorthand notation δ := δ(v). The following lemma provides an upper bound on f₁''(α), which can be found in Lemma 4.1 in [3].
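The identity f'(0) = −2δ(v)² can be verified numerically. A sketch with the classic kernel as a stand-in for (1); the vector v and the matrix M used to split −∇Ψ(v) into orthogonal parts d_x and d_s are hypothetical:

```python
import numpy as np

psi  = lambda t: (t**2 - 1) / 2 - np.log(t)    # classic kernel
dpsi = lambda t: t - 1 / t
Psi  = lambda v: float(np.sum(psi(v)))

v = np.array([0.8, 1.3, 0.9, 1.6])
M = np.array([[1.0, 2.0, 0.5, 1.0]])
P = np.eye(4) - M.T @ np.linalg.inv(M @ M.T) @ M   # orthogonal projector onto null(M)
g = -dpsi(v)                                       # d_x + d_s = -grad Psi(v), as in (14)
d_x, d_s = P @ g, g - P @ g                        # orthogonal split

def f(a):
    """Decrease function (65): f(alpha) = Psi(v_+) - Psi(v)."""
    v_plus = np.sqrt((v + a * d_x) * (v + a * d_s))
    return Psi(v_plus) - Psi(v)

delta2 = 0.25 * float(g @ g)                       # delta(v)^2 = (1/4)||grad Psi||^2
fd = (f(1e-6) - f(-1e-6)) / 2e-6                   # central difference for f'(0)
print(abs(f(0.0)) < 1e-12, abs(fd + 2 * delta2) < 1e-4)
```

The finite-difference slope at α = 0 agrees with −2δ(v)², the value used as the starting point of the step-size analysis.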

Decrease of the Barrier Function during an Inner Iteration.
In what follows, we show that the barrier function Ψ(v) decreases in each inner iteration with the default step size α, as defined by (84). For this, we need the following technical result.
The following theorem shows that the default step size (84) yields a sufficient decrease of the barrier function during each inner iteration.

Table 1 :
Complexity results for the eligible kernel functions.