An Inexact Update Method with Double Parameters for Nonnegative Matrix Factorization

1 School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, China
2 Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
3 Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
4 Guangxi Key Laboratory of Cryptography and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
5 School of Mathematics and Information, Beifang University of Nationalities, Yinchuan 710021, China


Introduction
Nonnegative matrix factorization (NMF) [1] is not only a well-known matrix decomposition approach but also a practical and efficient feature extraction technique. NMF was first put forward by Lee and Seung. Recently, NMF has been successfully applied in many fields, including face verification [2], text mining [3], gene expression data analysis [4], blind source separation [5], and signal processing [6].
A nonnegative matrix $V \in \mathbb{R}^{m \times n}_{+}$ is decomposed into two low-rank nonnegative matrices $W \in \mathbb{R}^{m \times r}_{+}$ and $H \in \mathbb{R}^{r \times n}_{+}$ such that $V$ is approximately equal to $WH$, denoted by
$$V \approx WH,$$
where $r \in \mathbb{N}_{+}$ and $r \ll mn/(m+n)$. $W$ and $H$ mean different things in different applications; for example, in blind source separation (BSS), $W$ and $H$ are called the mixing matrix and the source signal matrix, respectively. In order to decrease the approximation error between $V$ and $WH$, the Euclidean distance-based model is employed in this paper. Namely, NMF can be expressed as the following optimization problem:
$$\min_{W \ge 0,\, H \ge 0} f(W,H) = \frac{1}{2}\,\|V - WH\|_{F}^{2}, \qquad (2)$$
where $\|\cdot\|_{F}$ is the Frobenius norm and $W \ge 0$ ($H \ge 0$) means that all elements of the matrix $W$ ($H$) are nonnegative. Clearly, it is difficult to find a global optimal solution $(W, H)$ because the objective function $f(W, H)$ is nonconvex. Therefore, the remaining issue is how to solve this nonconvex problem.
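The Euclidean model above can be evaluated directly. A minimal NumPy sketch (Python is used for illustration; the matrix sizes below are assumptions chosen so that the low rank $r$ is much smaller than $mn/(m+n)$):

```python
import numpy as np

def nmf_objective(V, W, H):
    """Euclidean NMF objective f(W, H) = 0.5 * ||V - W H||_F^2 from model (2)."""
    return 0.5 * np.linalg.norm(V - W @ H, "fro") ** 2

# Illustrative dimensions (assumptions): V is m x n, W is m x r, H is r x n.
rng = np.random.default_rng(0)
m, n, r = 20, 30, 4
V = rng.random((m, n))
W = rng.random((m, r))
H = rng.random((r, n))
print(nmf_objective(V, W, H))  # approximation error for a random factor pair
```

Minimizing this quantity over the nonnegative orthant is exactly problem (2); any algorithm discussed below only differs in how it drives this value down.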
In 2001, Lee and Seung tried to find a local optimal solution instead of a global one and proposed the multiplicative update algorithm [7]. The multiplicative update algorithm is widely used as an efficient computational method for NMF. The update rules for (2) are given as follows:
$$H^{k+1} = H^{k} \circ \frac{(W^{k})^{T} V}{(W^{k})^{T} W^{k} H^{k}}, \qquad W^{k+1} = W^{k} \circ \frac{V (H^{k+1})^{T}}{W^{k} H^{k+1} (H^{k+1})^{T}},$$
where $k$ represents the iteration count, $\circ$ denotes the elementwise product, and the division is elementwise. Later, many new methods became available to solve NMF in addition to the multiplicative update algorithm, such as gradient descent algorithms [8, 9], the active set method [10], and alternating nonnegative least squares (ANLS) [11-13].
For the sake of solving the minimization problem (2), Hien et al. [14] proposed a novel algorithm in which the update rule contains only one parameter, and the parameter selection is considered exact in some cases. Compared with the multiplicative update algorithm, the novel algorithm [14] has a faster convergence speed and a smaller decomposition error.
Inexact techniques are widely applied to large-scale optimization problems. Based on this idea, we employ an inexact parameter instead of the exact parameter of [14]. In the meantime, we add another parameter to accelerate the decrease of the objective function. Hence, we present an inexact update algorithm with two parameters. The proposed method also updates the elements of the matrices $W$ and $H$ one by one. Under some assumptions, the proposed method ensures that the objective function always descends until the optimal solution is found.
Similar to the multiplicative update algorithm, the proposed method has many advantages, including efficient storage, simple calculation, and good results. In the multiplicative update algorithm, the descent property is established by means of an auxiliary function, whereas the descent of the proposed method follows from the local monotonicity of a quadratic function, and the proposed method converges faster. Besides, the main idea of ANLS is to solve two optimization subproblems alternately; by contrast, the proposed method is easier to implement.
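The alternating structure of ANLS can be illustrated with a rough NumPy sketch. Note the hedge: a true ANLS method solves each subproblem as an exact nonnegative least-squares problem, while the projection onto the nonnegative orthant below is only a crude stand-in for that solve:

```python
import numpy as np

def alternating_step(V, W, H, floor=1e-12):
    """One alternating round illustrating the ANLS structure: fix one factor,
    fit the other by ordinary least squares, then clip to stay nonnegative.
    (The clipping is NOT an exact nonnegative least-squares solve; it only
    mimics the fix-one-factor / solve-for-the-other alternation.)
    """
    H = np.maximum(np.linalg.lstsq(W, V, rcond=None)[0], floor)
    W = np.maximum(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, floor)
    return W, H

rng = np.random.default_rng(3)
V = rng.random((12, 9))
W = rng.random((12, 3))
H = rng.random((3, 9))
W2, H2 = alternating_step(V, W, H)
print(W2.shape, H2.shape)  # factor shapes are preserved
```

Each half-step is a full linear-algebra solve, which is why the element-by-element method of this paper is easier to implement.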
The remainder of this paper is organized as follows. In Section 2, we present the procedure for updating an element of the matrices $W$ and $H$. In Section 3, we give an inexact update method with double parameters for NMF and establish its convergence properties. In Section 4, experimental results demonstrate the validity of the method. Finally, we conclude the paper.

Algorithm for Updating an Element of Matrix
In this section, we discuss the procedure for updating an element of the matrices $W$ and $H$. In [14], an entry $W_{it}$ is adjusted by adding a single parameter $s$:
$$\widetilde{W}_{it} = W_{it} + s.$$
Motivated by the above work, we adjust $W_{it}$ by two parameters $\mu$ and $s$:
$$\widetilde{W}_{it} = W_{it} + \mu s,$$
in which $\mu$ can be viewed as a constant as well as a function.
We define the parameter $\mu$ by a certain value with $0 < \mu \le 1$. Similar to [14], we deduce that the change of the objective caused by the update is a quadratic function of $s$,
$$f(\widetilde{W}, H) - f(W, H) = g(s) = \frac{a}{2}\,s^{2} + b\,s,$$
where the coefficients $a > 0$ and $b$ are determined by the current iterates (their exact expressions follow [14]). In order to ensure that $f(\widetilde{W}, H)$ possesses the descent property, we should choose $s$ so that $g(s)$ is nonpositive while $\widetilde{W}_{it} = W_{it} + \mu s \ge 0$. Since $g(s)$ is a quadratic function vanishing at $s = 0$ and $s = -2b/a$, it is nonpositive for any $s \in (0, -2b/a)$ or $s \in (-2b/a, 0)$, depending on the sign of $b$. In other words, the value of $s$ can be "inexact." If $s = -b/a$, in particular, $g(s)$ reaches its minimum value; the decline of $f(\widetilde{W}, H)$ is then the steepest, and at that moment $s$ is regarded as "exact."
Lemma 1. If $\widetilde{W}$ does not satisfy the KKT conditions, then $f(\widetilde{W}, H) < f(W, H)$; otherwise, $\widetilde{W} = W$, which implies $f(\widetilde{W}, H) = f(W, H)$.
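The "inexact" versus "exact" choice of the step can be checked numerically on a generic quadratic. The coefficient values below are stand-ins (the paper's exact expressions for them are not reproduced in the text); the point is only the sign pattern of $g$:

```python
# Generic quadratic g(s) = (a/2)*s**2 + b*s with a > 0; the coefficients here
# are illustrative stand-ins for the ones the algorithm derives from the
# current factor entries.
a, b = 2.0, -3.0

def g(s):
    return 0.5 * a * s ** 2 + b * s

s_exact = -b / a        # minimizer of g: the "exact" choice, steepest decrease
s_bound = -2.0 * b / a  # second root of g: endpoint of the nonpositive interval

# Any "inexact" choice s = mu * s_exact with 0 < mu <= 1 still gives g(s) <= 0,
# so the objective cannot increase.
for mu in (0.25, 0.5, 1.0):
    assert g(mu * s_exact) <= 0.0

print(g(s_exact), g(s_bound))  # minimum value, then zero at the endpoint
```

This is the local monotonicity of a quadratic function that the descent argument rests on: anywhere strictly inside the interval between the two roots, the update strictly decreases the objective.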

The Proposed Algorithm
Based on the above analysis, in this section, we report our algorithm as follows.
In order to ensure the monotonic decrease of $f(W, H)$, we give the next theorem, which follows directly from Lemmas 1 and 2 and Algorithm 3.
Theorem 4. Suppose that $(W^{k}, H^{k})$ is generated by Algorithm 3; then $f(W^{k}, H^{k})$ is monotonically decreasing; that is, $f(W^{k+1}, H^{k+1}) \le f(W^{k}, H^{k})$ for all $k$ large enough.
The above corollary follows directly from the theorem. Next, another convergence property of Algorithm 3 is given.
Theorem 6. Suppose $(W, H)$ is a limit point of $(W^{k}, H^{k})$ and

Then $(W, H)$ is a stationary point of problem (2).
Since the above theorem corresponds to Theorem 3.2.2 of [14] and the proof is the same as the one given there, we omit it here.

Numerical Experiments
In this section, we present some numerical experiments for Algorithm 3 and compare its behavior with the method of [14] (denoted NNMF). In the experiments, we compare the following statistics: the number of iterations (Iter), the CPU time in seconds (Time), the speed of convergence to a stationary point, and the minimum value of the objective function (Fun). As in [14], the speed of convergence is measured by the quantity $\Delta(W, H)$. In order to avoid the effect of the initial point on the numerical results, in every experiment we use 20 initial points, and every initial point $(W^{1}, H^{1})$ is randomly generated. We list the average values of Iter, Fun, Time, and $\Delta(W, H)$, respectively.
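The exact formula for the convergence measure is not reproduced in the text above. One widely used stationarity measure for bound-constrained NMF is the norm of the projected gradient, sketched here under that assumption:

```python
import numpy as np

def projected_gradient_norm(V, W, H):
    """A common KKT-based stationarity measure for NMF (an assumption here;
    it is not necessarily the paper's exact Delta(W, H) formula).

    Under the constraint X >= 0, a gradient entry counts only where it can
    still drive a feasible descent step: everywhere for X > 0, and only
    negative entries on the boundary X = 0.
    """
    R = W @ H - V
    gW = R @ H.T        # gradient of 0.5*||V - W H||_F^2 with respect to W
    gH = W.T @ R        # gradient with respect to H
    pgW = np.where((W > 0) | (gW < 0), gW, 0.0)
    pgH = np.where((H > 0) | (gH < 0), gH, 0.0)
    return np.sqrt(np.sum(pgW ** 2) + np.sum(pgH ** 2))

rng = np.random.default_rng(2)
W0 = rng.random((8, 3))
H0 = rng.random((3, 6))
print(projected_gradient_norm(rng.random((8, 6)), W0, H0))  # > 0 away from a stationary point
```

A smaller value means the iterate is closer to satisfying the KKT conditions of problem (2), so plotting this quantity per iteration compares the convergence speed of two algorithms fairly.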
In Table 1, the relevant parameters of Algorithm 3 are specified as follows: From Table 1, we can clearly see two points: (1) Algorithm 3 requires fewer iterations, achieves smaller factorization errors, and is less time consuming; (2) in terms of the speed of convergence, Algorithm 3 is not slower than NNMF.
In Table 2, the relevant parameters of Algorithm 3 are specified as follows: From Table 2, we find that Algorithm 3 is still better than NNMF for most of the test problems. Note that Algorithm 3 differs from the method of [14] only in the choice of the parameter values, one of which is "inexact" while the other is "exact." Moreover, comparing Tables 1 and 2, the values of Fun in Table 1 are better than those in Table 2 for Algorithm 3. In Table 3, the relevant parameters of Algorithm 3 are specified as follows: In Table 3, $\mu$ and $s$ are each regarded as a function. It is obvious that the average number of iterations and the minimum objective function value for Algorithm 3 are smaller than those for NNMF, the CPU time of Algorithm 3 is less than that of NNMF, and the speeds of convergence of Algorithm 3 and NNMF are very close.
In Figure 1, we compare the performance of the proposed method with that of NNMF. The convergence curves are given as follows.
From Figure 1, it is obvious that the objective function value decreases faster for Algorithm 3, and the number of iterations for Algorithm 3 is smaller than that for NNMF.
For all test problems, although the two methods have the same speed of convergence to a stationary point, the proposed method requires fewer iterations, achieves smaller factorization errors, and is less time consuming. Consequently, the above experimental results show that Algorithm 3 performs better than the method of [14] when the parameters $\mu$ and $s$ are chosen suitably.

Conclusions
In this paper, we propose an inexact update method with two parameters, which ensures that the objective function always descends until the optimal solution is found. Experimental results show that the proposed method performs better than the method of [14].
All codes were written in MATLAB and run in MATLAB 7.10 on a PC (2.13 GHz CPU, 2 GB memory) with the Windows 7 operating system.