Linear Total Variation Approximate Regularized Nuclear Norm Optimization for Matrix Completion



Introduction
The problem of matrix completion, which can be seen as an extension of the recently developed compressed sensing (CS) theory [1][2][3], plays an important role in the field of signal and image processing [4][5][6][7][8][9][10][11]. This problem occurs in many real applications in computer vision and pattern recognition, such as image inpainting [12, 13], video denoising [14], and recommender systems [15, 16]. Reconstruction algorithms for matrix completion have received much attention. Cai et al. [17] proposed the singular value thresholding (SVT) algorithm for matrix completion and related nuclear norm minimization problems. In [18], a simple and fast singular value projection (SVP) algorithm for rank minimization with affine constraints is exploited. Keshavan et al. [19] dealt with matrix completion based on singular value decomposition followed by local manifold optimization. In order to achieve a better approximation of the rank of a matrix, Hu et al. [11] presented an approach based on truncated nuclear norm regularization (TNNR), which is defined as the difference between the nuclear norm and the sum of the largest few singular values. Since most existing matrix completion models solve the low-rank optimization via the nuclear norm, we recall this model here. For an incomplete matrix M ∈ R^{m×n} of rank r, the model can be described as follows:

min_X rank(X) s.t. X_{ij} = M_{ij}, (i, j) ∈ Ω, (1)

where X ∈ R^{m×n} and Ω is the set of locations corresponding to the observed entries.
Unfortunately, the rank minimization problem in (1) is NP-hard, so the following convex relaxation is widely used:

min_X ‖X‖_* s.t. X_{ij} = M_{ij}, (i, j) ∈ Ω, (2)

where ‖·‖_* is the nuclear norm, given by

‖X‖_* = Σ_{i=1}^{min(m,n)} σ_i(X), (3)

where σ_i(X) denotes the ith largest singular value of X.
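As a quick numerical illustration of the nuclear norm in (3) — a minimal NumPy sketch, not the authors' code — the norm can be computed directly from the singular values; for a rank-one matrix uv^T it reduces to ‖u‖₂‖v‖₂:

```python
import numpy as np

def nuclear_norm(X):
    # Nuclear norm (3): the sum of the singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

# Rank-one example: uv^T has a single nonzero singular value ||u||*||v||.
u = np.array([[1.0], [2.0]])
v = np.array([[3.0, 4.0]])
M = u @ v  # rank-one matrix
```

For the identity matrix I_n, every singular value is 1, so the nuclear norm is simply n.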

In this paper, our objective is to exploit the intrinsic geometry of the data distribution and incorporate it as an additional regularization term, in order to deal with images that are rich in texture. The total variation (TV) norm has demonstrated its usefulness as a regularizer in the field of image processing, so we propose here a method that combines the nuclear norm with a linear TV approximate norm to solve the matrix completion problem. We call it the linear total variation approximate regularized nuclear norm (LTVNN) minimization problem. This combined optimization problem is solved by a simple and efficient scheme based on the alternating direction method of multipliers (ADMM) [20, 21].
The paper is organized as follows. In the next section, we introduce the proposed LTVNN model and describe the optimization scheme. In Section 3, we establish convergence results for the iterations given in Section 2. Experimental results on a set of images are provided in Section 4. Finally, we draw conclusions in Section 5.

Proposed Method
2.1. Some Preliminaries. The total variation of a matrix X along the vertical and horizontal directions can be described as

TV_v(X) = Σ_{i,j} |X_{i+1,j} − X_{i,j}|, (4)

TV_h(X) = Σ_{i,j} |X_{i,j+1} − X_{i,j}|. (5)

So the total variation of X is the summation of the magnitude of the gradient at each pixel [22]:

TV(X) = Σ_{i,j} ((X_{i+1,j} − X_{i,j})² + (X_{i,j+1} − X_{i,j})²)^{1/2}. (6)

An equivalent (anisotropic) total variation formula is

TV(X) = Σ_{i,j} (|X_{i+1,j} − X_{i,j}| + |X_{i,j+1} − X_{i,j}|). (7)

Here, we use a linear total variation approximation of (7); that is,

‖X‖_LTVA = Σ_{i,j} ((X_{i+1,j} − X_{i,j})² + (X_{i,j+1} − X_{i,j})²). (8)

2.2. Proposed Model. As mentioned above, the key point of the proposed approach is the combination of the nuclear norm and the linear total variation approximate norm; therefore, the optimization problem is described as

min_X (1 − γ)‖X‖_* + γ‖X‖_LTVA s.t. X_{ij} = M_{ij}, (i, j) ∈ Ω, (9)

where 0 ≤ γ ≤ 1 is a penalty parameter, ‖X‖_* is the nuclear norm defined in (3), and ‖X‖_LTVA is the linear total variation approximate norm defined in (8), which can be reformulated as

‖X‖_LTVA = ‖X − Xφ1‖_F² + ‖X − φ2X‖_F² = Tr((X − Xφ1)(X − Xφ1)^T) + Tr((X − φ2X)(X − φ2X)^T), (10)

where "Tr" denotes the trace of a matrix, ‖·‖_F denotes the Frobenius norm of a matrix, and φ1 ∈ R^{n×n} and φ2 ∈ R^{m×m} are, respectively, the column and row transform (shift) matrices, given by

(φ1)_{k,k+1} = 1, k = 1, …, n − 1, and (φ2)_{k,k+1} = 1, k = 1, …, m − 1, with all other entries zero. (11)

So, the problem in (9) can be rewritten as

min_X (1 − γ)‖X‖_* + γ(‖X − Xφ1‖_F² + ‖X − φ2X‖_F²) s.t. X_{ij} = M_{ij}, (i, j) ∈ Ω. (12)

2.3. The Optimization Scheme. The alternating direction method of multipliers (ADMM) [20, 21] is an efficient and scalable optimization framework which exploits the structure of the optimization problem. In this section, we use ADMM to deal with the problem in (12), which can be reformulated with an auxiliary variable W as

min_{X,W} (1 − γ)‖X‖_* + γ(‖W − Wφ1‖_F² + ‖W − φ2W‖_F²) s.t. X = W, W_{ij} = M_{ij}, (i, j) ∈ Ω, (13)

where ‖W − Wφ1‖_F² and ‖W − φ2W‖_F² are the column and row regularization terms. The augmented Lagrangian of (13) is

L(X, W, Y, μ) = (1 − γ)‖X‖_* + γ(‖W − Wφ1‖_F² + ‖W − φ2W‖_F²) + ⟨Y, X − W⟩ + (μ/2)‖X − W‖_F², (14)

where μ > 0 is the penalty parameter and Y is the multiplier.
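The reformulation in (10) can be checked numerically. The NumPy sketch below (not the authors' code) builds shift matrices for φ1 and φ2 — we use a cyclic wrap-around entry for the boundary, which is our assumption — and verifies that ‖X − Xφ1‖_F² + ‖X − φ2X‖_F² equals the sum of squared differences between adjacent columns and adjacent rows:

```python
import numpy as np

def shift_matrix(n):
    # Shift matrix with ones on the first superdiagonal; the wrap-around
    # entry S[-1, 0] makes the shift cyclic (our boundary assumption).
    S = np.zeros((n, n))
    S[np.arange(n - 1), np.arange(1, n)] = 1.0
    S[-1, 0] = 1.0
    return S

def ltva_norm(X):
    # ||X - X*phi1||_F^2 + ||X - phi2*X||_F^2, as in (10).
    m, n = X.shape
    phi1 = shift_matrix(n)  # column transform, acts on the right
    phi2 = shift_matrix(m)  # row transform, acts on the left
    return (np.linalg.norm(X - X @ phi1, 'fro') ** 2
            + np.linalg.norm(X - phi2 @ X, 'fro') ** 2)

# Sanity check against direct (cyclic) neighbour differences.
X = np.arange(12, dtype=float).reshape(3, 4)
col_diffs = ((X - np.roll(X, 1, axis=1)) ** 2).sum()   # horizontal
row_diffs = ((X - np.roll(X, -1, axis=0)) ** 2).sum()  # vertical
```

Right-multiplication by φ1 shifts columns, left-multiplication by φ2 shifts rows, which is why the two terms capture horizontal and vertical differences respectively.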
The solution can be obtained by combining the solutions of the two regularization subproblems, defined as follows.

Row TV:

L_R(XR, WR, YR, μ) = (1 − γ)‖XR‖_* + γ‖WR − φ2WR‖_F² + ⟨YR, XR − WR⟩ + (μ/2)‖XR − WR‖_F², (15)

where XR denotes the optimization result along the vertical direction of the total variation defined in (4).

Column TV:

L_C(XC, WC, YC, μ) = (1 − γ)‖XC‖_* + γ‖WC − WCφ1‖_F² + ⟨YC, XC − WC⟩ + (μ/2)‖XC − WC‖_F², (16)

where XC denotes the optimization result along the horizontal direction of the total variation defined in (5).
We deal with the column linear TV optimization problem in (16) by the following steps in each iteration.
Step 1 (formulating the XC subproblem). Fix WC_k and YC_k; minimizing (16) over XC reduces to

XC_{k+1} = arg min_{XC} (1 − γ)‖XC‖_* + (μ/2)‖XC − (WC_k − YC_k/μ)‖_F². (18)

Step 2 (computing XC_{k+1}). For τ > 0, the singular value shrinkage operator is defined as

D_τ(Z) = U diag(max(σ_i − τ, 0)) V^T, where Z = U diag(σ_i) V^T is the SVD of Z. (19)

Using the operator D_τ in (19), the solution of (18) can be obtained as

XC_{k+1} = D_{(1−γ)/μ}(WC_k − YC_k/μ).

Step 3 (computing WC_{k+1}). Fix XC_{k+1} and YC_k and calculate WC_{k+1} as follows:

WC_{k+1} = arg min_{WC} γ‖WC − WCφ1‖_F² + ⟨YC_k, XC_{k+1} − WC⟩ + (μ/2)‖XC_{k+1} − WC‖_F²,

which is a quadratic function of WC and can easily be solved by setting the derivative of L(XC_{k+1}, WC, YC_k, μ) with respect to WC to zero, which gives

WC_{k+1} = (μ XC_{k+1} + YC_k)[2γ(I − φ1)(I − φ1)^T + μI]^{−1}.

Then we fix the values at the observed entries:

(WC_{k+1})_{ij} = M_{ij}, (i, j) ∈ Ω,

keeping the computed values on Ω̄, where Ω̄ denotes the set of the missing entries.
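The closed form of the quadratic W-subproblem in Step 3 can be verified numerically. The sketch below (NumPy, our own illustration with an assumed cyclic-shift φ1 and the augmented-Lagrangian terms γ‖W − Wφ1‖_F² + ⟨Y, X − W⟩ + (μ/2)‖X − W‖_F²) solves for W and checks that the gradient vanishes at the solution:

```python
import numpy as np

def shift_matrix(n):
    # Cyclic shift matrix (boundary handling is our assumption).
    S = np.zeros((n, n))
    S[np.arange(n - 1), np.arange(1, n)] = 1.0
    S[-1, 0] = 1.0
    return S

def update_W(X, Y, gamma, mu):
    # Minimize gamma*||W - W@phi1||_F^2 + <Y, X - W> + (mu/2)*||X - W||_F^2.
    # Setting the gradient 2*gamma*W@D@D.T - Y - mu*(X - W) = 0, D = I - phi1,
    # gives W = (mu*X + Y) @ inv(2*gamma*D@D.T + mu*I).
    n = X.shape[1]
    D = np.eye(n) - shift_matrix(n)
    A = 2.0 * gamma * (D @ D.T) + mu * np.eye(n)
    return (mu * X + Y) @ np.linalg.inv(A)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))
Y = rng.standard_normal((4, 5))
gamma, mu = 0.5, 1.0
W = update_W(X, Y, gamma, mu)

# The gradient of the objective at W should be numerically zero.
D = np.eye(5) - shift_matrix(5)
grad = 2 * gamma * W @ D @ D.T - Y - mu * (X - W)
```

Because the subproblem is an unconstrained convex quadratic in W, a single linear solve suffices; no inner iteration is needed.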
Step 4 (computing YC_{k+1}). Fix XC_{k+1} and WC_{k+1} and calculate YC_{k+1} as follows:

YC_{k+1} = YC_k + μ(XC_{k+1} − WC_{k+1}).

Steps 1–4 are repeated until the stop condition ‖XC_{k+1} − XC_k‖_F ≤ ε is satisfied.

The row TV problem defined by (15) can be solved in a similar way to the column TV problem. The only difference is the update of WR_{k+1} in Step 3, which is given by

WR_{k+1} = [2γ(I − φ2)^T(I − φ2) + μI]^{−1}(μ XR_{k+1} + YR_k),

and the stop condition is ‖XR_{k+1} − XR_k‖_F ≤ ε. Finally, we obtain X_{k+1} as the average of XC_{k+1} and XR_{k+1}; that is,

X_{k+1} = (XC_{k+1} + XR_{k+1})/2.
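Putting the column-TV steps together, a minimal end-to-end sketch might look as follows (NumPy, not the authors' MATLAB code; the cyclic shift matrix, the sign convention of the multiplier update, and the closed-form W step are our assumptions):

```python
import numpy as np

def shift_matrix(n):
    # Cyclic shift matrix (our boundary assumption).
    S = np.zeros((n, n))
    S[np.arange(n - 1), np.arange(1, n)] = 1.0
    S[-1, 0] = 1.0
    return S

def svt(Z, tau):
    # Singular value shrinkage operator D_tau: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def column_tv_admm(M, mask, gamma=0.5, mu=1.0, iters=100, eps=1e-7):
    # mask: boolean array, True at observed entries (the set Omega).
    m, n = M.shape
    D = np.eye(n) - shift_matrix(n)
    A_inv = np.linalg.inv(2 * gamma * (D @ D.T) + mu * np.eye(n))
    X = np.where(mask, M, 0.0)
    W, Y = X.copy(), np.zeros_like(X)
    for _ in range(iters):
        X_new = svt(W - Y / mu, (1 - gamma) / mu)  # update XC via D_tau
        W = (mu * X_new + Y) @ A_inv               # quadratic W-subproblem
        W[mask] = M[mask]                          # fix observed entries
        Y = Y + mu * (X_new - W)                   # multiplier update
        if np.linalg.norm(X_new - X, 'fro') <= eps:
            X = X_new
            break
        X = X_new
    return X

# Toy run: a rank-one matrix with all entries observed.
M = np.outer(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 1.0, 2.0, 2.0]))
mask = np.ones_like(M, dtype=bool)
X_rec = column_tv_admm(M, mask)
```

With all entries observed, the iteration should return the input matrix essentially unchanged, which is a useful sanity check before running it on masked images.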

Convergence Analysis
In this section, we prove the convergence of the column total variation problem (16); the convergence of the row total variation problem can be established in the same way. The objective function of (16) for the column variation is

min_{XC,WC} (1 − γ)‖XC‖_* + γ‖WC − WCφ1‖_F² s.t. XC = WC, (WC)_{ij} = M_{ij}, (i, j) ∈ Ω.

The details of the proof can be found in [17].

Theorem 2. Assume that the sequence of step sizes obeys the admissibility condition given in the Appendix. Here, X* denotes the optimization result and X_k denotes the object variable at the kth iteration. Then the iteration procedure defined in Section 2.3 converges to the unique optimization result X*. The details of the proof can be found in the Appendix.

Experiments
In this section, we test the proposed method on a set of images. The algorithm was implemented in the MATLAB programming language on a PC running Microsoft Windows 7, with an Intel Core i5 CPU at 2.79 GHz and 2 GB of RAM.
We deal with the three channels (R, G, B) of color images separately and combine the results to obtain the final outcome. We use peak signal-to-noise ratio (PSNR) values to evaluate the performance:

PSNR = 10 log_{10}(255² / MSE),

where MSE is the mean squared error between the original and the recovered image. In the experiments, we consider two situations: random mask samples and word mask samples. Figure 1 describes the recovered results with a 60% random mask and a word mask for γ = 0, 0.5, and 1 by LTVNN. Figure 2 shows the recovered PSNR for Pepper under different random sample ratios and a word mask sample for γ from 0 to 1 with a step of 0.1 by LTVNN. It can be observed from these two figures that the best result is obtained for values of γ near 0.5, which corresponds to the case where the two norms (nuclear and LTV) are weighted equally in (9). For the two extreme cases, γ = 0 (only the nuclear norm is taken into consideration) and γ = 1 (only the linear total variation approximate norm is considered), the algorithm loses its efficiency.
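The PSNR metric used above is straightforward to compute; a minimal NumPy sketch (for 8-bit images, peak value 255):

```python
import numpy as np

def psnr(original, recovered, peak=255.0):
    # PSNR in dB: 10*log10(peak^2 / MSE), with peak = 255 for 8-bit images.
    diff = np.asarray(original, float) - np.asarray(recovered, float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = np.ones((4, 4))          # MSE = 1, so PSNR = 20*log10(255)
worst = np.full((4, 4), 255.0)   # MSE = 255^2, so PSNR = 0 dB
```

Higher PSNR means a recovered image closer to the original; identical images give infinite PSNR (zero MSE), so the metric is only meaningful when some error remains.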
We also compare our method (LTVNN) with other matrix completion methods, including TNNR [10, 11], SVT [12], SVP [13], and OptSpace [14]. Figure 3 plots the recovered PSNR for Pepper with γ = 0.5 under different random sample ratios (from 40% to 90%) by LTVNN and the four other methods (TNNR, SVT, SVP, and OptSpace). It can be seen from Figure 3 that the proposed LTVNN method achieves much higher PSNR than the other methods. Figure 4 shows the comparison of the PSNR of the recovered results for Lena under a word mask with γ = 0.5 by LTVNN and the other methods. Table 1 lists the PSNR results under word mask samples with γ = 0.5 for different images by LTVNN and the other methods. From Figure 4 and Table 1, we can see that the proposed method outperforms the other matrix completion methods under word masks for different images.

Conclusion
In this paper, we have proposed a new model that combines the nuclear norm and the total variation norm for the matrix completion problem, which is then solved via ADMM. Experimental results demonstrate the effectiveness of the proposed algorithm compared to other methods.

Figure 2: The recovered PSNR for Pepper under different random sample ratios and word mask sample with γ from 0 to 1 by LTVNN.