Recovering a large matrix from limited measurements is a challenging task arising
in many real applications, such as image inpainting, compressive sensing, and
medical imaging, and these kinds of problems are mostly formulated as low-rank
matrix approximation problems. Because the rank function is nonconvex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation, and the low-rank matrix recovery problem is solved by minimizing a nuclear norm regularized problem. However, a major limitation of nuclear norm minimization is that all the singular values are minimized simultaneously, so the rank may not be well approximated (Hu et al., 2013). In this paper, we therefore propose a new multistage algorithm, which
makes use of the concept of Truncated Nuclear Norm Regularization (TNNR)
proposed by Hu et al., 2013, and iterative support detection (ISD) proposed by Wang and Yin, 2010, to overcome
the above limitation. Besides the matrix completion problems considered by Hu et al., 2013, the proposed method can also be extended to general low-rank matrix recovery problems. Extensive experiments validate the superiority of our new algorithms over other state-of-the-art methods.
1. Introduction
In many real applications such as machine learning [1–3], computer vision [4], and control [5], we seek to recover an unknown (approximately) low-rank matrix from limited information. This problem can be naturally formulated as the following model:
(1) $\min_{X} \operatorname{rank}(X) \quad \text{s.t.} \quad AX=b$,
where $X\in\mathbb{R}^{m\times n}$ is the decision variable and the linear map $A:\mathbb{R}^{m\times n}\to\mathbb{R}^{p}$ $(p<mn)$ and the vector $b\in\mathbb{R}^{p}$ are given. However, this problem is usually NP-hard due to the nonconvexity and discontinuity of the rank function. In [6], Fazel et al. first solved the rank minimization problem by approximating the rank function with the nuclear norm (i.e., the sum of the singular values of a matrix). Moreover, theoretical studies show that the nuclear norm is the tightest convex lower bound of the rank function of matrices [7]. Thus, an unknown (approximately) low-rank matrix $\bar{X}$ can be perfectly recovered by solving the optimization problem
(2) $\min_{X} \|X\|_{*} \quad \text{s.t.} \quad AX=b \doteq A\bar{X}$,
where $\|X\|_{*}=\sum_{i=1}^{\min(m,n)}\sigma_{i}(X)$ is the nuclear norm and $\sigma_{i}(X)$ is the $i$th largest singular value of $X$, under some conditions on the linear transformation $A$.
As a special case, the problem (2) is reduced to the well-known matrix completion problem (3) [8, 9], when A is a sampling (or projection/restriction) operator. Consider
(3) $\min_{X} \|X\|_{*} \quad \text{s.t.} \quad X_{i,j}=M_{i,j},\ (i,j)\in\Omega$,
where $M\in\mathbb{R}^{m\times n}$ is the incomplete data matrix and $\Omega$ is the set of locations corresponding to the observed entries. For some breakthrough results on this kind of problem, we refer to [8–13]. Nevertheless, these methods may obtain suboptimal performance in real applications because the nuclear norm may not be a good approximation to the rank operator: in the rank function all nonzero singular values contribute equally, whereas the nuclear norm sums the singular values and thus weights them by their magnitudes. To overcome this weakness of the nuclear norm, Truncated Nuclear Norm Regularization (TNNR) was proposed for matrix completion; it only minimizes the smallest $\min(m,n)-r$ singular values [14]. A similar truncation idea was also proposed in our previous work [15]. Correspondingly, the problem can be formulated as
(4) $\min_{X} \|X\|_{r} \quad \text{s.t.} \quad X_{i,j}=M_{i,j},\ (i,j)\in\Omega$,
where $\|X\|_{r}$ is defined as the sum of the $\min(m,n)-r$ smallest singular values. In this way, one can get a more accurate and robust approximation to the rank operator on both synthetic and real visual data sets.
In this paper, we aim to extend the idea of TNNR from the special matrix completion problem to the general problem (2) and give the corresponding fast algorithm. More importantly, we will consider how to quickly estimate $r$, which is usually unavailable in practice.
Throughout this paper, we use the following notation. We let $\langle\cdot,\cdot\rangle$ be the standard inner product between two matrices in a finite dimensional Euclidean space, $\|\cdot\|$ the 2-norm, and $\|\cdot\|_{F}$ the Frobenius norm for matrix variables. The projection operator under the Euclidean distance measure is denoted by $P$ and the transpose of a real matrix by $\top$. Let $X=U\Sigma V^{\top}$ be the singular value decomposition (SVD) of $X$, where $\Sigma=\operatorname{diag}(\sigma_{i})$, $1\le i\le\min\{m,n\}$, and $\sigma_{1}\ge\cdots\ge\sigma_{\min\{m,n\}}$.
1.1. Related Work
The low-rank optimization problem (2) has attracted more and more interest in developing customized algorithms, particularly for larger-scale cases. We now briefly review some influential approaches to these problems.
The convex problem (2) can be easily reformulated into semidefinite programming (SDP) problems [16, 17] so as to make use of generic SDP solvers such as SDPT3 [18] and SeDuMi [19], which are based on the interior-point method. However, interior-point approaches suffer from the limitation that they cannot handle large-scale problems effectively, as mentioned in [7, 20, 21]. The problem (2) can also be solved through a projected subgradient approach [7], whose major computation is concentrated on singular value decomposition. The method can be used to solve large-scale cases of (2), but its convergence may be slow, especially when high accuracy is required. In [7, 22], a UV-parameterization ($X=UV^{\top}$) based on matrix factorization is applied to general low-rank matrix reconstruction problems. Specifically, the low-rank matrix $X$ is decomposed into the form $UV^{\top}$, where $U\in\mathbb{R}^{m\times r}$ and $V\in\mathbb{R}^{n\times r}$ are tall and thin matrices. The method reduces the dimensionality from $mn$ to $(m+n)r$. However, if the rank and the size are large, the computation cost may also be very high. Moreover, the rank $r$ is not known as prior information in most applications, and it has to be estimated or dynamically adjusted, which might be difficult to realize. More recently, the augmented Lagrangian method (ALM) [23, 24] and the alternating direction method of multipliers (ADMM) [25] have proved very efficient for convex programming problems arising from various applications. In [26], ADMM is applied to solve (2) with $AA^{*}=I$.
As an important special case of problem (2), the matrix completion problem (3) has been widely studied. Cai et al. [12, 13] used a shrinkage operator to handle the nuclear norm effectively. The shrinkage operator applies a soft-thresholding rule to the singular values, acting as a sparsifying operator on a matrix, and it has been widely adopted in many other approaches. However, due to the abovementioned limitation of the nuclear norm, TNNR (4) was proposed to replace it. Since $\|X\|_{r}$ in (4) is nonconvex, it cannot be minimized simply and effectively, so how to convert (4) into a convex problem is critical. It is noted that $\|X\|_{r}=\|X\|_{*}-\sum_{i=1}^{r}\sigma_{i}(X)$ and $\operatorname{Tr}(L_{r}XR_{r}^{\top})=\sum_{i=1}^{r}\sigma_{i}(X)$, where $U\Sigma V^{\top}$ is the SVD of $X$, $U=(u_{1},\dots,u_{m})\in\mathbb{R}^{m\times m}$, $\Sigma\in\mathbb{R}^{m\times n}$, $V=(v_{1},\dots,v_{n})\in\mathbb{R}^{n\times n}$, $L_{r}=(u_{1},\dots,u_{r})^{\top}$, and $R_{r}=(v_{1},\dots,v_{r})^{\top}$. Then the optimization problem (4) can be rewritten as
(5) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}XR_{r}^{\top}) \quad \text{s.t.} \quad X_{i,j}=M_{i,j},\ (i,j)\in\Omega$.
While the problem (5) is still nonconvex, a local minimum can be obtained by an iterative procedure proposed in [14], which we review in more detail later.
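To make the construction concrete, the following MATLAB fragment is a minimal sketch of the quantities behind (5); the variable names are our own illustration, assuming $X$ is an $m\times n$ matrix and $r$ is the number of untruncated singular values.

```matlab
% Minimal sketch of the TNNR ingredients in (5); X and r are assumed given.
[U, S, V] = svd(X);            % X = U*S*V'
Lr = U(:, 1:r)';               % L_r = (u_1, ..., u_r)^T, an r-by-m matrix
Rr = V(:, 1:r)';               % R_r = (v_1, ..., v_r)^T, an r-by-n matrix
s  = diag(S);
tnn = sum(s) - sum(s(1:r));    % truncated nuclear norm ||X||_r
% trace(Lr*X*Rr') equals the sum of the r largest singular values,
% so ||X||_r = ||X||_* - trace(Lr*X*Rr').
```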
A similar truncation idea, in the context of sparse vectors, was implemented in our previous work [15], which tries to adaptively learn the locations of the nonzeros of the unknown true signal. Specifically, we presented a sparse signal reconstruction method, iterative support detection (ISD, for short), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical $\ell_{1}$ minimization approach. ISD alternately calls its two components: support detection and signal reconstruction. From an inexact reconstruction, support detection identifies an index set $I$ containing some elements of $\operatorname{supp}(\bar{x})=\{i:\bar{x}_{i}\ne 0\}$, and signal reconstruction solves
(6) $\min_{x} \|x_{T}\|_{1} \quad \text{s.t.} \quad Ax=b \doteq A\bar{x}$,
where $T=I^{C}$ and $\|x_{T}\|_{1}=\sum_{i\in T}|x_{i}|=\sum_{i\notin I}|x_{i}|$. To obtain reliable support detection from inexact reconstructions, ISD must take advantage of the features and prior information about the true signal $\bar{x}$. In [15], sparse or compressible signals, whose components have a fast decaying distribution of nonzeros, are considered.
1.2. Contributions and Paper Organization
Our first contribution is the estimation of $r$, on which TNNR heavily depends (in $\|X\|_{r}$). Hu et al. [14] seek the best $r$ by trying all possible values, which leads to a high computational cost. In this paper, motivated by Wang and Yin [15], we propose a singular value estimation (SVE) method to obtain the best $r$, which can be considered a special implementation of the iterative support detection of [15] in the case of matrices.
Our second contribution is to extend TNNR from matrix completion to the general low-rank cases. In [14], they have only considered the matrix completion problem.
The third contribution is based on the above two. In particular, a new efficient algorithmic framework is proposed for the low-rank matrix recovery problem. We name it LRISD, which iteratively calls its two components: SVE and solving the low-rank matrix reconstruction model based on TNNR.
The rest of this paper is organized as follows. In Section 2, the computing framework of LRISD and the theory of SVE are introduced. In Sections 3, 4, and 5, we introduce three algorithms to solve the problems mentioned in Section 2.1. Experimental results are presented in Section 6. Finally, some conclusions are drawn in Section 7.
2. Iterative Support Detection for Low-Rank Problems
In this section, we first give the outline of the proposed algorithm LRISD and then elaborate on the proposed SVE method, which is a main component of LRISD.
2.1. Algorithm Outline
The main purpose of LRISD is to provide a better approximation to model (1) than the common convex relaxation model (2). The key idea is to make use of the Truncated Nuclear Norm Regularization (TNNR) defined in (4) and its variant (5) [14]. While one can passively try all possible values of $r$, the number of the largest singular values that are kept untruncated, we propose to actively estimate the value of $r$.
In addition, we will consider the general low-rank recovery problems beyond the matrix completion problem. Specifically, we will solve three models, equality model (7), unconstrained model (8), and inequality model (9):
(7) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}XR_{r}^{\top}) \quad \text{s.t.} \quad AX=b$,
(8) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}XR_{r}^{\top})+\frac{\mu}{2}\|AX-b\|^{2}$,
(9) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}XR_{r}^{\top}) \quad \text{s.t.} \quad \|AX-b\|\le\delta$,
where μ>0 and δ>0 are parameters reflecting the level of noise. Models (8) and (9) consider the case with noisy data. Here A is a linear mapping such as partial discrete cosine transformation (DCT), partial discrete Walsh-Hadamard transformation (DWHT), and discrete Fourier transform (DFT).
The general framework of LRISD, as an iterative procedure, starts from the initial r=0, that is, solving a plain nuclear norm minimization problem, and then estimates r based on the recovered result. Based on the estimated r, we solve a resulting TNNR model (7) or (8) or (9). Using the new recovered result, we can update the r value and solve a new TNNR model (7) or (8) or (9). Our algorithm iteratively calls the r estimation and the solver of the TNNR model.
As for solving (7), (8), and (9), we follow the idea of [14]. Specifically, a simple but efficient iterative procedure is adopted to decouple $L_{r}$, $R_{r}$, and $X$. We set an initial guess $X^{1}$. In the $l$th iteration, we first fix $X^{l}$ and compute $L_{r}^{l}$ and $R_{r}^{l}$ as described in (5), based on the SVD of $X^{l}$. Then we fix $L_{r}^{l}$ and $R_{r}^{l}$ to update $X^{l+1}$ by solving the following problems, respectively:
(10) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}XR_{r}^{l\top}) \quad \text{s.t.} \quad AX=b$,
(11) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}XR_{r}^{l\top})+\frac{\mu}{2}\|AX-b\|^{2}$,
(12) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}XR_{r}^{l\top}) \quad \text{s.t.} \quad \|AX-b\|\le\delta$.
In [14], the authors have studied solving the special case of matrix completion problems. For the general problems (10), (11), and (12), we will extend the current state-of-the-art algorithms to solve them in Section 3, Section 4, and Section 5, respectively.
In summary, the procedure of LRISD, as a new multistage algorithm, is summarized in Algorithm 1. By alternately running the SVE and solving the corresponding TNNR models, the iterative scheme will converge to a solution of a TNNR model, whose solution is expected to be better than that of the plain nuclear norm minimization model (2).
Algorithm 1 (LRISD based on (10), (11), and (12)).
(1) Initialization: set $X_{re}=X^{0}$, the solution of the plain nuclear norm regularized model (2).
(2) Repeat until $r$ reaches a stable value:
Step 1. Estimate $r$ via SVE on $X_{re}$.
Step 2. Initialization: $X^{1}=\text{Data}$ (the matrix form of $b$).
In the $l$th iteration:
(a) compute $L_{r}^{l}$ and $R_{r}^{l}$ of (10) ((11) or (12)) from the SVD of the current $X^{l}$, in the same way as in (5);
(b) solve the model (10) ((11) or (12)) and obtain $X^{l+1}$; go to (a) until $\|X^{l+1}-X^{l}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le\varepsilon_{1}$;
(c) $l\leftarrow l+1$.
Step 3. Set $X_{re}=X^{l+1}$.
(3) Return the recovered matrix $X_{re}$.
In the following, we explain the implementation of SVE in Step 1 in more detail and extend the existing algorithms for nuclear norm regularized models to the TNNR based models (10)–(12) used in procedure (b) of Step 2.
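To fix ideas, the following MATLAB fragment is a high-level sketch of Algorithm 1; the function names sve_estimate_rank, solve_tnnr_model, and solve_nuclear_norm_model are placeholders of our own for Step 1, for the inner solvers of Sections 3–5, and for model (2), respectively, and are not part of the paper's code.

```matlab
% High-level sketch of LRISD (Algorithm 1); all helper functions are placeholders.
Xre   = solve_nuclear_norm_model(A, At, b);       % solution of the plain model (2)
r     = 0;
r_old = -1;
while r ~= r_old                                  % repeat until r reaches a stable value
    r_old = r;
    r     = sve_estimate_rank(Xre, kappa);        % Step 1: SVE on the current X_re
    Xre   = solve_tnnr_model(A, At, b, r);        % Step 2: solve (10), (11), or (12)
end
% Xre is the recovered matrix returned in step (3).
```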
2.2. Singular Value Estimate
In this subsection, we mainly focus on Step 1 of LRISD, that is, describing the process of SVE to estimate r, which is the number of the largest few singular values. While it is feasible to find the best r via trying all possible r as done in [14], this procedure is not computationally efficient. Thus, we aim to quickly give an estimate of the best r.
As is well known, for (approximately) low-rank matrices or images, the singular values typically exhibit a fast decaying distribution (as shown in Figure 1). To take advantage of this feature, we can extend our previous work ISD [15] from detecting the large components of sparse vectors to detecting the large singular values of low-rank matrices. In particular, SVE is nothing but a specific implementation of support detection in the case of low-rank matrices, with the aim of estimating the true $r$.
Figure 1: (a) a 350×210 image example; (b)–(d) the singular values of the red, green, and blue channels. To illustrate the distribution clearly, panels (b)–(d) show the magnitudes of the singular values from the 4th to the 100th in each channel.
Now we present the process of SVE and its effectiveness. As shown in Algorithm 1, SVE is repeated several times until a stable estimate of $r$ is obtained. Each time, given the reference matrix $X_{re}$, we obtain the singular value vector $S$ of $X_{re}$ by SVD. Since $S$ can be considered an estimate of the singular value vector of the true matrix $\bar{X}$, a natural way to find the positions of the true large singular values is thresholding,
(13) $I := \{\, i : S_{i} > \epsilon \,\}$,
due to the fast decaying property of the singular values. The choice of ϵ should be case-dependent. In the spirit of ISD, one can use the so-called “last significant jump” rule to set the threshold value ϵ to detect the large singular values and minimize the false detections, if we assume that the components of S are sorted from large to small. The straightforward way to apply the “last significant jump” rule is to look for the largest i such that
(14) $St_{i} \doteq S_{i}-S_{i+1} > \tau$,
where $\tau$ is a prescribed value and $St$ is defined as the absolute values of the first order differences of $S$. This amounts to sweeping the decreasing sequence $\{S_{i}\}$ and looking for the last jump larger than $\tau$. For example, if the selected index is $i=4$, then we set $\epsilon=S_{4}$.
However, in this paper, unlike the original ISD paper [15], we propose to apply the “last significant jump” rule on absolute values of the first order difference of S, that is, St, instead of S. Specifically, we look for the largest i such that
(15) $Stt_{i} \doteq |St_{i+1}-St_{i}| > \kappa$,
where $\kappa$ will be selected below and $Stt$ is defined as the absolute values of the second order differences of $S$. This amounts to sweeping the sequence $\{St_{i}\}$ and looking for the last jump larger than $\kappa$; for example, if the selected index is $i=4$, then we set $\epsilon=S_{4}$. We set the estimated rank $\tilde{r}$ to be the cardinality of $I$, or a number close to it.
Specifically, $St$ records the jump sizes, that is, the changes between two neighboring components of $S$. Then, to reflect the stability of these jumps, the differences of $St$ are considered as above, because the few largest singular values jump actively, while the small singular values do not change much. The cut-off threshold $\kappa$ is determined via certain heuristic rules in our experiments on synthetic and real visual data sets; in Section 6.6, we present a reliable rule for determining the threshold value $\kappa$.
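As a concrete illustration, the following MATLAB fragment is a minimal sketch of the SVE step; the function name and interface are our own, and the exact index alignment of the second order differences is one reasonable reading of (15).

```matlab
% Minimal sketch of SVE: "last significant jump" on second order differences.
function r_est = sve_estimate_rank(Xre, kappa)
    S   = svd(Xre);                        % singular values, sorted decreasingly
    St  = abs(S(1:end-1) - S(2:end));      % first order differences (jump sizes)
    Stt = abs(St(2:end)  - St(1:end-1));   % second order differences
    idx = find(Stt > kappa, 1, 'last');    % last index with a jump larger than kappa
    if isempty(idx)
        r_est = 0;                         % no significant jump detected
    else
        epsilon = S(idx);                  % threshold as in the example after (15)
        r_est   = sum(S > epsilon);        % cardinality of I = {i : S_i > epsilon}
    end
end
```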
3. TNNR-ADMM for (10) and (12)
In this section, we extend the existing ADMM method in [25], originally designed for the nuclear norm regularized model, to solve (10) and (12) under a common linear mapping $A$ (with $AA^{*}=I$) and give closed-form solutions. The extended version of ADMM is named TNNR-ADMM, and the original ADMM for the corresponding nuclear norm regularized low-rank matrix recovery model is denoted by LR-ADMM. The resulting subproblems are simple enough to have closed-form solutions and can be solved to high precision easily. We start this section with some preliminaries which are convenient for the presentation of the algorithms later.
When $AA^{*}=I$, we have the following conclusion [26]:
(16) $(I+\alpha A^{*}A)^{-1} = I - \frac{\alpha}{1+\alpha}A^{*}A$,
where $(I+\alpha A^{*}A)^{-1}$ denotes the inverse operator of $(I+\alpha A^{*}A)$ and $\alpha>0$.
Definition 2 (see [26]).
When $A$ satisfies $AA^{*}=I$, for $\delta\ge 0$ and $Y\in\mathbb{R}^{m\times n}$, the projection of $Y$ onto $B_{\delta}$ is defined as
(17) $P_{B_{\delta}}(Y) = Y + \frac{\eta}{\eta+1}A^{*}(b-AY)$,
where
(18) $\eta = \max\left\{\frac{\|AY-b\|}{\delta}-1,\, 0\right\}, \qquad B_{\delta} = \{\, U\in\mathbb{R}^{m\times n} : \|AU-b\|\le\delta \,\}$.
In particular, when $\delta=0$,
(19) $P_{B_{0}}(Y) = Y + A^{*}(b-AY)$,
where
(20) $B_{0} = \{\, U\in\mathbb{R}^{m\times n} : AU=b \,\}$.
Then, we have the following conclusion:
(21) $P_{B_{\delta}}(Y) = \arg\min_{X\in\mathbb{R}^{m\times n}}\left\{\|X-Y\|_{F}^{2} : \|AX-b\|\le\delta\right\}$.
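For reference, the following MATLAB fragment is a minimal sketch of this projection, assuming A and At are function handles for the linear map and its adjoint with $AA^{*}=I$; all names are our own illustration.

```matlab
% Minimal sketch of the projection onto B_delta in Definition 2.
function Y = project_Bdelta(Y, A, At, b, delta)
    res = A(Y) - b;                              % residual A(Y) - b (a vector)
    if delta == 0
        Y = Y + At(-res);                        % projection onto {AU = b}, cf. (19)
    else
        eta = max(norm(res)/delta - 1, 0);       % cf. (18)
        Y   = Y + (eta/(eta+1)) * At(-res);      % cf. (17)
    end
end
```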
Definition 3.
For a matrix $X\in\mathbb{R}^{m\times n}$ with singular value decomposition $X=U\Sigma V^{\top}$, $\Sigma=\operatorname{diag}(\sigma_{i})$, the shrinkage operator $D_{\tau}$ $(\tau>0)$ is defined by
(22) $D_{\tau}(X) = U D_{\tau}(\Sigma) V^{\top}, \qquad D_{\tau}(\Sigma) = \operatorname{diag}\left((\sigma_{i}-\tau)_{+}\right)$,
where $(s)_{+}=\max\{0,s\}$.
Theorem 4 (see [13]).
For each $\tau\ge 0$ and $Y\in\mathbb{R}^{m\times n}$, one has
(23) $D_{\tau}(Y) = \arg\min_{X}\ \frac{1}{2}\|X-Y\|_{F}^{2}+\tau\|X\|_{*}$.
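The shrinkage operator admits a direct MATLAB realization; the following fragment is a minimal sketch with our own function name.

```matlab
% Minimal sketch of the singular value shrinkage operator D_tau in Definition 3.
function X = shrink_singular_values(Y, tau)
    [U, S, V] = svd(Y, 'econ');        % Y = U*S*V'
    s = max(diag(S) - tau, 0);         % soft-threshold the singular values
    X = U * diag(s) * V';              % D_tau(Y) = U diag((sigma_i - tau)_+) V'
end
```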
Definition 5.
Let $A^{*}$ be the adjoint operator of $A$, satisfying
(24) $\langle AX, Y\rangle = \langle X, A^{*}Y\rangle$.
3.1. Algorithmic Framework
The problems (10) and (12) can be easily reformulated into the following linearly constrained convex problem:
(25) $\min_{X,Y} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top}) \quad \text{s.t.} \quad X=Y,\ Y\in B_{\delta}$,
where δ≥0. In particular, the above formulation is equivalent to (10) when δ=0. The augmented Lagrangian function of (25) is
(26) $L(X,Y,Z,\beta) = \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top})+\frac{\beta}{2}\|X-Y\|_{F}^{2}-\langle Z, X-Y\rangle$,
where Z∈Rm×n is the Lagrange multiplier of the linear constraint and β>0 is the penalty parameter for the violation of the linear constraint.
The idea of ADMM is to decompose the minimization task in (26) into easier and smaller subproblems such that the involved variables $X$ and $Y$ can be minimized separately and alternately. In particular, we apply ADMM to (26) and obtain the following iterative scheme:
(27) $X^{k+1} = \arg\min_{X\in\mathbb{R}^{m\times n}} L(X,Y^{k},Z^{k},\beta)$,
$\quad Y^{k+1} = \arg\min_{Y\in B_{\delta}} L(X^{k+1},Y,Z^{k},\beta)$,
$\quad Z^{k+1} = Z^{k}-\beta(X^{k+1}-Y^{k+1})$.
Ignoring constant terms and deriving the optimal conditions for the involved subproblems in (27), we can easily verify that the iterative scheme of the TNNR-ADMM approach for (10) and (12) is as follows.
Algorithm 6 (TNNR-ADMM for (10) and (12)).
Initialization: set X1=Data (the matrix form of b), Y1=X1, Z1=X1, and input β.
For k=0,1,…,N (maximum number of iterations), do the following.
Update Xk+1 by
(28) $X^{k+1} = \arg\min_{X} \|X\|_{*}+\frac{\beta}{2}\left\|X-\left(Y^{k}+\frac{1}{\beta}Z^{k}\right)\right\|_{F}^{2}$.
Update Yk+1 by
(29) $Y^{k+1} = \arg\min_{Y\in B_{\delta}} \frac{\beta}{2}\left\|Y-X^{k+1}-\frac{1}{\beta}\left(L_{r}^{l\top}R_{r}^{l}-Z^{k}\right)\right\|_{F}^{2}$.
Update Zk+1 by
(30) $Z^{k+1} = Z^{k}-\beta\left(X^{k+1}-Y^{k+1}\right)$.
End the iteration when $\|X^{k+1}-X^{k}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le\varepsilon_{2}$.
3.2. The Analysis of Subproblems
According to the analysis above, the computation of each iteration of TNNR-ADMM approach for (10) and (12) is dominated by solving the subproblems (28) and (29). We now elaborate on the strategies for solving these subproblems based on abovementioned preliminaries.
First, the solution of (28) can be obtained explicitly via Theorem 4:
(31) $X^{k+1} = D_{1/\beta}\left(Y^{k}+\frac{1}{\beta}Z^{k}\right)$,
which is the closed-form solution.
Second, it is easy to obtain
(32) $Y^{k+1} = X^{k+1}+\frac{1}{\beta}\left(L_{r}^{l\top}R_{r}^{l}-Z^{k}\right), \qquad Y^{k+1}\in B_{\delta}$.
Combining (32) with Definition 2, we obtain the final closed-form solution of the subproblem (29):
(33) $Y^{k+1} = Y^{k+1}+\frac{\eta}{\eta+1}A^{*}\left(b-AY^{k+1}\right)$,
where
(34) $\eta = \max\left\{\frac{\|AY^{k+1}-b\|}{\delta}-1,\, 0\right\}$.
When δ=0, it is the particular case of (10) and can be expressed as
(35) $Y^{k+1} = Y^{k+1}+A^{*}\left(b-AY^{k+1}\right)$.
Therefore, when TNNR-ADMM is applied to solve (10) and (12), the generated subproblems all have closed-form solutions. Besides, some remarks are in order.
$Z^{k+1}$ can also be obtained via the more general form
(36) $Z^{k+1} = Z^{k}-\gamma\beta\left(X^{k+1}-Y^{k+1}\right)$,
where $0<\gamma<(\sqrt{5}+1)/2$ [27–29]. We set $\gamma=1$ to calculate $Z^{k+1}$ in our algorithms.
The convergence of the iterative scheme is well studied in [30]. Here, we omit the convergence analysis.
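To summarize the scheme, the following MATLAB fragment is a minimal sketch of one inner TNNR-ADMM solve (Algorithm 6), assuming the helper functions shrink_singular_values and project_Bdelta sketched above and the data (Lr, Rr, A, At, b, Data, beta, delta, N, eps2); all names are our own illustration.

```matlab
% Minimal sketch of one inner TNNR-ADMM solve (Algorithm 6), under the
% assumptions stated above (Lr, Rr stand for L_r^l, R_r^l).
X = Data; Y = X; Z = X;
for k = 1:N
    Xold = X;
    X = shrink_singular_values(Y + Z/beta, 1/beta);           % subproblem (28)/(31)
    Y = X + (Lr' * Rr - Z)/beta;                              % unconstrained minimizer (32)
    Y = project_Bdelta(Y, A, At, b, delta);                   % projection step (33)/(35)
    Z = Z - beta*(X - Y);                                     % multiplier update (30)
    if norm(X - Xold, 'fro')^2 / norm(Data, 'fro')^2 <= eps2  % stopping criterion
        break;
    end
end
```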
4. TNNR-APGL for (11)
In this section, we consider model (11), which has attracted a lot of attention in certain multitask learning problems [21, 31–33]. While TNNR-ADMM can also be applied to this model, it is better suited to noiseless problems. For the simple version of model (11), that is, the one based on the common nuclear norm regularization, many accelerated gradient techniques [34, 35] based on [36] have been proposed. Among them, the accelerated proximal gradient line search (APGL) method proposed by Beck and Teboulle [35] has been extended to solve the TNNR based matrix completion model in [14]. In this paper, we extend APGL to solve the more general TNNR based low-rank recovery problem (11).
4.1. TNNR-APGL with Noisy Data
For completeness, we give a short overview of the APGL method. The original model aims to solve the following problem:
(37) $\min\left\{\, T(X) = F(X)+G(X) : X\in\mathbb{R}^{m\times n} \,\right\}$,
where $F(X)$ and $G(X)$ satisfy the following conditions:
$G:\mathbb{R}^{m\times n}\to\mathbb{R}$ is a continuous convex function, possibly nondifferentiable;
$F:\mathbb{R}^{m\times n}\to\mathbb{R}$ is convex and continuously differentiable with Lipschitz continuous gradient, where $L(F)>0$ is the Lipschitz constant of $\nabla F$.
By linearizing $F(X)$ at $Y$ and adding a proximal term, APGL constructs an approximation of $T(X)$. More specifically,
(38) $Q(X,Y) = F(Y)+\langle X-Y, g\rangle+\frac{1}{2\tau}\|X-Y\|_{F}^{2}+G(X)$,
where $\tau>0$ is a proximal parameter and $g=\nabla F(Y)$ is the gradient of $F$ at $Y$.
4.1.1. Algorithmic Framework
For convenience, we present model (11) again and define F(X) and G(X) as follows:
(39) $\min_{X} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}XR_{r}^{l\top})+\frac{\mu}{2}\|AX-b\|^{2}$, with
$F(X) = -\operatorname{Tr}(L_{r}^{l}XR_{r}^{l\top})+\frac{\mu}{2}\|AX-b\|^{2}, \qquad G(X) = \|X\|_{*}$.
Then, we can conclude that each iteration of the TNNR-APGL for solving model (11) requires solving the following subproblems:
(40) $X^{k+1} = \arg\min_{X\in\mathbb{R}^{m\times n}} Q(X,Y^{k})$,
$\quad \tau^{k+1} = \frac{1+\sqrt{1+4(\tau^{k})^{2}}}{2}$,
$\quad Y^{k+1} = X^{k+1}+\frac{\tau^{k}-1}{\tau^{k+1}}\left(X^{k+1}-X^{k}\right)$.
In the above iterative scheme, we update $\tau^{k+1}$ and $Y^{k+1}$ via the approaches mentioned in [20, 35]. Then, based on (40), we can easily derive the TNNR-APGL algorithmic framework as follows.
Algorithm 7 (TNNR-APGL for (11)).
Initialization: set $X^{1}=\text{Data}$ (the matrix form of $b$), $Y^{1}=X^{1}$, and $\tau^{1}=1$.
For k=0,1,…,N, (maximum number of iterations), do the following.
Update Xk+1 by
(41) $X^{k+1} = \arg\min_{X} \|X\|_{*}+\frac{1}{2\tau^{k}}\left\|X-\left(Y^{k}-\tau^{k}\nabla F(Y^{k})\right)\right\|_{F}^{2}$.
Update τk+1 by
(42) $\tau^{k+1} = \frac{1+\sqrt{1+4(\tau^{k})^{2}}}{2}$.
Update Yk+1 by
(43) $Y^{k+1} = X^{k+1}+\frac{\tau^{k}-1}{\tau^{k+1}}\left(X^{k+1}-X^{k}\right)$.
End the iteration when $\|X^{k+1}-X^{k}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le\varepsilon_{2}$.
4.1.2. The Analysis of Subproblems
Obviously, the computation of each iteration of the TNNR-APGL approach for (11) is dominated by subproblem (41). According to Theorem 4, we get
(44) $X^{k+1} = D_{\tau^{k}}\left(Y^{k}-\tau^{k}\nabla F(Y^{k})\right)$,
where
(45) $\nabla F(Y^{k}) = -L_{r}^{l\top}R_{r}^{l}+\mu A^{*}\left(AY^{k}-b\right)$.
Then, the closed-form solution of (41) is given by
(46) $X^{k+1} = D_{\tau^{k}}\left(Y^{k}-\tau^{k}\left(\mu A^{*}\left(AY^{k}-b\right)-L_{r}^{l\top}R_{r}^{l}\right)\right)$.
By now, we have applied TNNR-APGL to solve problem (11) and obtained closed-form solutions for its subproblems. In addition, the convergence of APGL is well studied in [35], and it has a convergence rate of $O(1/k^{2})$. We again omit the convergence analysis.
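For concreteness, the following MATLAB fragment is a minimal sketch of one inner TNNR-APGL solve. It uses a fixed proximal step size tau and a separate momentum parameter t (the standard APGL/FISTA scheme); in the paper's notation both are written $\tau$. The helper shrink_singular_values, the handles A and At, and the data (Lr, Rr, mu, b, Data, N, eps2, tau) are all assumptions.

```matlab
% Minimal sketch of one inner TNNR-APGL solve (Algorithm 7), under the
% assumptions stated above.
X = Data; Y = X; t = 1;
for k = 1:N
    g    = -Lr' * Rr + mu * At(A(Y) - b);            % gradient of F at Y, cf. (45)
    Xnew = shrink_singular_values(Y - tau*g, tau);   % proximal step, cf. (44)/(46)
    tnew = (1 + sqrt(1 + 4*t^2)) / 2;                % momentum update, cf. (42)
    Y    = Xnew + ((t - 1)/tnew) * (Xnew - X);       % extrapolation, cf. (43)
    if norm(Xnew - X, 'fro')^2 / norm(Data, 'fro')^2 <= eps2
        X = Xnew; break;                             % stopping criterion
    end
    X = Xnew; t = tnew;
end
```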
5. TNNR-ADMMAP for (10) and (12)
While TNNR-ADMM is usually very efficient for solving the TNNR based models (10) and (12), its convergence may become slower when more constraints are involved, as noted in [37]. Inspired by [14], the alternating direction method of multipliers with adaptive penalty (ADMMAP) is applied to reduce the number of constraints, and an adaptive penalty [38] is used to speed up the convergence. The resulting algorithm is named "TNNR-ADMMAP," and its subproblems also admit closed-form solutions.
5.1. Algorithm Framework
Two kinds of constraints have been mentioned before: $X=Y$, $AY=b$ and $X=Y$, $\|AY-b\|\le\delta$. Our goal is to transform (10) and (12) into the following form:
(47) $\min_{x,y} F(x)+G(y) \quad \text{s.t.} \quad P(x)+Q(y)=c$,
where $P$ and $Q$ are linear mappings, $x$, $y$, and $c$ can be either vectors or matrices, and $F$ and $G$ are convex functions.
For convenience, $b$ is taken to be a vector formed by stacking the columns of a matrix. Conversely, if $A$ is a linear mapping that contains a sampling process, we can put $AY=b$ into matrix form on the sample set. Correspondingly, we flexibly switch between the matrix and vector forms in the calculation. Here, we only outline the idea and procedure of TNNR-ADMMAP. Now, matching the relevant functions, we get the following:
(48) $F(X)=\|X\|_{*}, \qquad G(Y)=-\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top})$,
$P(X)=\begin{pmatrix} X & 0\\ 0 & 0\end{pmatrix}, \qquad Q(Y)=\begin{pmatrix} -Y & 0\\ 0 & AY\end{pmatrix}$,
$C_{1}=\begin{pmatrix} 0 & 0\\ 0 & \text{Data}\end{pmatrix}, \qquad C_{2}=\begin{pmatrix} 0 & 0\\ 0 & \xi\end{pmatrix}$,
where $P, Q:\mathbb{R}^{m\times n}\to\mathbb{R}^{2m\times 2n}$ and $C=C_{1}+C_{2}$. Denote $B_{\delta,2}=\{\zeta\in\mathbb{R}^{p}:\|\zeta\|\le\delta\}$ and let $\xi\in\mathbb{R}^{m\times n}$ be the matrix form of $\zeta=AX-b$. When $\delta=0$, this corresponds to problem (10).
Then, the problems (10) and (12) can be equivalently transformed to
(49) $\min_{X,Y} \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top}) \quad \text{s.t.} \quad P(X)+Q(Y)=C$.
So the augmented Lagrangian function of (49) is
(50) $L(X,Y,Z,\xi,\beta) = \|X\|_{*}-\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top})-\langle Z, P(X)+Q(Y)-C\rangle+\frac{\beta}{2}\|P(X)+Q(Y)-C\|_{F}^{2}$,
where
(51) $Z=\begin{pmatrix} Z_{11} & Z_{12}\\ Z_{21} & Z_{22}\end{pmatrix}\in\mathbb{R}^{2m\times 2n}$.
The Lagrangian (50) can be solved via linearized ADMM, and a dynamic penalty parameter $\beta$ is preferred in [38]. In particular, due to the special property of $A$ ($AA^{*}=I$), we use ADMMAP here to handle problem (50) easily. We use the following adaptive updating rule for $\beta$ [38]:
(52) $\beta_{k+1} = \min\{\beta_{\max},\ \rho\beta_{k}\}$,
where $\beta_{\max}$ is an upper bound of $\{\beta_{k}\}$. The value of $\rho$ is defined as
(53) $\rho = \begin{cases}\rho_{0}, & \text{if } \beta_{k}\max\left\{\|X^{k+1}-X^{k}\|_{F},\ \|Y^{k+1}-Y^{k}\|_{F}\right\}/\|C\|_{F} < \varepsilon,\\ 1, & \text{otherwise},\end{cases}$
where $\rho_{0}\ge 1$ is a constant and $\varepsilon$ is a prescribed tolerance.
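The adaptive rule (52)-(53) can be written directly in MATLAB; the following fragment is a minimal sketch, assuming the current and previous iterates Xnew/Xold and Ynew/Yold, the constant matrix C, and the parameters beta, beta_max, rho0, and epsilon are given.

```matlab
% Minimal sketch of the adaptive penalty update (52)-(53).
change = beta * max(norm(Xnew - Xold, 'fro'), norm(Ynew - Yold, 'fro')) / norm(C, 'fro');
if change < epsilon
    rho = rho0;        % the iterates barely moved: enlarge the penalty
else
    rho = 1;           % otherwise keep beta unchanged
end
beta = min(beta_max, rho * beta);    % cf. (52)
```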
In summary, the iterative scheme of the TNNR-ADMMAP is as follows.
Algorithm 8 (TNNR-ADMMAP for (10) and (12)).
Initialization: set X1=Data (the matrix form of b), Y1=X1, Z1=zeros(2m,2n), and input β0,ε,ρ0.
For k=0,1,…,N (maximum number of iterations), do the following.
Update Xk+1 by
(54) $X^{k+1} = \arg\min_{X} \|X\|_{*}+\frac{\beta}{2}\left\|P(X)+Q(Y^{k})-C-\frac{1}{\beta}Z^{k}\right\|_{F}^{2}$.
Update Yk+1 by
(55) $Y^{k+1} = \arg\min_{Y} -\operatorname{Tr}(L_{r}^{l}YR_{r}^{l\top})+\frac{\beta}{2}\left\|P(X^{k+1})+Q(Y)-C-\frac{1}{\beta}Z^{k}\right\|_{F}^{2}$.
Update Zk+1 by
(56) $Z^{k+1} = Z^{k}-\beta\left(P(X^{k+1})+Q(Y^{k+1})-C\right)$.
The following step is carried out only when $\delta>0$. Update $\xi$ by
(57) $C_{2}^{k+1} = P(X^{k+1})+Q(Y^{k+1})-C_{1}-\frac{1}{\beta}Z^{k+1}, \qquad \xi^{k+1} = P_{B_{\delta,2}}\left(\left(C_{2}^{k+1}\right)_{22}\right)$.
End the iteration when $\|X^{k+1}-X^{k}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le\varepsilon_{2}$.
5.2. The Analysis of Subproblems
Since the computation of each iteration of the TNNR-ADMMAP method is dominated by solving subproblems (54) and (55), we now elaborate on the strategies for solving these subproblems.
First, we compute Xk+1. Due to the special form of P and Q, we can give the following solution by ignoring the constant term:
(58) $X^{k+1} = \arg\min_{X} \|X\|_{*}+\frac{\beta}{2}\left\|X-Y^{k}-\frac{1}{\beta}Z_{11}^{k}\right\|_{F}^{2}, \qquad X^{k+1} = D_{1/\beta}\left(Y^{k}+\frac{1}{\beta}Z_{11}^{k}\right)$.
Second, we concentrate on computing Yk+1. Obviously, Yk+1 obeys the following rule:
(59) $0\in\partial_{Y}\left(-\operatorname{Tr}\left(L_{r}^{l}Y^{k+1}R_{r}^{l\top}\right)+\frac{\beta}{2}\left\|P(X^{k+1})+Q(Y^{k+1})-C-\frac{1}{\beta}Z^{k}\right\|_{F}^{2}\right)$.
It can be solved as
(60) $Q^{*}(Q(Y)) = \frac{1}{\beta}L_{r}^{l\top}R_{r}^{l}-Q^{*}\left(P(X^{k+1})-C-\frac{1}{\beta}Z^{k}\right)$,
where Q* is the adjoint operator of Q which is mentioned in (24).
Let
(61) $W=\begin{pmatrix} W_{11} & W_{12}\\ W_{21} & W_{22}\end{pmatrix}\in\mathbb{R}^{2m\times 2n}$,
where $W_{ij}\in\mathbb{R}^{m\times n}$; according to (24), we have $\langle Q(Y), W\rangle=\langle Y, Q^{*}(W)\rangle$. More specifically,
(62) $\langle Q(Y), W\rangle = \operatorname{Tr}\left(\begin{pmatrix} -Y & 0\\ 0 & AY\end{pmatrix}\begin{pmatrix} W_{11} & W_{12}\\ W_{21} & W_{22}\end{pmatrix}^{\top}\right) = \operatorname{Tr}\left(-YW_{11}^{\top}\right)+\operatorname{Tr}\left(AYW_{22}^{\top}\right) = \langle Y, -W_{11}\rangle+\langle AY, W_{22}\rangle = \langle Y, -W_{11}\rangle+\langle Y, A^{*}W_{22}\rangle = \langle Y, -W_{11}+A^{*}W_{22}\rangle = \langle Y, Q^{*}(W)\rangle$.
Thus, the adjoint operator Q* is denoted by
(63) $Q^{*}(W) = -W_{11}+A^{*}W_{22}$.
The left-hand side of (60) can be written as
(64) $Q^{*}(Q(Y)) = Q^{*}\begin{pmatrix} -Y & 0\\ 0 & AY\end{pmatrix} = Y+A^{*}AY$.
Then, we apply the linear mapping A (AA*=I) on both sides of (64), and we obtain
(65) $A\left(Q^{*}(Q(Y))\right) = AY+AA^{*}AY = 2AY$,
(66) $AY = \frac{1}{2}A\left(Q^{*}(Q(Y))\right)$.
Combining (64) and (66), we get
(67) $Y^{k+1} = Q^{*}(Q(Y))-A^{*}AY = Q^{*}(Q(Y))-\frac{1}{2}A^{*}A\left(Q^{*}(Q(Y))\right)$.
Similarly, using the form of $Q^{*}$ in (63), the right-hand side of (60) can be transformed into
(68) $Q^{*}(Q(Y)) = X^{k+1}-\frac{1}{\beta}Z_{11}^{k}+\frac{1}{\beta}L_{r}^{l\top}R_{r}^{l}+A^{*}\left(\text{Data}+\xi+\frac{1}{\beta}Z_{22}^{k}\right)$.
Based on the above, combining (67) and (68), we obtain
(69) $Y^{k+1} = Q^{*}(Q(Y))-A^{*}AY = Q^{*}(Q(Y))-\frac{1}{2}A^{*}A\left(Q^{*}(Q(Y))\right) = X^{k+1}-\frac{1}{2\beta}A^{*}A\left(L_{r}^{l\top}R_{r}^{l}-Z_{11}^{k}+\beta X^{k+1}\right)+\frac{1}{\beta}\left(L_{r}^{l\top}R_{r}^{l}-Z_{11}^{k}\right)+\frac{1}{2\beta}A^{*}\left(\beta\,\text{Data}+\beta\xi+Z_{22}^{k}\right)$.
Some remarks are in order.
The computation of $\xi^{k+1}$ starts from $\xi^{1}>0$; in other words, the problem corresponds to (12) when $\xi^{1}>0$.
The convergence of the iterative schemes of ADMMAP is well studied in [38, 39]. In our paper, we omit the convergence analysis.
Overall, when TNNR-ADMM and TNNR-APGL are applied to solve (10)–(12), the generated subproblems all have closed-form solutions. As mentioned before, TNNR-ADMMAP is used to speed up the convergence of (10) and (12) when there are too many constraints. When one problem can be solved simultaneously with the three algorithms, TNNR-ADMMAP is in general more efficient, in the case of matrix completion [14] and in our test problems.
6. Experiments and Results
In this section, we present numerical results to validate the effectiveness of SVE and LRISD. The experiments address two aspects. On one hand, we illustrate the effectiveness of SVE on both real visual and synthetic data sets. On the other hand, we illustrate the effectiveness of LRISD, which solves TNNR based low-rank matrix recovery problems, on both synthetic and real visual data sets. Due to space limitations, we only discuss model (10) using LRISD-ADMM in our experiments; we refer the reader to [14] for extensive numerical results showing that ADMMAP is much faster than APGL and ADMM without sacrificing recovery accuracy in the matrix completion case. For general low-rank matrix recovery, our experiments lead to the same conclusion. Since the main aim of the paper is to demonstrate the effectiveness of SVE and LRISD, we omit the detailed comparison here.
All experiments were performed under Windows 7 and Matlab v7.10 (R2010a), running on a HP laptop with an Intel Core 2 Duo CPU at 2.4 GHz and 2 GB of memory.
6.1. Experiments and Implementation Details
We conduct the numerical experiments under the following four classes, where two representative linear mappings $A$ are used: matrix completion and partial DCT. The first two cases illustrate the effectiveness of SVE. Here we compare our algorithm LRISD-ADMM on the matrix completion problem with the algorithm proposed in [14], which we call "TNNR-ADMM-TRY." The main difference between LRISD-ADMM and TNNR-ADMM-TRY is the way of determining the best $r$: the former estimates the best $r$ via SVE, while the latter tries all possible $r$ values and picks the one with the best performance.
The last two are to show the better recovery quality of LRISD-ADMM compared with the solution of the common nuclear norm regularized low-rank recovery models, for example, (2), whose corresponding algorithm is denoted by LR-ADMM as above.
Compare LRISD-ADMM with TNNR-ADMM-TRY on matrix completion problems. These experiments are conducted on real visual data sets.
Compare the real rank r with r~ which is estimated by SVE under different situations, where A is a two-dimensional partial DCT operator (AA*=I). These experiments are conducted on synthetic data sets.
Compare LRISD-ADMM with LR-ADMM on the generic low-rank situations, where A is also a two-dimensional partial DCT operator. These experiments are conducted on synthetic data sets under different problem settings.
Compare LRISD-ADMM with LR-ADMM on the generic low-rank situations, where A is also a two-dimensional partial DCT operator. These experiments are conducted on real visual data sets.
In all synthetic data experiments, we generate the sample data as $b=AX^{*}+\omega$, where $\omega$ is Gaussian white noise of zero mean and standard deviation std. The Matlab script for generating $X^{*}$ is as follows:
(70) X* = randn(m,r) * randn(r,n),
where r is a prefixed integer. Moreover, we generate the index set $\Omega$ in (5) randomly in the matrix completion experiments, and the partial DCT operator is also generated randomly.
In the implementation of all the experiments, we use the criterion $\|X^{l+1}-X^{l}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le 10^{-2}$ to terminate the iteration of Step 2 (in Algorithm 1) in LR-ADMM and LRISD-ADMM. In addition, we terminate the iteration of (b) in Step 2 by the criterion $\|X^{k+1}-X^{k}\|_{F}^{2}/\|\text{Data}\|_{F}^{2}\le 10^{-4}$. In our experiments, we set $\beta=0.001$ empirically, which works quite well for the tested problems. The other parameters in TNNR-ADMM are set to their default values (we use the Matlab code provided online by the authors of [14]). Besides, we use the PSNR (Peak Signal to Noise Ratio) to evaluate the quality of an image. As color images have three channels (red, green, and blue), PSNR is defined as $10\times\log_{10}(255^{2}/\text{MSE})$, where $\text{MSE}=\text{SE}/(3T)$, $\text{SE}=\text{error}_{\text{red}}^{2}+\text{error}_{\text{green}}^{2}+\text{error}_{\text{blue}}^{2}$, and $T$ is the total number of missing pixels. For grayscale images, PSNR has a similar definition.
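For reference, the following MATLAB fragment is a minimal sketch of this PSNR computation, assuming Xrec and Xtrue are m-by-n-by-3 color images with values in [0, 255] and mask is an m-by-n logical matrix marking the missing pixel locations; these names are our own.

```matlab
% Minimal sketch of the PSNR used in the experiments, under the assumptions above.
err  = (Xrec - Xtrue) .* repmat(mask, [1 1 3]);  % errors on the missing pixels only
SE   = sum(err(:).^2);                           % error_red^2 + error_green^2 + error_blue^2
T    = nnz(mask);                                % total number of missing pixels
MSE  = SE / (3*T);
PSNR = 10 * log10(255^2 / MSE);
```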
6.2. The Comparison between LRISD-ADMM and TNNR-ADMM-TRY on Matrix Completion
In this subsection, to evaluate the effectiveness of SVE, we compare the proposed LRISD-ADMM with TNNR-ADMM-TRY as well as LR-ADMM on matrix completion problems. As the better recovery quality of TNNR-ADMM-TRY over LR-ADMM on the matrix completion problem has been demonstrated in [14], we will show that the final $r$ estimated by LRISD-ADMM via SVE is very close to that of TNNR-ADMM-TRY.
We test three real images and present the input images, the masked images, and the results calculated via three different algorithms: LR-ADMM, TNNR-ADMM-TRY, and LRISD-ADMM. The recovered images are shown in Figure 2, and the numerical comparisons of time and PSNR are shown in Table 1. We can see that, compared to LR-ADMM, both TNNR-ADMM-TRY and LRISD-ADMM achieve better recovery quality, as expected. While TNNR-ADMM-TRY achieves the best recovery quality, due to trying every possible $r$ value its running time is much longer than that of LR-ADMM. Our proposed LRISD-ADMM achieves almost the same recovery quality as TNNR-ADMM-TRY, but with a significant reduction of the computational cost. In fact, if the best precision is desired, we can use the $\tilde{r}$ estimated by LRISD-ADMM as a reference and search for the best $r$ around it, at a reasonable extra computational cost. For convenience, we call this process LRISD-ADMM-ADJUST in Table 1 and Figure 2.
We compare the PSNR and time between LR-ADMM, TNNR-ADMM-TRY, LRISD-ADMM, and LRISD-ADMM-ADJUST. In TNNR-ADMM-TRY, they search the best r via testing all possible values of r. We use the estimated rank r~ as the best r in LRISD-ADMM. In LRISD-ADMM-ADJUST, we use the estimated rank r~ as a reference to search the best r.
Table 1: Comparison of LR-ADMM, TNNR-ADMM-TRY, LRISD-ADMM, and LRISD-ADMM-ADJUST (r / Time / PSNR).

Image | LR-ADMM (r / Time / PSNR) | TNNR-ADMM-TRY (r / Time / PSNR) | LRISD-ADMM (r = r̃ / Time / PSNR) | LRISD-ADMM-ADJUST (r / Time / PSNR)
(1) | 0 / 73.8 s / 21.498 | 6 / 5030 s / 21.645 | 7 / 221 s / 21.618 | 6 / 683 s / 21.645
(2) | 0 / 98.3 s / 24.319 | 6 / 5916 s / 24.366 | 5 / 150 s / 24.357 | 6 / 799 s / 24.366
(3) | 0 / 106.3 s / 29.740 | 15 / 3408 s / 30.446 | 11 / 148 s / 30.342 | 15 / 1433 s / 30.446
Figure 2: Comparison results of LR-ADMM, TNNR-ADMM-TRY, LRISD-ADMM, and LRISD-ADMM-ADJUST on three images. The first column shows the original images. The second column shows the masked images, obtained by covering 50% of the pixels of the original image in our test. The third column depicts images recovered by LR-ADMM. The fourth column depicts images recovered by TNNR-ADMM-TRY and LRISD-ADMM-ADJUST. The fifth column depicts images recovered by LRISD-ADMM, where we use the estimated $\tilde{r}$ directly. Note that the fourth column contains the same image for the two different methods, TNNR-ADMM-TRY and LRISD-ADMM-ADJUST, because the values of $r$ found by the two methods are the same; only the procedure for finding $r$ differs. TNNR-ADMM-TRY searches for the best $r$ by testing all possible values (1–20), whereas LRISD-ADMM-ADJUST uses the estimated rank $\tilde{r}$ as a reference and searches around it for the best $r$.
6.3. The Effectiveness Analysis of SVE in Two-Dimensional Partial DCT: Synthetic Data
In Section 6.2, we showed that the estimated r~ by LRISD-ADMM is very close to the best r by TNNR-ADMM-TRY, on matrix completion problems. Here we further confirm the effectiveness of the proposed SVE of LRISD-ADMM, by conducting experiments for the generic low-rank operator A on synthetic data, where A is a two-dimensional partial DCT (discrete cosine transform) operator. We compare the best r (true rank) with the estimated r~ under different settings.
In the results below, $r$, $\tilde{r}$, and sr denote the rank of the matrix $X^{*}$, the estimated rank, and the sample ratio, respectively. We set the noise level std = 0.9 and the sample ratio sr = 0.5 and choose $\kappa=10$ in all of the tests. The reason for setting std = 0.9 is that we want to illustrate the robustness of SVE to noise. Next, we compare $r$ with $\tilde{r}$ under different settings. For each scenario, we generated the model 3 times and report the results.
We fix the matrix size to be $m=n=300$, $r=20$ and run LRISD-ADMM to show the relationship between $Stt$ and $r$. The results are shown in Figure 3(a).
We fix the matrix size to be $m=n=300$ and run LRISD-ADMM under different $r$. The results are shown in Figure 3(b).
We fix $r=20$ and run LRISD-ADMM under different matrix sizes. The results are shown in Figure 3(c).
Figure 3: the relationship between $Stt$ and $r$ (tested three times) is given in (a). Comparisons between the true rank $r$ and the estimated rank $\tilde{r}$ under different ranks are shown in chart (b) and under different matrix sizes in chart (c).
As shown in Figure 3, the proposed SVE is a rational and effective way to estimate $\tilde{r}$ in Step 1 of Algorithm 1. Even in the presence of considerable noise, the method remains valid; namely, $\tilde{r}$ is (approximately) equal to the real rank. That is to say, the proposed SVE is quite robust to noise corruption of the sample data. In practice, we achieve similarly good results under other settings; to save space, we only illustrate the effectiveness of SVE in the abovementioned situations.
6.4. The Comparison between LRISD-ADMM and LR-ADMM: Synthetic Data
In this subsection, we compare the proposed LRISD-ADMM with LR-ADMM on partial DCT data in general low-rank matrix recovery cases. We will illustrate some numerical results to show the advantages of the proposed LRISD-ADMM in terms of better recovery quality.
We evaluate the recovery performance by the relative error $\text{Reer}=\|X_{re}-X^{*}\|_{F}/\|X^{*}\|_{F}$. We compare the reconstruction error under different conditions: different noise levels (std) and different sample ratios (sr), shown in Figures 4(a) and 4(b), respectively. In addition, we compare the recovered ranks obtained by the two algorithms in Figure 4(c). For each scenario, we generated the model 10 times and report the average results.
We fix the matrix size to be m=n=300, r=15, sr=0.5, and run LRISD-ADMM and LR-ADMM under different noise levels std. The results are shown in Figure 4(a).
We fix the matrix size to be m=n=300, std=0.5, and run LRISD-ADMM and LR-ADMM under different sr. The results are shown in Figure 4(b).
We set the matrix size to be m=n=300, sr=0.5, std=0.5, and run LRISD-ADMM and LR-ADMM under different r. The results are shown in Figure 4(c).
Figure 4: Comparison results of LRISD-ADMM and LR-ADMM on synthetic data. Different noise levels (std) are shown in (a). (b) gives the results under different sample ratios (sr). (c) shows the recovered ranks under different ranks (r).
It is easy to see from Figure 4(a) that, as the noise level std increases, the overall Reer becomes larger. Even so, LRISD-ADMM achieves much better recovery performance than LR-ADMM, because the LRISD model approximates the rank function better than the nuclear norm does. This illustrates that LRISD-ADMM is more robust to noise when dealing with low-rank matrices. As the sample ratio sr increases, the overall Reer decreases in Figure 4(b). In general, LRISD-ADMM does better than LR-ADMM, because LRISD-ADMM can approximately recover the rank of the matrix, as shown in Figure 4(c).
6.5. The Comparison between LRISD-ADMM and LR-ADMM: Real Visual Data
In this subsection, we test three images, door, window, and sea, and compare the images recovered by the general LR-ADMM and by LRISD-ADMM under the partial DCT operator. Besides, we use the best $r$ in LRISD-ADMM. In all tests, we fix sr = 0.6 and std = 0. The SVE process at different stages used to obtain $\tilde{r}$ is depicted in Figure 5. For the three images, we set $\kappa=100, 125, 20$ and obtain $\tilde{r}=7, 9, 8$ in LRISD-ADMM. Moreover, the best $r=7, 10, 7$ can be obtained. Figure 6 shows the recovery results of the two algorithms.
Figure 5: The process of SVE in LRISD-ADMM for the three images (Door, Window, and Sea), showing the first, second, and third outer iterations of LRISD for each image. Note that the first and the second singular values are much larger than the others, as are the corresponding values of $Stt$; to show the results clearly, we omit the first and second singular values and the corresponding $Stt$ values in each panel. The observed estimated $\tilde{r}$ are 7, 9, 8; compared to the best $r$, which are 8, 10, 7, the estimated $\tilde{r}$ is approximately equal to the best $r$.
Figure 6: Comparison results of LR-ADMM and LRISD-ADMM on three images. The masked images are obtained by applying partial DCT to the original images. Images recovered by LR-ADMM and LRISD-ADMM are displayed in the third and fourth columns, respectively. The PSNR values (LR-ADMM versus LRISD-ADMM) are 18.779 versus 19.669 for the first image, 17.140 versus 18.576 for the second, and 16.829 versus 16.991 for the third.
As illustrated in Figure 5, it is easy to see that SVE returns a stable $\tilde{r}$ in merely three iterations, and the estimated $\tilde{r}$ is a good estimate of the number of the largest few singular values. From Figure 6, it can be seen that LRISD-ADMM outperforms the general LR-ADMM in terms of higher PSNR. More importantly, visual inspection shows the better fidelity of the LRISD-ADMM recoveries to the true signals, in particular the better recovery of sharp edges.
6.6. A Note on κ
We note that the threshold $\kappa$ plays a critical role in the efficiency of the proposed SVE. For real visual data, we can use $\kappa=m*n/(3*s)$, $s=0.5, 0.8, 1, 3, 5$. For synthetic data, $\kappa$ is set as $\kappa=s*m*n/30$, $s=1, 2, 3$. The above heuristic, which works well in our experiments, is certainly not necessarily optimal; on the other hand, it has been observed that LRISD is not very sensitive to $\kappa$. Of course, the "last significant jump" based thresholding is only one way of estimating the number of the true nonzero (or large) singular values, and one can try other available effective jump detection methods [15, 40, 41].
7. Conclusion
This paper introduces singular value estimation (SVE) to estimate an appropriate $r$ (in $\|X\|_{r}$) such that the estimated rank is (approximately) equal to the best rank. In addition, we extend TNNR from matrix completion to the general low-rank case (we call the resulting framework LRISD). Both synthetic and real visual data sets are discussed. Notice that SVE is not limited to thresholding: effective support detection guarantees the good performance of LRISD. Therefore, future research includes studying specific signal classes and developing more effective support detection methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Natural Science Foundation of China, Grant nos. 11201054, 91330201, and by the Fundamental Research Funds for the Central Universities ZYGX2012J118, ZYGX2013Z005.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert, "Low-rank matrix factorization with attributes," Technical Report N-24/06/MM, 2006.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), June 2007, pp. 17–24.
[3] A. Evgeniou and M. Pontil, "Multi-task feature learning," in Advances in Neural Information Processing Systems, vol. 19, The MIT Press, 2007, p. 41.
[4] C. Tomasi and T. Kanade, "Shape and motion from image streams under orthography: a factorization method," vol. 9, no. 2, pp. 137–154, 1992.
[5] M. Mesbahi, "On the rank minimization problem and its control applications," vol. 33, no. 1, pp. 31–36, 1998.
[6] M. Fazel, H. Hindi, and S. P. Boyd, "Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices," in Proceedings of the American Control Conference, vol. 3, IEEE, June 2003, pp. 2156–2162.
[7] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," vol. 52, no. 3, pp. 471–501, 2010.
[8] E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," vol. 9, no. 6, pp. 717–772, 2009.
[9] E. J. Candès and T. Tao, "The power of convex relaxation: near-optimal matrix completion," vol. 56, no. 5, pp. 2053–2080, 2010.
[10] J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao, "Robust principal component analysis: exact recovery of corrupted low-rank matrices by convex optimization," in Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS '09), Vancouver, Canada, December 2009, pp. 2080–2088.
[11] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," vol. 6, no. 3, pp. 615–640, 2010.
[12] J.-F. Cai, R. Chan, L. Shen, and Z. Shen, "Restoration of chopped and nodded images by framelets," vol. 30, no. 3, pp. 1205–1227, 2008.
[13] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," vol. 20, no. 4, pp. 1956–1982, 2010.
[14] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," vol. 35, no. 9, pp. 2117–2130, 2013.
[15] Y. Wang and W. Yin, "Sparse signal reconstruction via iterative support detection," vol. 3, no. 3, pp. 462–491, 2010.
[16] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the American Control Conference (ACC '01), vol. 6, June 2001, pp. 4734–4739.
[17] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola, "Maximum-margin matrix factorization," in Proceedings of the 18th Annual Conference on Neural Information Processing Systems (NIPS '04), December 2004, pp. 1329–1336.
[18] X. Yuan and J. Yang, "Sparse and low-rank matrix decomposition via alternating direction methods," Nanjing University, Nanjing, China, 2009, http://math.nju.edu.cn/~jfyang/files/LRSD-09.pdf.
[19] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," vol. 11-12, no. 1–4, pp. 625–653, 1999.
[20] S. Ji and J. Ye, "An accelerated gradient method for trace norm minimization," in Proceedings of the 26th International Conference on Machine Learning (ICML '09), ACM, June 2009, pp. 457–464.
[21] T. K. Pong, P. Tseng, S. Ji, and J. Ye, "Trace norm regularization: reformulations, algorithms, and multi-task learning," vol. 20, no. 6, pp. 3465–3489, 2010.
[22] Z. Wen, D. Goldfarb, and W. Yin, "Alternating direction augmented Lagrangian methods for semidefinite programming," vol. 2, no. 3-4, pp. 203–230, 2010.
[23] M. R. Hestenes, "Multiplier and gradient methods," vol. 4, no. 5, pp. 303–320, 1969.
[24] M. J. Powell and R. Fletcher, "A method for nonlinear constraints in minimization problems," Academic Press, New York, NY, USA, 1969, pp. 283–298.
[25] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite element approximation," vol. 2, no. 1, pp. 17–40, 1976.
[26] J. Yang and X. Yuan, "Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization," vol. 82, no. 281, pp. 301–329, 2013.
[27] R. Glowinski and P. Le Tallec, Studies in Applied and Numerical Mathematics, vol. 9, SIAM, 1989.
[28] R. Glowinski, Springer, 2008.
[29] C. Chen, B. He, and X. Yuan, "Matrix completion via an alternating direction method," vol. 32, no. 1, pp. 227–245, 2012.
[30] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," vol. 3, no. 1, pp. 1–122, 2010.
[31] A. Argyriou, T. Evgeniou, and M. Pontil, "Convex multi-task feature learning," vol. 73, no. 3, pp. 243–272, 2008.
[32] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert, "A new approach to collaborative filtering: operator estimation with spectral regularization," vol. 10, pp. 803–826, 2009.
[33] G. Obozinski, B. Taskar, and M. I. Jordan, "Joint covariate selection and joint subspace selection for multiple classification problems," vol. 20, no. 2, pp. 231–252, 2010.
[34] Y. Nesterov, "Gradient methods for minimizing composite objective function," Report 2007/76, 2007.
[35] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," vol. 2, no. 1, pp. 183–202, 2009.
[36] Y. Nesterov, "A method of solving a convex programming problem with convergence rate O(1/k²)," vol. 27, pp. 372–376, 1983.
[37] B. He, M. Tao, and X. Yuan, "Alternating direction method with Gaussian back substitution for separable convex programming," vol. 22, no. 2, pp. 313–340, 2012.
[38] Z. Lin, R. Liu, and Z. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," in Proceedings of the Advances in Neural Information Processing Systems (NIPS '11), 2011, pp. 612–620.
[39] B. S. He, H. Yang, and S. L. Wang, "Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities," vol. 106, no. 2, pp. 337–356, 2000.
[40] A. K. Yasakov, "Method for detection of jump-like change points in optical data using approximations with distribution functions," in Ocean Optics XII, Proceedings of SPIE, Bergen, Norway, 1994, pp. 605–612.
[41] J.-H. Jung and V. R. Durante, "An iterative adaptive multiquadric radial basis function method for the detection of local jump discontinuities," vol. 59, no. 7, pp. 1449–1466, 2009.
K.TsengP.JiS.YeJ.Trace norm regularization: reformulations, algorithms, and multi-task learning20102063465348910.1137/090763184MR2763512ZBL1211.901292-s2.0-79251515185WenZ.GoldfarbD.YinW.Alternating direction augmented Lagrangian methods for semidefinite programming201023-420323010.1007/s12532-010-0017-1MR2741485ZBL1206.900882-s2.0-79952297247HestenesM. R.Multiplier and gradient methods1969453033202-s2.0-001460430810.1007/BF00927673MR0271809PowellM. J.FletcherR.A method for nonlinear constraints in minimization problems1969New York, NY, USAAcademic Press283298MR0272403GabayD.MercierB.A dual algorithm for the solution of nonlinear variational problems via finite element approximation197621174010.1016/0898-1221(76)90003-12-s2.0-0002211517YangJ.YuanX.Linearized augmented lagrangian and alternating direction methods for nuclear norm minimization20138228130132910.1090/S0025-5718-2012-02598-1MR29830262-s2.0-84871450765GlowinskiR.Le TallecP.19899SIAMStudies in Applied and Numerical MathematicsGlowinskiR.2008SpringerChenC.HeB.YuanX.Matrix completion via an alternating direction method201232122724510.1093/imanum/drq039MR2875250ZBL1236.650432-s2.0-84863068818BoydS.ParikhN.ChuE.PeleatoB.EcksteinJ.Distributed optimization and statistical learning via the alternating direction method of multipliers2010311122ZBL1229.901222-s2.0-8005176210410.1561/2200000016ArgyriouA.EvgeniouT.PontilM.Convex multi-task feature learning200873324327210.1007/s10994-007-5040-82-s2.0-55149088329AbernethyJ.BachF.EvgeniouT.VertJ.-P.A new approach to collaborative filtering: operator estimation with spectral regularization2009108038262-s2.0-64149107285ObozinskiG.TaskarB.JordanM. I.Joint covariate selection and joint subspace selection for multiple classification problems20102022312522-s2.0-77953322499MR261077510.1007/s11222-008-9111-xNesterovY.Gradient methods for minimizing composite objective function20072007076BeckA.TeboulleM.A fast iterative shrinkage-thresholding algorithm for linear inverse problems200921183202MR248652710.1137/080716542ZBL1175.94009NesterovY.A method of solving a convex programming problem with convergence rate O(1/k2)198327372376HeB.TaoM.YuanX.Alternating direction method with gaussian back substitution for separable convex programming201222231334010.1137/110822347MR29688562-s2.0-84865692854LinZ.LiuR.SuZ.Linearized alternating direction method with adaptive penalty for low-rank representationProceedings of the Advances in Neural Information Processing Systems (NIPS '11)2011612620HeB. S.YangH.WangS. L.Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities2000106233735610.1023/A:1004603514434MR1788928ZBL0997.490082-s2.0-0034239566YasakovA. K.Method for detection of jump-like change points in optical data using approximations with distribution functionsOcean Optics XII1994Bergen, Norway605612Proceedings of SPIE10.1117/12.190105JungJ.-H.DuranteV. R.An iterative adaptive multiquadric radial basis function method for the detection of local jump discontinuities2009597144914662-s2.0-63249115441MR251227110.1016/j.apnum.2008.09.002