Locally Regularized Collaborative Representation and an Adaptive Low-Rank Constraint for Single Image Superresolution



Introduction
With the rapid development of artificial intelligence, video surveillance, biomedicine, and other fields, the requirements for high-resolution (HR) images and videos are increasingly stringent.
The limitations of low-cost imaging equipment and poor imaging environments usually result in unsatisfactory observed images. Without removing these inherent limitations, single image superresolution (SISR) methods can recover missing details and produce a satisfying HR image from an observed low-resolution (LR) image. The image degradation process can be expressed mathematically as

Y = DHX + v,    (1)

where D refers to a downsampling operator, H represents a blur matrix, X and Y, respectively, denote the original HR image and the degraded LR image, and v is additive Gaussian noise. Generally, existing SISR methods can be divided into interpolation-based approaches [1-3], reconstruction-based approaches [4-7], and learning-based approaches [8-17].
Freeman et al. [18] first introduced the example learning idea to the SISR problem and established the mapping relationship using Markov networks. Chang et al. [19] proposed a neighbor embedding (NE) algorithm based on locally linear embedding (LLE) that produces HR patches by computing reconstruction weights and linearly combining the HR neighbors. Xu et al. [15] proposed a new NE-based self-learning SR approach that explores a propagation strategy in both the vertical and horizontal directions to better perform patch matching. With the development of compressive sensing and its underlying signal sparsity theory, many sparse representation-based SR methods have emerged. Sparse representation-based approaches are also known as dictionary-based approaches, as dictionary learning is crucial to this process [33]. Based on the assumption that each LR image patch shares the same sparse coefficients as its HR counterpart, Yang et al. obtained a plausible HR image using sparse coefficients and two jointly trained dictionaries [8]. Dehkordi et al. [21] proposed an SR method that operates under a self-learning framework using sparse representation; this method iteratively obtains high-frequency details from the updated difference between the degraded version of the SR output and the LR input, which can further improve the SR accuracy. To improve the reconstruction speed and decrease the computational burden without sacrificing SR quality, Timofte et al. [9] combined the NE method with a sparsely trained dictionary to develop the anchored neighborhood regression (ANR) algorithm, a representative example regression-based approach. They then improved the ANR algorithm into the adjusted ANR (A+) algorithm [10], which learns the linear regressors from the full training material. Zhang et al.
[12] proposed an effective collaborative representation cascade (CRC) framework that extracts features from the intermediate recovered image; their method gradually enhances the LR input while improving the HR image quality. Recently, deep learning-based methods have developed rapidly and achieved impressive performance. Dong et al. [17] applied a convolutional neural network (CNN) to the SR reconstruction problem by constructing three convolution layers that learn an end-to-end mapping between low- and high-resolution images. To improve training stability, Tian et al. [30] proposed a coarse-to-fine CNN-based SR method. By cascading different types of modular blocks, this method combines LR and HR features and employs a feature fusion strategy using heterogeneous convolutions that significantly improves the reconstruction efficiency while maintaining SR quality. Anwar and Barnes [31] presented a densely residual Laplacian network (DRLN) for SR reconstruction.
This network adopts a cascading residual structural design and models the key features with Laplacian attention to improve the network's learning ability. Although deep learning-based SR methods can produce impressive SR results, they generally depend strongly on device configuration and training data.
From the image degradation process shown in (1), we know that SISR reconstruction starts from an observed LR image, eliminates the influence of factors such as blurring, downsampling, and noise, and thereby restores an HR image. Like other image restoration (IR) problems such as image dehazing [34], image deblurring [35], and image deraining [36, 37], this is an ill-posed inverse problem.
Matrices containing nonlocal similar patches tend to be low in rank, and nonlocal low-rank (NLR) priors are a promising type of prior information in many SR reconstruction methods. A combination of multiple complementary priors is more competitive in terms of improving SR quality than any individual prior. Furthermore, integrating the advantages of learning- and reconstruction-based approaches can further enhance SR performance [6, 7, 35, 43, 45, 47]. For example, Ren et al. enhanced the nonlocal total variation (NLTV) prior with decay kernel (DK) and stable group similarity reliability (SGSR) schemes, used a CNN to extract multidirectional features from external samples as a local prior, and then combined the two priors to construct a new SISR model that provides fine details while preserving edges [7]. Dong et al. built a novel adaptive sparse domain selection (ASDS) scheme by integrating local autoregressive (AR) and nonlocal self-similarity (NLSS) priors; this method efficiently performs image deblurring and SR reconstruction [35]. Merging low-rank with local rank regularization, Gong et al. proposed a novel multilayer strategy and reconstruction model [43].
Motivated by the above work, in this paper, we design a locally regularized collaborative representation SR method with an adaptive low-rank constraint.
The A+ algorithm adopts collaborative representation to determine neighborhoods from the training samples and uses them to precompute the projection matrices offline; the matrices are then directly multiplied by the LR input patches to generate the corresponding HR outputs. Therefore, the reconstruction complexity is reduced while the reconstruction quality is guaranteed. However, the A+ algorithm assumes that the LR and HR feature spaces are structurally similar. In reality, the two spaces are not exactly similar, and the projection matrix obtained from the LR space cannot appropriately describe the HR space structure. In view of this, and because locality prior knowledge can be more informative than the sparsity prior, we introduce the locality prior into the process of exploring the nonlinear mapping relationship between the LR and HR spaces, so that the resultant projection matrix more accurately represents their mapping relationship. Then, to further reveal the underlying structures of similar patches and make the SR results more stable, we implement the shape-adaptive low-rank (SLR) constraint to enhance the recovered image quality.
Thus, this SR model effectively combines the local priors learned from external databases with the image's internal nonlocal priors, fully exploiting the image's prior knowledge. Specific experimental comparisons show that the proposed approach is better and more stable than some state-of-the-art approaches. This paper's primary contributions are as follows: (1) We consider the importance of locality in revealing the nonlinear mapping, and we introduce the local structure prior to constrain the reconstruction weights and construct a locality-constrained collaborative representation model. The resultant projection matrix better reflects the relationship between the LR image patches and the HR anchor points; this strategy is beneficial for gaining high-frequency prior information from external databases.
(2) The shape-adaptive grouping method is applied to search for similar patches, capture the image's intrinsic structure, and remove dissimilar pixels from each similar image patch. Therefore, enforcing the SLR constraint between similar patches can efficiently separate noise and preserve edges, resulting in better SR results.
The rest of the paper is organized as follows: Section 2 summarizes related work on collaborative representation for SISR and NLR constraints. Section 3 introduces, describes, and analyzes the proposed method. Section 4 discusses the experiments that verify the proposed method's effectiveness, and conclusions are drawn in Section 5.

Collaborative Representation for SISR.
The A+ algorithm [10] is an SISR method based on collaborative representation (also known as ridge regression) that has attracted considerable attention from researchers due to its efficiency in both the training and reconstruction stages as well as its excellent reconstruction performance. During the training phase, an LR dictionary D_L = [d_L^1, . . ., d_L^M] is trained from the N_p LR sample patches using the same strategy described by Zeyde et al. [20], where N_p and M represent the sizes of the sample set and the dictionary, respectively, and N_p >> M. Each dictionary atom d_L^j is also called an anchor point.
The A+ algorithm relies on two important premises. The first is that each LR image patch y_i can be approximately linearly represented by its K nearest neighbors N_L^j, i.e., y_i ≈ N_L^j w_i. Here, N_L^j is searched for from among the full training samples Y_L. This is unlike the ANR algorithm, in which N_L^j is found in the LR dictionary D_L. Then, the l2-norm least-squares regression problem for the representation coefficients can be expressed as

min_{w_i} || y_i - N_L^j w_i ||_2^2 + λ || w_i ||_2^2,    (2)

where w_i denotes the coefficients of y_i over the neighborhood and λ is a balance parameter. The closed-form solution to (2) is

w_i = ( (N_L^j)^T N_L^j + λI )^{-1} (N_L^j)^T y_i.    (3)

The other assumption is that the LR and HR feature spaces have similar local structures; in other words, they may share the same representation coefficients.
Therefore, the corresponding HR image patch x_i can be estimated using x_i = N_H^j w_i, where N_H^j denotes the corresponding HR neighbors. For higher computational efficiency, the projection matrix P_j can be precomputed during the learning stage as

P_j = N_H^j ( (N_L^j)^T N_L^j + λI )^{-1} (N_L^j)^T,    (4)

so that the HR image patch can be calculated using x_i = P_j y_i.
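As a concrete illustration, the offline precomputation described above can be sketched in a few lines of NumPy (the feature dimensions, the neighborhood size K, and λ below are illustrative choices, not the paper's actual settings):

```python
import numpy as np

def precompute_projection(N_L, N_H, lam=0.1):
    # A+-style projection matrix for one anchor point:
    # P_j = N_H ((N_L)^T N_L + lam*I)^{-1} (N_L)^T,
    # where the columns of N_L (d_l x K) and N_H (d_h x K) are the
    # K nearest LR/HR training samples of that anchor.
    K = N_L.shape[1]
    G = N_L.T @ N_L + lam * np.eye(K)       # regularized K x K Gram matrix
    return N_H @ np.linalg.solve(G, N_L.T)  # d_h x d_l projection

# Reconstruction then reduces to one matrix-vector product per patch.
rng = np.random.default_rng(0)
N_L = rng.standard_normal((25, 40))  # e.g., 5x5 LR features, K = 40
N_H = rng.standard_normal((81, 40))  # e.g., 9x9 HR patches
P = precompute_projection(N_L, N_H)
y = rng.standard_normal(25)          # LR input patch
x = P @ y                            # estimated HR patch
```

Because P depends only on the training data, all such matrices can be stored after training, so the online stage avoids solving any regression problem.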

Nonlocal Low-Rank Constraint.
There are many similar local patterns at different locations throughout a natural image. As a result, natural images tend to exhibit low-rank structure. Fully exploring the low-rank characteristics of data is beneficial for robust subspace recovery [48-50]. Therefore, in reconstruction-based SR problems, the NLR constraint has proven successful at exploiting image NLSS and has received extensive attention.
In the application of NLR, an image is usually first divided into overlapping patches. Then, the l patches most similar to each input image patch x_i are searched for within a search window to form the matrix X_i. Since matrix X_i contains similar content, it should satisfy the low-rank property. In practice, however, noise may corrupt X_i and destroy this property. Therefore, a low-rank matrix X̃_i close to X_i can be recovered via

X̃_i = argmin_{X̃_i} rank(X̃_i)  s.t.  || X_i - X̃_i ||_F^2 ≤ ε,    (5)

where ε bounds the approximation error. After each low-rank matrix X̃_i is obtained, the underlying low-rank structure information in the image can be revealed. In reconstruction-based SR methods, the reconstruction model can then be expressed as

min_{X, {X̃_i}} || DHX - Y ||_2^2 + c Σ_i || R_i X - X̃_i ||_F^2 + β Σ_i rank(X̃_i),    (6)

where c and β are balance parameters and R_i represents the matrix extracting the similar patches for each exemplar image patch x_i.
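The rank-minimization step above is commonly relaxed to nuclear-norm shrinkage. The following sketch (with an illustrative threshold τ, not a value from the paper) shows how soft-thresholding the singular values of a similar-patch matrix suppresses noise while retaining the dominant low-rank structure:

```python
import numpy as np

def lowrank_denoise(X_i, tau):
    # Soft-threshold the singular values of the similar-patch matrix X_i:
    # a standard convex surrogate for the rank-minimization step.
    U, s, Vt = np.linalg.svd(X_i, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 "clean" patch group corrupted by Gaussian noise.
rng = np.random.default_rng(1)
base = rng.standard_normal((64, 1)) @ rng.standard_normal((1, 20))
noisy = base + 0.05 * rng.standard_normal(base.shape)
clean = lowrank_denoise(noisy, tau=1.0)
```

Here the threshold exceeds the largest singular value contributed by the noise but not the signal's, so the recovered matrix is much closer to the clean group than the noisy observation is.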

Proposed Method
In this section, we introduce our method, which combines locally regularized collaborative representation (LCR) with an SLR constraint to effectively realize the combination of the learning-and reconstruction-based methods.

Locally Regularized Collaborative Representation.
The A+ algorithm is an extension of the ANR algorithm, and it is an efficient, high-quality SR method. As discussed in Section 2.2, the calculation of the representation coefficients only considers the relationship between LR image patch y_i and the dictionary atoms, not the relationship between the dictionary atoms and the samples in the neighborhood. That is, y_i treats each sample in neighborhood N_L^j equally, which leads to inaccurate representation coefficients. Thus, the obtained projection matrix cannot accurately reflect the nonlinear mapping between the LR space and the HR space, and dissimilar patches may be assigned larger weights, degrading the reconstruction quality. To address this issue, we introduce a local structure prior into the objective function (2) for the representation weights, which adaptively assigns reconstruction weights to each input image patch y_i according to the similarity between the anchor atom and each sample in neighborhood N_L^j, thereby optimizing the projection matrix. The details of the presented method are as follows.
During the dictionary learning stage, unlike the A+ algorithm, we measure the distance between dictionary atoms, and between atoms and samples, using mutual coherence.
The specific steps are as follows: first, we extract the LR features from the LR training images.
Then, an LR feature set is obtained via the principal component analysis (PCA) dimension reduction algorithm, and the LR dictionary D_L is trained using

min_{D_L, A} || Y_Lf - D_L A ||_F^2 + α Σ_{i≠j} | (d_L^i)^T d_L^j |  s.t.  || a_n ||_0 ≤ p,    (7)

where | (d_L^i)^T d_L^j | denotes the mutual coherence between dictionary atoms, A represents the coefficient matrix of the LR feature set Y_Lf, α is the trade-off between the reconstruction error and the mutual coherence, and p represents the sparsity level. The optimization problem (7) can be solved through MI-KSVD, details of which can be found in [51]. After acquiring the learned LR dictionary, we construct the corresponding HR dictionary D_H.
It has been shown that locality priors contribute to exploring nonlinear relationships in data [52, 53]. Therefore, we construct a locality-constrained collaborative representation model for the reconstruction coefficients. For each LR image patch y_i, (2) can be rewritten as

min_{w_i} || y_i - N_L^j w_i ||_2^2 + λ || s_i ⊗ w_i ||_2^2,    (8)

where ⊗ represents element-wise multiplication and λ is a balance parameter. s_i is a K-dimensional vector whose k-th element s_{i,k} is determined by the correlation corr(d_L^i, p_L^k) between LR dictionary atom d_L^i and the k-th sample p_L^k among its K nearest neighbors, and σ is used to control the decay speed of the locality adapter. Note that the k-th element s_{i,k} in vector s_i is inversely proportional to the correlation corr(d_L^i, p_L^k); this means that the higher the correlation between the dictionary atom and a neighbor sample, the larger the assigned weight. Therefore, the goal of adaptively assigning weights is achieved. We can then solve (8) and obtain its closed-form solution

w_i = ( (N_L^j)^T N_L^j + λ diag(s_i)^2 )^{-1} (N_L^j)^T y_i.    (9)

Therefore, the projection matrix can be computed as

P_j = N_H^j ( (N_L^j)^T N_L^j + λ diag(s_i)^2 )^{-1} (N_L^j)^T.    (10)

The resultant projection matrix accurately reveals the nonlinear mapping between the LR and HR feature spaces. Once the projection matrix has been calculated, the desired HR image patch can be generated via x_i = P_j y_i. Moreover, the projection matrix is still computed during the training phase, improving the reconstruction quality without sacrificing speed. Algorithm 1 describes the projection matrix learning process.
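A minimal sketch of the locality-regularized closed-form solution follows; the exponential penalty s_k = exp(-corr_k / σ) is one plausible instantiation of a locality adapter that decreases with correlation, assumed here purely for illustration:

```python
import numpy as np

def locality_weighted_coeffs(y, N_L, corr, lam=0.1, sigma=1.0):
    # Solve min_w ||y - N_L w||^2 + lam * ||s * w||^2 in closed form:
    # w = (N_L^T N_L + lam * diag(s)^2)^{-1} N_L^T y.
    # Neighbors more correlated with the anchor receive a smaller
    # penalty s_k and hence a larger reconstruction weight.
    s = np.exp(-np.asarray(corr) / sigma)   # assumed locality adapter
    A = N_L.T @ N_L + lam * np.diag(s ** 2)
    return np.linalg.solve(A, N_L.T @ y)

rng = np.random.default_rng(2)
N_L = rng.standard_normal((25, 40))   # K = 40 neighbor samples as columns
y = rng.standard_normal(25)           # LR input patch features
corr = rng.uniform(0.0, 1.0, 40)      # correlations with the anchor atom
w = locality_weighted_coeffs(y, N_L, corr)
```

Setting all correlations equal recovers the ordinary ridge solution of (3), which makes the locality term's effect easy to isolate when experimenting.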

Shape-Adaptive Low-Rank Constraint.
Natural images usually contain plentiful redundant structures, which means they satisfy the low-rank characteristic. Exploring an image's NLR characteristics helps reveal its underlying internal structure, and NLR constraints have been widely used in SR reconstruction. In addition, when combined, local and nonlocal priors complement each other and improve the reconstruction performance. Therefore, to further improve the LCR algorithm, we apply an NLR constraint to the proposed LCR model.
General NLR-based SR methods exploit fixed-shape grouping to find similar patches and construct low-rank matrices. However, fixed-shape grouping sometimes fails to accurately find similar patches for the target patch. Figure 1 provides a simple example that illustrates the shortcomings of the fixed-shape NLR model. The black square represents the search window, the red square in the middle is the target patch, and the green squares are the similar patches found by the search. We observe that the similar patches found using the fixed-shape grouping strategy contain content that differs from the target patch, i.e., the purple ellipse regions. Vectorizing the target and its similar patches and then imposing the low-rank characteristic on the resultant matrix leads to poor performance.
To remedy the above problem, we incorporate the SLR constraint on the recovered SR image; this strategy accurately searches for similar patches, efficiently separates the image's noise, and preserves its edges. Specifically, for each target patch x_{i_1} in the reconstructed SR image X_0, the steering kernel [54] is first utilized to extract the local structure around its central pixel.
Then, we compare the obtained kernel values with a preset threshold and select the pixels whose kernel values exceed the threshold as the homogeneous pixels. In general NLR-based methods, similar patches (i.e., the squares in Figure 1) are found using the l2 distance || x_{i_1} - x_{i_q} ||_2, where x_{i_q} is the q-th adjacent patch of x_{i_1}, and then the target patch x_{i_1} and all similar patches are combined. Unlike general NLR-based methods, in the SLR model, after identifying all the homogeneous pixels in x_{i_1}, we match the homogeneous areas of different patches using || T_i x_{i_1} - T_i x_{i_q} ||_2, where T_i denotes the matrix used to extract the homogeneous pixels from the i-th target patch x_{i_1}; this is the shape-adaptive grouping method. As shown in Figure 2, the solid red triangle represents the homogeneous pixel area of the target patch (i.e., the adaptive patch shape), the dashed red line delineates the part that must be excluded from the original fixed-shape patch, and the solid green triangles denote the shape-adaptive similar patches. Finally, we lexicographically stack all of the shape-adaptive similar patches together to form the low-rank matrix X̃_i, which has a stronger low-rank characteristic. In addition, each similar patch in Figure 1 is larger than each similar patch in Figure 2, indicating that the dimension of the low-rank matrix X̃_i constructed using SLR is smaller and that the computational efficiency is higher. The low-rank matrix X̃_i can be recovered via

min_{X̃_i} || T_i R_i X_0 - T_i X̃_i ||_F^2 + β rank(T_i X̃_i).    (11)

After obtaining each low-rank matrix X̃_i, the underlying low-rank structure of the HR image X_0 may be revealed. We can obtain the entire HR image via the following minimization problem:

min_X || DHX - Y ||_2^2 + c Σ_i || T_i R_i X - T_i X̃_i ||_F^2.    (12)
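The shape-adaptive grouping step can be sketched as follows, with T_i realized as a boolean mask over the patch (the patch size, mask, and group size l here are illustrative only):

```python
import numpy as np

def group_similar(target, candidates, mask, l=5):
    # Shape-adaptive grouping: compare patches only on the homogeneous
    # pixels selected by `mask` (the role of T_i), then stack the target
    # and its l best matches as columns of the low-rank matrix.
    dists = [np.linalg.norm(target[mask] - c[mask]) for c in candidates]
    idx = np.argsort(dists)[:l]
    cols = [target[mask]] + [candidates[i][mask] for i in idx]
    return np.stack(cols, axis=1)

target = np.arange(49.0).reshape(7, 7)
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                      # 9 homogeneous pixels
candidates = [target.copy(), target + 5.0, np.flipud(target)]
G = group_similar(target, candidates, mask, l=2)
```

Because only the masked pixels enter the matrix, G has fewer rows than a fixed-shape grouping would, which is exactly the dimensionality saving noted above.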

The Optimization of the LCR_SLR Model.
We refine the image derived from the LCR model by enforcing the SLR constraint on it, which yields the LCR_SLR method.
According to the previous analysis, the proposed method primarily includes two steps. The first step involves learning the reconstruction weights corresponding to the anchor neighborhoods of each LR image patch. The second step involves refining the resultant image using the SLR constraint. In the first step, we obtain the weights via (9), and the projection matrix can be calculated offline. Once all of the HR image patches are obtained, they are integrated to form the required HR image X_0. In the second step, we impose the SLR constraint on the initial reconstruction result and uncover its underlying NLSS using (11). However, (11) is NP-hard since it is a rank minimization problem. We therefore use the weighted nuclear norm [55] in place of the matrix rank, so that (11) can be expressed as

min_{X̃_i} || T_i R_i X_0 - T_i X̃_i ||_F^2 + β || T_i X̃_i ||_{w_i,*},    (13)

where || T_i X̃_i ||_{w_i,*} = Σ_l w_{li} σ_{li}, σ_{li} is the l-th singular value of T_i R_i X_0, and w_i refers to the nonnegative weight vector formed by the w_{li}. Letting g_i = T_i X̃_i, (13) can be rewritten as

min_{g_i} || T_i R_i X_0 - g_i ||_F^2 + β || g_i ||_{w_i,*}.    (14)

Using the method in [55], the (r + 1)-th iterate can be described as

g_i^{(r+1)} = U ( Σ - diag(w_i) )_+ V^T,    (15)

where UΣV^T is the singular value decomposition (SVD) of T_i R_i X_0, (χ)_+ = max(χ, 0), and w_i is the weight vector. After solving for all T_i X̃_i, the entire HR image can be recovered via optimization problem (12), whose closed-form solution can be written as

X = ( (DH)^T DH + c Σ_i R_i^T T_i^T T_i R_i )^{-1} ( (DH)^T Y + c Σ_i R_i^T T_i^T T_i X̃_i ).    (16)

Step 1: Input the LR and HR training images {I_H^i, I_L^i}, scale factor t, and dictionary size M; construct a series of image patches {p_H^i, p_L^i}, extract the corresponding features, and train the LR and HR dictionaries of size M using MI-KSVD [51].
Step 2: For each dictionary atom d_L^m, m = 1 to M, search all training samples to obtain its K nearest LR neighbors N_L^j based on the correlation vector, along with the corresponding HR neighbors N_H^j.
Step 3: Compute the projection matrix P_m with the local constraint using (10) and store it for reconstruction.
ALGORITHM 1: The projection matrix learning process.
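The weighted singular value shrinkage update used for the weighted nuclear norm subproblem can be sketched as below; the toy matrix and unit weights are illustrative, whereas in the model larger singular values would typically receive smaller weights so that dominant structure survives:

```python
import numpy as np

def weighted_svt(M, weights):
    # Weighted singular value thresholding: with M = U Sigma V^T,
    # return U (Sigma - diag(w))_+ V^T, shrinking each singular
    # value by its own nonnegative weight.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - np.asarray(weights), 0.0)) @ Vt

M = np.diag([5.0, 2.0, 0.5])           # singular values 5, 2, 0.5
out = weighted_svt(M, [1.0, 1.0, 1.0])
# Singular values of `out` are (5-1, 2-1, 0.5-1)_+ = 4, 1, 0.
```

With all weights equal, this reduces to the ordinary singular value thresholding operator, i.e. the proximal map of the plain nuclear norm.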

Figure 1: Graphical illustration of the low-rank model with fixed-shape grouping.
In practice, to reduce the computational burden of (16), the result can be produced by the conjugate gradient (CG) method. Algorithm 2 describes our proposed LCR_SLR method in detail.
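Since forming and inverting the matrix in (16) directly is expensive, a matrix-free CG solver of the usual form suffices. The sketch below solves A x = b given only a routine that applies A, which in (16) would bundle the (DH)^T DH and R_i^T T_i^T T_i R_i terms; here a small random SPD system stands in for that operator:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=200):
    # Matrix-free CG for a symmetric positive definite system A x = b;
    # apply_A(v) returns A @ v without ever forming A explicitly.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system standing in for the operator in (16).
rng = np.random.default_rng(3)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x = conjugate_gradient(lambda v: A @ v, b)
```

In the SR setting, apply_A only needs patch extraction, masking, and blur/downsampling routines, so memory stays proportional to the image size rather than to the full system matrix.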

Experimental Results and Analysis
In this section, we perform experiments to analyze the validity of our method.
The experiments were all implemented on a computer configured with an Intel(R) Core(TM) i7-6500U processor in the MATLAB R2017b programming environment.
In all experiments, we obtained simulated LR images using a 7 × 7 Gaussian blurring filter with a standard deviation of 1.6, followed by downsampling at different scales. Moreover, when verifying our method's robustness, additive Gaussian noise with a standard deviation of σ = 5 was added to the simulated LR images. In our LCR_SLR model, we set η = 0.001, β = 0.007, and c = 2.5. The patch size was 5 × 5, the group size was 20, and the search window size was 25. Under noisy conditions, since the dependability of the fidelity term in (13) decreases, we increased β to 0.175.
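For reference, the simulated degradation pipeline described above (7 × 7 Gaussian blur with standard deviation 1.6, decimation, optional additive noise) can be reproduced as a small sketch; the naive nested-loop convolution keeps the example dependency-free, though a real pipeline would use an optimized filtering routine:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.6):
    # Normalized 2-D Gaussian kernel matching the experimental setup.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(hr, scale=3, noise_sigma=0.0, seed=0):
    # Simulate Y = DHX + v: blur with H, decimate by `scale` (D),
    # and optionally add Gaussian noise v.
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode='edge')
    blurred = np.empty_like(hr, dtype=float)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + 7, j:j + 7] * k)
    lr = blurred[::scale, ::scale]
    if noise_sigma > 0:
        lr = lr + np.random.default_rng(seed).normal(0.0, noise_sigma, lr.shape)
    return lr

hr = np.full((12, 12), 100.0)  # a flat toy "image"
lr = degrade(hr, scale=3)      # blurring a constant image leaves it unchanged
```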

Experiments on Noiseless Images. Table 1 lists the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values generated by the various methods when there was no noise and the scale factors were ×3 and ×4. Among all of the compared methods, the SCSR method performed worst, and the LPGSSR method performed best. As the CRC method constructs a multilayer mapping model for learning the LR and HR feature pairs, it is superior to the A+ algorithm. When the scale factor was ×4, the average PSNR and SSIM obtained by the SRCNN on the BSD100 dataset were higher than those of the CRC method. The SRCNN is a deep learning-based SR method, and due to the excellent performance of deep learning in SR reconstruction, such methods have developed rapidly in recent years. Both the ASCSR and LPGSSR are SR methods that include a low-rank constraint; on the four datasets, the reconstruction results of the LPGSSR are better than those of the ASCSR. As listed in Table 1, our proposed method is more competitive than the others, as it achieved the highest PSNR and SSIM values on all four datasets.
To visually evaluate the proposed method, Figures 3 and 4 provide a visual detail comparison of the Butterfly image from Set 5 and the ppt3 image from Set 14, respectively, with the magnification set to ×3. Figure 5 shows the visual results for the img087 image from Urban100 at ×4 magnification. The sharpness of Figures 3(e)-3(g) is similar; they are clearer than Figures 3(b) and 3(d). The same is true in Figure 5. The edges of the letters in Figure 4(e) are slightly clearer than those in Figures 4(f) and 4(g). Obviously, compared with the other methods, the results generated by our method have clearer edges and richer structural information.

Experiments on Noisy Images.
Compared methods such as A+ [10], CRC [12], and SRCNN [17] cannot effectively handle noise. Therefore, before applying these SR methods, we first apply the BM3D [56] denoising algorithm to the LR images. Table 2 shows the average PSNR and SSIM values for the above four datasets. Clearly, the LPGSSR method exceeds the other compared methods. On all four datasets, the performance of the CRC method is superior to that of the SRCNN method. In general, our proposed method still surpasses all of the other methods, indicating that our method is robust to noise.
Figures 6 and 7 illustrate the visual effects on image 145086 from BSD100 and on the Baboon image from Set 14 with a scale factor of ×3.

Mobile Information Systems
Step 1: Input the LR test image Y, learned LR dictionary D_L, HR dictionary D_H, scale factor t, regularization parameters η, β, and c, and the nearest neighbor number M.
Step 2: For each LR image patch y_L^i, i = 1 to s, search D_L to find the nearest anchor and its corresponding index, obtain the HR counterpart of y_L^i via x_H^i = P_j y_L^i, and combine all HR image patches to form a whole HR image X_0.
Step 3: For each target patch x_i in X_0, utilize steering kernel regression to estimate the matrix T_i, perform shape-adaptive grouping on X_0, and calculate each low-rank matrix T_i X̃_i via (15).
Step 4: Obtain the final reconstructed HR image X_H via (16).
ALGORITHM 2: The proposed LCR_SLR method.

Discussion of Parameters.
This section discusses the parameters of the proposed model. For simplicity, we discuss all of the results under noiseless conditions where the scale factor is ×3 and the blur kernel is the same as that in Section 4.1.

Influence of Trade-Off Parameters η and β.
Parameter η controls the weight of the local structure prior in the objective function, and parameter β indicates the importance of the nonlocal low-rank constraint. Figures 8 and 9 display the average PSNR and SSIM over all of the images in Set 5 generated by our method as parameters η and β change. Figure 8 shows that the SR performance improves as η increases. Specifically, the SR performance is better when η is in the range 10^-6 to 10^-4, and the curves become stable after η = 10^-4. Therefore, we set η = 0.001 in the experiments. In Figure 9, the SR results are best when β is set to 0.007; therefore, we selected β = 0.007 in our experiments.

Influence of the Patch Size and Group Size.
Tables 3 and 4 list the influence of the patch size and the group size on the SR results, respectively; they report the average PSNR and SSIM values over all of the images in Set 5 for different patch and group sizes. The SR performance is best when the patch size and group size are 7 × 7 and 25, respectively. However, large patch or group sizes incur additional computational burden, so we selected a patch size of 5 × 5 and a group size of 20 in our experiments.

Discussion of the Performance for Real-World Images.
To evaluate the performance of the proposed method on real-world noisy images, we performed experiments using the SUN database [57]. As shown in Figure 10, we selected two noisy images, Heliport and Building Facade. The regions of interest (ROIs) marked by green boxes in both figures were used for reconstruction to verify the practical effectiveness of our proposed approach. Both ROIs, each of size 128 × 128, were used to generate HR images. Figures 11 and 12 show the visual SR comparison results produced by our method and other SR methods at a magnification factor of 3. From the two figures, the following conclusions can be drawn: First, the HR images generated by bicubic interpolation are the blurriest. Second, ASCSR and LPGSSR outperform the other compared methods in terms of reconstruction quality, but ASCSR produces images with artifacts that are not as sharp as those obtained by LPGSSR. Finally, the HR images recovered by the proposed method have the cleanest edges and richest details. The experimental results show that our proposed method is more robust to noise while preserving image edges. Furthermore, it can be effectively applied to real-world noisy images, as it outperforms the other advanced SR algorithms. In addition, the SR model we designed realizes the complementary advantages of local and nonlocal image priors, so we believe the proposed SR model could also perform well in image inpainting, image deblurring, and image denoising tasks.

Conclusion
In this paper, we propose an SR approach that combines locally regularized collaborative representation with a shape-adaptive low-rank constraint. As the locality prior plays an important role in exploring nonlinear data relationships, we introduce the locality-constraint parameter when calculating the projection matrix. This parameter fully exploits the relationship between the dictionary atoms and the neighbor samples, and the resultant projection matrix better reflects the nonlinear mapping between the LR space and the HR space. Then, to further reveal the images' internal NLSS structure, we apply the SLR constraint to the generated HR image to efficiently separate sparse noise, preserve image edges, and make the reconstructed images more robust. Extensive experimental results illustrate that our method achieves remarkable SR performance, surpassing some other advanced SR methods.
The proposed method combines the local and nonlocal priors, which complement each other, and the reconstructed image quality is excellent, but some shortcomings remain. Searching for similar patches and solving the optimization problems are computationally demanding, so in the future we will study more effective search strategies to further improve the reconstruction efficiency. Meanwhile, given the robustness of our proposed framework to noisy images, we will also explore extending it to other IR tasks in complex underground coal mine environments, including image deblurring and image denoising.

Figure 2: Graphical illustration of the low-rank model with shape-adaptive grouping.

Figure 10: From left to right, real-world images: (a) Heliport, (b) Building Facade, and their corresponding ROI areas for testing.

Table 3: Influence of the patch size in the proposed LCR_SLR method.

Table 4: Influence of the group size in the proposed LCR_SLR method.