New Robust Regularized Shrinkage Regression for High-Dimensional Image Recovery and Alignment via Affine Transformation and Tikhonov Regularization

In this work, a new robust regularized shrinkage regression method is proposed to recover and align high-dimensional images via affine transformation and Tikhonov regularization. To be more resilient to occlusions, illuminations, outliers, and heavy sparse noises, the proposed approach incorporates affine transformations and Tikhonov regularization into the modeling of high-dimensional images. Highly corrupted, distorted, or misaligned images can be adjusted through the affine transformations and the Tikhonov regularization term to ensure a trustworthy image decomposition. These ideas are essential for pruning out the potential impacts of adverse effects in high-dimensional images. The search for the optimal variables, including a set of affine transformations and the Tikhonov regularization term, is first cast as a convex optimization program. Afterward, a fast alternating direction method of multipliers (ADMM) algorithm is applied, and new equations are established to update the parameters involved and the affine transformations iteratively in a round-robin manner. Moreover, the convergence of these updating equations is scrutinized, and the proposed method requires less computation time than state-of-the-art works. Conducted simulations show that the new robust method surpasses the baselines for image alignment and recovery on several public datasets.


Introduction
High-dimensional image alignment and recovery [1][2][3][4] arise in different scenarios such as image processing [5] and surveillance [6,7]. However, analyzing high-dimensional data is a challenging task due to miscellaneous adverse effects such as occlusions, illuminations, corruptions, and noises. It is therefore important to develop new robust regression methods that mitigate the adverse influence of these effects in high-dimensional data.
Since the inception of the pioneering work on robust principal component analysis (RPCA) by Candès et al. [8], a myriad of algorithms have been proposed for robust sparse low-rank image recovery, e.g., [9,10]. However, these methods do not work well when the outliers and heavy sparse noises are heavily skewed. To overcome this drawback, many robust algorithms have been developed to deal with outliers and heavy sparse noises in high-dimensional images. Likassa et al. [11][12][13] considered new robust algorithms via affine transformations and L2,1 norms for image recovery and alignment, which boosted performance. Moreover, [14][15][16][17] proposed an efficient extension of RPCA using affine transformations; however, it lacks the robustness needed to remove the potential impacts of adverse effects in high-dimensional data. The authors of [18][19][20] developed robust algorithms to decompose the original corrupted data into clean and sparse error components. However, these algorithms are neither scalable nor robust when the number of observations becomes large. Podosinnikova et al. [21] developed a robust PCA to minimize the reconstruction error, but it is not effective against the adverse effects in high-dimensional data. To circumvent this dilemma, Shahid et al. [22] incorporated spectral graph regularization into robust PCA. However, it is unable to recover images corrupted by outliers and heavy sparse noises lying in high-dimensional data. Shakeri et al. [23] proposed an online sequential framework to recover the low-rank component by pruning out the sparse corruptions. Zhang and Lerman [24] proposed robust subspace recovery to tackle the influence of outliers and heavy sparse noises; however, its computational complexity limits its scalability to high-dimensional data. This line of work is also reviewed in the subspace recovery literature [25][26][27].
Moreover, De la Torre and Black [28] proposed a parameterized component analysis algorithm to find the low-rank component and used a robust fitting function to reduce the impact of outliers. However, the formulation is nonconvex and lacks a polynomial-time algorithm. Furthermore, many real-world corrupted images contain severe intensity distortions which are dense and thus difficult to subtract by RASL [17,29]. Lu et al. [30] proposed a tensor robust principal component analysis algorithm to tackle the influence of outliers and heavy sparse noises. To tackle this dilemma, Lu et al. [31] addressed exact low-tubal-rank tensor recovery from Gaussian measurements; however, its performance in image recovery is not promising. Moreover, [30][31][32][33][34][35] addressed extended versions of RPCA for image representation, but these algorithms do not cope well with the adverse effects of outliers and heavy sparse noises. To overcome these dilemmas, [33,36,37] addressed a novel low-rank regularized regression model for facial recognition, which seeks to boost performance under potential adverse effects. Additionally, some nonrigid transformation estimation algorithms [33,36,37] employ the L2E estimator and apply it to nonrigid registration rather than dense correspondences, making better use of sparse image representation. However, these methods are computationally more expensive because of the cross-correlation matching at the lower levels of the hierarchy. Thereby, [38,39] proposed algorithms to deal with outliers and sparse errors; however, they still leave room for improvement. Therefore, the search for an affine transformation and a Tikhonov regularization term is required to improve algorithm performance.
To circumvent this dilemma, Likassa [11] proposed a low-rank robust regression (LR-RR) algorithm to clean the outliers and sparse errors from highly contaminated data. Although LR-RR can mitigate the impact of sparse errors inside and outside subspaces, its sensitivity to sparse errors and outliers lying in disjoint subspaces jeopardizes its performance in some severe scenarios. Recent papers have also addressed the issue of sparsity for image representation and decomposition [40,41]. Likassa and Fang [12] addressed a low-rank sparse subspace representation for robust regression (LRS-RR) approach that finds the clean low-rank part by low-rank subspace recovery, along with regression, to deal with errors or outliers lying in the corrupted disjoint subspaces. The main challenge in image recovery and head pose estimation remains tackling the potential impact of outliers and heavy sparse noises.
In this paper, a new robust shrinkage regression algorithm based on affine transformations and Tikhonov regularization is proposed for high-dimensional image alignment and recovery. To be more resilient to occlusions and outliers, the new algorithm incorporates affine transformations and Tikhonov regularization. The Tikhonov regularization mitigates multicollinearity in highly distorted data, which helps reduce redundancy and tackles the potential impact of outliers, heavy sparse noises, occlusions, and illuminations. Consequently, distorted or misaligned images can be rectified by the affine transformations and the Tikhonov regularization term to render a more accurate image decomposition. The search for the optimal parameters and affine transformations is first cast as a convex optimization program. Afterward, the alternating direction method of multipliers (ADMM) approach is employed, and a newly developed set of equations is established to update the parameters involved and the affine transformations iteratively. The low-rank component is decomposed into the product of a basis matrix and a regression coefficient matrix, where the basis matrix consists of previously well-aligned images. To constrain the possible solutions, the new method considers the Tikhonov regularization term together with nuclear and Frobenius norms and their regularization penalty parameters. Moreover, the convergence of these updating equations is scrutinized, and the proposed method requires less computation time than state-of-the-art works. Conducted simulations show that the new algorithm excels the state-of-the-art works in terms of accuracy on high-dimensional face image alignment and recovery on several public datasets.
The major contributions of this paper include the following:
(1) The affine transformations and the Tikhonov regularization term are incorporated into the new model to fix distorted or misaligned images, making it robust and resilient to the adverse effects of outliers and heavy sparse noises.
(2) The ADMM approach is employed to solve the new convex optimization problem, and a set of updating equations is developed to solve it iteratively.
(3) To be more robust to outliers and occlusions, the Tikhonov regularization term and the nuclear norm of the sparse noises are additionally incorporated to relax, regularize, and constrain the representation of images, which leads the problem to a unique and stable solution.
(4) Whereas prior methods that perform the error minimization step and the error support step iteratively cannot guarantee the convergence of the whole algorithm, our method integrates error minimization and error support into one regression model, and its ADMM algorithm converges well theoretically.
This paper is structured as follows. Section 2 presents the detailed problem formulation. Section 3 develops and describes the updating equations for solving the proposed problem. In Section 4, experimental simulations are conducted to justify the proposed method. Section 5 provides the computation time results of the new algorithm, and Section 6 gives some concluding remarks.

Problem Formulation
Given n images I_1, I_2, ..., I_n ∈ R^(w×h), where w and h are the width and height of the images, respectively. In many situations, however, high-dimensional images are heavily impacted by the potential adverse effects of outliers, occlusions and illuminations, and heavy sparse noises. We can stack these images into a matrix M = [vec(I_1) | vec(I_2) | ... | vec(I_n)] ∈ R^(m×n), where vec(·) denotes the vectorization (stacking) operator and m = wh. Since the subspaces of the data are contaminated by large noises, we decompose the original corrupted data matrix into a low-rank component and sparse errors, i.e., M = Uβ + S [42,43], where U ∈ R^(m×n) is a clean low-rank dictionary, β ∈ R^(n×n) is a recovery subspace coefficient matrix used to represent M, and S ∈ R^(m×n) denotes a sparse error matrix incurred by the adverse effects. When a new image I arrives, the main task is to seek an optimal transformation that warps this image onto the previously aligned images.
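As a concrete illustration, the stacking step M = [vec(I_1) | vec(I_2) | ... | vec(I_n)] can be sketched in a few lines of NumPy; the function and variable names here are illustrative, not from the paper:

```python
# Hypothetical sketch of stacking n images into the data matrix M,
# following M = [vec(I_1) | vec(I_2) | ... | vec(I_n)].
import numpy as np

def stack_images(images):
    """Vectorize each w-by-h image and place it as one column of M (m = w*h)."""
    # Column-major ("F") order matches the usual vec(.) operator.
    return np.column_stack([img.flatten(order="F") for img in images])

# Toy example: three 4x3 "images" stacked into a 12x3 matrix.
imgs = [np.random.rand(4, 3) for _ in range(3)]
M = stack_images(imgs)
assert M.shape == (12, 3)
```

Each column of the resulting matrix is one vectorized image, so low-rank structure across columns corresponds to correlation across the image batch.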
Thus, invoked by the potential impacts of outliers, occlusions, illuminations, and heavy sparse noises, we need to decompose the corrupted images into a low-rank component and sparse noises. This decomposition can be written as

M∘τ = Uβ + S,    (1)

where Uβ is the low-rank part, decomposed into the basis matrix U and the weight regression coefficient matrix β ∈ R^(n×n), and the entries of S are independently and identically distributed. The main dilemma in updating the parameters under the constraint M∘τ = Uβ + S is that the constraint is nonlinear in τ. To handle this nonlinearity, we assume that the changes produced by the affine transformations τ are small and that an initial affine transformation τ is known; then we can linearize via a first-order Taylor approximation:

M∘(τ + Δτ) ≈ M∘τ + Σ_{i=1}^{n} J_i Δτ_i v_i^T,

where M∘τ ∈ R^(m×n) is the transformed image matrix, Δτ ∈ R^(p×n) with p the number of affine parameters, J_i = ∂vec(I_i∘τ_i)/∂τ_i ∈ R^(m×p) is the Jacobian of the i-th image with respect to τ_i, v_i denotes the i-th standard basis vector of R^n, and the operator ∘ denotes the affine transformation applied to the highly illuminated and corrupted data. This linearization adjusts the misalignment in the batch of images via affine transformations. Following this procedure, we attain approximate transformations to recover the low-rank component and the batch image alignment from the underlying subspaces, along with all other updating parameters. The pixels of each transformed image are collected in the matrix M∘τ, in which the affine transformations try to align the images in a similar way; the columns can then be regarded as samples drawn from a union of low-dimensional subspaces. Inspired by [44], instead of processing the original images, we seek an alignment such that the newly arrived images can be decomposed as the sum of a sparse error and a linear composition.
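The Jacobians J_i = ∂vec(I_i∘τ_i)/∂τ_i used in the linearization can be approximated numerically. The following is a hedged sketch using central finite differences and a 6-parameter affine warp; the warp parameterization and helper names are assumptions for illustration, not the paper's implementation:

```python
# Illustrative sketch (not the paper's exact implementation) of estimating
# J_i = d vec(I_i ∘ τ_i)/d τ_i by central finite differences.
import numpy as np
from scipy.ndimage import affine_transform

def warp(img, tau):
    """Apply a 6-parameter affine warp tau = (a11, a12, a21, a22, t1, t2)."""
    A = np.array([[tau[0], tau[1]], [tau[2], tau[3]]])
    return affine_transform(img, A, offset=tau[4:6], order=1)

def numerical_jacobian(img, tau, eps=1e-4):
    """m x p Jacobian of vec(I ∘ τ) with respect to the p affine parameters."""
    cols = []
    for k in range(len(tau)):
        tp, tm = tau.copy(), tau.copy()
        tp[k] += eps
        tm[k] -= eps
        cols.append((warp(img, tp) - warp(img, tm)).ravel(order="F") / (2 * eps))
    return np.column_stack(cols)

I = np.random.rand(8, 8)
tau0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # identity warp
J = numerical_jacobian(I, tau0)
assert J.shape == (64, 6)
```

In practice, analytic image gradients are often substituted for finite differences for speed, but the finite-difference form makes the meaning of J_i transparent.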
To constrain the possible solutions of our algorithm, we add further regularization terms to relax the nonconvexity issues. To make the new robust regression more resilient to the potential impact of large outliers and sparse noise, the Tikhonov regularization term regularizes the representation of the high-dimensional images, leading to a unique and stable solution of the least squares problem.
This also helps to control the effect of the noise when solving for the updating parameters. Additionally, when the pixels belong to smooth regions, the Tikhonov regularization adjusts and eliminates the potential impacts of outliers and heavy sparse noises [45,46], and it is also preferable for alignment and recovery when the data are dominated by noise and incompleteness [47]. The overall problem can be posed as follows:

min_{β,S,Δτ} ‖S‖_* + λ1‖Γβ‖_F^2 + λ2‖β‖_F^2 + η‖Δτ‖_F^2    subject to    M∘τ + Σ_{i=1}^{n} J_i Δτ_i v_i^T = Uβ + S,    (2)

where λ1, λ2, and η are the regularization parameters, Γ is the Tikhonov matrix [48,49], and λ1 is a global regularization parameter which balances the reconstruction error against the Tikhonov regularization term. Additionally, ‖S‖_* = Σ_{i=1}^{min(m,n)} σ_i(S) denotes the nuclear norm, i.e., the sum of the singular values of S.
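For intuition, the Tikhonov-regularized least-squares core of such a formulation admits the familiar closed form β = (QᵀQ + λΓᵀΓ)⁻¹Qᵀg. A minimal sketch follows, assuming Γ = I (the standard ridge case) when no Tikhonov matrix is supplied; all names are illustrative:

```python
# Minimal sketch of the Tikhonov-regularized least-squares subproblem
# beta = argmin ||g - Q beta||^2 + lam ||Gamma beta||^2, with closed form
# beta = (Q^T Q + lam Gamma^T Gamma)^{-1} Q^T g.
import numpy as np

def tikhonov_solve(Q, g, lam, Gamma=None):
    n = Q.shape[1]
    if Gamma is None:
        Gamma = np.eye(n)  # standard ridge case: Gamma = identity
    return np.linalg.solve(Q.T @ Q + lam * Gamma.T @ Gamma, Q.T @ g)

rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 5))
g = rng.standard_normal(50)
beta = tikhonov_solve(Q, g, lam=0.1)
assert beta.shape == (5,)
```

As lam → 0 the solution reduces to ordinary least squares, while larger lam shrinks the coefficients, which is what stabilizes the estimate when the data are noisy or nearly collinear.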

Proposed Method
To solve the constrained convex optimization problem in (2), we adopt the alternating direction method of multipliers (ADMM) [50,51]. The key requirement of ADMM is convexity; we give sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions for local optimality.
Then, forming the augmented Lagrangian of the constrained problem, we obtain

L_μ(β, S, Δτ, Z) = ‖S‖_* + λ1‖Γβ‖_F^2 + λ2‖β‖_F^2 + η‖Δτ‖_F^2 + Tr(Z^T(Uβ + S − M∘τ − JΔτ)) + (μ/2)‖Uβ + S − M∘τ − JΔτ‖_F^2,    (4)

where μ > 0 is a penalty parameter, Z is the Lagrange multiplier, and Tr(·) is the trace operator. Solving (4) directly is a formidable task; thus, we solve for each updating parameter independently via ADMM. Firstly, to find the optimal β, we fix Δτ, Z, and S as constants and minimize (4) over β alone. To obtain this update, let Q = [vec(U_1), ..., vec(U_n)] and g = vec(M∘τ + JΔτ − S − (1/μ)Z), where vec(·) converts a matrix into a vector. The β-subproblem then becomes a Tikhonov-regularized least squares regression, whose closed-form solution gives the update

β^(k+1) = (Q^T Q + (2/μ)(λ1 Γ^T Γ + λ2 I))^{-1} Q^T g.    (9)

Secondly, to find the update of S, we proceed along the same line: ignoring all terms irrelevant to S, the S-subproblem reduces to a nuclear norm proximal problem, which is solved by the singular value thresholding operator [52]:

S^(k+1) = SVT_{1/μ}(M∘τ + JΔτ − Uβ^(k+1) − (1/μ)Z),    (14)

where SVT_t(X) = U_X diag(max(σ_i − t, 0)) V_X^T and X = U_X diag(σ_i) V_X^T is the singular value decomposition of X. Thirdly, invoked by the affine transformation and keeping all the other parameters constant, we obtain the extra updating parameter Δτ from

Δτ^(k+1) = arg min_{Δτ} L_μ(β^(k+1), S^(k+1), Δτ, Z).    (13)
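The S-update above relies on the singular value thresholding (SVT) operator of [52]. A minimal self-contained sketch (names illustrative):

```python
# Sketch of the singular value thresholding (SVT) operator, the proximal
# operator of the nuclear norm: SVT_t(X) = U diag(max(sigma - t, 0)) V^T.
import numpy as np

def svt(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Toy check on a diagonal matrix whose singular values are 3, 1, and 0.2:
X = np.diag([3.0, 1.0, 0.2])
Y = svt(X, 0.5)
# Thresholding shrinks each singular value by 0.5 and zeroes the smallest one.
assert np.allclose(np.diag(Y), [2.5, 0.5, 0.0])
```

Soft-thresholding the spectrum is what simultaneously denoises S and keeps its effective rank low at each ADMM sweep.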
Solving (13) with threshold operators similar to [53,54], we obtain the update Δτ^(k+1) in (15). Finally, following the same steps as above, the Lagrange multiplier is updated as

Z^(k+1) = Z^(k) + μ(Uβ^(k+1) + S^(k+1) − M∘τ − JΔτ^(k+1)).    (16)

Algorithm 1:
Input: data matrix M ∈ R^(m×n); initializations β^0 ∈ R^(n×n), S^0 ∈ R^(m×n), Δτ^0 ∈ R^(p×n); parameters λ1, λ2, ρ.
While not converged, do:
(1) Update β^(k+1) by (9)
(2) Update S^(k+1) by (14)
(3) Update Δτ^(k+1) by (15)
(4) Update Z^(k+1) by (16)
End while

(Figures 1 and 2: 1st column, the corrupted input images; 2nd column, the low-rank component by [31]; 3rd column, the low-rank component by [30]; 4th column, the low-rank component by [38]; 5th column, the low-rank component by [39]; 6th column, the low-rank component by the proposed method.)
These updating equations proceed in a round-robin manner until convergence. For easy reference, the updating equations of the proposed algorithm are summarized in Algorithm 1.
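To make the round-robin flow concrete, the following is a simplified, illustrative ADMM loop. It deliberately omits the affine update Δτ and uses plain Frobenius regularization on β (i.e., Γ = I), so it is a sketch of the structure under stated assumptions rather than the authors' exact algorithm; all names and the stopping rule are illustrative:

```python
# Simplified round-robin ADMM sketch for min ||S||_* + lam1 ||beta||_F^2
# subject to M = Q beta + S (no affine step; Gamma = I for brevity).
import numpy as np

def svt(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def admm_recover(M, Q, lam1=0.1, mu=1.0, iters=100, tol=1e-7):
    m, n = M.shape
    S = np.zeros((m, n))
    Z = np.zeros((m, n))  # Lagrange multiplier
    A = Q.T @ Q + (2 * lam1 / mu) * np.eye(Q.shape[1])
    for _ in range(iters):
        # Step 1: beta-update, a ridge-type regularized least squares
        beta = np.linalg.solve(A, Q.T @ (M - S + Z / mu))
        # Step 2: S-update via singular value thresholding
        S = svt(M - Q @ beta + Z / mu, 1.0 / mu)
        # Step 3: dual ascent on the multiplier
        R = M - Q @ beta - S
        Z = Z + mu * R
        if np.linalg.norm(R) < tol:
            break
    return beta, S

rng = np.random.default_rng(1)
Q = rng.standard_normal((30, 4))
M = Q @ rng.standard_normal((4, 6))  # noiseless toy case
beta, S = admm_recover(M, Q)
assert np.linalg.norm(M - Q @ beta - S) < 1e-4
```

In the noiseless toy run, the loop reduces to alternating a ridge-type β-update, an SVT-based S-update, and dual ascent on Z; the full method would interleave the Δτ step in the same round-robin fashion.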

Simulations and Discussion
In this section, we examine the effectiveness of the new algorithm for image alignment and recovery on public datasets. Four state-of-the-art methods, [30,31,38,39], are compared against the proposed method to examine its robustness. For image alignment and recovery, three public datasets are considered: the Labeled Faces database [55], the Al Gore talking video from [53], and the complicated windows from [53]. The protocols and procedures follow the related works [30,31,38,53,56,57]. In these simulations, the effectiveness of the proposed method is compared with the aforementioned methods on natural face images, video face images, and windows. Secondly, we further evaluate the effectiveness of the algorithm using the mean square error [11][12][13],[58].
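The root-mean-square-error criterion used for these comparisons can be sketched as follows (array names are illustrative):

```python
# Sketch of the root-mean-square-error metric used to compare a recovered
# image (or batch) against a reference.
import numpy as np

def rmse(recovered, reference):
    return np.sqrt(np.mean((recovered - reference) ** 2))

# Toy check: a constant offset of 0.5 everywhere yields an RMSE of 0.5.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.5)
assert np.isclose(rmse(a, b), 0.5)
```

Lower values indicate that the recovered low-rank images sit closer, pixel for pixel, to the references.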

Recovery of Natural Face Images.
In this section, the effectiveness of the new robust algorithm on natural face images for image alignment and recovery is evaluated on different types of images. First, we consider corrupted face images, on which we employ the algorithm to align and recover them. The simulations on corrupted natural face images are illustrated in Figures 1 and 2 (column 1). We employed the proposed method on these distorted data to assess its effectiveness against the aforementioned approaches. The result recovered by the proposed method improves on the natural face images obtained by [30,31,38,39]. This improvement is due to the incorporation of the affine transformation and the Tikhonov regularization term.

Table 1
Methods      | Natural face     | Video face      | Window face
TRPCA [30]   | 4.9750 × 10^-2   | 2.8530 × 10^-1  | 3.203 × 10^-1
ELTRT [31]   | 5.223 × 10^-2    | 3.045 × 10^-1   | 3.248 × 10^-1
NRPP [38]    | 4.6097 × 10^-4   | 6.000 × 10^-3   | 1.59090 × 10^1
RIVZ [39]    | 9.3270 × 10^-1   | 0.              |

This finding is also justified by the root mean square error results shown in Table 1, entailing that the new algorithm is more robust and resilient to outliers and heavy sparse noises in high-dimensional natural face images.

Recovery of Video Face Images.
Next, the effectiveness of the new method is further evaluated on distorted video face images of size 81 × 107 to judge its ability to recover corrupted high-dimensional images. The proposed method is employed on the distorted video images shown in Figures 3 and 4 (column 1). The proposed method improves performance by pruning out the potential impact of outliers and heavy sparse noises, as shown in Figures 3 and 4 (column 6), and is superior to the other methods [30,31,38,39]. This indicates that the inclusion of the affine transformation and Tikhonov regularization has boosted the performance of the new method. As can be seen from the last column of Figures 3 and 4, the result obtained by our method improves the visual quality on the complicated videos compared with the baselines.
We can recognize that, by including both the affine transformation and the Tikhonov regularization term, the new approach obtains a lower mean square error than [30,31,38,39], entailing better image alignment and recovery with the potential to prune out the errors and outliers in high-dimensional video face images (Table 1). The incorporation of the affine transformation and Tikhonov regularization has boosted the performance of the new approach compared with the state-of-the-art methods.

Recovery of Complicated Windows.
The effectiveness of the proposed method is further assessed in recovering complicated windows of size 1600 × 1200, as given in Figure 5. As can be seen from Figure 5 (column 6), the proposed method recovers the windows well and removes the trees, which are considered occlusions. This indicates that incorporating the affine transformation and the Tikhonov regularization term has boosted the visual quality on very complicated and corrupted windows compared with [30,31,38,39]. Invoked by these ideas, the new approach obtains a lower mean square error than [30,31,38,39], entailing better image alignment and recovery with the potential to prune out the errors and outliers in high-dimensional and very complicated windows (Table 1).

Time Complexity
The time complexity of the proposed method compared with the state-of-the-art works is described in this section. On a standard desktop computer, RIVZ [39] requires relatively less computation time than TRPCA [30], ELTRT [31], and NRPP [38], as RIVZ [39] involves fewer parameters in its updates. Additionally, our algorithm can handle batches of over one hundred images in a few minutes on a standard PC, as the number of parameters involved is small compared with the state-of-the-art works.
The new algorithm converges faster than the state-of-the-art algorithms, as shown in Table 2.

Conclusion
In this paper, a new robust regularized shrinkage regression method is proposed for high-dimensional image alignment and recovery via affine transformations and Tikhonov regularization, so as to be more robust to outliers, heavy sparse noises, occlusions, and illuminations. The problem is cast as a convex optimization program; the ADMM approach is then employed, and a set of equations is established to iteratively update the parameters involved and the affine transformations. The mean square error results reveal that the proposed method outperforms the main state-of-the-art works, owing to the incorporation of the affine transformation and the Tikhonov regularization term. Moreover, the convergence of the updating equations is scrutinized, and the proposed method has lower time complexity than the state-of-the-art methods.
The experimental simulations show that the proposed method is superior to the state-of-the-art works.
Data Availability
The data used in this article are freely available.

Conflicts of Interest
The authors declare that there are no conflicts of interest.