Dimensionality Reduction with Sparse Locality for Principal Component Analysis

Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into a low-dimensional representation. Existing schemes usually preserve either the global structure or the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively balance complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples; being parameter-free, it yields an intrinsic graph that is more robust against noise. In addition, the projection matrix and the sparse similarity matrix can be learned simultaneously. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.


Introduction
In recent years, the development of high-throughput data processing schemes in diverse fields, including pattern recognition, data mining, and computer vision, has resulted in exponential growth in the amount of harvested data, in terms of both dimensionality and size. However, the large amounts of redundancy and noise in the data cause significant spatial instability, computational complexity, and unfavorable representation. In relation to this problem, dimensionality reduction (DR) has been identified as an effective approach due to its capacity for dealing with large amounts of data and its potential for overcoming what is called the "curse of dimensionality" [1]. It also offers greater scope for model generalization and accomplishes the tasks with a high degree of computational efficiency.
To date, a variety of DR techniques have been developed for projecting original data from a high-dimensional space into a lower-dimensional subspace. They can be classified in two ways: global dimensionality reduction (GDR) and local dimensionality reduction (LDR) [2]. The GDR techniques assume that all pairwise distances of the data are of equal importance. The globally correlated data are reduced using the magnitude or rank order to choose the optimal low-dimensional pairwise distances, thereby eliminating irrelevance and redundancy [3]. With the LDR techniques, only the local distances are assumed to be reliable in high-dimensional space. More emphasis is therefore put on correctly modeling the locally correlated data to eliminate the noise [4]. A great deal of research has been conducted using realistic nonimage and image datasets, with the results showing that a significant proportion of redundancy and noise is eliminated by both techniques [5][6][7].
Recently, a number of hybrid global-local methods have been proposed. The authors of [5], for instance, developed a hybrid sampling-based clustering ensemble algorithm with global constitution, which encodes the local and global cluster structure of input partitions in a single representation. Here, deciding a final consensus candidate involves significant computational cost. In [6], a DR algorithm was proposed that uses pairwise similarity measurement to effectively capture the local structure of data manifolds. While offering considerable advantages, the hyperparameters of the selected model remain an issue. In [7], a global-local structure preservation framework was introduced for feature selection based on three algorithms: local linear embedding (LLE), local tangent space alignment (LTSA), and locality preserving projection (LPP). In this framework, the copious amount of noise generated during the process may degrade the reliability of the data. A well-designed DR model therefore still needs to be developed that can effectively reduce data redundancy and noise while ensuring robustness.
In this paper, thus, a global-local DR model called sparse locality for PCA (SLPCA) is proposed, which introduces robustness into both GDR and LDR by employing a regularizer and a constraint, respectively. The SLPCA model aims to reduce unnecessary information while preserving the data correlation in both the global and local structure. In order to effectively balance complexity and efficiency, a robust L2,p-norm constraint-based PCA (R2P-PCA) is introduced for GDR, while sparse representation (SR) and LPP are combined for LDR. The R2P-PCA algorithm implements PCA with the robust distance metric L2,p-norm by maximizing the sum of the variations. In this way, R2P-PCA increases robustness against noise while preserving the global structure of the samples. The SR-LPP algorithm seeks a set of projection matrices by capitalizing on the merits of SR and LPP, which are merged into one analytical process. SR enables adaptive construction of the graph because it is parameter-free and robust against noise [8]. This makes SR-LPP capable of simultaneous and adaptive learning of the projection matrix and the sparse similarity matrix. Through the joint learning of the two algorithms, the learned functions can be optimized using an efficient iterative algorithm, the alternating direction method of multipliers (ADMM) [9], which monotonically decreases the value of the augmented Lagrangian function as the iterations continue. Computer simulation reveals that the proposed SLPCA scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error. The main contributions of the paper are summarized below:
(i) A means of delivering robustness in DR is developed that simultaneously considers both the local and global structure of the sample data.
(ii) A scheme that effectively balances the complexity and efficiency of DR is proposed, where a robust L2,p-norm constraint-based PCA (R2P-PCA) is introduced for global DR, while sparse representation (SR) and LPP are combined for local DR.
(iii) Graph-based DR methods typically calculate a projection matrix on the basis of a learned graph. A novel approach is developed that simultaneously integrates the projection matrix and nonparametric graph construction.
The remainder of the paper is organized as follows. In Section 2, the work related to the preservation of global and local data structure is surveyed and summarized. The proposed sparse locality for PCA (SLPCA) model is introduced in Section 3. This section also presents the key algorithms of SLPCA, R2P-PCA and SR-LPP, and explains how they achieve convergence. In Section 4, the performance of SLPCA is evaluated in terms of clustering accuracy and data reconstruction error on two kinds of datasets. The paper is concluded in Section 5, with future research directions.

Notation and Definition. Given a vector v ∈ R^n, ||v||_2 = (Σ_i v_i^2)^(1/2) denotes its Euclidean (L2) norm. Given a matrix M ∈ R^{n×m}, the i-th row and j-th column are denoted by m^i and m_j, respectively. The L0-norm ||M||_0 denotes the number of nonzero elements of M. The L1-norm of M is defined as ||M||_1 = Σ_{i,j} |m_ij|. The Frobenius norm of M is ||M||_F = (Σ_{i,j} m_ij^2)^(1/2). The L2,1-norm of M was first introduced in [10] as a rotational invariant for the rows. It is now widely employed to encourage row sparsity and is defined as ||M||_{2,1} = Σ_{i=1}^{n} ||m^i||_2 = Σ_{i=1}^{n} (Σ_{j=1}^{m} m_ij^2)^(1/2).
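These definitions are easy to make concrete. The following small NumPy sketch (an illustration, not part of the paper) computes each of the matrix norms above, plus the L2,p quantity used later in the paper:

```python
import numpy as np

def l1_norm(M):
    # L1-norm: sum of the absolute values of all entries
    return np.abs(M).sum()

def frobenius_norm(M):
    # Frobenius norm: square root of the sum of squared entries
    return np.sqrt((M ** 2).sum())

def l21_norm(M):
    # L2,1-norm: sum of the Euclidean norms of the rows
    return np.sqrt((M ** 2).sum(axis=1)).sum()

def l2p_norm_pth_power(M, p):
    # The L2,p quantity (0 < p < 2) raised to the p-th power:
    # sum over rows of ||m^i||_2^p; p = 1 recovers the L2,1-norm
    return (np.sqrt((M ** 2).sum(axis=1)) ** p).sum()

M = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(l1_norm(M))          # 7.0
print(frobenius_norm(M))   # 5.0
print(l21_norm(M))         # 5.0
```

Note how the zero row contributes nothing to the L2,1-norm, which is exactly the row-sparsity-promoting behavior mentioned above.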

DR Based on Global Structure.
There exist three main approaches in the literature for DR based on the global structure.

Principal Component Analysis.
PCA is the most commonly used algorithm in this category because of its simplicity and efficiency. It aims to reduce the dimension of the original data by projecting them along the directions of maximum variance. The Graph-Laplacian PCA algorithm was developed by Jiang et al. [11]; it learns a low-dimensional representation of vector data by incorporating the graph structure encoded in the graph data. In [12], rotational-invariant L1-norm PCA minimizes the reconstruction error by imposing an L2-norm on the spatial distance, whereas an L1-norm is applied across different data points. The authors of [13] proposed the convex sparse PCA approach, which reduces redundant information by building a compact and informative subspace. These kinds of algorithms are useful in fields including image processing, speech enhancement, and the reduction of data transmissions. The typical distance-preserving algorithms are based on maximum variance unfolding (MVU) and isometric mapping (Isomap). Although both approaches result in a similar computational structure, Isomap merely preserves the global geodesic distance, whereas MVU preserves the local distance while maximizing the global variance. In [14], a hybrid incremental landmark MVU model was developed, combined with a dual-tree complex wavelet transform. Saxena et al. [15] introduced a nonlinear MVU-PCA algorithm, which aims to enhance the core components of the framework for higher robustness. The multilevel MVU technique proposed in [16] aims to reduce and distribute the computational load across parallel multilevel machines. The algorithms mentioned above share the weakness of erroneous graph connection.
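As a point of reference for the discussion above, classical PCA can be sketched in a few lines via the SVD. This is an illustrative NumPy sketch of plain PCA only, not of R2P-PCA or any of the cited variants:

```python
import numpy as np

def pca(X, k):
    # X: (m, n) matrix whose n columns are m-dimensional samples
    # (the column-sample convention used later in the paper).
    # Returns the k-dimensional projection basis U and the scores V.
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                        # center the data
    # Left singular vectors of the centered data are the directions
    # of maximum variance (the principal components).
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Uk = U[:, :k]
    V = Uk.T @ Xc                        # projected data points
    return Uk, V, mean

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))
Uk, V, mu = pca(X, 3)
X_rec = Uk @ V + mu                      # rank-k reconstruction
```

By the Eckart-Young theorem, this rank-k reconstruction is the one that minimizes the Frobenius reconstruction error, which is the sense in which PCA "preserves the most information."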

Autoencoder.
Autoencoder (AE) [17] focuses on learning the underlying manifold through the encoding and decoding procedure. The multilayer AE [18] usually has a higher number of connections, but this may cause slow convergence via backpropagation, even to local minima. The spectral AE [19] aims to detect global and community anomalies by mapping the attributed network to two types of low-dimensional representations. The AE-based approaches are rather limited in their capacity for learning the local structure, which contains important information required for understanding the data.

DR Based on Local Structure.
The DR schemes based on local data structure usually adopt graph embedding techniques such as LPP, LLE [20], Laplacian eigenmaps [21], LTSA [7], and neighborhood preserving embedding (NPE) [22]. They approximate the embedded manifold through a mapping function with the proximity-preserving property. The LPP algorithm uses a k-NN-based graph Laplacian regularizer to preserve the local structure of the samples. Yu et al. [23] improved the LPP scheme by using the L1-norm to provide robustness while effectively preserving the similarity between pairs of vertices. In [24], a supervised global-locality preserving projection approach is proposed that preserves not only the locality characteristics of LPP but also the global discriminative structure associated with the maximum margin criterion. The aforementioned algorithms can be hampered by the presence of noise. Low-rank learning methods can help to reduce the disturbance caused by noise in the data.
In [25], a neighborhood preserving projection method was developed that integrates low-rank learning and robust learning. It enhances the robustness of the NPP method and diminishes the disturbance caused by noise in the data. Similarly, the low-rank preserving projection method introduced in [26] applies an L2,1-norm to the noise matrix as a sparse constraint and a nuclear norm to the weight matrix as a low-rank constraint. It maintains the global structure of the data during DR, and the learned low-rank weight matrix reduces the noise-related disturbance. The scheme of [27] preserves the structure of nonlinear manifold data, and it is robust because the learned data are unaffected by noise or outliers. In [28], a structurally incoherent low-rank nonnegative matrix factorization method was proposed that jointly considers the structural incoherence and the low-rank property of data. Since the scheme employs a low-rank learning method, it can capture the global structure of the data and is robust to noise. The proposed scheme is presented next.

The Proposed Scheme
3.1. R2P-PCA for Preserving Global Structure. R2P-PCA is for the preservation of the global structure of the data. PCA finds a few orthogonal directions in the data space that preserve the most information in the data while minimizing the reconstruction error. Assuming an input data matrix X = (x_1, . . ., x_n) ∈ R^{m×n} that contains n data column vectors in m-dimensional space, PCA finds the optimal low-dimensional subspace spanned by U and V by solving the following optimization problem:

min_{U,V} ||X − UV^T||_F^2, s.t. U^T U = I_k,

where U ∈ R^{m×k} is the orthogonal projection basis and V ∈ R^{n×k} represents the projected data points in the new reduced subspace.
Recently, various PCA-based methods have employed different criterion functions, such as the L1-norm, to improve robustness to noise [29]. Although the L1-norm is robust to outliers, most existing L1-norm-based PCA methods do not effectively minimize the reconstruction error, which is one of the main goals of PCA. They are also not invariant under rotation, an important property for a learning algorithm. In the proposed scheme, we therefore focus on formulating a robust learning metric, the L2,p-norm [30][31][32]. Note that most existing PCA methods based on the L2,1-norm can be viewed as special cases of the proposed R2P-PCA approach. It is well known that ||·||_{2,1} is convex with respect to the matrix variables, and it can be extended to a generalized robust learning metric for a matrix M, namely the L2,p-norm (0 < p < 2), which can be defined as follows:

||M||_{2,p} = (Σ_i ||m^i||_2^p)^(1/p).

On the basis of equations (4) and (5), the objective function of R2P-PCA (equation (6)) is obtained by replacing the Frobenius norm of the residual in equation (4) with the L2,p-norm applied over the residual columns, so that each sample's reconstruction error is penalized by the power p rather than quadratically.

Original Locality Preserving Projection.
LPP can be viewed as a linear approximation of the nonlinear Laplacian eigenmaps. The first step in the original LPP is to construct an adjacency graph, which significantly affects the performance of the algorithm. Then, the projection matrix is calculated using the learned graph. The objective function of LPP can be formally stated as follows:

min_p Σ_{i,j} (p^T x_i − p^T x_j)^2 W_ij,

where p denotes the transformation vector. Through simple algebraic reformulation, the objective function above can be rewritten as follows:

(1/2) Σ_{i,j} (p^T x_i − p^T x_j)^2 W_ij = p^T X(D − W)X^T p = p^T X L X^T p,

where p^T x_i, p^T x_j, and W_ij are numerical values, D_ii = Σ_j W_ij, and L = D − W. Here, L denotes the Laplacian matrix, which is used to minimize P^T XLX^T P.
The LPP objective function can therefore be changed to min_P P^T XLX^T P, subject to the usual scale constraint P^T XDX^T P = I. For the heat kernel method, the weight assignment can be obtained using

W_ij = exp(−||x_i − x_j||^2 / t) if x_i and x_j are neighbors, and W_ij = 0 otherwise,

where t is the kernel width. In [33], it was established that the adjacent graph structure and the graph weights are highly interrelated and should not be separated.
This makes it preferable to develop one ideal model that can perform the two tasks simultaneously. The traditional weight assignment methods require parameter selection to construct the adjacency graph by means of the ε-ball or k-NN method. This comes with a significant computational cost if the neighborhoods must be identified for a whole dataset, and it also makes them sensitive to data noise [34]. Hence, instead of using them, here we attempt to automatically derive the similarity matrix and ensure that it preserves the discriminative information by using sparse representation (SR).
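The original LPP pipeline, a k-NN graph with heat-kernel weights followed by the generalized eigenproblem XLX^T p = λ XDX^T p, can be sketched as follows. This is an illustrative sketch; the neighborhood size and kernel width it requires are exactly the parameters that SR-LPP is designed to eliminate:

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, k, n_neighbors=5, t=1.0):
    # X: (d, n) matrix, columns are samples.
    d, n = X.shape
    # pairwise squared Euclidean distances between samples
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:n_neighbors + 1]   # skip self
        W[i, nbrs] = np.exp(-sq[i, nbrs] / t)         # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-8 * np.eye(d)                # regularized for stability
    # generalized symmetric eigenproblem; eigenvalues come back ascending,
    # so the first k eigenvectors give the LPP projection matrix P
    evals, evecs = eigh(A, B)
    return evecs[:, :k]
```

A call like `P = lpp(X, 2)` then embeds the samples via `P.T @ X`.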

Graph Construction Based on SR.
The main idea of SR is that a body of sampled data can be sparsely represented given an appropriate basis. If a given test sample y belongs to the i-th class, SR assumes that y can be represented as a linear combination of the training samples in the i-th class:

y = α_{i,1} x_{i,1} + α_{i,2} x_{i,2} + · · · + α_{i,n_i} x_{i,n_i}.

In other words, y can be represented as y = Xα, where X stacks all training samples and α is a coefficient vector whose nonzero entries are ideally associated with the i-th class. The above model can be expressed as follows:

min_α ||α||_0, s.t. y = Xα.

Note that the L0-norm optimization problem is NP-hard. A recent study has demonstrated that the L0-norm problem is equivalent to the L1-norm optimization problem if the solution is sparse enough [35], in which case it can be solved using

min_α ||α||_1, s.t. y = Xα.

SR avoids parameter selection and makes the intrinsic graph construction more robust to data noise. Having established the theoretical ground related to LPP and SR, the basic idea of the proposed SR-LPP method for local structure preservation is presented next.
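In graph terms, the edge weights can be obtained by sparsely coding each sample over the remaining samples. The sketch below uses scikit-learn's Lasso as the L1 solver, i.e., an unconstrained L1-regularized variant of the problem above, purely for illustration; the paper's exact solver is not specified here, and `alpha` is a solver setting of this sketch rather than a neighborhood parameter such as k or ε:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sr_graph(X, alpha=0.05):
    # X: (d, n), columns are samples. Each sample is coded as a sparse
    # linear combination of the remaining samples; the absolute values
    # of the codes serve as edge weights of the intrinsic graph.
    d, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X[:, others], X[:, i])      # x_i ~ X_{-i} w
        W[i, others] = np.abs(model.coef_)
    return np.maximum(W, W.T)                  # symmetrized similarity matrix
```

Excluding sample i from its own dictionary keeps the diagonal at zero, so no sample trivially represents itself.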

SR-LPP.
SR-LPP learns the similarity matrix W and the projection matrix P simultaneously and adaptively so that the intrinsic properties of the structure can be preserved. For this, the LPP objective function needs to be changed so that it is minimized jointly over P and W:

min_{P,W} Σ_{i,j} ||P^T x_i − P^T x_j||_2^2 W_ij.

In addition, in order to ensure that each sample is represented by a unique basis, an L1-norm constraint is added to W. Following equation (9) in [36], the objective function for SR-LPP (equation (15)) augments the joint objective above with a sparse-representation fidelity term and the L1 penalty on W, where λ1 and λ2 (≥0) are parameters balancing the contribution from the two parts.

The Objective Function.
For joint learning with R2P-PCA preserving the global structure and SR-LPP preserving the local structure, equations (6) and (15) are combined in the model of equation (16), where α (≥0) is a parameter balancing the contribution from the three parts. It should be pointed out that the update of P in equation (16) can be derived by solving a generalized eigenvector problem, corresponding to the eigenvectors of the k smallest eigenvalues. Let V = X^T P, so that V^T = P^T X. As v_i in PCA plays exactly the same role as p^T x_i in LPP, the objective function can be simplified accordingly, where L = D − W. ADMM can be applied to solve the resulting subproblems. Letting E = X − UV^T, the standard Lagrange function of the equation above can be augmented as in equation (18). Hence, the iterative computation consists of the W-, U-, V-, and E-minimization steps and an update of the Lagrangian parameter ρ. The detailed iterative computational steps are formulated below.
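The alternating structure of ADMM, minimizing the augmented Lagrangian over each block of variables in turn and then updating the (scaled) dual variable, can be illustrated on a small L1-regularized least-squares problem. This toy sketch only demonstrates the update pattern that SLPCA's W-, U-, V-, and E-steps follow; it is not the paper's actual subproblem:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the L1-norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=300):
    # Solve min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))        # smooth block (closed form)
        z = soft_threshold(x + u, lam / rho)   # nonsmooth block (prox step)
        u = u + x - z                          # scaled dual update
    return z

A = np.eye(4)
b = np.array([5.0, 0.0, 0.0, 0.0])
z = admm_lasso(A, b, lam=0.1)
# with A = I the exact minimizer is soft_threshold(b, 0.1) = [4.9, 0, 0, 0]
```

Each pass decreases the augmented Lagrangian over its own block, which is the mechanism behind the monotone-decrease property claimed for SLPCA's iterations.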

Optimization Analysis
3.4.1. Fixing E and W to Update U and V. According to Theorem 3.1 in [11], let F = E − X + u; U can then be updated by solving the optimization problem of equation (19). Taking the partial derivative with respect to U and setting it to 0, a closed-form solution is obtained and U is updated as U = −FV. The optimization problem with respect to V in equation (18) can then be simplified into equation (21). With a similar operation on U, the optimal V* can be obtained by calculating the eigenvectors corresponding to the first k smallest eigenvalues of the matrix Q = −F^T F + 2βL/ρ. According to Proposition 3.2 in [11], the solution of V can be expressed by the eigenvectors of Q_α, where σ_m and σ_l are the largest eigenvalues of the matrices M^T M and L, respectively; θ is the parameter substituting α; and e (= (1 . . . 1)^T) is an eigenvector of Q_α: Q_α e = (1 − θ)e. Although (ee^T/n) is applied in place of the Laplacian matrix, the eigenvectors and eigenvalues do not change. This guarantees that e is not included in the lowest k eigenvectors.

Fixing U, V, and W to Update E. Let S = X − UV^T − u. E can now be updated by solving the optimization problem of equation (23). Taking the partial derivative with respect to E and setting it to 0, and applying the proximal operator of the L2,p-norm proposed in [30], L_p(E) is replaced by Σ_{i=1}^n (||E_i||_2^2 + ε_0)^{p/2} + (ρ/2)||E − S||_F^2, where E_i is the i-th column of E and 0 < ε_0 ≤ 1. Then, E can be obtained using equation (24), where N is the weight matrix corresponding to R(E); here, N_ii = (||E_i||_2^2 + ε_0)^{p/2−1}. The L2,p-norm minimization problem can then be solved by updating N and E iteratively. The procedure for this is presented in Algorithm 1.
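The iterative reweighting described here is straightforward to implement. The sketch below is an illustration under the stated weighting N_ii = (||E_i||_2^2 + ε_0)^{p/2−1}; setting the per-column derivative p·N_ii·E_i + ρ(E_i − S_i) to zero gives the closed-form step used inside the loop. Small-norm columns of E are shrunk toward zero while large columns survive almost intact:

```python
import numpy as np

def update_E(S, rho=1.0, p=0.5, eps0=1e-3, n_iter=30):
    # Iteratively reweighted minimization of
    #   sum_i (||E_i||_2^2 + eps0)^(p/2) + (rho/2) * ||E - S||_F^2,
    # with E_i the i-th column of E. Each pass recomputes the weights
    # N_ii and then applies the per-column stationarity condition
    #   p * N_ii * E_i + rho * (E_i - S_i) = 0.
    E = S.copy()
    for _ in range(n_iter):
        col_sq = (E ** 2).sum(axis=0)
        N = (col_sq + eps0) ** (p / 2.0 - 1.0)   # per-column weights N_ii
        E = (rho / (p * N + rho)) * S            # closed-form reweighted step
    return E

S = np.array([[3.0, 0.01],
              [4.0, 0.00]])
E = update_E(S)
# the large first column survives; the tiny second column is suppressed
```

This column-wise shrinkage is what makes the L2,p residual term robust: noisy samples (small, diffuse residual columns) are damped without distorting the dominant structure.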

Fixing U, V, and E to Update W. W can be updated by solving the optimization problem of equation (26). Taking the partial derivative with respect to W and setting it to 0, the nonnegative projection proposed in [37] is applied. Then, W* is obtained using equation (27).

Updating the Parameters.
Drawing upon [9], the Lagrange multiplier and the penalty parameter ρ need to be updated at the end of each iteration, as given in equation (28).

Convergence Property. The proposed sparse locality-regularized PCA model (SLPCA) consists of two main parts: solving the subproblems and updating the parameters. For the first subproblem, the optimal solution of each column of E is obtained using equation (24). Therefore, the value of the objective function decreases after solving for E in each iteration. For the other two subproblems, the augmented Lagrangian is minimized with respect to the variables U, V, and W. When updating U, because the Lagrange function of equation (18) is convex and differentiable with respect to U, the optimal projection matrix can be obtained from the closed-form solution of equation (20).
This means that the value of the objective function decreases after solving for U in each iteration. The same argument applies to the updates of V and W. The proposed SLPCA algorithm is therefore convergent. The procedure for the proposed SLPCA is shown in Algorithm 2, which is applied when the subproblems are solved. It searches for an optimal value in each iteration and stops when the value of the objective function becomes stable.

Experiment Setup.
This section reports an experiment conducted to verify the effectiveness of the proposed SLPCA scheme. A comparison is also made regarding the performance in finding low-dimensional representations and classification against two baseline algorithms, PCA and LPP, and three state-of-the-art hybrid global-local DR methods, i.e., GRSDA [38], L1/2-GLPCA [39], and p-GLPCA [40]. In the experiment, the value of the parameter θ is chosen from the set {0.1, 0.5, 0.9}, and p is chosen from the set {0.5, 1}. According to the experimental results, the proposed method is superior to the others when the parameter α is larger than 0.6. The values of λ1, λ2, and ρ are empirically determined; it was found that their values do not significantly influence the performance.
In order to evaluate the performance of the schemes on various characteristics, including classification and data reconstruction accuracy, the experiment is performed using two different types of datasets: a nonimage-based dataset and an image-based dataset. For the nonimage data, three UCI datasets are employed, Iris, Seeds, and Soybean, which are publicly available at https://archive.ics.uci.edu/ml/datasets.php. For the image data, three commonly adopted datasets are used: the Extended Yale-B, ORL face, and CMU PIE datasets.
The K-means test is applied to assess the effectiveness of the proposed scheme. The experiment is also extended to test its robustness and generalizability for different values of p. Table 1 summarizes the main features of the DR models compared.
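The exact evaluation protocol is not spelled out in the text; a common way to score K-means output against ground-truth labels, presumably the kind of clustering accuracy reported below, is to match clusters to classes with the Hungarian algorithm (a sketch under that assumption):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Best-match clustering accuracy: find the one-to-one mapping of
    # cluster labels to class labels that maximizes agreement, then
    # report the fraction of correctly assigned samples.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((clusters.size, classes.size))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            # negate the overlap count so the min-cost matching
            # maximizes total agreement
            cost[i, j] = -np.sum((y_pred == c) & (y_true == k))
    rows, cols = linear_sum_assignment(cost)   # Hungarian matching
    return -cost[rows, cols].sum() / y_true.size
```

For example, `clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0])` is 1.0, since a pure relabeling of the clusters costs nothing.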

Experiment on Nonimage Datasets.
This section details the results of the experiment conducted to assess the effectiveness of the schemes in terms of subspace representation and clustering accuracy. The Iris dataset is one of the well-known pattern recognition databases published in the literature. It contains three classes of 50 instances each, where each class refers to a type of iris plant. Here, one class is linearly separable from the other two classes, which are not linearly separable from each other. The Seeds dataset covers three different varieties of wheat, Kama, Rosa, and Canadian, each containing 70 elements, which were randomly selected for the experiment.

Algorithm 1: Solving the L2,p-norm minimization problem.
Input: matrix S.
Step 1: Select the parameters p, ρ, ε.
Step 2: Optimization. Initialize E; while not converged do: (1) update the weight matrix N; (2) update E using equation (25); (3) update k = k + 1; (4) check the convergence condition. End.
Output: k-dimensional vector E.
e Soybean dataset has 19 classes, only the first 15 of which have been used in prior work. It contains 35 categorical attributes, some nominal and some ordered.
By using equation (5), the L2,p-norm can be applied in the SLPCA algorithm to improve robustness. Previous research has established that the L2,p-norm (0 < p < 2) performs better than other constraints [32]. To apply it, some parameters need to be set in advance, including the rescaling coefficient. Different p values are tried while the other parameters are held fixed to find the optimal p value; the other parameters are then decided in a similar way. Figure 1 shows the distribution of the 2D data after DR with the Iris dataset. In this experiment, α = 0.6, λ1 = 0.5, λ2 = 0.5, ρ = 1.0, p = 0.5, and θ = 0.1. For the original LPP, the neighborhood size k also needs to be determined; here k = n_i − 1, where n_i is the number of samples in the i-th class. It can be seen from the figure that the proposed SLPCA scheme represents the Iris data pattern more compactly than the other algorithms. The clustering accuracies for the training and test sets of the schemes are compared in Table 2, with the best results highlighted in bold. Note that, outside of a few exceptional cases, the proposed SLPCA scheme consistently displays the highest accuracy for both the training and test data across the different datasets.

Experiments Using Image Datasets.
In this section, the performances of the different schemes are compared with regard to image classification and reconstruction. The experiments were conducted on three face datasets: CMU PIE, Extended Yale-B, and ORL. Salt-and-pepper noise with a noise density of 0.01 was added to each image.

CMU PIE Dataset.
The CMU PIE dataset is publicly available at http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html. It has 41,368 images of 68 individuals, with 4 expressions, 13 poses, and 43 illumination conditions per person, so the images differ in illumination and expression. In our experiment, the Pose C09 subset was chosen, which contains 1,632 images of 68 people, with 24 different images of each person. All the images are in gray scale and normalized to a resolution of 64 × 64 pixels for the sake of efficient computation. 70% of the CMU PIE images are used as the training dataset; the rest form the test dataset.

ORL Face Dataset.
The ORL face dataset consists of 400 images of 40 individuals, with 10 different images per person. It is publicly available at http://www.face-rec.org/databases/. The images of each person were taken at different times, with different facial expressions and details, and under varying lighting conditions. In the experiment, all the images are in gray scale and are manually cropped to 64 × 64 pixels. For this dataset, 60% of the images per subject were randomly chosen to form the training set, with the remaining images forming the test set. The same splitting procedure was applied to the Extended Yale-B dataset.

Algorithm 2: The proposed SLPCA algorithm.
Input: Training set X = {x_1, x_2, . . ., x_N} and the reduced dimension k (with k ≤ D).
Step 2: Alternating optimization. Initialize ρ = 0, θ = 0, U = 0, V = 0. While not converged do: (1) fix the other variables and update U using equation (20); (2) fix the other variables and update V using equation (22); (3) fix the other variables and update the auxiliary variable E using equation (25); (4) fix the other variables and update W using equation (27); (5) update the Lagrange multiplier and the other parameters using equation (28). End.
Output: k-dimensional U, V, W, E.
To illustrate the data representation aspect of SLPCA, the ORL face dataset is used here. In this experiment, α = 1.0, λ1 = 1.0, λ2 = 1.0, ρ = 0.7, and θ = 0.9. Figure 2 shows the reconstructed images, with the last three columns containing the reconstructed images of SLPCA with p = 1 and p = 0.5. Only the images of three ORL subjects (out of 40) are shown due to space limitations. Observe from the figure that the images of a person reconstructed with the proposed SLPCA scheme tend to be more similar to one another than those produced by the other methods. This indicates that the class structures are generally more apparent with the SLPCA representation, which motivates its use for data clustering and classification. Table 3 compares the clustering accuracy of the GLPCA and SLPCA schemes for different p values. Here, L1/2-GLPCA and p-GLPCA used an L2,1-norm and a p-norm for the PCA aspects, respectively, while SLPCA used an L2,p-norm. It can be seen from the table that SLPCA (with p = 0.5) achieved the best performance of all the schemes. In this experiment, α = 1.0, λ1 = 1.0, λ2 = 1.0, ρ = 0.7, p = 0.5, and θ = 0.9. Figure 3 compares the schemes with regard to the reconstruction error and residual error (||X − UV^T||_F / ||X||_F) as the subspace dimension was varied for the 30% CMU PIE training dataset. It can be seen that SLPCA with a proper p value produced the smallest reconstruction and residual error rates. Figure 4 compares the error rates for different θ values. This reveals that SLPCA has a lower reconstruction error for any value of p and the lowest residual error when p = 1.0, except for p-GLPCA where p = 0.5. Note that, in the case of SLPCA, the reconstruction error decreases as θ becomes larger, while the residual error increases slowly. Figure 5 shows a comparison of the schemes across the three different image datasets. It can be seen that the 20% CMU PIE and ORL datasets have the lowest and largest reconstruction error rates, respectively.
There seems to be no relationship between the two types of error rates for the given datasets. Here, α = 1.0, λ1 = 1.0, λ2 = 1.0, ρ = 0.5, p = 0.5, and θ = 0.5.
From the experimental results given above, SLPCA turns out to be the most robust in handling DR for various types of data because it retains both the local and global structure of the data.

Conclusion
In this paper, a robust global-local scheme called SLPCA has been proposed for dimensionality reduction. It increases robustness against noise and more effectively preserves the global and local structure of the samples. In seeking a trade-off between complexity and efficiency, robust L2,p-norm-based PCA (R2P-PCA) was introduced for the GDR element, while joint sparse representation and LPP (SR-LPP) was used for the LDR. In addition, the SR-LPP algorithm is parameter-free, avoiding the difficulty of determining the neighborhood size. This allows graph construction and weight assignment to be finished in one step. SR-LPP can also learn a projection matrix and a sparse similarity matrix simultaneously and adaptively. Experimental results show that the proposed scheme is capable of more accurate classification and better representation than other typical DR schemes.
In future work, the proposed scheme will be extended to take nonlinear settings into account. As a linear model cannot capture nonlinear distortion, a nonlinear model is required to obtain effective and reliable results. The identification of nonlinear models requires more data and involves more comprehensive analysis than their linear counterparts. At present, the performance of the representation has been studied only in linear settings; how to make linear models work in nonlinear settings is an issue that requires further investigation. Analytical models for deciding the values of the parameters employed in a DR scheme also need to be developed.

Data Availability
All data included in this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.