Density Peaks Clustering Based on Feature Reduction and Quasi-Monte Carlo

Density peaks clustering (DPC) is a well-known density-based clustering algorithm that handles nonspherical clusters well. However, DPC has high computational and space complexity when calculating the local density ρ and the distance δ, which makes it suitable only for small-scale data sets. In addition, for clustering high-dimensional data, the performance of DPC still needs to be improved: high-dimensional data not only make the data distribution more complex but also lead to more computational overhead. To address these issues, we propose an improved density peaks clustering algorithm that combines feature reduction with a data sampling strategy. Specifically, features of the high-dimensional data are automatically extracted by principal component analysis (PCA), an auto-encoder (AE), and t-distributed stochastic neighbor embedding (t-SNE). Next, to reduce the computational overhead, we propose a novel data sampling method for the low-dimensional feature data. First, the data distribution in the low-dimensional feature space is estimated by a Quasi-Monte Carlo (QMC) sequence with low-discrepancy characteristics. Then, representative QMC points are selected according to their cell densities, and the selected QMC points are used to calculate ρ and δ instead of the original data points. In general, the number of selected QMC points is much smaller than the size of the initial data set. Finally, a two-stage classification strategy based on the QMC point clustering results is proposed to classify the original data set. Compared with current works, our proposed algorithm reduces the computational complexity from O(n²) to O(Nn), where N denotes the number of selected QMC points and n is the size of the original data set, with typically N ≪ n. Experimental results demonstrate that the proposed algorithm effectively reduces the computational overhead and improves model performance.


Introduction
With the advent of the era of big data, the importance of data mining is increasingly prominent [1]. As an unsupervised learning method, clustering is widely used in many different fields, including image processing, medicine, and archaeology. There are various classical clustering algorithms, such as K-means [2], DBSCAN [3], and AP [4]. According to different standards, clustering algorithms can be classified into different categories; generally speaking, they are divided into partition-based, hierarchy-based, density-based, and grid-based methods.
In recent years, a new density peaks clustering (DPC) algorithm has been proposed [5]. It is a typical density-based clustering algorithm with notable advantages. One advantage is that DPC relies on a decision graph to select the cluster centers. Specifically, DPC draws the decision graph of the data set by defining a local density ρ and a distance δ, and then determines the cluster centers from the decision graph. The obtained cluster centers have two characteristics: (1) the local density of a cluster center is large, and the density of its neighborhood is not greater than its own; (2) the distance between a cluster center and any data point with a higher density is relatively large. Hence, the cluster centers are data points with both high local density and high distance, which are called density peaks. Another advantage is that DPC can deal with clusters of arbitrary shape and does not need the number of categories to be specified in advance.
Although DPC has achieved good performance in many situations, it still has some drawbacks. First, DPC needs to calculate the local density and distance of each data point, which makes the computational complexity O(n²). This expensive computational overhead limits the application of DPC to large-scale data sets. To address this issue, the study in [6] proposed a distributed density peaks clustering algorithm (EDDPC). EDDPC aggregates large-scale data sets with MapReduce and integrates local results to approximate the final results. However, EDDPC is a distributed algorithm and is not suitable for single-CPU scenarios. The study in [7] proposed a density-based and grid-based clustering algorithm (DGB): instead of calculating distances between all data points, only a smaller number of grid points are calculated. However, DGB is only suitable for low-dimensional data sets. In general, the data distribution in high-dimensional space may be more complex and contain more noise. Although [8, 9] were proposed to filter the noise, the additional operations increase the computational overhead.
To address the above problems, an improved density peaks clustering algorithm combining feature reduction with a data sampling strategy is proposed in this paper. First, the original feature space is compressed by classical feature reduction methods. Then, the low-dimensional feature data are sampled by a super-uniform Quasi-Monte Carlo sequence, and the selected high-density Quasi-Monte Carlo points are used in place of the original data points for clustering. Finally, a two-stage strategy determines the category of each original data point. The proposed method has the following advantages: (1) it reduces the computational complexity from O(n²) to O(Nn), where N and n denote the number of selected QMC points and the size of the original data set, respectively, with N ≪ n in general; (2) through feature reduction, it removes noise from the original data and decreases the complexity of the high-dimensional feature space; (3) extensive experiments demonstrate its effectiveness in terms of both computational overhead and model performance.

Related Work
2.1. Feature Reduction. Feature reduction means mapping data from a high-dimensional feature space to a low-dimensional one. The features of the high-dimensional data are extracted by a linear or nonlinear transformation. Hence, efficient low-dimensional features of the original data set can be obtained by various feature reduction methods. An ideal low-dimensional feature should retain as much of the classification information as possible while filtering out the noise.
Generally speaking, feature reduction methods can be divided into linear and nonlinear ones. Principal component analysis (PCA) is a classical linear feature reduction method [10]: it transforms a group of possibly correlated variables into linearly uncorrelated variables by an orthogonal transformation. Auto-encoder (AE) and t-distributed stochastic neighbor embedding (t-SNE) are nonlinear feature reduction methods. AE can be regarded as a self-supervised model that consists of an encoder and a decoder [11].
The input data are mapped to a hidden layer by the encoder, while the decoder transforms the hidden-layer features back to the input; the goal is to combine high-order features to reconstruct the input itself. t-SNE is a machine learning method for feature reduction based on stochastic neighbor embedding (SNE) [12]. It maps high-dimensional data to two or more dimensions and alleviates the crowding problem during feature reduction. All of the above methods have been applied in many fields [13-15].

2.2. Density Peaks Clustering. Density peaks clustering (DPC) was proposed in [5]; it can efficiently deal with data sets of arbitrary shape without specifying the cluster number k in advance. The cluster centers selected by DPC have two characteristics: (1) the local density of a cluster center should be larger than the local density of its neighbors; (2) data points with low local density should be far away from data points with high local density. To describe these characteristics, DPC defines two quantities for each data point x_i: the local density ρ_i and the minimum distance δ_i. With the cutoff kernel, the local density ρ_i is formulated as

ρ_i = Σ_j χ(d_ij − d_c), where χ(x) = 1 if x < 0 and χ(x) = 0 otherwise,

where d_ij denotes the distance between x_i and x_j, and d_c is the cutoff distance, the only user-defined parameter in DPC. In the code provided by [5], d_c is taken as the value at the 2% position of the ascending-sorted pairwise distances, where the number of entries N_d of the distance matrix defines how many point-pair distances are considered. When the data set is small, a Gaussian kernel is used to calculate ρ_i instead:

ρ_i = Σ_j exp(−(d_ij / d_c)²).

Scientific Programming
In addition, δ_i is formulated as

δ_i = min_{j: ρ_j > ρ_i} d_ij,

and for the point with the highest density, δ_i = max_j d_ij. DPC draws the decision graph based on ρ_i and δ_i, selects the data points with both large ρ_i and large δ_i as the cluster centers, and assigns each remaining data point to the same class as its nearest neighbor of higher density. DPC is a simple and efficient algorithm, and a series of follow-up works have been carried out [16-22]. However, DPC requires a huge computational overhead: its computational complexity is O(n²), which makes it unsuitable for large-scale data sets. To address this problem, a feasible strategy is to sample the data set [23]. Our work builds on the sampling strategy to reduce the computational overhead.
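The definitions above can be sketched in a few lines of NumPy; the cutoff-kernel density and the two-blob toy data below are illustrative, not the reference implementation of [5]:

```python
import numpy as np

def dpc_rho_delta(X, dc):
    """Cutoff-kernel local density rho and minimum distance delta for each point."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    rho = (d < dc).sum(axis=1) - 1.0        # chi(d_ij - d_c); exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]  # points with strictly larger density
        # nearest denser point, or the farthest point for a global density peak
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return rho, delta

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 6])
rho, delta = dpc_rho_delta(X, dc=1.0)
```

Points that score high on both ρ and δ are exactly the density-peak candidates that the decision graph exposes.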

2.3. Quasi-Monte Carlo. As a statistical testing method, the Monte Carlo method has been widely used in machine learning. The Quasi-Monte Carlo method is similar but theoretically different: its advantage is that it generates a deterministic, super-uniformly distributed sequence (called a low-discrepancy sequence in mathematics) instead of the pseudo-random sequence generated by the Monte Carlo method. The Quasi-Monte Carlo method has been widely used in machine learning [24, 25]. Specifically, the study in [24] utilizes the Quasi-Monte Carlo method to reduce the computational overhead of neural network parameter optimization, while the study in [25] generates a Quasi-Monte Carlo sequence to perform a feature map and obtain low-rank features. Similarly, we generate a Quasi-Monte Carlo sequence for data sampling. Next, we briefly describe the Quasi-Monte Carlo sequence.
A Quasi-Monte Carlo sequence is a deterministic, super-uniformly distributed sequence with low discrepancy; any sufficiently long subsequence is uniformly distributed in the feature space. The most widely used Quasi-Monte Carlo sequences include the Halton sequence [26], the Faure sequence [27], and Niederreiter's (t, s)-sequences [28]. In our work, the Halton sequence is selected to perform the sampling strategy. The Halton sequence is one of the standard low-discrepancy sequences, used to generate super-uniformly distributed numbers. Compared with the pseudo-random numbers generated by the Monte Carlo method, the Halton sequence is mathematically proven to have smaller volatility. Specifically, the approximation error of the Halton sequence is governed by the discrepancy of the sequence x_1, ..., x_N through the Koksma-Hlawka inequality:

|(1/N) Σ_{k=1}^{N} f(x_k) − ∫_{I^s} f(x) dx| ≤ V(f) D_N,

where the left-hand side is the integration error, V(f) is the Hardy-Krause variation of the function f, and D_N is the discrepancy of (x_1, ..., x_N).
Because D_N is of order O((log N)^(s−1)/N), the approximation error of the Quasi-Monte Carlo method is of order close to O(1/N). By contrast, the error of the pseudo-random numbers generated by the Monte Carlo method is of order O(1/√N). Comparing the two, the error order of the Quasi-Monte Carlo method is smaller than that of the Monte Carlo method. Note that the above discussion only gives an upper bound on the approximation error; in practice, the convergence of the Halton sequence is much faster than this bound suggests. Generally speaking, the Quasi-Monte Carlo method converges much faster than the Monte Carlo method, and the numbers it generates are more uniform.
The Monte Carlo method generates pseudo-random numbers, while the Quasi-Monte Carlo method generates quasi-random numbers. Figure 1 compares the two on a two-dimensional plane. As shown in Figure 1, the pseudo-random numbers are not uniformly distributed in some places, whereas the Halton sequence is highly uniform over the whole space. Intuitively, the Quasi-Monte Carlo method covers the space more comprehensively, while the Monte Carlo sample leaves more blank areas. Hence, this paper adopts the Halton sequence to sample the original data and proposes a new density peaks clustering algorithm on this basis.
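The Halton sequence is available in SciPy (scipy.stats.qmc, SciPy ≥ 1.7); the sketch below contrasts its coverage with pseudo-random sampling. The 8×8 grid count is an ad-hoc uniformity proxy of our own, not a measure used in the paper:

```python
import numpy as np
from scipy.stats import qmc

# Quasi-random: a 2-D Halton sequence (deterministic, low-discrepancy)
quasi = qmc.Halton(d=2, scramble=False).random(n=256)

# Pseudo-random: ordinary uniform samples, for comparison
pseudo = np.random.default_rng(0).random((256, 2))

def occupied_cells(pts, g=8):
    """Crude uniformity proxy: how many cells of a g x g grid contain a point."""
    idx = np.minimum((pts * g).astype(int), g - 1)
    return len({tuple(i) for i in idx})

# The Halton points leave few (usually no) grid cells empty
print(occupied_cells(quasi), occupied_cells(pseudo))
```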

Description of the Algorithm
In this section, a novel improved density peaks clustering algorithm based on the Quasi-Monte Carlo method (QMC-DPC) is proposed to improve the performance of DPC. Specifically, the proposed method includes two components: the feature reduction module and the data sampling module.

3.1. The Feature Reduction Module.
In this module, we aim to reduce the feature dimension of data sets. e original data set X ∈ R n×m will be transformed to X ′ ∈ R n×d by various feature reduction methods, where d ≤ m. Our goal is to retain the original information as much as possible while reducing the dimension of the data.
In practice, we utilize both linear and nonlinear feature reduction methods, namely PCA, AE, and t-SNE. First, we perform zero-mean normalization on X: for x_1, x_2, ..., x_n ∈ X, we calculate the mean x̄ = (1/n) Σ_{i=1}^{n} x_i and the standard deviation s = sqrt((1/n) Σ_{i=1}^{n} (x_i − x̄)²), and obtain the normalized values x̃_i = (x_i − x̄)/s, i ∈ {1, ..., n}. Then, PCA, AE, and t-SNE are applied to the normalized data set X. For PCA, we choose a number of principal components smaller than the original dimension of the data set (except for two-dimensional data sets, whose original dimension is kept). For AE, we use a three-layer network consisting of an encoder, a hidden layer, and a decoder; the encoder maps the input to the d hidden-layer units, and the decoder maps them back to the input dimension. For input data X, we take the hidden-layer features as X′. For t-SNE, the similarity between data points is measured by probability instead of Euclidean distance: similarities in the original feature space are computed with Gaussian joint probabilities, while the heavy-tailed Student t-distribution measures similarity in the low-dimensional space; the reduced features X′ are obtained by minimizing the KL divergence between the two. Figure 2 shows the two-dimensional features obtained by PCA, AE, and t-SNE on Waveform and Landsat, whose original dimensions exceed 20. From Figure 2, it can be seen that the low-dimensional features mapped from the higher-dimensional data are distinguishable. In Section 4, we discuss how to select the feature reduction method through experimental analysis.
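The normalization and reduction steps can be sketched with scikit-learn; the 10-dimensional toy data are hypothetical, and the auto-encoder branch is omitted since it needs a deep-learning framework:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # toy stand-in for a UCI data set

# Zero-mean normalization: (x - mean) / std, per feature
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

# Linear reduction: keep d = 2 principal components
X_pca = PCA(n_components=2).fit_transform(Xn)

# Nonlinear reduction: t-SNE, with hyperparameters in the spirit of Section 4
X_tsne = TSNE(n_components=2, perplexity=30, learning_rate=500,
              random_state=0).fit_transform(Xn)
```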

3.2. The Data Sampling Module.
Although the feature reduction module compresses the feature dimension of the data set, the computational complexity of DPC is still O(n²). In this module, we aim to reduce the time overhead of DPC. Hence, an improved density peaks clustering algorithm based on the super-uniform Quasi-Monte Carlo sequence (QMC-DPC) is proposed. In summary, we use the super-uniform Quasi-Monte Carlo sequence to sample the low-dimensional feature space of the data set; the representative Quasi-Monte Carlo points are then used to calculate δ_i and ρ_i instead of the original data. Generally speaking, the number of selected Quasi-Monte Carlo points is much smaller than the size n of the original data set. A detailed description of QMC-DPC follows.
Specifically, we first define two basic concepts: (1) circular data unit C: the circle centered at a Quasi-Monte Carlo point with radius r; (2) unit density C_ud: the number of data points contained in a circular data unit C. Let X′ = {x′_1, ..., x′_n}, X′ ∈ R^{n×d}, be the low-dimensional feature data set obtained by the feature reduction module. We generate N_0 Quasi-Monte Carlo points in the feature space. With the Quasi-Monte Carlo points as centers, the corresponding units C are determined under an appropriate radius r (when d is small, our experiments suggest r = d/(2√N_0)). Then, according to whether C contains data points or not, the circular data units are divided into two categories: the nonempty unit set C_ne = {C | C_ud > 0} and the empty unit set C_e = {C | C_ud = 0}. Since an empty unit does not contain any data, the empty units and their Quasi-Monte Carlo points are eliminated. The effect is shown in Figure 3.
As shown in Figure 3, the remaining nonempty Quasi-Monte Carlo points are distributed around the sample points, while the removed empty Quasi-Monte Carlo points lie far from them. Hence, the distribution of the original data set can be represented by the nonempty Quasi-Monte Carlo points, and the local density of the original data set can be estimated by the unit density C_ud. Therefore, it is reasonable to use the nonempty Quasi-Monte Carlo points to calculate the local density ρ and the minimum distance δ instead of the original data points. Next, for all nonempty Quasi-Monte Carlo points (denoting their number by N, with generally N ≪ N_0), the pairwise distances are calculated to obtain the distance matrix A, where A ∈ R^{N×N} is a symmetric matrix with zero diagonal, and d_N denotes all elements of A sorted in ascending order.
When N is too small, d_c = d_{N × 2%} may be zero, which would eliminate the effect of the cutoff distance d_c. Hence, we remove the zero elements of A and take the distance at the 2% position of the remaining N² − N elements as d_c. Then, we use the definitions of ρ_i and δ_i to calculate both quantities for each nonempty Quasi-Monte Carlo point and draw the decision graph. Figure 4 shows the decision graphs of QMC-DPC and DPC on Waveform.
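A sketch of the sampling step under the definitions above; qmc_sample, its defaults, and the radius heuristic are our reading of the text, not the authors' code. Data are first scaled into the unit cube so that a single radius applies to every dimension:

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import cdist

def qmc_sample(Xf, n0=400, r=None):
    """Cover the feature space with Halton points and keep the nonempty units.
    r defaults to our reading of the paper's heuristic radius (an assumption)."""
    d = Xf.shape[1]
    if r is None:
        r = d / (2 * np.sqrt(n0))
    lo, hi = Xf.min(axis=0), Xf.max(axis=0)
    Xs = (Xf - lo) / (hi - lo)                   # scale data into the unit cube
    q = qmc.Halton(d=d, scramble=False).random(n0)
    dens = (cdist(q, Xs) <= r).sum(axis=1)       # unit density C_ud of each unit
    keep = dens > 0                              # drop the empty units
    return q[keep], dens[keep], Xs

def cutoff_distance(q, pct=0.02):
    """d_c: the value 2% of the way into the ascending nonzero pairwise distances."""
    a = cdist(q, q)
    vals = np.sort(a[a > 0])                     # the N^2 - N off-diagonal elements
    return vals[int(len(vals) * pct)]

rng = np.random.default_rng(0)
Xf = np.vstack([rng.normal(size=(200, 2)), rng.normal(size=(200, 2)) + 5])
q, dens, Xs = qmc_sample(Xf)
dc = cutoff_distance(q)
```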
As shown in Figure 4, the density peaks obtained by QMC-DPC are easier to distinguish than those of DPC, especially on the low-dimensional features generated by AE and t-SNE. Meanwhile, the number of points in the QMC-DPC decision graph is smaller than in DPC's: QMC-DPC (PCA), QMC-DPC (AE), and QMC-DPC (t-SNE) compute 2742, 2499, and 2989 points in the decision graph, respectively, while DPC computes 5000. This further demonstrates the effectiveness of the Quasi-Monte Carlo sampling method, which can be summarized in three aspects: (1) thanks to the super-uniformity of the Quasi-Monte Carlo sequence, the data sampling is more comprehensive, which reduces bias (illustrated by Figure 1); (2) the number of selected nonempty Quasi-Monte Carlo points is small, which greatly reduces the time and space overhead (illustrated by Figure 3); (3) data points located in dense areas are difficult to distinguish based on δ and ρ because their values are similar, whereas Quasi-Monte Carlo points essentially sample the local density, so the distinction between selected nonempty Quasi-Monte Carlo points is enlarged. Finally, following the nearest-distance principle, we propose a two-stage classification strategy: (i) the density peaks are selected as class centers, and each remaining nonempty Quasi-Monte Carlo point is assigned to its nearest density peak; this yields the clustering result for all nonempty Quasi-Monte Carlo points; (ii) each data point of X′ is assigned to its nearest nonempty Quasi-Monte Carlo point. Since the feature mapping is unique, the classification of X is equivalent to that of X′, so this second step yields the final clustering result for all data points of X.
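The two-stage strategy amounts to two nearest-neighbor assignments; the tiny hand-picked "QMC points" below are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import cdist

def two_stage_assign(q, peaks_idx, Xs):
    """Stage 1: each QMC point takes the label of its nearest density peak.
    Stage 2: each data point takes the label of its nearest QMC point."""
    labels_q = cdist(q, q[peaks_idx]).argmin(axis=1)   # stage 1
    labels_x = labels_q[cdist(Xs, q).argmin(axis=1)]   # stage 2
    return labels_x

# Hand-picked toy "QMC points" with two density peaks (indices 0 and 2)
q = np.array([[0.10, 0.10], [0.20, 0.15], [0.80, 0.90], [0.75, 0.85]])
peaks_idx = np.array([0, 2])
Xs = np.array([[0.15, 0.10], [0.78, 0.88], [0.05, 0.20]])
print(two_stage_assign(q, peaks_idx, Xs))  # -> [0 1 0]
```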
After the above discussion, QMC-DPC is depicted in Algorithm 1 and the whole process is shown in Figure 5.

3.3. Algorithm Complexity Analysis.
The key of DPC is to draw the decision graph based on ρ_i and δ_i. Our work retains this idea of choosing cluster centers, but QMC-DPC calculates ρ_i and δ_i only for the N nonempty Quasi-Monte Carlo points remaining after screening, making the computational complexity far less than that of DPC.
For the data set X′, DPC takes O(n²) space to store the distance matrix. The space complexity of QMC-DPC mainly consists of O(N_0) for generating the Quasi-Monte Carlo points, O(N) for retaining the nonempty ones, and O(N²) for storing the distance matrix of nonempty Quasi-Monte Carlo point pairs; the space complexity of QMC-DPC is therefore O(N_0 + N + N²). However, when n is relatively small, the space complexity of QMC-DPC can become comparatively large due to generating N_0 Quasi-Monte Carlo points.
When calculating ρ_i and δ_i, DPC needs to compute the distance matrix, with time complexity O(n²). After selecting the cluster centers, the time complexity of classifying the data points is O(n × k), so the time complexity of DPC is O(n² + nk). For QMC-DPC, the main costs are generating and screening the Quasi-Monte Carlo points and computing their distance matrix, giving a time complexity of O(N_0 + Nn + N²). In general, N ≪ N_0 and N_0 ≪ n, making the time complexity of QMC-DPC less than that of DPC. However, when n is relatively small, the time cost of QMC-DPC exceeds that of DPC. In the experiments, we further show that even with the feature reduction module included, the proposed algorithm retains its time advantage.

4.1. Experimental Setup.
To verify the performance of QMC-DPC, the proposed method is compared with related clustering algorithms, including DPC-KNN-PCA [17], SNN-DPC [18], DLORE-DP [16], DPC [5], AP [4], DBSCAN [3], and K-means [2]. The nearest-neighbor number is set to 4 in SNN-DPC. The ratio of low-density points in DLORE-DP is set to 0.2. For DBSCAN, the parameter MinPts is set to 3 and ε is set empirically. K-means needs the number of classes specified in advance. The data sets adopted in this section fall into two major categories: unlabeled data sets and labeled data sets; their details are listed in Table 1. All labeled data sets are UCI data sets. Among the unlabeled data sets, Flame, Aggregation, and S2 are synthetic; KDD is a biological data set used to verify the superiority of our algorithm on a large-scale, high-dimensional data set. Four evaluation criteria are adopted on the labeled data sets: Accuracy (Acc), F-measure (F), Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI). They are described as follows. Assume that X = {x_1, x_2, ..., x_n} is the data set, and let Y = {y_1, y_2, ..., y_n} and Ŷ = {ŷ_1, ŷ_2, ..., ŷ_n} denote the real labels and the predicted labels, respectively. Acc is defined as

Acc = (1/n) Σ_{i=1}^{n} 1{y_i = map(ŷ_i)},

where map(·) is a permutation mapping that matches the predicted labels with the real labels using the Hungarian algorithm. For a class, precision (Prec) is the fraction of samples assigned to it that truly belong to it, and recall (Rec) is the fraction of its true samples identified by the classifier. Prec, Rec, and the F-measure are related by

F = (1 + β²) · Prec · Rec / (β² · Prec + Rec),

where β is a nonnegative real number set to 1. For each class U_i given by the real labels, the best-matching predicted cluster V_j determines its F value:

F(U_i) = max_j F(U_i, V_j),

and the final F value is the weighted average

F = Σ_i (|U_i| / n) · F(U_i).

The Normalized Mutual Information (NMI) measures the information that the predicted labels Ŷ share with the ground truth Y.
NMI is defined as

NMI(Y, Ŷ) = I(Y, Ŷ) / sqrt(H(Y) · H(Ŷ)),

where I(Y, Ŷ) is the mutual information between the clustering result and the ground truth, and H(Y) and H(Ŷ) denote their entropies. The Adjusted Rand Index (ARI) is an extension of the Rand Index (RI). Considering all pairs of data points, let a be the number of pairs in the same class in both U and V, b the number of pairs in different classes in both, c the number of pairs in different classes in U but the same class in V, and d the number of pairs in the same class in U but different classes in V. Then RI = (a + b)/(a + b + c + d), and ARI is defined as

ARI = (RI − E[RI]) / (max(RI) − E[RI]),

where E[RI] is the expected Rand Index under random labeling. The value of ARI lies in the range [−1, 1]. The upper bound of all these criteria is 1, and larger values indicate better clustering results.
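The four criteria can be computed with SciPy and scikit-learn; clustering_accuracy below implements map(·) via the Hungarian algorithm (scipy.optimize.linear_sum_assignment), while NMI and ARI come ready-made:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Acc with the Hungarian algorithm playing the role of map(.)."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                     # contingency counts
    row, col = linear_sum_assignment(-cost) # maximize correctly matched samples
    return cost[row, col].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])       # a relabelled but perfect clustering
acc = clustering_accuracy(y_true, y_pred)
nmi = normalized_mutual_info_score(y_true, y_pred)
ari = adjusted_rand_score(y_true, y_pred)
```

For the relabelled perfect clustering above, all three scores reach their upper bound of 1.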
In the feature reduction module, some parameters are set in advance. For t-SNE, the learning rate is 500, the perplexity is 30, and the number of epochs is 800. For AE, the learning rate is 0.01, the optimizer is Adam, and the number of epochs is 300.

4.2. Experimental Results on Labeled Data Sets.
In this section, the 9 UCI data sets in Table 1 are used to verify the performance of QMC-DPC. All data are normalized to [0, 1]. To avoid extreme cases, each algorithm is run 10 times and the average results are recorded. The values of the evaluation criteria are shown in Table 2, with the best values highlighted in bold. The relevant parameters of QMC-DPC are recorded in Table 3.
As shown in Table 2, our proposed algorithm is superior to the other algorithms on the whole. Acc is the ratio of correctly predicted samples to total samples; in terms of Acc, QMC-DPC achieves the highest performance on all data sets except Waveform and Landsat. In particular, QMC-DPC is 33.6% and 34.3% higher than DPC on Zoo and Pima, respectively. F-measure indicates the matching degree between predicted and true labels, being the weighted harmonic mean of precision and recall; QMC-DPC achieves the highest F-measure on nearly half of the data sets. NMI quantifies the similarity between the predicted and true labels and measures the robustness of the algorithm; QMC-DPC achieves the highest NMI on all data sets except Landsat, Pima, and Zoo, and is 21.4% higher than DPC on Waveform. ARI measures the agreement between two data partitions; QMC-DPC achieves the highest ARI on all data sets except Breast and Landsat, and its ARI is 73.2% higher than DPC's on Zoo.
In addition, the evaluation criterion values of QMC-DPC (PCA), QMC-DPC (AE), and QMC-DPC (t-SNE) are similar, and their performance is better than that of DPC on the whole. These results indicate that combining the feature reduction module with the data sampling module improves model performance.

4.3. Experimental Results on Unlabeled Data Sets.
Since there are no true labels for the unlabeled data sets, Acc, F-measure, NMI, and ARI cannot be applied to them. To compare performance on the unlabeled data sets, the Silhouette Coefficient (SC) and Calinski-Harabasz index (CH) are used. For SC, we first calculate the silhouette coefficient of each data point i:

s(i) = (b(i) − a(i)) / max{a(i), b(i)},

where a(i) is the average dissimilarity between data point i and the other points in its class, and b(i) is the minimum, over the other classes, of the average dissimilarity between data point i and that class. The silhouette coefficient of the data set is the mean of s(i) over all points:

SC = (1/n) Σ_{i=1}^{n} s(i).

The CH index is defined as

CH = (trace B / (k − 1)) / (trace W / (n − k)),

where trace B = Σ_{i=1}^{k} n_i ‖u_i − u‖², n_i is the number of data points in class i, u_i is the mean of the data points in class i, and u is the mean of all data points; trace W = Σ_{j=1}^{k} Σ_{x_i ∈ class j} ‖x_i − u_j‖², and k is the number of clusters. The larger the CH value, the better the clustering result.
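Both internal criteria are available in scikit-learn; a sketch on a well-separated, hypothetical toy labelling:

```python
import numpy as np
from sklearn.metrics import silhouette_score, calinski_harabasz_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(60, 2)), rng.normal(size=(60, 2)) + 8])
labels = np.repeat([0, 1], 60)            # a well-separated two-cluster labelling

sc = silhouette_score(X, labels)          # mean of s(i) over all points
ch = calinski_harabasz_score(X, labels)   # (trace B/(k-1)) / (trace W/(n-k))
```

Since the two blobs are far apart relative to their spread, both scores come out high.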
In this section, three synthetic data sets and KDD are selected to verify the performance of QMC-DPC. Flame, Aggregation, and S2 are classical synthetic data sets; KDD is a large-scale data set with high-dimensional features. Table 4 shows the SC and CH of all algorithms on the unlabeled data sets, with the best values highlighted in bold. The relevant parameters of QMC-DPC are recorded in Table 3.
As shown in Table 4, our proposed method obtains the best clustering results on the whole, especially QMC-DPC (AE). The "-" entries in Table 4 indicate that an algorithm could not execute because it exceeded the virtual memory. For SC, QMC-DPC (t-SNE) is higher than DPC on Flame, while DPC matches our proposed method on Aggregation and S2, as does DPC-KNN-PCA on S2. In general, QMC-DPC (AE) and QMC-DPC (t-SNE) perform better than QMC-DPC (PCA) except on KDD; limited by the t-SNE method, QMC-DPC (t-SNE) fails to cluster KDD. A further comprehensive analysis is given in Section 4.6. In addition, we visualize the classification results on the synthetic data sets: Figure 6 shows the results on Aggregation and S2.

4.4. Experimental Results on Running Time.
In this subsection, we further verify that our proposed method can effectively reduce the computational overhead. We select data sets with more than 2000 data points and record the running time in Table 5.
As shown in Table 5, compared with DPC, SNN-DPC, and AP, QMC-DPC achieves the best running time. QMC-DPC is at least 34.47%, 61.80%, 25.59%, and 50.85% faster than DPC on Segment, Waveform, Landsat, and S2, respectively. Generally speaking, the larger the data set, the more time is saved. For KDD, QMC-DPC (PCA) and QMC-DPC (AE) obtain results, while QMC-DPC (t-SNE) runs out of memory; this is a limitation of the t-SNE method. DPC, SNN-DPC, and AP also run out of memory on KDD, which further confirms the effectiveness of our method. How to choose among QMC-DPC (PCA), QMC-DPC (AE), and QMC-DPC (t-SNE) is discussed in Section 4.6. In addition, the running times of QMC-DPC (PCA) and QMC-DPC (AE) are close, while QMC-DPC (t-SNE) is costlier: t-SNE itself requires a large computational overhead, whereas the auto-encoder has only a shallow structure without a large number of training parameters. Furthermore, we compare the time complexity of our method with that of the baseline methods; the results are recorded in Table 6. Here, n is the number of data points, k the number of cluster categories, m the number of neighbors, t the number of iterations, and N the number of selected Quasi-Monte Carlo points. Although the time complexity of QMC-DPC contains a quadratic term, N is much smaller than n in practice, so the time overhead of QMC-DPC is significantly reduced, as Table 5 confirms.

4.5. Experimental Results of Sensitivity Analysis.
In this section, we conduct a parameter sensitivity analysis from multiple aspects, such as how the feature dimension affects model performance and running time. Specifically, we first calculate Acc, F, NMI, and ARI on the UCI data sets with the feature dimension varied in the range [16, 24]. The results are recorded in Tables 7-10, respectively.
From Tables 7-10, it can be seen that model performance decreases slightly as the dimension increases on the whole. This is caused by the information loss of the sampling strategy as the dimension grows: with more dimensions, the data distribution becomes more complex. There are two ways to reduce this information loss: (1) increase the number of Quasi-Monte Carlo points, or (2) appropriately increase the radius r of the circular data unit. With the first method, the complexity of generating and storing the Quasi-Monte Carlo points is O(N_0), so the time and space overhead grow as the number of points increases. With the second method, the selection of r is crucial: if r is so large that a unit contains the entire data set, QMC-DPC no longer performs a sampling operation. Since the main purpose of this paper is to reduce the time overhead of DPC, we give priority to the second method.
In addition, we further study the impact of the feature dimension on model performance and running time, with the feature dimension extended to the range [2, 9]. Here, KDD is selected and the results are shown in Figure 7. As shown in Figure 7, QMC-DPC (AE) and QMC-DPC (PCA) achieve high performance in terms of SC. In contrast, they have poor CH values when the feature dimension is 7, while the CH value increases sharply at dimension 9. The reason is that generating more Quasi-Monte Carlo points to execute the sampling strategy also increases the running time considerably. The relevant parameters on KDD are recorded in Table 11.

4.6. Algorithm Summary. Based on the above experiments, we give a comprehensive discussion of QMC-DPC. As shown in Tables 2 and 4, QMC-DPC achieves the best performance on the whole. On the UCI data sets, QMC-DPC (PCA), QMC-DPC (AE), and QMC-DPC (t-SNE) obtain the highest values 8, 7, and 9 times, respectively; QMC-DPC combined with nonlinear feature reduction methods achieves better performance on the whole. In terms of running time, our proposed method also holds a clear advantage. As the feature dimension changes, model performance and the various evaluation criteria are affected differently; in general, performance decreases as the feature dimension increases, due to the information loss caused by sampling. In Section 4.5, we proposed two remedies, generating more Quasi-Monte Carlo points and increasing the radius r, both of which expand the sampling coverage. In our method, we trade off running time against model performance by generating fewer Quasi-Monte Carlo points and setting fewer iterations for t-SNE and AE.
The above operations reduce the running time at the cost of some model performance. In particular, we also increase the radius r to reduce the information loss.
We summarize the following views on QMC-DPC:
(i) In general, we choose QMC-DPC combined with nonlinear feature reduction methods, such as QMC-DPC (AE) and QMC-DPC (t-SNE). When dealing with a large-scale data set, we prefer QMC-DPC (AE).
(ii) To reduce information loss, we give priority to expanding the radius r; secondly, we consider adding Quasi-Monte Carlo points.
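The views above rest on the core saving of QMC-DPC: the local density ρ and distance δ are computed for the N selected QMC points instead of all n original points. A minimal numpy sketch of this O(Nn) step (the function name is illustrative, and the Gaussian-kernel density is one common DPC variant, not necessarily the paper's exact formula):

```python
import numpy as np
from scipy.spatial.distance import cdist

def rho_delta(qmc_pts, data, d_c):
    """DPC quantities for the N selected QMC points against the
    n original points: rho via an O(Nn) distance matrix, delta
    among the N points themselves."""
    d = cdist(qmc_pts, data)                  # N x n, the O(Nn) step
    rho = np.exp(-(d / d_c) ** 2).sum(axis=1) # Gaussian-kernel local density
    dq = cdist(qmc_pts, qmc_pts)              # N x N distances
    delta = np.empty(len(qmc_pts))
    for i in range(len(qmc_pts)):
        higher = rho > rho[i]                 # QMC points of higher density
        # delta: distance to the nearest higher-density point;
        # the global peak takes the maximum distance by convention
        delta[i] = dq[i, higher].min() if higher.any() else dq[i].max()
    return rho, delta
```

Since N ≪ n, the N x n matrix dominates the cost, replacing the n x n matrix of standard DPC.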
In addition, there are still directions to explore for our proposed algorithm in the future, which are summarized as follows:
(i) How to select feature dimensions is a heuristic task. In future work, we hope to build a multi-layer auto-encoder and construct the loss function based on hidden-layer features, designing the auto-encoder as a multi-task neural network.
(ii) We hope to propose a more comprehensive sampling method to reduce the loss of information. We can take each sample point itself as the center for sampling and then filter out the data samples in sparse areas. Finally, we need a strategy for the classification of outliers.

Conclusion
In this paper, a new density peaks clustering algorithm with high computational efficiency is proposed. The original feature space is compressed by different feature reduction methods. We then sample the reduced feature space based on the low-discrepancy sequence generated by the Quasi-Monte Carlo method. Our work can effectively overcome the high computational overhead of DPC while improving the model performance. Theoretically, the time complexity can be reduced from O(n²) to O(Nn), where N ≪ n.
The experimental results show that QMC-DPC improves the model performance of DPC while greatly reducing the time overhead as the data set size increases.

Data Availability
The data used to support the findings of this study were supplied by https://archive.ics.uci.edu/ml/index.php.

Conflicts of Interest
The authors declare that they have no conflicts of interest.