A Large-Scale k-Nearest Neighbor Classification Algorithm Based on Neighbor Relationship Preservation

Owing to the absence of assumptions about the underlying data distribution and its strong generalization ability, the k-nearest neighbor (kNN) classification algorithm is widely used in face recognition, text classification, emotion analysis, and other fields. However, kNN must compute the similarity between the unlabeled instance and all the training instances during prediction, so it is difficult to apply to large-scale data. To overcome this difficulty, a growing number of acceleration algorithms based on data partition have been proposed. However, they lack a theoretical analysis of the effect of data partition on classification performance. This paper analyzes that effect theoretically using empirical risk minimization and proposes a large-scale k-nearest neighbor classification algorithm based on neighbor relationship preservation. The search for nearest neighbors is converted into a constrained optimization problem, and an estimate is then given of the difference between the objective function values of the optimal solutions with and without data partition. According to this estimate, minimizing the similarity between instances in different divided subsets largely reduces the effect of data partition. The minibatch k-means clustering algorithm is chosen to perform the data partition for its effectiveness and efficiency. Finally, the nearest neighbors of the test instance are continuously searched in the set generated by successively merging candidate subsets until they no longer change, where the candidate subsets are selected by the similarity between the test instance and the cluster centers. Experimental results on public datasets show that the proposed algorithm largely preserves the same nearest neighbors as the original kNN classification algorithm with no significant difference in classification accuracy, and achieves better results than two state-of-the-art algorithms.


Introduction
The k-nearest neighbor (kNN) classification algorithm is a lazy learning method that does not require a training process but simply stores the training instances [1]. Given a test instance, the kNN classification algorithm first calculates the similarity between the given instance and all instances in the training set, then finds the k nearest instances according to the similarity, and finally predicts its label by majority voting over the categories of these instances. Owing to its substantial theoretical foundation, strong generalization performance, and lack of assumptions on the data distribution, the kNN classification algorithm has been widely used in many fields [2][3][4][5][6]. It was selected as one of the top 10 classic algorithms in data mining [7].
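As a concrete illustration, the prediction step described above can be sketched in a few lines of Python. This is a minimal brute-force version for illustration only; the function name and toy data are ours, not from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x by majority vote among its k nearest
    training instances (Euclidean distance; smaller = more similar)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training instance
    neighbor_idx = np.argsort(dists)[:k]          # indices of the k closest instances
    votes = Counter(y_train[neighbor_idx])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))  # → 0
```

Note that the distance to every training instance is computed for a single prediction; this cost is exactly what the acceleration algorithms discussed below try to avoid.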
With the rapid development of sensing and Internet technology, data from all walks of life is increasing by orders of magnitude; big data has become the focus of government, academia, and industry; and the research results of data analysis and mining have been widely used in the Internet of Things, healthcare, e-commerce, finance, and so on. However, kNN needs to compute the similarity between the target instance and all the training instances, so its execution efficiency faces a great challenge in the big data environment. An increasing number of acceleration algorithms have been proposed to improve the efficiency of the kNN classification algorithm on large-scale data [8][9][10]. The existing acceleration algorithms for kNN classification can usually be divided into two categories from the perspective of data preprocessing: kNN classification based on data partition (DP-kNN) algorithms and kNN classification based on instance selection (IS-kNN) algorithms [11,12].
The basic idea of the DP-kNN algorithm is to divide the training set into several subsets by feature space partition and then classify the test instances using some of the divided subsets. Specifically, the feature space of the training set is divided into several subregions; then, the divided subregion to which the test instance belongs is determined; and finally, the k-nearest neighbors are found in the subset of instances corresponding to that region. These algorithms mainly exploit the local learning characteristic of the kNN classification algorithm: the label predicted for the test instance is only related to the most similar instances in the training set. Therefore, they try to ensure that the k-nearest neighbors of each instance in its divided subset are consistent with the ones in the original dataset. However, most of the existing data partition algorithms scarcely analyze this consistency from a theoretical point of view, so it is difficult to guarantee that they have high generalization performance.
Different from the DP-kNN algorithm, the IS-kNN algorithm does not use all the training instances. Instead, it finds the k-nearest neighbors of the test instance in a representative subset of the training set, where the subset is obtained by an instance selection algorithm. Because the representative subset is smaller than the original training set, the efficiency of finding neighbors for the test instance is greatly improved. Instance selection is an important data preprocessing method; it removes from the training set the noisy instances and the instances far away from the classification decision boundary, according to the similarities and label differences of the training instances. Since most datasets contain more instances far from the decision boundary than close to it, an instance selection algorithm can greatly reduce the size of the training set while keeping the classification accuracy relatively unchanged. However, the time complexity of most existing instance selection algorithms is quadratic in the training set size, which makes it difficult to process large-scale data effectively. Furthermore, such an algorithm uses only part of the data rather than all of it, so its generalization performance can be negatively affected.
For the problem of the lack of consistency analysis of nearest neighbors under data partition, this paper theoretically analyzes the classification performance from the perspective of optimization. The contributions of this paper are as follows:
(1) Theoretically analyzing the effect of data partition on the classification performance of the kNN classification algorithm and giving a measurement of the difference between the k-nearest neighbors obtained with and without data partition
(2) Showing, based on the theoretical analysis, that minimizing the similarity between instances in different divided subsets can largely reduce the effect of data partition on the classification
(3) Adopting the minibatch k-means clustering algorithm to execute the data partition, because it divides the dataset into subsets with large differences in similarity
(4) Searching the k nearest neighbors of the test instance in the union of several candidate divided subsets, where the candidate divided subsets are selected by the similarity between the test instance and the cluster centers
(5) Showing experimentally on public datasets, in comparison with two existing typical algorithms, that the proposed algorithm can largely keep the same k nearest neighbors and obtain classification accuracy similar to the original kNN classification algorithm
The rest of this paper is organized as follows. Section 2 reviews related work on kNN classification acceleration algorithms. Section 3 analyzes the effect of data partition on the classification performance of the kNN classification algorithm and proposes a novel algorithm, called the large-scale kNN classification algorithm based on neighbor relationship preservation (NPR-kNN algorithm). Section 4 reports the experimental results in comparison with existing methods. Section 5 gives the conclusion of this paper and the future work.

Related Work
The existing acceleration algorithms for k-nearest neighbor classification can be categorized, from the perspective of data preprocessing, into acceleration algorithms based on data partition (DP-kNN algorithms) and acceleration algorithms based on instance selection (IS-kNN algorithms).
The DP-kNN algorithm mainly consists of three steps: the feature space of the current training set is first divided into several subregions; then, the divided region containing the test instance is determined; and finally, the k-nearest neighbors are found in the subset of instances within that region. Because the kNN classification algorithm is a local learning algorithm, it is necessary to ensure that the neighbor sequences of each instance before and after data partition are consistent when dividing the training set. Most of the existing data partition algorithms for kNN classification are based on a binary tree structure: starting from the original set, the current data is recursively divided into two subsets of similar size until a termination condition is met. Friedman et al. [13] first proposed the concept of the KD tree, which uses the attributes of the data to recursively divide the k-dimensional feature space into several subregions and treats the data falling in each region as a subset. However, for high-dimensional complex data, some highly informative attributes may never be used in the process of building the tree. To solve this problem, Verma et al. [14] proposed the maximum-variance KD tree (MKD-tree) algorithm, which selects the attribute with the largest variance of attribute values on the current data as the node for division. The MKD-tree algorithm uses only a single attribute each time the current data is divided, which causes partial information loss. For this reason, a binary tree algorithm based on principal component analysis was proposed [15], which divides the current data based on the scores of the first m principal components and the corresponding median values.
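The variance-based splitting step of the MKD-tree described above can be sketched as follows. This is a simplified illustration with an assumed `min_size` stopping rule, not the authors' implementation.

```python
import numpy as np

def split_max_variance(X, min_size=8):
    """One MKD-tree-style partition: recursively split the data on the
    attribute with the largest variance, at its median value."""
    if len(X) <= min_size:
        return [X]                                 # small enough: stop splitting
    j = int(np.argmax(X.var(axis=0)))              # attribute with the largest variance
    med = np.median(X[:, j])
    left, right = X[X[:, j] <= med], X[X[:, j] > med]
    if len(left) == 0 or len(right) == 0:          # degenerate split (all values equal)
        return [X]
    return split_max_variance(left, min_size) + split_max_variance(right, min_size)

rng = np.random.default_rng(0)
subsets = split_max_variance(rng.random((64, 3)))
print([len(s) for s in subsets])                   # → [8, 8, 8, 8, 8, 8, 8, 8]
```

The median split yields subsets of similar size, matching the balanced-tree behavior described in the text.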
In addition, there also exist some data partition algorithms based on the structure of the nearest neighbor graph and hash approximation [16][17][18][19]. However, most of the existing data partitioning algorithms do not theoretically study the effect of data partitioning on the kNN classification algorithm.
The IS-kNN algorithm mainly searches the k-nearest neighbors of the test instance in a representative subset of the training set with a relatively small size. The representative subset is obtained by various instance selection algorithms [12]. Hart [20] proposed the condensed nearest neighbor (CNN) algorithm based on 1NN, which obtains a subset S of the training set T such that the instances in the set T − S are correctly classified by S; that is, the instances in T − S have the same labels as their nearest neighbors in S. The CNN algorithm first randomly selects an instance from the training set and puts it into the set S. Then, it repeatedly selects an instance from T − S and determines whether its label is the same as that of its nearest neighbor in S: if it is inconsistent, the instance is put into S, and this process is repeated until no instance in T − S is misclassified by S. Although the CNN algorithm can obtain a relatively small subset S, it is very sensitive to the order of reading the data, and its time complexity is quadratic in the number of training instances. To overcome this difficulty, Angiulli [21] proposed the fast condensed nearest neighbor (FCNN) algorithm. The FCNN algorithm first selects the instance closest to each class center and puts them into the set S; it then iteratively selects representative instances from the instances in T − S that are not correctly classified by S, puts them into S, and repeats the selection until S correctly classifies all instances in T − S. The FCNN algorithm is independent of the order of reading the data, but its time complexity is O(|T||S|), where |T| and |S| are the sizes of the sets T and S. To improve the efficiency of the FCNN algorithm on large-scale data, [22] proposed an FCNN algorithm based on parallel distributed computing. Although these algorithms greatly reduce the data size, they do not consider the impact of noisy instances.
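Hart's condensed selection rule described above can be sketched as follows. This is a minimal 1-NN version; the helper name `cnn_select` is ours, and the order sensitivity noted in the text applies.

```python
import numpy as np

def cnn_select(X, y):
    """Hart-style condensed selection: grow a subset S (index list `keep`)
    until 1-NN over S classifies every remaining training instance
    correctly. The result depends on the order the data is read."""
    keep = [0]                                      # seed S with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:  # misclassified by S: absorb it
                keep.append(i)
                changed = True
    return np.array(keep)

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(cnn_select(X, y))                             # → [0 3]
```

On this toy set, two instances suffice to classify all six correctly, illustrating the reduction the text describes; note also the double loop, which reflects the quadratic cost mentioned above.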
To solve this problem, many editing algorithms have been proposed; their main idea is to remove instances that are inconsistent with the labels of their nearest neighbors. The CNN family of algorithms and the editing algorithms achieve the goal of greatly reducing the size of the training set while keeping the training error relatively unchanged. However, none of these algorithms consider the local sparsity of the training instances in the feature space, which negatively affects the classification performance of the kNN algorithm. For this reason, Nikolaidis et al. [23] proposed a boundary preservation algorithm, which first uses an editing algorithm to remove the noisy instances in the training set, then uses the geometric characteristics of the underlying distribution of the training instances in the feature space to divide the training set into border instances and interior instances, and finally selects representative instances of both kinds and merges them into the final selected subset. Furthermore, there are many improved kNN classification algorithms based on graphs and search algorithms [24]. However, most kNN classification algorithms based on instance selection need to calculate the similarity between all pairs of instances, which makes it difficult to process large-scale data [12].

Main Content
3.1. Related Concepts. Let T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} be the labeled training set of instances from l different classes, where each instance x_i is expressed by the m-dimensional feature vector (x_i1, x_i2, ..., x_im), x_ij is its jth feature value, y_i is the label of the instance x_i, and m and N are the numbers of features and instances, i = 1, 2, ..., N, j = 1, 2, ..., m. The kNN classification algorithm learns by comparing the similarity between the unlabeled instance and all the training instances. Given a test instance x, kNN first calculates the similarity d_i between x and each instance x_i in the training set T, then sorts all the training instances by the similarity d_i and takes the first k instances as the k nearest neighbors of x. Finally, the class with the largest number of instances among the k neighbors is determined to be the label of the instance x.
The basic ideas of the existing kNN classification acceleration algorithms based on data partition are quite similar. After dividing the training set into several subsets of approximately equal size, they determine which divided subset the test instance is most similar to and find its k neighbors in this subset. Because kNN is a local learning algorithm, the predicted label of the test instance is only related to the labels of its nearest neighbors in the training set. To obtain classification performance similar to that obtained using all the training instances, the DP-kNN algorithm needs to guarantee that the k-nearest neighbors of the test instance in its divided subset are as consistent as possible with those in the original training set. Specifically, it should be ensured that the test instance and its k-nearest neighbors in the training set are still in the same divided subset. Therefore, the data partition should be carefully designed.
The test instance is random and unknown before prediction, so its location in the feature space cannot be determined in advance, which makes it difficult to design a good data partition. Fortunately, empirical risk minimization in statistical learning theory offers an effective way around this problem. Minimizing the empirical risk (1/N) Σ_{i=1}^N l(y_i, ŷ_i) yields the optimal solution, where l(y_i, ŷ_i) is the loss function between the true label y_i and the label ŷ_i predicted from the k-nearest neighbors of the instance x_i. In this way, it suffices to ensure that each training instance and its k-nearest neighbors in the training set remain in the same divided subset. To this end, we analyze the effect of data partition on the neighbor relationship from the perspective of optimization.
Searching the k-nearest neighbors of every training instance can be formulated as the constrained optimization problem

max_A tr(A × D), s.t. Σ_{i=1}^N A_ij = k, A_ij ∈ {0, 1}, (1)

where A = [A_ij]_{N×N} ∈ R^{N×N} is a boolean matrix, A_ij = 1 when the instance x_i is one of the k nearest neighbors of the instance x_j, and A_ij = 0 otherwise; D is the similarity matrix, each element D_ij > 0 being the similarity between the instance x_i and the instance x_j; and tr(A × D) is the trace of the matrix A × D, the product of the matrices A and D, i, j = 1, 2, 3, ..., N. The optimization problem (1) has only one optimal solution under the assumption that different pairs of instances have different similarities. Let A* be the optimal solution of the optimization problem (1). Suppose the training set T is divided into n disjoint subsets T_l, where l = 1, 2, 3, ..., n. The kNN classification algorithm based on data partition aims to find the k-nearest neighbors of each instance within its divided subset. For each divided subset T_l, searching the k-nearest neighbors of each element (x_i, y_i) ∈ T_l within T_l can be transformed into solving the optimization problem

max_{A^l} tr(A^l × D^l), s.t. Σ_i A^l_ij = k, A^l_ij ∈ {0, 1}, (2)

where A^l ∈ R^{n_l×n_l} is a boolean matrix, A^l_ij = 1 if and only if the instance x_i is one of the k-nearest neighbors of the instance x_j ∈ T_l, and A^l_ij = 0 otherwise; the matrix D^l ∈ R^{n_l×n_l} is the submatrix of D with row and column indexes V_l = {i ∈ N* : (x_i, y_i) ∈ T_l}; and n_l = |T_l| is the size of the set T_l. Let A^l be the optimal solution of the optimization problem (2) for the instance subset T_l, where l = 1, 2, ..., n.
3.3. The Estimation of the Effect of Data Partition. Let h(x_i) ∈ {1, 2, ..., n} be the index of the divided subset to which the instance x_i belongs, i.e., h(x_i) = l if (x_i, y_i) ∈ T_l. In fact, the nearest neighbor algorithm based on data partition approximately decomposes the optimization problem (1) into the n suboptimization problems (2) and solves each suboptimization problem independently. Combine the optimal solutions of these n subproblems (2) into a new matrix Â: the entry Â_ij equals the corresponding entry of A^l when x_i and x_j belong to the same subset T_l, i.e., when I(h(x_i), h(x_j)) = 1, and Â_ij = 0 otherwise, where the binary function I(a, b) = 1 if and only if a = b and I(a, b) = 0 otherwise. The matrix Â is an approximation of the optimal solution matrix A* of the optimization problem (1). To ensure the performance of the algorithm, the difference between f(A*) and f(Â) should be minimized. To measure this difference, we introduce the following lemma and theorem.

Lemma 1.
Â is the optimal solution of the problem

max_{A ∈ R^{N×N}} tr(A × D̃), (3)

where the matrix D̃ is defined by D̃_ij = I(h(x_i), h(x_j)) · D_ij, i.e., D̃_ij = D_ij if x_i and x_j are in the same divided subset and D̃_ij = 0 otherwise.

Proof. By the definition of D̃, after simultaneously reordering the rows and columns of D̃ by subset index, D̃ becomes a block-diagonal matrix with the blocks D^1, D^2, ..., D^n. From the computation properties of block matrices, tr(A × D̃) = Σ_{l=1}^n tr(A^l × D^l), so the original optimization problem (3) can be decomposed into the n suboptimization problems (2). On the other hand, all the suboptimization problems are independent, and the matrix A^l is the optimal solution of the l-th suboptimization problem (2); therefore, the matrix Â is the optimal solution of the problem max_{A ∈ R^{N×N}} tr(A × D̃).

Theorem 2.
For the given training set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} and its partition index set {h(x_1), h(x_2), ..., h(x_N)}, the difference between the objective function values of the optimal solutions with and without data partition satisfies

0 ≤ f(A*) − f(Â) ≤ Σ_{i=1}^N Σ_{j∉Λ_i} D_ji,

where f(A) = tr(A × D) and Λ_i = {j : h(x_j) = h(x_i)} is the index set of the divided subset containing x_i.

Proof. Let f̃(A) = tr(A × D̃). According to the definitions of the matrices Â and A*, Â links no instances from different subsets, so f(Â) = f̃(Â); and f(A*) − f̃(A*) = Σ_{i=1}^N Σ_{j∉Λ_i} A*_ij D_ji ≤ Σ_{i=1}^N Σ_{j∉Λ_i} D_ji. Combining these relations with the result f̃(Â) ≥ f̃(A*) according to Lemma 1 and the optimality of A* for problem (1), we have

0 ≤ f(A*) − f(Â) = [f(A*) − f̃(A*)] + [f̃(A*) − f̃(Â)] + [f̃(Â) − f(Â)] ≤ Σ_{i=1}^N Σ_{j∉Λ_i} D_ji.

In the kNN classification task, it is often assumed that the similarities between each training instance and the other training instances are pairwise different. This assumption ensures that each training instance has k fixed nearest neighbors regardless of the order of reading the data and that the optimization problem (1) has a unique solution. Combined with the above theorem, reducing the difference between the objective values f(A*) and f(Â) helps reduce the difference between the approximate solution Â and the optimal solution A*. Therefore, we need to minimize the estimated difference Σ_{i=1}^N Σ_{j∉Λ_i} D_ji; that is, the similarity between instances that are not in the same divided subset should be made as small as possible.
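The bound of Theorem 2 can be checked numerically on toy data. This is a hedged sketch: the helper `knn_matrix`, the similarity function, and the crude 2-way partition are our illustrative choices, not part of the paper.

```python
import numpy as np

def knn_matrix(D, k, groups=None):
    """Boolean matrix with entry (i, j) = 1 iff x_i is among the k most
    similar instances to x_j; if `groups` is given, the search is
    restricted to x_j's group (the partitioned version of the problem)."""
    N = len(D)
    A = np.zeros((N, N))
    for j in range(N):
        cand = np.arange(N) if groups is None else np.where(groups == groups[j])[0]
        cand = cand[cand != j]                        # an instance is not its own neighbor
        A[cand[np.argsort(-D[cand, j])[:k]], j] = 1
    return A

rng = np.random.default_rng(1)
X = rng.random((40, 2))
D = 1.0 / (1.0 + np.linalg.norm(X[:, None] - X[None, :], axis=-1))  # similarities > 0
groups = (X[:, 0] > 0.5).astype(int)                  # a crude 2-way data partition

f = lambda A: np.trace(A @ D)                         # objective f(A) = tr(A x D)
gap = f(knn_matrix(D, 3)) - f(knn_matrix(D, 3, groups))
cross = D[groups[:, None] != groups[None, :]].sum()   # sum of cross-subset similarities
print(0 <= gap <= cross)                              # → True
```

Here `gap` plays the role of f(A*) − f(Â) and `cross` the role of the cross-subset similarity sum, so the printed check mirrors the inequality of the theorem.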
To achieve this aim, the minibatch k-means clustering (MKC) algorithm is adopted to perform the data partition, because it deals with large data effectively and efficiently [25][26][27]. The MKC algorithm is a two-step k-means clustering algorithm: it first runs the k-means algorithm on instances randomly sampled from the original data to obtain the cluster centers; then, the remaining instances are assigned to clusters according to their similarity to the cluster centers. Meanwhile, the MKC algorithm is efficient because its time complexity is O(Nb), where b is the size of the sampled subset. An additional advantage of this algorithm is that the maximum number of clusters usually does not exceed √N and the sizes of the divided subsets tend to be uniform, which provides an important reference for determining the number of divided subsets [28].
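The minibatch idea described above can be sketched in plain NumPy. This is a minimal illustration with per-center learning rates; it is not the authors' implementation, and production code would typically use, e.g., scikit-learn's MiniBatchKMeans.

```python
import numpy as np

def minibatch_kmeans(X, n_clusters, batch_size=256, n_iters=100, seed=0):
    """Minimal sketch of the minibatch k-means idea: centers are refined
    from small random batches instead of full passes over the data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
    counts = np.zeros(n_clusters)
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), batch_size, replace=False)]
        near = ((batch[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c, pt in zip(near, batch):             # move each center toward its batch points
            counts[c] += 1
            centers[c] += (pt - centers[c]) / counts[c]
    labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    return centers, labels                         # labels index the divided subsets
```

Each iteration touches only `batch_size` instances, which is why the cost grows with the sampled size rather than with full passes over the N training instances.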
3.4. NPR-kNN Algorithm. Suppose the training set is divided into n disjoint clusters T_1, T_2, ..., T_n using the MKC algorithm, each cluster being one subset of the partition. Deciding the k nearest neighbors of a given test instance after the partition is an important step. For a given test instance x, the traditional way is to decide which divided subset x belongs to according to the similarity between x and the cluster centers, and then to find the k nearest neighbors within this target subset. However, this approach can be ineffective for instances far from the target cluster center, because it is difficult to guarantee that such instances and their neighbors in the training set are still in the same cluster. These instances and their k nearest neighbors are very likely spread over several adjacent clusters, because they have higher similarity within a small local region of the feature space than other instances. Therefore, the method of cluster fusion is used to solve this problem. The target set in which the test instance x searches its k nearest neighbors is extended to the union of λ candidate clusters, whose centers are the λ most similar to the instance x among all the clusters, where λ is an integer greater than 1. This largely increases the possibility that the test instance x finds the same k nearest neighbors as in the original training set. A fixed value of λ for all datasets is undesirable because the sparseness of the data distribution differs greatly among them, so we adopt an early stopping rule to adaptively determine λ for each test instance: the candidate clusters are successively merged from λ = 1 up to n until the k nearest neighbors of x in the merged set do not change. The following algorithm shows the detailed procedure of the NPR-kNN algorithm.
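The cluster-fusion search with early stopping can be sketched as follows. This is an illustrative implementation; Euclidean distance plays the role of similarity, and the function name is ours.

```python
import numpy as np

def npr_knn_search(x, X, labels, centers, k=7):
    """Cluster-fusion neighbor search with early stopping: merge candidate
    clusters in increasing order of center distance until the k nearest
    neighbors of x no longer change."""
    order = np.argsort(np.linalg.norm(centers - x, axis=1))  # most similar centers first
    pool, prev = np.empty(0, dtype=int), None
    for c in order:                                # lambda = 1, 2, ... merged clusters
        pool = np.concatenate([pool, np.where(labels == c)[0]])
        d = np.linalg.norm(X[pool] - x, axis=1)
        curr = pool[np.argsort(d)[:k]]
        if prev is not None and set(curr) == set(prev):
            return curr                            # early stopping: neighbors unchanged
        prev = curr
    return prev                                    # all clusters were merged
```

Only the clusters actually merged are scanned, so a test instance deep inside one cluster typically stops after two candidate subsets, while a boundary instance merges more.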

Complexity Analysis of the Proposed Algorithm.
Besides classification performance, execution efficiency is another important criterion. The NPR-kNN algorithm includes a data partition stage and a prediction stage. The minibatch k-means clustering algorithm, adopted to perform the data partition in the first stage, is designed for dealing with big data and is several times more efficient than the traditional k-means clustering algorithm [29]. In the prediction stage, the test instance searches its k-nearest neighbors in the target divided subsets rather than among all the training instances, and the target subsets are obtained by computing only the similarity between the cluster centers and the test instance. Moreover, the NPR-kNN algorithm has the additional advantage of allowing distributed storage of large-scale training data, because the training data is divided into disjoint subsets with no intersection among them. Therefore, the proposed algorithm can effectively deal with large-scale data.

Experiments
To test the proposed algorithm, an extensive experimental comparison with two representative kNN classification acceleration algorithms based on data partition has been carried out on real datasets.

Experiment Setup.
Two representative algorithms are selected in this paper: the kNN classification algorithm based on the maximum-variance KD tree (MKD-kNN) and the kNN classification algorithm based on the PCA tree (PCA-kNN) [15]. Meanwhile, ten large-scale public datasets are chosen to make a fair comparison with the other algorithms and verify the effectiveness of the proposed algorithm [30,31], where the size of each dataset is greater than 90000. Information on the ten selected datasets is shown in Table 1.
The NPR-kNN algorithm, the MKD-kNN algorithm, and the PCA-kNN algorithm are all approximations of the kNN classification algorithm. To evaluate the degree of consistency of the k-nearest neighbors of each training instance before and after the training set is divided, the training matching ratio is defined as R_tr = N_tr/N, where N_tr is the number of instances whose k-nearest neighbors in the divided subsets are the same as in the whole training set and N is the size of the training set. The larger the value of R_tr, the more strongly the algorithm maintains the locality of the data, and vice versa. The test accuracy is an important index for evaluating the performance of a classifier; it mainly characterizes whether the label of the test instance is consistent with the predicted label. However, it does not reflect whether the nearest neighbor sequence of the test instance obtained by the approximate nearest neighbor algorithm is consistent with that obtained by the original kNN algorithm. To this end, we also calculate the test matching ratio R_ts = N_ts/N_test, where N_ts is the number of test instances whose k-nearest neighbors obtained by the approximate nearest neighbor algorithm are the same as those obtained by the original kNN algorithm, and N_test is the number of test instances. Tenfold cross-validation is used to estimate the three performance index values on the different datasets. In addition, the Wilcoxon signed-rank test [32,33] is adopted to test whether there is a significant difference in performance between the NPR-kNN algorithm and the other algorithms.
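The two matching ratios defined above can be computed with a small helper (illustrative code, not from the paper):

```python
def matching_ratio(nn_exact, nn_approx):
    """Fraction of instances whose k-nearest-neighbor set under the
    approximate algorithm equals the exact k-NN set; this gives R_tr on
    training instances and R_ts on test instances."""
    hits = sum(set(a) == set(b) for a, b in zip(nn_exact, nn_approx))
    return hits / len(nn_exact)

r = matching_ratio([[1, 2], [3, 4], [5, 6]],
                   [[2, 1], [3, 4], [5, 7]])   # 2 of 3 neighbor sets match
print(r)                                       # → 0.6666666666666666
```

Comparing neighbor sets rather than ordered sequences means a permutation of the same k neighbors still counts as a match.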
In the following experiments, all attribute values of the used datasets are normalized to the interval [0, 1] to avoid the influence of the dimensions of different attributes. The Euclidean distance is used to measure the similarity between instances. The performance of an approximate nearest neighbor algorithm based on data partition is affected by the size of the subsets, so the algorithms need to be compared under different numbers of divided subsets. We choose the four values s = 500, 1000, 2000, 5000 as the threshold of the divided subset size according to the suggestion of the paper [17]. The number of divided subsets is determined by the formula ⌈n/s⌉, the smallest integer not less than n/s. Moreover, k = 7 is chosen based on the experimental results of the paper [34]. The significance level is α = 0.05.
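The normalization and the subset-count formula can be sketched as follows (illustrative code; the guard for constant attributes is our assumption):

```python
import math
import numpy as np

def minmax_normalize(X):
    """Scale every attribute into [0, 1] before computing Euclidean
    distances, so that no attribute dominates because of its dimension."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)        # guard against constant attributes
    return (X - lo) / span

# number of divided subsets for n training instances and subset-size threshold s
n, s = 90_000, 500
print(math.ceil(n / s))                           # → 180
```

With the smallest dataset size of 90000 and s = 5000, the same formula gives 18 divided subsets.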

Input:
The training set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, the test instance x, the number of subsets n.
Output: The predicted label ŷ.
1: Initialization: Δ = {0}, k-nearest neighbor set NN_k = ∅;
2: Divide the set T into n disjoint subsets T_1, T_2, ..., T_n using the minibatch k-means clustering algorithm, and obtain the cluster center set C = {c_1, c_2, ..., c_n};
3: while (Δ ≠ ∅ and C ≠ ∅) do
4:   Find the subset T_υ according to the similarity between x and the centers in C, where υ = arg max_{l : c_l ∈ C} sim(x, c_l), and remove c_υ from C;
5:   Update the k-nearest neighbor set NN_k of x to generate a new set NN_new by comparing the similarity between x and each instance of T_υ ∪ NN_k;
6:   Δ = NN_new \ NN_k; NN_k = NN_new;
7: end while
8: Obtain the predicted label ŷ based on the majority class in NN_k;
9: Return ŷ.
Algorithm 1: NPR-kNN algorithm.

Training Matching Ratio. The following experimental analysis is organized by the value of s, because it greatly affects the measurement R_tr. From the results of Tables 2-4 under the three smaller values of s, the value of R_tr of the NPR-kNN algorithm is several times that of the MKD-kNN algorithm and the PCA-kNN algorithm on most datasets, except the Skin-noskin dataset. Besides the value of s, the sparseness of the data distribution also affects the value of R_tr. If most instances of the dataset are densely distributed in the input space, then data partition has a small effect on the preservation of the nearest neighbor relationship, and the value of R_tr can be large. The Skin-noskin dataset is relatively densely distributed and its divided subsets contain hundreds of instances, so the values of R_tr of all three algorithms are larger than 0.9 under the different values of s. Meanwhile, the mean and median of the NPR-kNN algorithm over the different datasets are close to or greater than 0.5 and are the largest among the three algorithms under s = 500, 1000, 2000. Finally, the p value of the Wilcoxon signed-rank test between the NPR-kNN algorithm and each of the other algorithms is smaller than the given significance level α = 0.05. Therefore, the NPR-kNN algorithm obtains the best result on the measurement R_tr compared with the MKD-kNN algorithm and the PCA-kNN algorithm under the smaller values of s. From the results of Table 5 under the large value s = 5000, the three algorithms have similar values of R_tr on all the datasets except the Acoustic and Aloi datasets. The means of R_tr of these algorithms are 0.832, 0.825, and 0.825, and their medians are 0.854, 0.850, and 0.851. Moreover, these algorithms obtain larger values of R_tr under s = 5000 than under s = 500, 1000, 2000.
The reason is that the divided subsets contain many instances under the large value of s, which greatly increases the probability that each instance and its k-nearest neighbors are still in the same divided subset. However, the p value of the Wilcoxon signed-rank test between the NPR-kNN algorithm and each of the other algorithms is smaller than the given significance level α = 0.05. Therefore, there exists a significant difference between the NPR-kNN algorithm and each of the other algorithms, and the NPR-kNN algorithm also obtains the best result under s = 5000. In conclusion, the experimental results show that the NPR-kNN algorithm largely keeps the instances and their k-nearest neighbors in the same divided subsets, which also verifies the correctness of Theorem 2.

Test Matching Ratio.
Besides the training performance measured by the training matching ratio R_tr, we pay more attention to the test performance of the algorithm. The test matching ratio R_ts measures the extent to which the test instances keep the same k-nearest neighbors after data partition, and it also evaluates whether the improved algorithm using data partition can obtain performance similar to the original k-nearest neighbor classification algorithm. Tables 6-9 list the R_ts results of the three algorithms under different values of s, and the statistical results are listed in the last three lines of the tables.
The results of Tables 6-9 show that the value of R_ts of the NPR-kNN algorithm is several times larger than that of the other algorithms on all the datasets except the Skin-noskin dataset under the different values of s. The values of R_ts of the three algorithms on the Skin-noskin dataset are larger than 0.9; this is because its elements are densely distributed in the input space and each divided subset has hundreds of elements. Finally, the p value of the Wilcoxon signed-rank test between the NPR-kNN algorithm and each of the other algorithms is smaller than the given significance level α = 0.05. Therefore, the NPR-kNN algorithm obtains the best result on the measurement R_ts compared with the MKD-kNN algorithm and the PCA-kNN algorithm. The NPR-kNN algorithm searches the k-nearest neighbors in the union of several divided subsets rather than in only one divided subset, which increases the probability that the test instance and its k-nearest neighbors are still in the candidate set.
On the other hand, the parameter s affects the R_ts values of these algorithms differently. The R_ts value of the NPR-kNN algorithm changes little on each dataset under different values of s, while the other algorithms show large variations. The reason is that these algorithms use different numbers of divided subsets to find the k-nearest neighbors, and the parameter s controls the number of instances in each divided subset. For both the MKD-kNN and PCA-kNN algorithms, each test instance finds its k-nearest neighbors within only one divided subset, so s has a great effect on their R_ts values. The NPR-kNN algorithm instead updates the k-nearest neighbors by successively merging divided subsets until the neighbors no longer change, and this operation greatly reduces the effect of s. Therefore, the R_ts performance of the NPR-kNN algorithm is not sensitive to the value of s, an advantage that increases its applicability to practical problems.
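The merging procedure described above can be sketched as follows. This is a minimal illustration under our own assumptions (Euclidean distance as the dissimilarity, and a precomputed partition given by cluster centers and labels), not the authors' reference implementation:

```python
import numpy as np

def npr_knn_search(X, centers, labels, x, k):
    """Early-stopping k-nearest neighbor search over a data partition.

    X       : (n, d) training instances
    centers : (c, d) cluster centers from the data partition
    labels  : (n,)   cluster index of each training instance
    x       : (d,)   test instance
    k       : number of neighbors
    """
    # Rank candidate subsets by the distance between the test
    # instance and the cluster centers (nearest center first).
    order = np.argsort(np.linalg.norm(centers - x, axis=1))
    merged = np.empty(0, dtype=int)
    prev = None
    for c in order:
        # Merge the next candidate subset into the search pool.
        merged = np.concatenate([merged, np.where(labels == c)[0]])
        if merged.size < k:
            continue
        # Recompute the k nearest neighbors within the merged pool.
        d = np.linalg.norm(X[merged] - x, axis=1)
        nn = merged[np.argsort(d)[:k]]
        cur = set(nn.tolist())
        if cur == prev:   # neighbors unchanged -> stop early
            break
        prev = cur
    return np.sort(nn)

# Toy example: two well-separated clusters on a line.
X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
centers = np.array([[1.0], [11.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(npr_knn_search(X, centers, labels, np.array([0.1]), 2))
```

In this toy case the first candidate subset already contains the two true nearest neighbors, so merging the second subset does not change them and the search stops after one extra check.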

Test Performance.
The generalization ability can be measured by the classification accuracy on the test data, which is the most commonly used performance indicator. Tables 10-13 list the results of the three algorithms under different values of the parameter s.
The results in Tables 10-13 also show that the classification accuracy of the NPR-kNN algorithm is no less than that of the MKD-kNN and PCA-kNN algorithms on all the datasets under all values of s, and it achieves classification accuracy similar to the original kNN classification algorithm. Moreover, the NPR-kNN algorithm performs much better on the test data than the MKD-kNN and PCA-kNN algorithms on the multiclass dataset Aloi with 1000 classes. In terms of the mean and median classification accuracy over all the datasets, the NPR-kNN algorithm outperforms the other improved algorithms and obtains results similar to those of the original kNN algorithm. Finally, the p values of the Wilcoxon signed-rank test between the NPR-kNN algorithm and each of the other improved algorithms are smaller than the given significance level 0.05 under all values of s, so the NPR-kNN algorithm achieves significantly better classification than both. There is no significant difference between the NPR-kNN algorithm and the kNN classification algorithm, because the p values between them are larger than 0.05. The reason is that, compared with the other algorithms, the NPR-kNN algorithm obtains k-nearest neighbors that are most likely to be the same as those of the original algorithm, a conclusion also verified by the experimental results in the preceding subsections.
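The significance tests above follow a standard paired design: for each dataset, the accuracies of two algorithms form a pair, and the Wilcoxon signed-rank test checks whether the paired differences are centered at zero. A minimal sketch with hypothetical accuracy values (illustrative numbers only, not the paper's results), using scipy.stats.wilcoxon:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired accuracies on 10 datasets (made-up numbers).
acc_npr = np.array([0.91, 0.92, 0.93, 0.90, 0.95,
                    0.89, 0.94, 0.92, 0.91, 0.93])
# The competing method is worse on every dataset in this toy setup.
acc_other = acc_npr - np.linspace(0.01, 0.05, 10)

# Two-sided test on the paired differences.
stat, p = wilcoxon(acc_npr, acc_other)
print(f"W = {stat}, p = {p:.4f}")
if p < 0.05:
    print("significant difference at alpha = 0.05")
```

Because every paired difference favors the first method, the exact two-sided p value is well below 0.05, mirroring the paper's conclusion that such a result rejects the hypothesis of equal performance.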

Summary
We have proposed a novel algorithm that explores the effect of data partition on the classification performance of the kNN classification algorithm and largely keeps the same k nearest neighbors as the original algorithm. Unlike previous improved kNN classification algorithms based on data partition, the proposed algorithm studies the effect of data partition theoretically from the perspective of optimization, and it proves that minimizing the similarity between instances in different partitioned subsets is the key factor for the generalization ability of the classifier. To this end, the minibatch k-means clustering algorithm is adopted to perform the data partition for its high efficiency and effectiveness, and an early stopping rule is designed to search for the k-nearest neighbors among the divided subsets. Moreover, the algorithm can effectively handle large-scale data owing to its linear time complexity. Experimental results on multiple real datasets show that the proposed algorithm obtains k-nearest neighbors and classification performance similar to the original kNN classification algorithm and better results than two state-of-the-art algorithms. The method in this paper offers a paradigm for handling large-scale data and a promising path toward scalable algorithms based on data partition. In future work, we will study how to combine the results of multiple data partitions to further improve the performance of the kNN classification algorithm.

Conflicts of Interest
The authors declare that they have no conflicts of interest.