Advances in Multimedia, Hindawi, ISSN 1687-5699 / 1687-5680. Research Article. doi:10.1155/2020/8825205

3D Point Cloud Simplification Based on k-Nearest Neighbor and Clustering

Abdelaaziz Mahdaoui (Department of Physics, Faculty of Sciences, Moulay Ismail University of Meknes, Meknes, Morocco) and El Hassan Sbai (High School of Technology, Moulay Ismail University of Meknes, Meknes, Morocco). Academic Editor: Patrick Seeling.

Copyright © 2020 Abdelaaziz Mahdaoui and El Hassan Sbai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

While the reconstruction of 3D objects is increasingly common today, the simplification of 3D point clouds has become a substantial phase in the reconstruction process. This is due to the huge, dense 3D point clouds produced by 3D scanning devices. In this paper, a new approach is proposed to simplify 3D point clouds based on the k-nearest neighbor (k-NN) approach and a clustering algorithm. Initially, the 3D point cloud is divided into clusters using the k-means algorithm. Then, an entropy estimate is computed for each cluster, and the clusters with minimal entropy are removed. MATLAB is used to carry out the simulation, and the performance of our method is verified on test datasets. Numerous experiments demonstrate the effectiveness of the proposed simplification method for 3D point clouds.

1. Introduction

The simplification of a 3D point cloud, obtained from the digitization of a real object, is an essential step in the field of 3D reconstruction. This step optimizes the number of points that constitute the 3D point cloud. The scanning of a real object is performed by a device called a 3D scanner. These devices fall into three primary categories: contact, active noncontact, and passive noncontact.

Simplification of a 3D set of points can be defined as follows: given an original surface S represented by a point cloud X such that |X| = N, simplification of X consists of computing a point cloud X′ such that |X′| = M, with M < N, where |·| denotes cardinality. After simplification, we obtain a simplified point cloud such that X′ ⊂ X. Note that X′ samples a surface S′ close to the original surface S sampled by X.

Several scientific articles have studied and presented simplification methods. Pauly et al. proposed a method based on hierarchical decomposition of the sample of points, computed by binary partition of space. The cutting planes are defined by the centre and the main direction of each region. The partitioning criterion depends both on a maximum number of points and on variations in local geometry in a region. Due to the spatial nature of this approach, it is difficult to control the quality of the distribution of points on the sampled surface. Wu and Kobbelt computed an optimal set of splats to cover a sampled surface. The first step of the method consists in locally approximating the surface at each point of the sample by a circular or elliptical planar surface element called a splat. In the second step, the redundant splats are eliminated during a filtering process of the surface-expansion type. To guarantee coverage of the entire sampled surface, the algorithm proceeds as follows: for each splat processed, the points it covers are projected onto its plane, and then only the splats associated with the points projected inside the convex hull of the projected points are eliminated. During this process, the regularity of the distribution is not checked. A relaxation phase can be applied to determine an optimal position for the remaining splats. This method makes it possible to generate high-quality splat covers for smooth surfaces by filtering noise. However, it is penalized by the cost of its initialization and that of the relaxation phase for large point samples. Linsen presented a technique that associates with each point a scalar value locally measuring the average variation of certain information, such as the proximity of neighbors or the direction of the normal. The points with the weakest measure are removed iteratively. The algorithm has the disadvantage of giving no guarantee on the density of the resulting set of points.
Dey et al. used an approximation of the LFS (local feature size) of the sampled area. This approximation is calculated from the Delaunay triangulation of the sample of input points, which is prohibitive for very large samples. Alexa et al. estimated the local geometric properties of the sampled surface using a Moving Least Squares (MLS) model of the underlying surface, which requires consistently oriented normals. They calculate the contribution of a point to this surface by projecting it onto an MLS surface estimated from neighboring points. The distance between the position of the point and its projection on the surface provides a measure of error. The points for which this distance is smallest are removed. This method does not guarantee the density of the resulting sample points. To compensate, Alexa et al. proposed to enrich the sample in the undersampled regions by considering the projection of the points onto a plane. They calculated the planar Voronoi diagram of the projected points so as to insert new points equidistant from the existing ones. These new points are then lifted to the surface using the projection operator. The process is repeated until the Euclidean distance between the next point to be added and the nearest existing point becomes less than a certain threshold. While this method achieves quality results, the intensive use of the MLS projection operator makes it expensive for very large samples. Pauly et al. directly extended the mesh simplification technique of Garland and Heckbert to point samples by treating nearest-neighbor relations as connectivity relations. Pairs of nearest neighbors are thus contracted, replacing two points with a new point computed as their weighted average.
The cost of each contraction operation is measured by adapting the error measure proposed by Garland and Heckbert, whose idea is to approximate the surface locally by a set of tangent planes and to estimate the geometric deviation of a point from the surface as the sum of the squared distances to these planes. This method has the advantage of controlling the distribution of the simplified sample and also preserves details. However, its initialization cost is high, and it requires maintaining a global priority queue, which is a disadvantage for large samples of points. Xuan et al. proposed a progressive point cloud simplification technique founded on information entropy and the normal angle. The core of this technique is to rank the importance of points using the information entropy of the normal angle, which is computed from the normal vectors. The simplification is then carried out by removing the less relevant points.

Leal et al. proposed a simplification technique comprising three stages. First, the expectation-maximization algorithm is used to cluster the point cloud. Second, the points to be removed are selected using curvature. Third, linear programming is used to simplify the point cloud. Ji et al. proposed a simplification technique named the detail feature points simplified algorithm. In this technique, a k-neighborhood rule and an octree structure are used to reduce the point cloud.

The first key interest of this paper is point cloud simplification. The point cloud simplification strategies reviewed in the literature may be classified into three categories: subsampling algorithms, resampling algorithms, and mixtures of the two. A first strategy for simplifying a sample of points is to break it down into small regions, each of which is represented by a single point in the simplified sample, while resampling algorithms rely on estimating the properties of the sampled surface to compute new relevant points. In the literature, these principles have been applied according to three main simplification schemes: simplification by selection or calculation of points representing subsets of the initial sample, iterative simplification, and simplification by incremental sampling.

The second key interest of this paper is the notion of clustering. Clustering is a statistical analysis method used to organize raw data into homogeneous groups. Within each cluster, the data are grouped according to a common characteristic. The grouping tool is an algorithm that measures the proximity between elements based on defined criteria. Clustering is a concept integrated into several areas such as pattern recognition, machine learning, and 3D point cloud simplification [12, 16]. Many clustering techniques exist in the literature. The work in this article is based on clustering to optimize the number of points constituting an original 3D point cloud in order to obtain a simplified 3D point cloud close to the original.

The third key interest of this paper is information theory in general and the concept of Shannon’s entropy in particular. This work is based on this concept to select the sets of points grouped into clusters in order to simplify the original point cloud. Information theory is applied in different areas such as data processing [19, 20], data clustering, and 3D point cloud simplification [1, 9].

In this work, we are inspired by the work of Wang et al.  in order to provide a robust method of simplifying the point cloud. This technique is based on the notion of entropy  and clustering algorithm .

This paper is organized as follows. In Section 2, we present the clustering algorithm used in our method. Then, in Section 3, we evoke the density function estimator and the definition of entropy. In Section 4, we demonstrate how to evaluate simplified meshes. Afterwards, in Section 5, we lay out our 3D point cloud simplification algorithm based on Shannon’s entropy. Section 6 lays out the experimental results and the validation of the proposed technique. Finally, we wrap up with a conclusion.

2. Clustering Algorithm

The k-means algorithm is a type of unsupervised learning and analysis. Its goal is to find groups in data, with the number of groups represented by the variable K, in which each object belongs to the cluster with the nearest mean. k-means clustering is considered one of the most important unsupervised learning approaches and is widely used in pattern recognition and machine intelligence. The details of the k-means clustering algorithm are presented in.
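To make the clustering step concrete, the following is a minimal pure-Python sketch of Lloyd's k-means iteration; the paper's experiments use MATLAB, so the function name `kmeans` and its parameters here are purely illustrative:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
        # Update step: each centroid moves to the mean of its cluster.
        for j in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centroids

# Two well-separated 3D blobs should end up in two different clusters.
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (10, 10, 10), (10.1, 10, 10), (10, 10.1, 10)]
labels, _ = kmeans(pts, 2)
```

A production implementation would stop on convergence of the centroids rather than after a fixed number of iterations.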

3. Density Estimation and Entropy Definition

In this 3D point cloud simplification work, we use the concept of entropy to simplify point clouds. The calculation of the entropy requires the estimation of the density function. A multitude of density estimation approaches exists in the literature, divided into parametric and nonparametric methods. The first category estimates a parameterized model of a density function, as in the maximum likelihood estimator method. The nonparametric category includes the kernel density estimator, also known as the Parzen-Rosenblatt method [25, 26], the k-nearest neighbor (k-NN) estimator, and combinations of them. Each type has its advantages and disadvantages. For the Parzen estimator, the choice of bandwidth has a strong impact on the quality of the estimated density. The main motivation for the k-NN estimator stems from the fact that it adapts the amount of smoothing to the local density of the data [21, 27]. The parametric approach has the main disadvantage of requiring prior knowledge of the probability law of the random phenomenon under study, whereas the nonparametric approach estimates the probability density directly from the available observations. We are interested here in the nonparametric category, specifically the k-NN estimator.

3.1. Density Estimation Using the k-NN Approach

In this work, an unstructured approach, so-called nonparametric estimation, was used to estimate the density function. There are two kinds of nonparametric estimation methods: one is the Parzen density estimator; the other is the k-nearest neighbor (k-NN) density estimator. In this paper, we use the k-NN technique to estimate the density function. In the literature, the k-NN concept is used in several fields related to classification.

The level of the estimator is defined by k, an integer number of nearest neighbors, generally proportional to the size N of the sample. The density estimate is defined for any point x. The distances between the objects of the sample and the point x, sorted in ascending order, are

(1) R_1(x) \le \cdots \le R_{k-1}(x) \le R_k(x) \le \cdots \le R_N(x),

where the R_i, i = 1, \dots, N, are the sorted distances.

The k-nearest neighbor estimator in d dimensions can be defined as follows:

(2) \hat{p}_{\mathrm{knn}}(x) = \frac{1}{N R_k(x)^d} \sum_{i=1}^{N} K\left( \frac{x - X_i}{R_k(x)} \right),

where R_k(x) is the distance from x to the kth nearest point and K(u) is the Gaussian kernel:

(3) K(u) = (2\pi)^{-d/2} \exp\left( -\frac{1}{2} u^{T} u \right).

Then, we obtain

(4) \hat{p}_{\mathrm{knn}}(x) = \frac{1}{N R_k(x)^d (2\pi)^{d/2}} \sum_{i=1}^{N} \exp\left( -\frac{1}{2} \left\| \frac{x - X_i}{R_k(x)} \right\|^2 \right),

(5) \hat{p}_{\mathrm{knn}}(x) = \frac{k}{N V_k(x)} = \frac{k}{N C_d R_k(x)^d},

where V_k(x) is the volume of a sphere of radius R_k(x) and C_d is the volume of the unit sphere in d dimensions.

Equation (5) is the special case of (2) in which K is the uniform kernel, defined as follows:

(6) K(x) = \begin{cases} 1, & \text{if } \|x\| \le 1, \\ 0, & \text{otherwise}. \end{cases}
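As an illustration, the uniform-kernel form of the estimator in equation (5) can be sketched in a few lines of Python; the function names are our own, and `math.dist`/`math.gamma` come from the standard library:

```python
import math

def unit_ball_volume(d):
    """C_d: the volume of the unit ball in d dimensions."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def knn_density(x, sample, k):
    """k-NN estimate p(x) = k / (N * C_d * R_k(x)^d), where R_k(x) is the
    distance from x to its k-th nearest sample point (equation (5))."""
    d = len(x)
    r_k = sorted(math.dist(x, s) for s in sample)[k - 1]
    return k / (len(sample) * unit_ball_volume(d) * r_k ** d)

# On a regular 1D grid over [0, 1), the estimated density is close to 1.
grid = [(i / 100,) for i in range(100)]
p = knn_density((0.5,), grid, 10)
```

For large clouds, the brute-force distance sort would be replaced by a spatial index such as a k-d tree.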

3.2. Shannon’s Entropy

Shannon’s entropy is a mathematical function, introduced by Claude Shannon in 1948, that corresponds intuitively to the amount of information contained in or delivered by an information source. This source can be a text, an electrical signal, or any digital file. For a source that is a discrete random variable x with n symbols, each symbol X_i has a probability p_i of appearing, with p = (p_1, \dots, p_n). The entropy H of the source x is defined as

(7) H(x) = -\mathbb{E}[\log_2 p] = -\sum_{i=1}^{n} p_i \log_2 p_i,

where \mathbb{E} is the expected value operator and \log_2 the logarithm in base 2.

Shannon’s entropy can be found in the literature in various fields of research such as stock market , image segmentation , and cryptography .

The main reason for using Shannon’s entropy is that it is a function that intuitively quantifies the amount of information in a variable. In order to remove irrelevant points, our simplification technique is based on the estimation of the amount of information.
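Equation (7) is straightforward to compute for a discrete distribution; the following minimal Python helper (our own naming) skips zero probabilities, which contribute nothing to the sum:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p_i * log2(p_i)) over the nonzero probabilities (equation (7))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit of information.
h_coin = shannon_entropy([0.5, 0.5])
# A certain outcome carries no information.
h_sure = shannon_entropy([1.0])
```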

4. Accuracy Evaluation

4.1. Simplification Error

In order to evaluate the accuracy of the novel simplification method, the geometric error between the original and the simplified point cloud is measured. To compare two surfaces, Cignoni et al. developed a tool called Metro. Pauly et al. and Miao et al. also adopted techniques to measure simplification errors. In this paper, we evaluate the maximum geometric error and the average geometric error between the original model X and the simplified one X′.

The maximum geometric error is defined as

(8) \Delta_{\max}(X, X') = \max_{q \in X} d(q, X').

The average geometric error is defined as

(9) \Delta_{\mathrm{avg}}(X, X') = \frac{1}{|X|} \sum_{q \in X} d(q, X').

The corresponding normalized geometric errors can then be obtained by scaling the above error measures by the diagonal of the model's bounding box.

For each sample point q \in X, the geometric error d(q, X') can be defined via the Hausdorff distance between q on the original surface and its projection point q' on the simplified surface X'. The Hausdorff distance is defined as follows:

(10) d = \max\left\{ \sup_{q \in X} \inf_{q' \in X'} d_e(q, q'),\; \sup_{q' \in X'} \inf_{q \in X} d_e(q, q') \right\},

where d_e(\cdot,\cdot) is the Euclidean distance. If N_q is the normal vector at point q and q' is the projection point on the simplified surface X', the sign of d is the sign of N_q \cdot (q - q').
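The two error measures of equations (8) and (9) can be sketched with a brute-force nearest-neighbor search. As a simplifying assumption of this sketch, d(q, X') is taken as the distance from q to the nearest point of X' rather than to the reconstructed surface:

```python
import math

def one_sided_errors(X, Xs):
    """Return (Delta_max, Delta_avg) of equations (8) and (9): for each q in X,
    d(q, Xs) is approximated by the distance to the nearest point of Xs."""
    dists = [min(math.dist(q, p) for p in Xs) for q in X]
    return max(dists), sum(dists) / len(dists)

# Toy example: dropping the middle point of a 3-point cloud.
original = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
simplified = [(0, 0, 0), (2, 0, 0)]
d_max, d_avg = one_sided_errors(original, simplified)
```

For large clouds the O(|X|·|X'|) scan would be replaced by a k-d tree query.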

4.2. Surface Compactness

To measure the quality of the obtained meshes, Gueziec proposes a formula to compute the quality of the triangles, called the compactness formula, defined as follows:

(11) \xi = \frac{4\sqrt{3}\,\alpha}{L_1^2 + L_2^2 + L_3^2},

where the L_i are the lengths of the edges of a triangle and \alpha is its area, as shown in Figure 1. Note that this measure equals 1 for an equilateral triangle and 0 for a triangle whose vertices are collinear. A triangle is of acceptable quality if \xi \ge 0.6.

Figure 1: Measure of the quality of meshes.
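The compactness measure of equation (11) is easy to check numerically; in this sketch (our own naming), the triangle area \alpha is obtained with Heron's formula:

```python
import math

def compactness(a, b, c):
    """Gueziec compactness 4*sqrt(3)*area / (L1^2 + L2^2 + L3^2) of the
    triangle with vertices a, b, c: 1 for equilateral, 0 for collinear."""
    l1, l2, l3 = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    s = (l1 + l2 + l3) / 2
    # Heron's formula; max() guards against tiny negative round-off.
    area = math.sqrt(max(s * (s - l1) * (s - l2) * (s - l3), 0.0))
    return 4 * math.sqrt(3) * area / (l1 ** 2 + l2 ** 2 + l3 ** 2)

# An equilateral triangle scores 1; collinear vertices score 0.
eq = compactness((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
flat = compactness((0, 0), (1, 0), (2, 0))
```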

5. The Proposed Simplification Method

The goal of 3D point cloud simplification is to choose relevant, representative 3D points and remove redundant data points. In this work, the k-means clustering algorithm, which has been extensively used in the pattern recognition and machine learning literature, is extended to simplify dense point sets. As shown in Figure 2, the k-means algorithm is used to subdivide the point cloud into c clusters. The size of the clusters is equal to 5% of the size of the original set of points. Subsequently, Shannon’s entropy is used to select the clusters to be deleted.

Figure 2: A diagram showing how the new point cloud simplification method works.

In this paper, we present a new robust approach based on clustering and Shannon’s entropy. This approach keeps a uniform distribution of the points of the resulting cloud. In addition, it makes it easy to control the overall density of the coarse cloud by simply defining the size of the clusters. As shown in Figure 2, this approach simplifies the 3D point cloud while preserving the characteristics of the model represented by the original point cloud. Moreover, this simplification method preserves contours and sharp features, and small features are maintained in the simplified point sets. The new method can also be adapted to simplify nonuniformly distributed point sets.

Data clustering in small sets of points, using information theoretic clustering algorithm , makes it possible to obtain groups containing points having a great similarity, which guarantees a good quality of simplification with an acceptable calculation time. To subdivide data sample into groups of 3D points, our technique of simplification is based on information theoretic clustering algorithm .

Next, the selection of relevant points in each cluster is done using Shannon’s entropy. The set of relevant points consists of the representative data samples, which contain more information, selected from the original dataset based on the proposed sample selection algorithm.

Compared to other simplification algorithms, such as those of Shi et al., Lee et al., and Miao et al., the advantages of the new algorithm can be analyzed along several factors.

Firstly, our simplification method preserves borders. This preservation of the integrity of the original border is attributed to the nature of our method: it uses Shannon entropy, which keeps the clusters that have a high entropy value, and this is the case for borders. Secondly, the novel algorithm preserves the compactness of the surface obtained from the simplified point cloud. This characteristic is measured by calculating the percentage of compact triangles using (11), proposed by Gueziec. The surfaces used in this article are constructed using the ball-pivoting method.

The summary of contributions is as follows:

Subdividing the 3D dataset into clusters using k-means clustering, which is widely applied in the pattern recognition and machine learning literature

Applying Shannon’s entropy, as used in data classification, to select the clusters of the 3D point cloud to remove

Validating the effectiveness and performance of the novel method through experimental results and comparison with other simplification methods

The full description of the 3D point simplification algorithm, Algorithm 1, is as follows:

Algorithm 1: Simplification of a 3D point cloud based on the clustering algorithm and Shannon’s entropy.

Input:
  X = {x_1, x_2, …, x_N}: the data sample (point cloud)
  C: the array in which the cluster indexes are stored
  c: the number of clusters
  n: the number of clusters to delete (n < c)
  Ecmin = E(R_1): minimal entropy

Begin
  Decompose the initial set of points X into c small clusters, denoted X = {R_j}, j = 1, 2, …, c, using the k-means algorithm
  For i = 1 to n
    For j = 2 to c
      Compute the global entropy of cluster j using all the data samples in R_j = {y_1, y_2, …, y_m} according to equation (7); denote this entropy E(R_j)
      If E(R_j) < Ecmin then
        Ecmin ← E(R_j)
        pos ← j
      End if
    End for
    For j = pos to c
      C_j ← C_{j+1}    (remove the minimal-entropy cluster by shifting the index array)
    End for
  End for
  X′ = X − {C_i}, i = 1, …, n
End.

We note that the level of simplification of our approach is mainly determined by the user. This level is defined by the number n of clusters to be removed and by the size of these clusters. In this work, the size of each cluster constituting the original point cloud is equal to 5% of the number of points of the original point cloud.
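The steps above can be sketched end to end in Python. This is a hypothetical reading of the method, not the authors' MATLAB code: in particular, estimating a cluster's entropy by normalizing per-point k-NN densities (equations (5) and (7)) into a discrete distribution is our own assumption, and all names are illustrative:

```python
import math

def cluster_entropy(cluster, k=3):
    """Entropy of one cluster: k-NN density at each point (equation (5)),
    normalized into a discrete distribution, then Shannon entropy (equation (7))."""
    d = len(cluster[0])
    c_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume C_d
    dens = []
    for p in cluster:
        # Index k of the sorted distances skips the point itself (distance 0).
        r = sorted(math.dist(p, q) for q in cluster)[k]
        dens.append(k / (len(cluster) * c_d * r ** d))
    total = sum(dens)
    probs = [v / total for v in dens]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def simplify(points, labels, n, k=3):
    """Drop the n clusters with the lowest entropy and keep the rest."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    doomed = set(sorted(clusters, key=lambda l: cluster_entropy(clusters[l], k))[:n])
    return [p for p, l in zip(points, labels) if l not in doomed]

# Two toy clusters of six 3D points each; removing n = 1 cluster keeps the other intact.
c0 = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
c1 = [(10, 10, 10), (11, 10, 10), (10, 11, 10),
      (10.1, 10, 10), (10, 10.1, 10), (13, 13, 13)]
kept = simplify(c0 + c1, [0] * 6 + [1] * 6, n=1)
```

In a real pipeline, the labels would come from k-means and each cluster would hold about 5% of the original points, as described above.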

6. Results and Discussion

The new technique was implemented using MATLAB and the MeshLab software. The algorithm was run on a PC with a 64-bit Intel Core i5-2540M CPU at 2.60 GHz. The David model and the Stanford Bunny model tested in this paper were developed at Stanford University. The Fandisk, Max Planck, Genus, and Bimba models were obtained from the AIM@SHAPE database.

In order to assess the robustness of the proposed technique, we apply it to various 3D objects of different sizes and topologies. To ensure a better reconstruction, the surfaces of all the simplified point clouds were reconstructed using the MeshLab software.

6.1. Computation of Compactness

Computing the compactness of the original and simplified surfaces of Bimba gives 65.9498% and 66.7420%, respectively. These values represent the percentages of compact triangles in the two surfaces. As shown in Figures 3 and 4, this method preserves and even increases the compactness of the simplified surface of Bimba. The compactness is calculated using (11).

Figure 3: The compactness of the original surface of the Bimba point set.

Figure 4: The compactness of the simplified surface of the Bimba point set.

6.2. Results of the Novel Simplification Method

The novel strategy delivers balanced point clouds. Figure 5 presents three different models: the David model, whose original number of 3D points decreased from 182996 to 177454; the Max Planck model, whose point set was reduced from 49089 to 48481; and the Bimba model, which was reduced from 74764 to 73458.

Figure 5: Various models simplified using the novel method. Left column: the original point clouds (triangulated). Right column: the simplified point sets (triangulated).

Among the models tested in this paper, we used nonuniform objects such as the models of David, Bimba, and Max Planck. After simplification of these point clouds using the new method, we obtained satisfying results with the preservation of small details. Therefore, we can use the new technique for the simplification of nonuniform point clouds.

Figure 6 shows two models simplified using the new technique. These point sets have boundaries. The Genus model was simplified from 1234 to 1134, and the Fandisk model was reduced from 103568 to 93809. The experimental results obtained in Figure 6(b) indicate that the new technique can preserve the boundaries. Furthermore, the original sharp edges were well maintained, which again illustrates the superiority of our technique.

Figure 6: The Fandisk and Genus models simplified using the novel method. (a) The original point sets (triangulated). (b) The simplified point sets (triangulated).

The novel method can produce some sparser level-of-detail point sets while preserving the small features and the sharp edges. In Figure 7, the sharp edges of the bunny model can be clearly seen when the point set is reduced from 16130 to 15813. This example demonstrates the good performance of the proposed method.

Figure 7: Simplification of the Bunny model at different levels of detail (triangulated).

6.3. Comparison with Other Simplification Methods

The adaptive simplification of point clouds using k-means clustering of Shi et al. and the 3D Grid method were employed for a comparative study. The simplification results were triangulated with the MeshLab software. In Figure 8, the famous Fandisk model was simplified. Since there was no redundant data in the original model (2502 vertices, 5000 faces), we increased the number of vertices with Geomagic Studio; the final number of vertices was 103570. As shown in Figures 8 and 9, the new simplification technique gives better results, both in terms of the number of points deleted and in terms of the error between the original and simplified surfaces. We obtain uniformly distributed sparse sampling points in the flat areas and the necessary dense points in the high-curvature regions. The sharp edges of the Fandisk model are well maintained. The method of Shi et al. and the 3D Grid method can also preserve sharp edges, but they assign too many sampling points to the sharp edges. The 3D Grid method preserves fewer points in the flat areas, which leads to imbalance, unlike the proposed technique, which, as shown in Figure 4, produces balanced simplified surfaces. Figures 8 and 9 and Table 1 show that the error between the original surface and the simplified surface obtained with the new method is small compared to the error obtained with the method of Shi et al. and the 3D Grid method, which shows that our technique gives a simplified point cloud close to the original one.

Figure 8: Simplification results of the Fandisk model (triangulated); the original number of points is 103570. (a) Simplified using the novel method with cluster size and deleted cluster number parameters (2590, 310), yielding 9759 points. (b) Simplified using the method of Shi et al. with space interval and normal vector deviation threshold parameters (0.076, 0.13), yielding 9682 points. (c) Simplified using the 3D Grid method with space interval and standard normal vector deviation parameters (0.0696, 0.41), yielding 9694 points.

Figure 9: Simplification error of the Fandisk point cloud: (a) original point set, (b) simplified point set, (c) difference between the original and simplified point clouds.

Table 1: Statistics of the new simplification technique applied to different 3D point clouds.

Object     | Original points | Simplified points | Original clusters | Deleted clusters | Average error | Max error
David      | 182996          | 177454            | 458               | 20               | 0.000690      | 0.009997
Fandisk    | 103568          | 93809             | 2590              | 310              | 0.000434      | 0.009989
Bimba      | 74764           | 73458             | 3739              | 100              | 0.000002      | 0.005166
Max Planck | 49089           | 48481             | 2455              | 50               | 0.000024      | 0.009972
Bunny      | 16130           | 15489             | 807               | 60               | 0.0001        | 0.009988
Genus      | 2496            | 2436              | 832               | 30               | 0.000113      | 0.009878
7. Conclusion

In this work, Shannon’s entropy, which has been largely used in data processing, and the k-means clustering algorithm, which has been extensively used in the pattern recognition and machine learning literature, have been extended to reduce 3D point clouds. This simplification is achieved by removing redundant and less relevant groups of 3D points, namely those with a minimum entropy value. The clusters are obtained using the k-means clustering algorithm. The new method is mainly governed by two factors: the number of original clusters and the number of deleted clusters. The studies and illustrations above show that, once both factors are tuned, this new method can be applied to different levels of detail and different forms of 3D point clouds and produce well-balanced surfaces, which makes it robust, as the results show.

Data Availability

The experimental data, which are in the form of 3D objects, used to support the results of this study are downloadable from the AIM@SHAPE database included in references.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] Mahdaoui A., Bouazi A., Marhraoui Hsaini A., Sbai E. H., "Entropic method for 3D point cloud simplification," in Innovations in Smart Cities and Applications, Springer, Cham, Switzerland, 2018, pp. 613-621. doi:10.1007/978-3-319-74500-8_56
[2] Boehler W., Marbs A., "3D scanning instruments," in Proceedings of the CIPA WG, Corfu, Greece, September 2002, pp. 9-18.
[3] Pauly M., Gross M., Kobbelt L. P., "Efficient simplification of point-sampled surfaces," in Proceedings of IEEE Visualization 2002, Boston, MA, USA, October 2002, pp. 163-170. doi:10.1109/VISUAL.2002.1183771
[4] Wu J., Kobbelt L., "Optimized sub-sampling of point sets for surface splatting," Computer Graphics Forum, vol. 23, no. 3, pp. 643-652, 2004. doi:10.1111/j.1467-8659.2004.00796.x
[5] Linsen L., Point Cloud Representation, Technical report, Karlsruhe Institute of Technology, Karlsruhe, Germany, 2001.
[6] Dey T. K., Giesen J., Hudson J., "Decimating samples for mesh simplification," in Proceedings of the 13th Canadian Conference on Computational Geometry, Ontario, Canada, August 2001, pp. 85-88.
[7] Alexa M., Behr J., Cohen-Or D., Fleishman S., Levin D., Silva C. T., "Point set surfaces," in Proceedings of IEEE Visualization 2001, San Diego, CA, USA, October 2001, pp. 21-28. doi:10.1109/VISUAL.2001.964489
[8] Garland M., Heckbert P. S., "Surface simplification using quadric error metrics," in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), Los Angeles, CA, USA, August 1997, pp. 209-216. doi:10.1145/258734.258849
[9] Xuan W., Hua X., Chen X., Zou J., He X., "A new progressive simplification method for point cloud using local entropy of normal angle," Journal of the Indian Society of Remote Sensing, vol. 46, no. 4, pp. 581-589, 2018. doi:10.1007/s12524-017-0730-6
[10] Leal N., Leal E., German S. T., "A linear programming approach for 3D point cloud simplification," IAENG International Journal of Computer Science, vol. 44, no. 1, pp. 60-67, 2017.
[11] Ji C., Li Y., Fan J., Lan S., "A novel simplification method for 3D geometric point cloud based on the importance of point," IEEE Access, vol. 7, pp. 129029-129042, 2019. doi:10.1109/ACCESS.2019.2939684
[12] Mahdaoui A., Bouazi A., Hsaini A. M., Sbai E. H., "Comparison of K-means and fuzzy C-means algorithms on simplification of 3D point cloud based on entropy estimation," Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 5, pp. 38-44, 2017. doi:10.25046/aj020508
[13] Moenning C., Dodgson N. A., "A new point cloud simplification algorithm," in Proceedings of the 3rd IASTED Conference on Visualization, Imaging and Image Processing, Benalmádena, Spain, September 2003, pp. 1027-1033.
[14] Diday E., Govaert G., Lechevallier Y., Sidi J., "Clustering in pattern recognition," in Digital Image Processing, Springer, Dordrecht, Netherlands, 1981, pp. 19-58.
[15] Brodley C. E., Danyluk A. P. (eds.), Machine Learning: Proceedings of the 18th International Conference (ICML 2001), Williamstown, MA, USA, June 2001.
[16] Shi B.-Q., Liang J., Liu Q., "Adaptive simplification of point cloud using k-means clustering," Computer-Aided Design, vol. 43, no. 8, pp. 910-922, 2011. doi:10.1016/j.cad.2011.04.001
[17] Xu R., Wunsch D. C., Clustering, IEEE Press, Piscataway, NJ, USA, 2008.
[18] Shannon C. E., "A mathematical theory of communication," Bell System Technical Journal, vol. 27, no. 3, pp. 379-423, 1948. doi:10.1002/j.1538-7305.1948.tb01338.x
[19] Guo D., Shamai S., Verdu S., "Mutual information and minimum mean-square error in Gaussian channels," IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1261-1282, 2005. doi:10.1109/TIT.2005.844072
[20] Erdogmus D., Agrawal R., Principe J. C., "A mutual information extension to the matched filter," Signal Processing, vol. 85, no. 5, pp. 927-935, 2005. doi:10.1016/j.sigpro.2004.11.018
[21] Vikjord V. V., Jenssen R., "Information theoretic clustering using a k-nearest neighbors approach," Pattern Recognition, vol. 47, no. 9, pp. 3070-3081, 2014. doi:10.1016/j.patcog.2014.03.018
[22] Wang J., Li X., Ni J., "Probability density function estimation based on representative data samples," in Proceedings of the 2011 IET International Conference on Communication Technology and Application (ICCTA 2011), Beijing, China, October 2011, pp. 694-698. doi:10.1049/cp.2011.0757
[23] MacQueen J. B., "Some methods for classification and analysis of multivariate observations," in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 1967, pp. 281-297.
[24] Jiao J., Venkat K., Han Y., Weissman T., "Maximum likelihood estimation of functionals of discrete distributions," IEEE Transactions on Information Theory, vol. 63, no. 10, pp. 6774-6798, 2017. doi:10.1109/TIT.2017.2733537
[25] Parzen E., "On estimation of a probability density function and mode," The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 1065-1076, 1962. doi:10.1214/aoms/1177704472
[26] Rosenblatt M., "Remarks on some nonparametric estimates of a density function," The Annals of Mathematical Statistics, vol. 27, no. 3, pp. 832-837, 1956. doi:10.1214/aoms/1177728190
[27] Silverman B. W., Density Estimation for Statistics and Data Analysis, Chapman & Hall, London, UK, 1986.
[28] Müller H.-G., Petersen A., "Density estimation including examples," in Wiley StatsRef: Statistics Reference Online, John Wiley & Sons, Chichester, UK, 2016, pp. 1-12.
[29] Li C., Zhang S., Zhang H., "Using the K-nearest neighbor algorithm for the classification of lymph node metastasis in gastric cancer," Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 876545, 2012. doi:10.1155/2012/876545
[30] Lou Y., "Storage and allocation of English teaching resources based on k-nearest neighbor algorithm," International Journal of Emerging Technologies in Learning (iJET), vol. 14, no. 17, pp. 102-113, 2019. doi:10.3991/ijet.v14i17.11188
[31] Vidueira Ferreira J. E., da Costa C. H. S., de Miranda R. M., de Figueiredo A. F., "The use of the k nearest neighbor method to classify the representative elements," Educación Química, vol. 26, no. 3, pp. 195-201, 2015. doi:10.1016/j.eq.2015.05.004
[32] Gu R., "Multiscale Shannon entropy and its application in the stock market," Physica A: Statistical Mechanics and Its Applications, vol. 484, pp. 215-224, 2017. doi:10.1016/j.physa.2017.04.164
[33] Naidu M. S. R., Rajesh Kumar P., Chiranjeevi K., "Shannon and fuzzy entropy based evolutionary image thresholding for image segmentation," Alexandria Engineering Journal, vol. 57, no. 3, pp. 1643-1655, 2018. doi:10.1016/j.aej.2017.05.024
[34] Shannon C. E., "Communication theory of secrecy systems," Bell System Technical Journal, vol. 28, no. 4, pp. 656-715, 1949. doi:10.1002/j.1538-7305.1949.tb00928.x
[35] Cignoni P., Rocchini C., Scopigno R., "Metro: measuring error on simplified surfaces," Computer Graphics Forum, vol. 17, no. 2, pp. 167-174, 1998. doi:10.1111/1467-8659.00236
[36] Miao Y., Pajarola R., Feng J., "Curvature-aware adaptive re-sampling for point-sampled geometry," Computer-Aided Design, vol. 41, no. 6, pp. 395-403, 2009. doi:10.1016/j.cad.2009.01.006
[37] Gueziec A., "Locally toleranced surface simplification," IEEE Transactions on Visualization and Computer Graphics, vol. 5, no. 2, pp. 168-189, 1999. doi:10.1109/2945.773810
[38] Bank R. E., PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1998.
[39] Lee K. H., Woo H., Suk T., "Point data reduction using 3D grids," The International Journal of Advanced Manufacturing Technology, vol. 18, no. 3, pp. 201-210, 2001. doi:10.1007/s001700170075
[40] Mahdaoui A., Marhraoui Hsaini A., Bouazi A., Sbai E. H., "Comparative study of combinatorial 3D reconstruction algorithms," International Journal of Engineering Trends and Technology, vol. 48, no. 5, pp. 247-251, 2017. doi:10.14445/22315381/IJETT-V48P244
[41] Levoy M., "The digital Michelangelo project: 3D scanning of large statues," in Proceedings of ACM SIGGRAPH 2000, New Orleans, LA, USA, July 2000, pp. 131-144.
[42] "AIM@SHAPE Database," 2019.
[43] Cignoni P., Callieri M., Corsini M., Dellepiane M., Ganovelli F., Ranzuglia G., "MeshLab: an open-source mesh processing tool," in Proceedings of the 6th Eurographics Italian Chapter Conference, Fisciano, Italy, July 2008, pp. 129-136.
[44] Geomagic User Guide, 2013.