K-Means Genetic Algorithms with Greedy Genetic Operators

The k-means problem is one of the most popular models of cluster analysis. The problem is NP-hard, and the modern literature offers many competing heuristic approaches. Sometimes practical problems require obtaining a result (albeit not exact), within the framework of the k-means model, which would be difficult to improve by known methods without a significant increase in the computation time or computational resources. In such cases, genetic algorithms with a greedy agglomerative heuristic crossover operator might be a good choice. However, their computational complexity makes them difficult to use for large-scale problems. The crossover operator, which includes the k-means procedure and takes the absolute majority of the computation time, is essential for such algorithms, and other genetic operators such as mutation are usually eliminated or simplified. The importance of maintaining the population diversity, in particular with the use of a mutation operator, grows with the data volume and the available computing resources such as graphics processing units (GPUs). In this article, we propose a new greedy heuristic mutation operator for such algorithms and investigate the influence of new and well-known mutation operators on the objective function value achieved by the genetic algorithms for large-scale k-means problems. Our computational experiments demonstrate the ability of the new mutation operator, as well as the mechanism for organizing subpopulations, to improve the results of the algorithm.


Introduction
The k-means problem is a continuous unconstrained global optimization problem which has become a classic clustering model. This problem is proved to be NP-hard [1,2], so it is necessary to find a compromise between the computation time and the solution preciseness. The aim of the problem is to find a set S = {X_1, ..., X_k} of k points X_1, ..., X_k ∈ R^d, called centroids, in a d-dimensional space that minimizes the sum of squared distances from N known points (data vectors) A_1, ..., A_N ∈ R^d to the nearest centroid [3]:

F(S) = \sum_{i=1}^{N} \min_{j \in \{1,\dots,k\}} L^2(X_j, A_i) \to \min,  (1)

where L(·,·) is the distance between two points (usually Euclidean) and k is given.
Data vector indexes for which the jth centroid is the nearest one form a set (cluster) C_j, j = 1, ..., k. An equivalent problem setting is as follows:

\sum_{j=1}^{k} \sum_{i \in C_j} L^2(X_j, A_i) \to \min,  (2)

where X_j is the centroid of the jth cluster. The simplest and most popular local search algorithm is the k-means algorithm [4,5], also called the Alternate Location and Allocation (ALA) procedure [6,7] or the Lloyd algorithm. A similar procedure called EM (Expectation Maximization) [8,9] and its modifications [10][11][12] are the most popular algorithms for separating a mixture of probability distributions. The k-means algorithm improves an intermediate solution sequentially, which enables us to find a local minimum.
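To make the alternation concrete, the k-means (ALA/Lloyd) procedure can be sketched in plain Python. This is our own illustrative code, not an implementation from the paper; the names `k_means` and `sse` are ours.

```python
import math

def k_means(data, centroids, max_iter=100):
    """Lloyd's (ALA) procedure: alternately assign each data vector to its
    nearest centroid and move each centroid to the mean of its cluster."""
    centroids = [list(c) for c in centroids]
    labels = None
    for _ in range(max_iter):
        new_labels = [min(range(len(centroids)),
                          key=lambda j: math.dist(a, centroids[j]))
                      for a in data]
        if new_labels == labels:
            break  # assignment is stable: a local minimum is reached
        labels = new_labels
        for j in range(len(centroids)):
            members = [a for a, lab in zip(data, labels) if lab == j]
            if members:  # leave the centroid of an empty cluster unchanged
                centroids[j] = [sum(x) / len(members) for x in zip(*members)]
    return centroids, labels

def sse(data, centroids):
    """Objective (1): sum of squared distances to the nearest centroid."""
    return sum(min(math.dist(a, c) ** 2 for c in centroids) for a in data)
```

Starting from two reasonably placed seeds on a toy dataset, the procedure converges after one centroid update, and the objective (1) equals the within-cluster sum of squares.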
Technically, this is not a true local search algorithm in terms of continuous optimization, as it searches for a new solution not necessarily in the ε-neighborhood of an existing solution. Nevertheless, it yields a solution which is locally optimal within an ε-neighborhood.
If we use distances instead of squared distances in (1), we deal with the continuous p-median problem. The similarity of these NP-hard problems [13,14] allows us to use similar approaches to solving them. However, unlike the p-median problem with Euclidean distances, finding the exact solution of a 1-means problem (the k-means problem with k = 1, or the centroid search problem) in accordance with Algorithm 1 is trivial, and finding a local minimum of the k-means problem takes less computational resources.
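As a quick illustration of why the 1-means subproblem is trivial: the coordinate-wise mean is the exact minimizer of the sum of squared distances to a single center. The toy numbers below are ours.

```python
def centroid(points):
    """Exact 1-means solution: the coordinate-wise mean minimizes
    the sum of squared distances to a single center."""
    n = len(points)
    return [sum(coords) / n for coords in zip(*points)]

def sq_dist_sum(points, center):
    """Sum of squared Euclidean distances from points to a center."""
    return sum(sum((x - c) ** 2 for x, c in zip(p, center)) for p in points)
```

Perturbing the mean in any direction only increases the sum of squared distances, which is why the centroid update step of the k-means algorithm is cheap and exact.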
This allows the local search to be integrated into various effective global search strategies.
In the early attempts to solve the p-median problem by exact methods (its discrete modifications), the authors used a branch and bound algorithm [15][16][17] for solving very small problems. In [18][19][20], the authors reviewed various heuristic solution techniques for k-means and p-median problems. In [21][22][23], the authors presented local search approaches including the Variable Neighborhood Search (VNS) and concentric search. In [22], Drezner et al. proposed heuristic procedures including the genetic algorithm (GA), for rather small datasets.
Many approaches based on data reduction [24] simplify the problem by selecting some part of the initial dataset and then use these results as an initial solution to the k-means algorithm on the complete dataset [25][26][27][28]. Such aggregation, as well as reduction of the number of data vectors [29], enables us to solve large-scale problems within a reasonable time. However, such approaches lead to a reduction in preciseness. In our research, aimed at obtaining the most precise solutions, we consider only the methods which estimate the objective function (1) directly, without aggregation or approximation approaches.
Modern publications offer many heuristic procedures [19,30] for setting the initial centroids for the k-means algorithm; most of them belong to various evolutionary and random search methods. Local search algorithms and their randomized versions are widely represented. For instance, Variable Neighborhood Search (VNS) algorithms [23,31,32] or agglomerative algorithms [33,34] sometimes show good results. A large number of articles are devoted to the initialization procedures for local search algorithms, such as random seeding and estimating the distribution of the data vectors [30]. The challenge is that, in many cases, even multiple runs of simple local search algorithms from various randomly generated solutions do not lead to a solution that is close to the global optimum. More advanced algorithms enable us to get objective function (1) values many times better than the local search methods [32]. The use of genetic algorithms and other evolutionary approaches to improve the results of the local search is a widely used idea [35][36][37][38]. Such algorithms recombine the local minima obtained by the k-means algorithm. GAs operate with a certain set (population) of candidate solutions and include special genetic operators (algorithms) of initialization, selection, crossover, and mutation. The mutation operator randomly changes the resulting solutions and provides some diversity in the population.
However, in genetic algorithms, as the number of iterations increases, the population degenerates into a certain set of solutions close to each other. Larger populations as well as dynamically growing populations improve this situation. However, simpler algorithms based on the use of the same greedy agglomerative procedures [32,39] often show better results within the same computation time.
In this research, we do not discuss the adequacy of the k-means clustering model, which is actually questionable. We only focus on the preciseness and stability of the obtained objective function value (1) within the framework of the k-means model.
There are situations when the cost of error is high [9]. In these cases, as well as when comparing the accuracy of an algorithm with a certain standard solution (not necessarily a globally optimal one), we need to get a result that would be difficult to enhance by other known methods without a meaningful increase of the computation time. The evolution of parallel processing systems such as graphics processing units (GPUs) makes multiple runs of local search algorithms very cheap. In this case, large-scale problems (up to several millions of data vectors) can be solved with the use of the most advanced algorithms providing the highest preciseness. As our study shows, for large-scale problems, further improvement in the results of the genetic algorithms with greedy heuristic crossover can be achieved by using a special mutation operator and partially isolated solution subpopulations. The aim of this paper is to introduce a new k-means genetic algorithm with the greedy agglomerative crossover operator, a special greedy agglomerative mutation operator, and subpopulations.

The rest of this article is organized as follows. In Section 2, we give a brief overview of known approaches to the development of k-means genetic algorithms. In Section 3, we give an overview of known mutation genetic operators used in k-median genetic algorithms in accordance with various approaches to chromosome encoding, as well as of other instruments for increasing the population diversity. In Section 4, we propose new modifications to the genetic algorithms with a greedy heuristic crossover operator. Such modifications include partially isolated subpopulations and the use of a new mutation operator based on the greedy heuristic procedure. In Section 5, we describe the results of our computational experiments, which demonstrate the efficiency of our new modifications on large datasets.

K-Means Genetic Algorithms
The idea of various genetic algorithms is based on a recombination (interchange) of elements in a set ("population") of candidate solutions ("individuals") encoded by "chromosomes." Elements of the chromosomes are called "genes" or "alleles." Each chromosome is a vector of genes (bits, integers, or real numbers) representing a solution. The goal of gene recombination is to achieve the best value of an objective function called the "fitness function." The appearance of the first genetic algorithms for solving the discrete p-median problem [40] preceded the genetic algorithms for the k-means problem (k-means genetic algorithms). Alp et al. [41] proposed a rather fast and precise algorithm with a special "greedy" (agglomerative) heuristic procedure used as the crossover genetic operator for the network p-median problem. Such algorithms solve discrete network problems and use a very simple binary chromosome encoding (1 for the network nodes selected as the centers of the clusters, and 0 for those not selected).
In the genetic algorithms for the k-means and similar problems with binary-encoded chromosomes, many mutation techniques can be used. For example, in [42], the authors represent the chromosome with binary strings composed of binary-encoded features (coordinates) of the centroids. The mutation operator arbitrarily alters one or more components (binary substrings) of a selected chromosome.
If the centers or centroids are searched for in a continuous space, some genetic algorithms still use the binary encoding [38,43,44]. In the k-means algorithm, the initial solutions are usually subsets of the dataset A_1, ..., A_N. In such a chromosome code, 1 means that the corresponding data vector is selected as an initial centroid, and 0 means that it is not. In this case, some local search algorithm (the k-means algorithm or similar) is used at each iteration of the GA to estimate the final value (local minimum) of the objective function (1).
In [45], the authors refer to their algorithm as "Evolutionary k-Means." However, they actually solved an alternative problem which aimed to increase the clustering stability instead of minimizing (1). Their algorithm operates with binary consensus matrices and uses two types of mutation genetic operators: cluster split (dissociative) and cluster merge (agglomerative) mutation. In [46], the chromosomes are strings of integers representing the cluster number for each of the clustered objects, and the authors solve the k-means problem with simultaneous determination of the number of clusters based on the silhouette [47] and the Davies-Bouldin criteria [48], which are used as the fitness functions. Thus, in [46], the authors also solve a problem with a mathematical statement other than (1). Similar encoding is used in [37], where the authors propose a mutation operator which changes the assignment of individual data vectors to the clusters.
In [49], the authors described the mutation operator as a procedure that guarantees population diversity (variability). Usually, for the k-means and p-median problems, the mutation randomly changes one or many chromosomes, replacing some centroids [36,37]. Mutation and crossover are the most important genetic operators and play different roles: the crossover seeks to preserve the features of parent solutions, while the mutation tries to cause small local changes in the solutions. Compared to crossover, mutation is usually regarded as a secondary operator with a low probability μ [50]. A high frequency of mutations makes a genetic algorithm search randomly and chaotically. Nevertheless, many studies have shown that evolutionary algorithms without a crossover can work better than a standard genetic algorithm if the mutation is combined with an effective selection operator [51][52][53][54]. Mutation is performed on a single parent solution.
In [36], the authors encode the solutions (chromosomes) in their GA as sets of centroids represented by their coordinates (real vectors or arrays). The genetic algorithms with the greedy heuristic crossover operator use the same principle [55].
Thus, various genetic algorithms for the k-means and similar problems can be classified into three categories in accordance with the chromosome encoding method:

(a) Integer encoding: each gene represents a data vector A_1, ..., A_N, and its value is the cluster number. Such algorithms are declared to solve the k-means or p-median problem; however, they may use an objective function other than (1). A local search for the minimum of (1) is sometimes declared to be their mutation operator.

(b) Integer or binary encoding: each gene corresponds to a centroid (cluster) and holds the index i ∈ {1, ..., N} of the data vector selected as the initial centroid for the local search method. Such algorithms may use a wide variety of crossover and mutation operators.

(c) Real (direct) encoding: each gene is a centroid encoded by its coordinates. Such algorithms are able to demonstrate the most precise results. However, the modern literature offers a very limited variety of mutation operators for such algorithms; usually, they do not use any mutation [38,41,43].
The greedy heuristic crossover operator can be described as a two-step algorithm. The first step combines two known ("parent") solutions (chromosomes) into one intermediate invalid solution with an excessive number of centroids (clusters). At the second step (the greedy agglomerative procedure), the algorithm removes one excessive centroid at each iteration so that the removal results in the least significant growth of the objective function (1) [41,43]; see Algorithm 2.
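A minimal Python sketch of this two-step crossover (our own illustrative code, not the authors'; the full operator in the paper additionally polishes intermediate solutions with the kMeans procedure, which we omit here for brevity): merge the parents' centroid sets, then repeatedly drop the centroid whose removal increases objective (1) the least, until k centroids remain.

```python
import math

def obj(data, centroids):
    """Objective (1): sum of squared distances to the nearest centroid."""
    return sum(min(math.dist(a, c) ** 2 for c in centroids) for a in data)

def greedy_crossover(data, parent1, parent2, k):
    """Greedy agglomerative crossover: merge both parents' centroid sets
    into one invalid solution, then remove one centroid at a time so that
    each removal causes the least growth of objective (1)."""
    s = list(parent1) + list(parent2)  # intermediate solution, up to 2k centroids
    while len(s) > k:
        # find the centroid whose removal hurts the objective the least
        best_i = min(range(len(s)),
                     key=lambda i: obj(data, s[:i] + s[i + 1:]))
        del s[best_i]
    return s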
Algorithms 3 and 4 are known heuristic procedures [32,41,43] which implement the first step of the greedy heuristic crossover operator and then run the Greedy() heuristic procedure.
(1) For each centroid X_i, i = 1, ..., k, define its cluster C_i ⊂ {A_1, ..., A_N} as the subset of data vectors having X_i as their closest centroid.
(2) Recompute each centroid X_i as the mean of the data vectors in its cluster C_i.
(3) Repeat Steps 1 and 2 until the clusters stop changing.

ALGORITHM 1: kMeans().
These algorithms can be included in various global search strategies. Combining items (centroids) of a solution S′ with the items of another solution S″ and running Algorithm 1, we get a set of "child" solutions. These solutions are used as the neighborhoods in which a better solution is sought. Thus, the second solution S″ is a parameter of the neighborhood [32]. The general framework of the GA for the k-means and similar location problems can be described as Algorithm 5. The objective function F_fitness is (1). We used tournament selection (tournament replacement, see Algorithm 6) for Step 10 of Algorithm 5. Such algorithms usually operate with a very small population, and other selection procedures do not improve the results significantly [41,43,46].
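The tournament replacement used at Step 10 might look as follows. This is an illustrative sketch with our own names; the exact tie-breaking in Algorithm 6 may differ.

```python
import random

def tournament_replace(population, fitness, child, child_fitness, rng=random):
    """Tournament replacement: draw two random individuals and overwrite
    the one with the worse (larger) objective value by the child."""
    i1, i2 = rng.sample(range(len(population)), 2)
    loser = i1 if fitness[i1] >= fitness[i2] else i2
    population[loser] = child
    fitness[loser] = child_fitness
    return loser
```

Because the loser of the tournament is always the worse of the drawn pair, the current best individual is never displaced by this procedure.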
In the GAs with greedy heuristic crossover [43,44], Algorithms 3 and 4 are used as the crossover genetic operator. These operators are computationally expensive due to multiple runs of the kMeans algorithm. In the case of large-scale problems and very strict time limitations, GAs with a greedy heuristic crossover operator perform only a few iterations. The population size is usually small, 10-25 chromosomes. Dynamically growing populations [43,44] are able to improve the results; in this case, Step 7 of Algorithm 5 is replaced by the procedure in Algorithm 7. Thus, in this paper, we intend to improve the GAs with a greedy heuristic crossover operator, which can be described as follows [43,46]:

k-GA-ONE: the GA framework (Algorithm 5) with GreedyONE as the crossover operator, tournament selection (Algorithm 6), dynamic population size adjustment (Algorithm 7), and an empty mutation operator.

k-GA-FULL: the same but with the GreedyFULL crossover operator.

k-GA-RND: the same but the crossover operator (GreedyFULL or GreedyONE) is selected randomly with equal probability.

The empty mutation operator can be replaced with a known or new procedure described in Section 3.

Known Methods of Increasing Population Diversity in the Genetic Algorithms
Despite the widespread use of various genetic algorithms for the k-means problems in the modern literature, there is practically no systematization of the approaches used [56][57][58][59].
For various methods of chromosome encoding, various mutation operators have been developed: bit inversion for binary encoding [50]; exchange, insert, inverse, and offset permutations [60] for variable-length chromosomes; Gaussian mutation [61]; and polynomial mutation for real encoding [62,63]. Some studies suggest combinations of mutation operators [64] or self-adaptive mutation operators [65][66][67]. The efficiency of various mutation operators depends on the GA parameters [53,68,69] and on the problem type [70,71]. However, the number of mutation operators with real encoding for continuous problems is very limited.

… where F″_{i′} is the objective function (1) value
(6) end for
(7) Select a subset S_elim of n_elim centroids, S_elim ⊂ S, |S_elim| = n_elim, with the minimal values of the corresponding variables F″_{i′}

ALGORITHM 2: Greedy() (fragment).

Require: two solutions (sets of centers) S′ and S″
for each item X″_{i′} of S″ do: merge S′ and one item of S″: S ← S′ ∪ {X″_{i′}}; S_{i′} ← Greedy(S)
end for
return the best of the solutions S_1, ..., S_p

ALGORITHM 4: GreedyONE.

The GA for the network p-median problem described in [72] includes the hypermutation operator, which attempts to replace each gene in the chromosome with each gene from the set of genes that were not originally part of the processed chromosome. After each replacement, the algorithm checks whether the objective function value has improved. This operator is computationally expensive due to the numerous objective function evaluations and is, in fact, similar to the local search principle embedded in the j-means algorithm [73]. In [74], the hypermutation algorithm was further developed into the nearest four neighbors' algorithm.
The idea is to reduce computational costs by restricting the set of genes used for the replacement to the nearest neighbors of the gene being replaced. In several works [37,75,76], the authors propose using the kMeans algorithm as a mutation operator.
Each of these algorithms declares a local search as a mutation operator. The GA framework allows us to use a wide variety of genetic operator options. However, the local search is designed to improve an arbitrary solution by transforming it into a local optimum, thereby reducing, rather than increasing, the variety of chromosomes (solutions).
In [36,42], the mutation operator is as follows (uniform random mutation). Randomly generate r ∼ U[0, 1). If r < μ (where μ is the mutation probability), the chromosome mutates. Randomly generate b ∼ U[0, 1). If the current position of a centroid is X_j = (x_{j,1}, ..., x_{j,d}), the mutation operator modifies each coordinate as x_{j,l} ← x_{j,l} ± 2·b·x_{j,l} (and x_{j,l} ← ±2·b if x_{j,l} = 0). Signs "+" and "-" are used with the same probability [42]. This mutation operator shifts the centroid coordinates randomly. A similar technique with an "amplification factor" was used in [44,77]. However, the local minima distribution among the search space is not uniform [49]: new local minima of (1) can be found with higher probability in some neighborhood of a known local minimum than in a neighborhood of a randomly chosen point (here, by a neighborhood, we do not necessarily mean an ε-neighborhood but any subset of solutions which can be obtained by applying some defined procedure to the current solution). Combining local minima (subsets of centroids from two locally minimal solutions) usually outperforms a random shift of the centroid coordinates. The idea of combining local minima is the basic idea of the greedy heuristic crossover operator in genetic algorithms [38,43] and other algorithms [21].

Require: initial population size N_POP (in our experiments, N_POP = 10)
… return solution S_{i*} from the population with the minimal value of f_{i*}
(6) end if
(7) Selection: randomly choose two indexes i_1, i_2 ∈ {1, ..., N_POP}, i_1 ≠ i_2
(8) Run the chosen crossover operator: S_C ← Crossover(S_{i_1}, S_{i_2})
(9) Run the chosen mutation operator: S_C ← Mutation(S_C)
(10) Run the chosen procedure to replace a solution in the population
(11) End loop

ALGORITHM 5: GA with real chromosome encoding for the k-means and p-median problems.
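The uniform random mutation of [36,42] described above can be sketched as follows. This is our illustration: whether the sign is drawn per coordinate or per centroid is our assumption, and the shift formula for zero coordinates follows our reading of [42].

```python
import random

def _shift(x, b, rng):
    """Shift one coordinate by the fraction 2b of its value; the sign is
    chosen with equal probability (zero coordinates get an absolute shift)."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return x + sign * 2.0 * b * x if x != 0.0 else sign * 2.0 * b

def uniform_random_mutation(chromosome, mu=0.05, rng=random):
    """With probability mu, perturb every coordinate of every centroid
    by a common random factor b drawn from U[0, 1)."""
    if rng.random() >= mu:
        return chromosome  # no mutation this time
    b = rng.random()
    return [[_shift(x, b, rng) for x in centroid] for centroid in chromosome]
```

Every nonzero coordinate moves by the same relative amount 2b, only the direction of each move is random, which matches the "random shift of the centroid coordinates" characterization in the text.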
The greedy heuristic crossover operator for the discrete p-median problem proposed in [41] and adapted for the continuous p-median and k-means problems in [38,43] was used in GAs without any mutation operator. Such algorithms demonstrate more accurate results in comparison with many other algorithms for practically important middle-size problems.

The other common approach to increasing the diversity in a population is to create subpopulations that develop more or less autonomously. Algorithms that produce subpopulations containing individuals gathered around optima form a wide class of such methods. The fitness sharing method [78] allows the evolutionary algorithm to search simultaneously in different areas (niches) corresponding to different local (or global) optima, i.e., this method allows one to identify and localize multiple optima in the search space. The group of crowding methods [79][80][81] also uses a niche approach. The general concept of crowding is for individuals to fight for survival with similar offspring, applying tournament selection to a high-likeness parent-child pair. The main idea of genetic chromodynamics [82] is to force the formation and maintenance of stable subpopulations. The proposed scheme of local interaction provides stabilization of the subpopulations in the early stages of the search. Subpopulations co-develop and converge to several optimal solutions.
In [83], the authors present the roaming optimization method. By using subpopulations developing in isolation, multiple optima are found. This method exploits the tendency of evolutionary algorithms toward premature convergence, turning this disadvantage into an advantage in the process of detecting local optima.

New Modifications to the Genetic Algorithms
The essence of our new mutation operator (greedy heuristic mutation, GHM) is as follows. We apply the crossover operator to the single parent chromosome and a randomly generated chromosome improved by the kMeans algorithm. In Step 9 of Algorithm 5, the mutation operator is replaced with Algorithm 8.
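In the same illustrative style as the earlier sketches (all names are ours, not the authors' code), GHM can be written as: generate a random chromosome from sampled data vectors, improve it with k-means into a fresh local minimum, and cross it with the parent using the greedy agglomerative procedure.

```python
import math, random

def obj(data, cents):
    """Objective (1): sum of squared distances to the nearest centroid."""
    return sum(min(math.dist(a, c) ** 2 for c in cents) for a in data)

def k_means(data, cents, iters=50):
    """Plain Lloyd iterations (fixed count, for brevity of the sketch)."""
    cents = [list(c) for c in cents]
    for _ in range(iters):
        labels = [min(range(len(cents)), key=lambda j: math.dist(a, cents[j]))
                  for a in data]
        for j in range(len(cents)):
            members = [a for a, lab in zip(data, labels) if lab == j]
            if members:
                cents[j] = [sum(x) / len(members) for x in zip(*members)]
    return [tuple(c) for c in cents]

def greedy_merge(data, s1, s2, k):
    """Greedy agglomerative step: merge two centroid sets, then drop the
    centroid whose removal grows objective (1) the least, down to k."""
    s = list(s1) + list(s2)
    while len(s) > k:
        i = min(range(len(s)), key=lambda i: obj(data, s[:i] + s[i + 1:]))
        del s[i]
    return s

def ghm(data, parent, rng=random):
    """Greedy heuristic mutation (GHM): cross the parent with a randomly
    generated chromosome improved by k-means (a fresh local minimum)."""
    k = len(parent)
    random_chrom = k_means(data, rng.sample(data, k))
    child = greedy_merge(data, parent, random_chrom, k)
    return k_means(data, child)
```

Unlike a blind random shift, the mutant is assembled from centroids of two locally minimal solutions, so it cannot be much worse than the parent and often improves it.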
Despite the small populations in the genetic algorithms with the greedy agglomerative crossover, the application of a simple approach with two subpopulations allows us to improve the result of the algorithm. In our research, within the population, we organized two subpopulations of equal volume. For the crossover and tournament, both chromosomes are mainly selected within the same subpopulation. If one of the subpopulations does not provide an improvement in solutions during a certain number of iterations and its record (the best solution) is inferior to the record of the second subpopulation, its individuals are replaced by new ones (reinitialization of the subpopulation). We assumed that chromosomes in the same subpopulation tend to develop in a similar way under the influence of crossover. Mutation of a separate chromosome increases the population diversity; however, under the influence of the crossover, the differences are gradually levelled. Reinitialization of a subpopulation is a substitute for a complete restart of the algorithm while maintaining the record. Thus, Step 7 of Algorithm 5 (selection) is transformed into Algorithm 9.
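The subpopulation mechanics described above can be sketched as follows. This is illustrative code with our own names and parameters; the precise cross-subpopulation selection probability and the reinitialization trigger of the paper's algorithms are our assumptions.

```python
import random

def select_pair(pop_size, sub_size, rng=random, cross_prob=0.1):
    """Select two distinct parent indexes, mostly from the same
    subpopulation; with probability cross_prob the pair spans both."""
    if rng.random() < cross_prob:
        i1 = rng.randrange(0, sub_size)          # from subpopulation 0
        i2 = rng.randrange(sub_size, pop_size)   # from subpopulation 1
    else:
        base = 0 if rng.random() < 0.5 else sub_size
        i1, i2 = rng.sample(range(base, base + sub_size), 2)
    return i1, i2

def maybe_reinit(records, stagnation, limit, reinit):
    """Reinitialize a subpopulation if it stagnated for `limit` iterations
    and its record (best objective, smaller is better) trails the other's."""
    for s in (0, 1):
        if stagnation[s] >= limit and records[s] > records[1 - s]:
            reinit(s)           # replace the individuals of subpopulation s
            stagnation[s] = 0
    return stagnation
```

A stagnating, lagging subpopulation is restarted while the overall record (kept by the leading subpopulation) survives, which substitutes for a full restart of the GA.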
An additional step is added to Algorithm 5 (see Algorithm 11). The idea of the Variable Neighborhood Search with randomized neighborhoods (see [32]) is also based on applying the greedy heuristic procedures (Algorithms 2 and 3) to a current solution and a randomly generated one transformed into a local minimum by Algorithm 1. Our computational experiments (see Section 5) show that the new genetic algorithms with GHM as the mutation operator outperform both the original genetic algorithms with the greedy agglomerative crossover operator (Algorithm 4 with empty mutation) and the Variable Neighborhood Search with randomized neighborhoods.
As mentioned before, the greedy agglomerative crossover operator is computationally expensive. In Algorithm 2, the objective function calculation F″_{i′} ← F(S′) is performed more than (K − k) · k times. Therefore, such algorithms are traditionally considered as methods for solving comparatively small problems (hundreds of thousands of data points and hundreds of centers). However, the rapid development of massive parallel processing systems (GPUs) allows us to solve large-scale problems with reasonable time expenses (minutes).
One of the most important issues of the GAs is the convergence of the entire population into some narrow area (population degeneration) around some local minimum. In the first crossover iterations, the "child" solutions usually have significant advantages in the objective function value in comparison with their "parents" due to the ability of the greedy agglomerative crossover operator to choose much better solutions in comparison with the k-means procedure. On a single central processing unit, such GAs manage to perform only a few crossover operations due to the computationally expensive Greedy() procedure, and the population diversity problem is not important. Our computational experiments show that, with an increase in the computational capacities and of the population size (which grows dynamically with the iteration number), the mutation operator plays a more important role.

Computational Experiments
Parallel (CUDA) implementations of the kMeans() algorithm are known [84,85], and we used this approach in our experiments. All other algorithms were implemented on the central processing unit.

For our experiments, we used the classic datasets from the UCI and Clustering basic benchmark repositories:

(a) Individual Household Electric Power Consumption (IHEPC): energy consumption data of households during several years (more than 2 million data vectors, 7 dimensions), 0-1 normalized data; "date" and "time" columns removed.
(b) SUSY (5 · 10^6 data vectors, 18 dimensions), 0-1 normalized data. Here, we do not take into account the true labelling provided by the database and use this dataset to search for internal structure in the data.

(d) BIRCH3 [10]: groups of points of random size on a plane (100000 data vectors, 2 dimensions).

The results are summarized in Tables 1-6.

For comparison, we used the genetic algorithms with greedy heuristic crossover (k-GA-FULL, k-GA-ONE, and k-GA-RND described in Section 2) as well as the kMeans procedure in the multistart mode and the j-Means algorithm (centers are replaced with the data vectors) [73]. In addition, we ran various Variable Neighborhood Search (VNS) algorithms with randomized neighborhoods formed by the greedy heuristic procedure [32]; see algorithms k-GH-VNS1 and k-GH-VNS2. For the algorithms launched in the multistart mode (j-Means and kMeans), only the best results achieved in each attempt were recorded. The minimum, maximum, average, and median objective function values and the standard deviation are summarized after 30 runs. For all algorithms, we used the same implementation of the kMeans procedure, which consumes the absolute majority of the computation time. The initial population for all genetic algorithms consisted of N_POP = 10 chromosomes.
All algorithms were classified into three groups. The first group consists of known algorithms, including the genetic algorithms with greedy heuristic crossover. Algorithms of the second group are genetic algorithms with greedy heuristic crossover and known mutation operators (k-GA-xxx-m1 for the uniform random mutation and k-GA-xxx-m2 for the scramble mutation [86], where a gene (centroid) is replaced with a randomly chosen data point). We performed our experiments with various values of the mutation probability μ. Algorithms of the third group are genetic algorithms with greedy agglomerative crossover and the new instruments for maintaining the population diversity: k-GA-xxx-GHM are algorithms with the new GHM mutation operator, and k-GA-xxx-SUBPOP are algorithms with the new GHM mutation operator and two subpopulations.

Note (for all tables): "↑⇑" denotes that the advantage of the best algorithm in this group over the known algorithms (group A) is statistically significant ("↑" for the t-test and "⇑" for the Mann-Whitney U test); "↓⇓" denotes that the disadvantage of the best algorithm in this group relative to the known algorithms is statistically significant; "↕⇕" denotes that the advantage or disadvantage is statistically insignificant. The significance level is 0.99.
In each group of algorithms, the best average and median values of the objective function (1) are underlined. We compared the best algorithms in the second and third groups with the best algorithm in the first group (the best of known algorithms) with the use of t-test and Mann-Whitney U test.
In the comparative analysis of algorithm efficiency, the choice of the unit of time plays an important role. The astronomical time spent by an algorithm strongly depends on the peculiarities of its implementation, the ability of the compiler to optimize the program code, and the fitness of the hardware for executing the code of a specific algorithm. Algorithms are often estimated by comparing the number of iterations performed (for example, the number of population generations for a GA) or the number of evaluations of the objective function. In our case, some of the algorithms are not evolutionary, and in the genetic algorithms, the execution time of the crossover operator with the embedded kMeans algorithm can differ hundreds of times. Therefore, comparing the number of generations is unacceptable. Comparing the numbers of objective function calculations is also not quite correct. Firstly, the kMeans algorithm, which consumes almost all of the processor time, does not calculate (1) directly. Secondly, during the operation of the greedy agglomerative crossover operator, the number of centroids changes (decreases from 2k down to k or from k + 1 down to k), and the time spent on computing the objective function also varies. Therefore, we nevertheless chose astronomical time as the scale for comparing the algorithms. Moreover, all the algorithms use the same implementation of the kMeans algorithm launched under the same conditions.
In our computational experiments, the time limitation was used as the stop condition for all algorithms. As can be seen from Figures 1 and 2, the result of each algorithm depends on the elapsed time. Nevertheless, the advantage of the new algorithms remains regardless of the chosen time limit. The range of values in all tables is small; nevertheless, the differences are statistically significant in several cases. In all cases, the new algorithms with the greedy heuristic mutation outperform the known ones or demonstrate approximately the same efficiency (the difference in the results is statistically insignificant). Moreover, the new algorithms demonstrate stable results (a narrow range of objective function values). In most cases, the best results were achieved by the genetic algorithms with nonempty mutation operators.

Conclusions
When solving large-scale clustering problems, traditional local search algorithms often give a result very far from the optimal solution. In this research, we aimed at developing not only a fast but also a highly accurate algorithm, based on genetic algorithms with a greedy heuristic crossover operator, for solving such optimization problems. Genetic algorithms with a greedy agglomerative crossover operator are among the methods for obtaining, within a fixed time, solutions which would be difficult to improve by known methods without a significant increase in computational costs. As the computational results presented in this article show, further improvement in the achieved results of such algorithms is possible by increasing the diversity in their populations.
Computational experiments show that population diversity maintaining mechanisms, such as the mutation genetic operator and subpopulations, improve the features of genetic algorithms with greedy heuristic crossover for the large-scale k-means problem. Moreover, the best results can be shown by algorithms with a mutation operator based on the greedy heuristic crossover operator applied with a randomly generated chromosome (the new greedy heuristic mutation). The similarity in the mathematical formulations of the k-means, k-medoids, and p-median problems, as well as of the problem of mixture probability distribution separation, gives us a reasonable hope for the applicability of similar approaches to those problems, which determines possible directions for further research.

Data Availability
In our work, we used only data from the UCI Machine Learning and Clustering Basic Benchmark repositories, which are available at https://archive.ics.uci.edu/ml/index.php and http://cs.joensuu.fi/sipu/datasets.