Fuzzy Rules for Ant Based Clustering Algorithm



Introduction
How do ants optimize food search? How do social spiders build a communal nest? Why does a flock of birds fly in a V-shaped formation? How do termites collectively build their sophisticated nest structures? How do honey bee swarms cooperatively select their new nesting site? How does a firefly flash its light in such a wonderful pattern? How does a colony coordinate its behavior? How is it possible for social insects and animals to coordinate their actions and create complex patterns? How do such agents perform complex tasks without any central direction or coordination? How do agents in a colony work locally toward a global goal with sufficient flexibility, given that they are not centrally controlled? Collective behaviors in swarms of insects or animals have attracted the attention of researchers, who have proposed several intelligent models to solve a wide range of complex problems. This branch of artificial intelligence is known as swarm intelligence. Its key components are self-organization, emergence, and stigmergy.
Self-organization is "a process whereby pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system's components are executed using only local information, without reference to the global pattern" [1]. In short, it can be described as "a set of dynamical mechanisms whereby structures appear at the global level of a system from interactions of its lower-level components" [2].
Emergence is what self-organizing systems produce. In this context the whole is not just the sum of its parts; it carries a surplus of meaning that is not captured by its parts alone. The idea of emergence was first developed in [3] to explain indirect task coordination in the context of the building behavior of termites. Grassé [3] showed that the coordination of building activities does not depend on the workers themselves but is mainly achieved by the nest structure.
The underlying idea of this paper is to propose a new approach to the data clustering problem. We will show that the use of fuzzy logic combined with a swarm intelligence technique yields robust results.
The remainder of the paper is organized as follows. In Section 2, we present an overview of the data clustering problem and review related work. Section 3 describes the proposed methodology, and Section 4 reports the experimental results.

Literature Review

Problem Definition.
Cluster analysis is a technique that organizes data by abstracting the underlying structure as a grouping of objects. Each group consists of objects that are similar to one another and dissimilar to objects of other groups.
Each object corresponds to a vector of d numerical values which correspond to d numerical attributes. The relationships between objects are gathered into a dissimilarity matrix in which rows and columns correspond to objects. As objects can be represented by points in a numerical space, the dissimilarity between two objects can be defined as the distance between the two corresponding points, and any distance can be used as the dissimilarity measure. The most commonly used dissimilarity measure is the Minkowski metric:

d(x_i, x_j) = ( Σ_{k=1}^{d} w_k · |x_ik − x_jk|^p )^{1/p},

where w_k is a weighting factor that will be set to 1 thereafter. According to the value of p (p ≥ 1), the following measures are obtained: the Manhattan distance (p = 1), the Euclidean distance (p = 2), and the Chebyshev distance (p = ∞). As mentioned in [4], the Euclidean distance is the most common of the Minkowski metrics.
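As a concrete illustration, the Minkowski metric above can be written as a short function (a sketch; the weights default to 1, as assumed in the paper):

```python
def minkowski(x, y, p=2, w=None):
    """Minkowski dissimilarity between two attribute vectors.

    p = 1 gives the Manhattan distance, p = 2 the Euclidean distance;
    the Chebyshev distance is the limit p -> infinity (largest coordinate gap).
    The weighting factors w_k default to 1, as in the paper.
    """
    if w is None:
        w = [1.0] * len(x)
    return sum(wk * abs(a - b) ** p for wk, a, b in zip(w, x, y)) ** (1.0 / p)
```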

Clustering Algorithms.
The grouping step can be performed in a number of ways. In [5] different approaches to clustering data are described: (i) Partitional/Hierarchical Classification. A partitional clustering technique identifies the partition that optimizes a clustering criterion defined on a subset of objects (locally) or over all of the objects (globally). A hierarchical clustering technique builds a sequence of nested partitions that can be visualized, for example, by a dendrogram.
(ii) Hard/Fuzzy Classification. A hard clustering algorithm allocates each object to a single cluster during its operation; hence the clusters are disjoint. A fuzzy clustering algorithm associates each object with every cluster using a membership function. The output of such an algorithm is a clustering but not a partition.
(iii) Deterministic/Stochastic. Optimization in the partitional approach can be accomplished using traditional techniques or through a random search of the state space consisting of all possible labelings.
(iv) Supervised/Unsupervised Classification. An unsupervised classification uses only the dissimilarity matrix. No information on the object class is provided to the method (objects are said to be unlabeled). In a supervised classification, objects are labeled and their dissimilarities are known; the problem is then to construct hyperplanes separating objects according to their class. The unsupervised classification objective is different from that of the supervised case: in the first case, the goal is to discover groups of objects, while in the second, known groups are considered and the goal is to discover what makes them different or to classify new objects whose class is unknown.
Our proposed technique, which we call F-ASClass, belongs to the fuzzy, semisupervised, partitional clustering techniques. It uses fuzzy rules and stochastic behavior to partition a dataset into a specified number of clusters. For the present paper, it suffices to note that the following techniques (k-means, k-medoid, and FCM) are used to improve the F-ASClass algorithm. A comparative study between them will be presented in Section 4.
The k-means algorithm is a hard, unsupervised learning algorithm that aims to partition a dataset into a specified number of clusters. The technique presented in [6] consists of starting with k groups, each of which consists of a single randomly selected object, and thereafter adding each new object to the group with the closest center. After an object is added to a group, the mean of that group is adjusted to take account of the newly added object. The algorithm is deemed to have converged when the assignments no longer change.
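A minimal sketch of k-means in the more common batch (Lloyd-style) formulation, rather than the incremental variant of [6]; the seeding scheme and parameter names are illustrative:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means sketch: seed the centers with k randomly chosen
    objects, then alternate assigning each point to its closest center
    and recomputing each center as the mean of its group."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        new = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:  # assignments stable: converged
            break
        centers = new
    return centers, clusters
```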
The k-medoid algorithm described in [7] is based upon the search for a representative object of each cluster (called the medoid), which should represent the various aspects of the structure of the data. The k-medoid algorithm is related to the k-means algorithm; the main difference between them lies in how the cluster center is calculated. The medoid is the member of a dataset whose average dissimilarity to all the other members is minimal. Therefore a medoid, unlike a mean, is always a member of the dataset; it represents the most centrally located item of the dataset.
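The medoid definition above translates directly into code (a sketch; `dist` stands for any dissimilarity function):

```python
def medoid(cluster, dist):
    """The medoid is the member whose summed (equivalently, average)
    dissimilarity to all other members is minimal. Unlike a mean,
    it is always an actual member of the dataset."""
    return min(cluster, key=lambda x: sum(dist(x, y) for y in cluster))
```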
The fuzzy c-means (FCM) algorithm, first presented in [8] and improved in [9, 10], allows samples to belong to two or more clusters. The first idea aiming to characterize an individual object's similarity to all the clusters was introduced in [11]. In this context, the similarity an object shares with each cluster is represented by a membership function whose values are between zero and one. Each object in the dataset has a membership in every cluster; memberships close to unity indicate a high degree of similarity between the object and a cluster, while memberships close to zero indicate little similarity between the object and that cluster.
The FCM method differs from the previously presented k-means and k-medoid algorithms in that the centroid of a cluster is the mean of all samples in the dataset, weighted by their degree of belonging to the cluster. The degree of belonging is a function of the distance of the sample from the centroid, which includes a parameter controlling how much weight is given to the closest sample. All these techniques are sensitive to initial conditions.
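The standard FCM update equations behind this description can be sketched as follows (fuzzifier m = 2 by default; the tiny floor on distances is an implementation guard against division by zero, not part of the method):

```python
def fcm_memberships(points, centers, m=2.0):
    """One FCM membership update: each row u[i] sums to 1, and closeness
    of point i to centroid j yields a membership u[i][j] near 1."""
    u = []
    for p in points:
        d = [max(1e-12, sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5)
             for c in centers]
        row = [1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                         for l in range(len(centers)))
               for j in range(len(centers))]
        u.append(row)
    return u

def fcm_centers(points, u, m=2.0):
    """Centroids as membership-weighted means of all samples."""
    k, dim = len(u[0]), len(points[0])
    return [[sum((u[i][j] ** m) * points[i][a] for i in range(len(points))) /
             sum(u[i][j] ** m for i in range(len(points)))
             for a in range(dim)] for j in range(k)]
```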
Another fuzzy classification model is studied in [12], which constructs the membership function on the basis of available statistical data by using an extension of the well-known contamination neighborhood. Reference [13] presents a new fuzzy technique using an adaptive network of fuzzy logic connectives to combine class boundaries generated by sets of discriminant functions, in order to address the "curse of dimensionality" in data analysis and pattern recognition.
Reference [14] is intended to solve the problem of the dependence of clustering results on the use of simple, predetermined geometrical models for clusters. In this context, the proposed algorithm computes a suitable convex hull representing each cluster. It determines suitable membership functions, and hence represents fuzzy clusters based on the adopted geometrical model, which is used during the fuzzy data partitioning within an online sequential procedure to calculate the membership function.

Swarm Intelligence Tools for Data Clustering Problem.
We start with an illustration of swarm intelligence tools that have been developed to solve clustering problems: Particle Swarm Optimization [15], Artificial Bee Colony [16], the Firefly algorithm [17], the Fish Swarm algorithm [18], and Ant Colony Algorithms. In [19] the basic data mining terminologies are linked with some of the works using swarm intelligence techniques. A comprehensive review of the state-of-the-art ant based clustering methods can be found in [20].
The first model of ants' sorting behavior was proposed by Deneubourg et al. [21], where a population of ants moves randomly on a 2-dimensional grid and is allowed to drop or pick up objects using simple local decision rules and without any central control. The general idea is that isolated items should be picked up and dropped at some other location where more items of that type are present. Lumer and Faieta [22] extended this work to data clustering problems. The idea is to define the dissimilarity between objects in the space of object attributes. Each ant remembers a small number of locations where it has successfully picked up an object, and so, when depositing a new item, this memory is used to bias the direction in which the ant will move: the ant tends to move towards the location where it last dropped a similar item. Building on these basic models, Monmarché [23] proposed an ant based clustering algorithm, namely AntClass, which introduces clustering in a population of artificial ants capable of carrying heaps of objects. Furthermore, this ant algorithm is hybridized with the k-means algorithm. In [24], a number of modifications were introduced to both the LF and AntClass algorithms, and the authors proposed AntClust, an ant based clustering algorithm for image segmentation. In AntClust, the rectangular grid is replaced by a discrete array of cells. Each pixel is placed in a cell, and all cells of the array are connected to the nest of the ants' colony. Each ant performs a number of moves between its nest and the array and decides with a probabilistic rule whether or not to drop its pixel. If the ant becomes free, it searches for a new pixel to pick up [24].
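The pick and drop rules of this family of models are commonly written as the following pair of probabilities, where f denotes the perceived similarity of the item to its local neighbourhood; the constants k1 and k2 are illustrative values:

```python
def p_pick(f, k1=0.1):
    """Probability that an unladen ant picks up an item: high when the
    local neighbourhood similarity f is low (isolated items get picked up)."""
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.15):
    """Probability that a laden ant drops its item: high when the local
    neighbourhood similarity f is high (items join similar items)."""
    return (f / (k2 + f)) ** 2
```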
According to [25], another important collective behavior of real ants, namely their chemical recognition system, has been used to solve an unsupervised clustering problem. In [26], Azzag et al. consider another biologically observed behavior in which ants are able to build mechanical structures thanks to a self-assembling behavior. This can be observed in the formation of drops constituted of ants or in the building of chains by ants with their bodies in order to link leaves together. The main idea here is to consider that each ant represents a data item and is initially placed on a fixed point, called the support, which corresponds to the root of a tree. The behavior of an ant consists of moving over already fixed ants to fix itself at a convenient location in the tree. This behavior is directed by the local structure of the tree and by the similarity between the data represented by the ants. When all ants are fixed in the tree, this hierarchical structure can be interpreted as a partitioning of the data [26].
Bird flocks and fish schools clearly display structural order and appear to move as a single coherent entity [27]. In [28, 29], it has been demonstrated that flying animals can be used to solve the data clustering problem. The main idea is to consider that individuals represent the data to cluster and that they move following local behavioral rules in such a way that, after a few movements, homogeneous clusters of individuals appear and move together. In [30] Abraham et al. propose a novel fuzzy clustering algorithm, called MPSO, which is based on a variant of the PSO algorithm.
Social phenomena also exist in the case of spiders: in the case of Anelosimus eximius, individuals live together, share the same web, and cooperate in various activities such as collective web building: spiders gather in small clusters under the vegetal leaves included in the web, distributed over the whole silky structure. In [31], the environment models the natural vegetation and is implemented as a square grid in which each position corresponds to a stake. Stakes can be of different heights to model the environmental diversity of the vegetation. Spiders always sit on top of stakes and behave according to three independent behavioral items (a movement item, a silk-fixing item, and a return-to-web item). This model was transposed to region detection in images.
In [32] Hamdi et al. propose a new swarm-based clustering algorithm, based on the existing work of [21, 23, 28], which uses ants' segregation behavior to group similar objects together, birds' moving behavior to control the next relative positions of a moving ant, and spiders' homing behavior to manage ants' movements in conflicting situations.
In [33] we proposed using the stochastic principles of ant colonies in conjunction with the geometric characteristics of the bee's honeycomb. This algorithm, called AntBee, was improved in [34], where we used fractal rules to improve the convergence of the algorithm.
Another example of an ant clustering algorithm, also called AntBee, is developed in [29]. The proposed approach uses the stochastic principles of ant colonies in conjunction with the geometric characteristics of the bee's honeycomb and the basic principles of stigmergy. An improved version of AntBee, called FractAntBee, incorporating the main characteristics of fractal theory, was proposed in [24].
In [35], a novel approach to image segmentation based on the Ant Colony System (ACS) is proposed. In the ACS algorithm, an artificial ant colony is capable of solving the traveling salesman problem [36]. As in ACO for the TSP, in ACO-based algorithms for clustering each ant tries to find a cost-minimizing path, where the nodes of the path are the data points to be clustered. As in the TSP, the cost of moving from data point i to data point j is the distance d(i, j) between these points, measured by some appropriate dissimilarity metric. Thus, the next point to be added to the path tends to be similar to the last point on the path. An important way in which these algorithms deviate from ACO algorithms is that the ants do not necessarily visit all data points [37].

Proposed Methodology
In the ASClass algorithm, we assume that a set O = {o_1, ..., o_N} of N objects has been collected by the domain expert, where each object is a vector of d numerical values v_1, ..., v_d. For measuring the similarity between objects we use in the following the Euclidean distance between two vectors, denoted d, which labels the edges of the graph built on O [38]. The complete set of parameters of our model will be presented in Section 3.3.
Initially, all the objects are scattered randomly on the graph O; each node in the graph represents an object o_i in the dataset. The edge connecting two objects in the data graph carries a measure of the dissimilarity between these objects in the database. A class is represented by a route connecting a set of objects. In ASClass we choose to use more than one colony of ants; the number of colonies needed here is equal to the number of classes in the database. Initially, for each colony, m artificial ants are placed on a selected object, called the "nest-object." For each cluster we randomly choose one and only one nest-object. The simulation model is detailed in Figure 1.
The objective is to find the shortest route between the given objects and return to the nest-object, while keeping in mind that each object can be connected to the path only once. The path traced by the ants represents a cluster in the dataset. Figure 2 shows a possible result of an ASClass execution on the graph of Figure 1. It may be noted that, at the end of the algorithm, each colony gives a collectively traced path that represents one cluster of the partition.
For each colony, an artificial agent possesses three behavioral rules: (i) a movement item inspired by ant foraging behavior; (ii) an object fixing item inspired by the collective weaving in social spiders; (iii) a return to web item.

Movement Item.
An artificial ant k is an agent which moves from an object i to an object j on the dataset graph. It decides which object to reach among the objects it can access from its current position. The agent selects the object to move to according to a probability p_ij^k depending on the pheromone trail accumulated on the edges and on a heuristic (visibility) value, chosen here to be a dissimilarity measure:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{l ∈ J_i^k} [τ_il(t)]^α · [η_il]^β   if j ∈ J_i^k, and 0 otherwise,

where α and β are two parameters which determine the relative influence of the pheromone trail and of the dissimilarity measure, and J_i^k is the set of objects which ant k has not yet connected to its tour.
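This is the standard ant-system transition rule; a sketch with illustrative parameter values:

```python
def transition_probs(i, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Ant-system transition rule: the chance of moving from object i to
    object j weighs the pheromone tau[i][j] against the visibility
    eta[i][j] (e.g. the inverse dissimilarity), restricted to the
    objects the ant has not yet connected to its tour."""
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in unvisited}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```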

Object Fixing Item.
When an ant reaches an object, it can fix it to its path according to a contextual probability P_fix; if that decision is made, the ant draws a new edge between the current object and the last fixed object; otherwise, it returns to the web and updates its memory (it deletes the object from the objects it can access from its current position).
The probability of fixing the object to the path is defined as

P_fix(o_i) = ( f(o_i) / (k_fix + f(o_i)) )²,

where k_fix is a constant and f(o_i) is a measure of the average similarity of the object o_i with the objects o_j forming the path created by the ant k. It is calculated as follows:

f(o_i) = (1 / L_k) · Σ_{o_j ∈ T_k} (1 − d(o_i, o_j) / γ),

where γ is a scaling factor determining the extent to which the dissimilarity between two objects is taken into account and L_k is the number of objects forming the tour constructed by the ant k.
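Since the exact expressions were lost in extraction, the following sketch shows one plausible Lumer-Faieta-style form of f(o_i) and of the fixing probability; the constants, the clamp at zero, and the squared form are assumptions:

```python
def avg_similarity(o_i, path, dist, gamma=0.5):
    """f(o_i): average similarity of object o_i to the objects already on
    the ant's path; gamma scales how strongly dissimilarity counts.
    Clamped at zero (assumed, following the Lumer-Faieta tradition)."""
    if not path:
        return 0.0
    return max(0.0, sum(1.0 - dist(o_i, o_j) / gamma for o_j in path) / len(path))

def p_fix(f, k_fix=0.3):
    """Assumed fixing rule: the more similar the object is to the path,
    the likelier the ant is to fix it."""
    return (f / (k_fix + f)) ** 2
```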
At each time step, if an ant decides to fix a new object, it updates the pheromone trail on the arcs it has crossed in its tour. This is achieved by adding a quantity Δτ_ij^k to each of its arcs, defined as follows:

Δτ_ij^k = Q_0 / C_k,

where C_k is the length of the tour T_k built by the ant k and Q_0 is a constant.
After all ants have constructed their tours, the pheromone trails are lowered. This is done by the following rule:

τ_ij(t + 1) = (1 − ρ) · τ_ij(t) + Σ_{k=1}^{m} Δτ_ij^k,

where 0 < ρ < 1 is the pheromone trail evaporation rate.

Return to Web Item.
If the decision is made, the ant returns to the last fixed object and selects the object to move to according to its updated memory. This process is carried out for each colony. At the end of the process we obtain k routes which represent the k clusters in the dataset.
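Evaporation and deposit together can be sketched as one update over the pheromone matrix (the symmetric deposit assumes the object graph is undirected):

```python
def update_pheromone(tau, tours, tour_lengths, rho=0.1, q0=1.0):
    """Evaporate all trails by a factor (1 - rho), then let each ant k
    deposit q0 / C_k on every edge of its tour T_k, so shorter tours
    deposit more pheromone."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, c_k in zip(tours, tour_lengths):
        deposit = q0 / c_k
        for i, j in zip(tour, tour[1:]):
            tau[i][j] += deposit
            tau[j][i] += deposit  # edges are undirected in the object graph
    return tau
```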
All notations used in the ASClass algorithm are listed below:

τ_ij(t): the amount of pheromone on the edge connecting object i to object j at time t.
α: parameter that controls the relative importance of the trail intensity τ_ij.
β: parameter that controls the visibility η_ij.
ρ: the pheromone decay coefficient.
L_k: the number of objects forming the tour constructed by the ant k.
J_i^k: set of objects which ant k has not yet connected to its tour.
p_ij^k(t): transition probability from object i to object j.
γ: scaling factor.
Q_0: constant.
C_k: the length of the tour built by the k-th ant.
m: number of ants in each colony.
Its general principle is defined as shown in Algorithm 1.

Improvements of ASClass Algorithm.
In studying the asymptotic behavior of the ASClass algorithm, we observe that some objects may still not be assigned to any cluster when the algorithm stops. We call these objects outliers. This phenomenon is caused by the fact that an object can belong to two or more clusters (the same object can be added by several colonies to their paths). The solutions that we propose in this paper are presented in the following.

First Solution: ASClass_i Algorithms.
In order to find a partition for the unclassifiable objects (outliers), we propose applying, respectively, the k-means algorithm, the k-medoid algorithm, and the FCM algorithm to the dataset of outlier samples.
Algorithm 1 (ASClass, sketch):

    Initialize randomly the N objects in a 2D environment
    Choose randomly the nest-object of each cluster; call it o_init
    For each nest-object, determine its cluster: c_init ← c(o_selected)
    For each cluster, determine its number of objects (N_init)
    For all colonies do
        For t = 1 to t_max do
            o_current ← o_selected
            For all ants do
                Repeat
                    i ← o_current
                    Choose in the list J_i^k (the objects not yet visited) an object j according to the transition probability
                    Place a quantity of pheromone on the route according to the pheromone-update rule

Our proposed method ASClass, presented in the previous section, can also be used as an initialization step for these algorithms. As can be seen in Table 1, we call our proposed versions ASClass_1, ASClass_2, and ASClass_3, respectively.
The ASClass_i procedure consists of simply starting with k groups. The initialization part is identical to ASClass. Figure 3 shows that the output of the first step consists of classified and unclassified objects. Classified samples are represented by different symbols (red, green, and blue); unclassified ones are represented by black points.
As can be seen in Figure 4, the unclassified objects resulting from the first step (black points) are given as input to ASClass_i in the second step. Algorithm A_i (the different choices of A_i are given in Table 1) is applied to the dataset of unclassified objects to reassign them to the appropriate cluster. The algorithm creates new clusters with the same specified number k. These clusters are then merged with the existing clusters created in the first step.

Second Solution: F-ASClass Algorithm.
The partition found by ASClass in the first step is given as an input parameter to the FCM algorithm in the second step. An element u_ij of the partition matrix represents the grade of membership of object o_i in cluster c_j; that is, u_ij is a value that describes the membership of object i to class j.
We initialize the partition matrix (membership matrix) given to the FCM function as follows. For the classified objects, if object i is in class j then u_ij = 1; otherwise u_ij = 0. For the unclassified objects, u_ij = 1/k for every class j (j = 1, 2, ..., k), meaning that an unclassified object i has the same membership in all classes. The sum of the membership values across classes must equal one.
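A minimal sketch of this initialization; the encoding of unclassified objects as `None` is an assumption for illustration:

```python
def init_membership(labels, k):
    """Build the FCM partition matrix from ASClass output: a classified
    object gets membership 1 in its cluster and 0 elsewhere; an
    unclassified object (label None) gets equal membership 1/k in every
    cluster, so each row still sums to one."""
    u = []
    for lab in labels:
        if lab is None:
            u.append([1.0 / k] * k)
        else:
            u.append([1.0 if j == lab else 0.0 for j in range(k)])
    return u
```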
Figure 5 shows that the output of the first step (see Figure 3) will be given as input parameter in the second step.

Artificial and Real Data.
To evaluate the contribution of our method, we use several numerical datasets, including artificial and real databases from the Machine Learning Repository [39]. Concerning the artificial datasets, the databases art1, art2, art3, art5, and art6 are generated with Gaussian laws and with various difficulties (overlapping classes, nonrelevant attributes, etc.), and the art4 data is generated with a uniform law. The general information about the databases is summarized in Table 2. For each data file, the following fields are given: the number of objects (N), the number of attributes (N_att), and the number of clusters expected to be found in the dataset (k). The art1 dataset is the type of data most frequently used in previous work on ant based clustering algorithms [22, 23]. The data are normalized in [0, 1], the measure of similarity is based on the Euclidean distance, and the algorithm parameters used in our tests were always the same for all databases and all algorithms.
As can be seen in Table 2, Fisher's Iris dataset contains 3 classes of 50 instances where each class refers to a type of Iris plant.An example of class discovery of ASClass on Iris dataset is shown in Figure 7.
Five versions of the ASClass algorithm were coded and tested with the test data in order to determine the accuracy of the results. One implementation presents the basic features of ASClass without improvements. ASClass_1, ASClass_2, and ASClass_3 are improved versions of the basic ASClass in which we apply the first solution described in Section 3.4.1. F-ASClass is the fuzzy-ant clustering solution described in Section 3.4.2.

Evaluation Functions.
The quality of the clustering results of the different algorithms on the test sets is compared using the following performance measures. The first is the error classification (Ec) index, defined as the proportion of assigned objects placed in the wrong cluster. The second measure used in this paper is the accuracy coefficient, which gives the ratio of correctly assigned objects:

Ac = ( Σ_{i=1}^{k} n_i ) / N,

where n_i is the number of objects of the i-th class correctly classified, N_j is the number of objects in cluster c_j, and N is the total number of objects in the dataset. Not_ass denotes the percentage of objects not assigned in the predicted partition. Moreover, we use the separation index, defined in [40] as

S = ( Σ_{j=1}^{k} Σ_{i=1}^{N} u_ij² · ||x_i − c_j||² ) / ( N · min_{j≠l} ||c_j − c_l||² ),

where, following [5], u_ij (i = 1, ..., N; j = 1, ..., k) is the membership of any fuzzy partition. The corresponding hard partition of u_ij is defined as u_ij := 1 if j = argmax_l {u_il}; u_ij := 0 otherwise. Clusters should be well separated; thereby a smaller S indicates a partition in which all the clusters are compact and well separated from each other.
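The two coefficients can be sketched as follows; the exact Ec formula was lost in extraction, so the complement-of-accuracy form over assigned objects is an assumption:

```python
def accuracy(n_correct_per_class, n_total):
    """Ac: ratio of correctly assigned objects over the whole dataset."""
    return sum(n_correct_per_class) / n_total

def error_classification(n_correct_per_class, n_assigned_total):
    """Ec (assumed form): share of the assigned objects that landed in
    the wrong cluster."""
    return 1.0 - sum(n_correct_per_class) / n_assigned_total
```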
The centroid of cluster j is represented by the parameter c_j.

Results.
This section is divided into two parts. First, we study the behavior of ASClass with respect to the number of ants in each colony, using the Iris dataset. Second, we compare F-ASClass with the k-means, k-medoid, FCM, and ASClass_i (i = 1, 2, 3) algorithms, using the test datasets presented in Table 2. This comparison is made on the basis of the average classification error and the average accuracy.

Parameter Settings of the ASClass Algorithm.
We study the speed of finding the optimal solution, measured as the number of iterations, as a function of the number of ants in ASClass. We therefore evaluate the performance of ASClass while varying the number of ants from 5 to 100, given a fixed number of iterations (100 iterations per trial).
The results presented in Figures 6(a)-6(g) show that the length of the best tour made by each colony of ants improves very quickly in the initial phase of the algorithm. In the second phase, new good solutions continue to appear, but a phenomenon of local optima emerges in the last phase. At this stage, the system ceases to explore new solutions and therefore no longer improves. In the ACO literature this process is called uni-path behavior; it indicates the situation in which the ants follow the same path and so create the same cluster. This is due to the much higher quantity of artificial pheromone deposited by ants following that path than on all the others.
On one hand, we can observe from Table 3 that the best results with ASClass (corresponding to the smallest error classification value, Ec = 0.249, and the largest accuracy value, Ac = 0.729) are obtained when the number of ants is equal to 50 (the number of samples in each cluster is 50). It is clear here that the number of ants improves solution quality: a run with 100 ants is more search-effective than one with 5 ants. This can be explained by the importance of communication within the colony through the trail when many ants are used (in the case of 100 ants). This property, called the synergistic effect, underlies ASClass's attractive ability to find an optimal solution quickly.
As a semisupervised clustering algorithm, ASClass can be evaluated with a confusion matrix as a visualization tool. It contains information about the number of well-classified and mislabeled samples per class. Furthermore, it contains information about the capability of the system not to mislabel one cluster as another. Examples of confusion matrices on the Iris dataset are shown in Figures 8(a)-8(g).
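A sketch of how such a matrix is built from true classes and predicted clusters; the `None` encoding for unassigned objects is an assumption:

```python
def confusion_matrix(true_labels, pred_labels, k):
    """Rows: true class; columns: predicted cluster. Unassigned objects
    (predicted label None) are skipped, as in the paper's matrices,
    which cover clustered data only."""
    m = [[0] * k for _ in range(k)]
    for t, p in zip(true_labels, pred_labels):
        if p is not None:
            m[t][p] += 1
    return m
```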
It is important to note that Iris dataset presents the following characteristic: one cluster is linearly separable from the other two; the latter are not linearly separable from each other.We have presented this characteristic in Figure 7. Results presented in Figures 8(a)-8(g) prove that ASClass may be capable of identifying this characteristic.
For all examples in Figures 8(a)-8(g), the illustrated results show that all predicted samples of cluster 1 are correctly assigned to cluster 1. Moreover, no instance of cluster 1 is misclassified as class 2 or class 3. However, a large number of predicted samples of cluster 2 may be misclassified into cluster 3, and a few are misclassified into cluster 1. In the case of cluster 3, some samples are misclassified into cluster 2 and none is misclassified into cluster 1.
As can be seen in Figures 8(a)-8(g), which represent confusion matrices for clustered data only, the confusion matrix does not contain the full distribution of the 150 Iris instances: in the case of ASClass, some objects remain unassigned to any cluster when the algorithm stops.

Comparative Analysis.
In order to compare the effectiveness of our proposed improvements of ASClass, we apply them to the datasets presented in Table 2. We also use three indexes of cluster validity: error classification (Ec), accuracy (Ac), and the separation (S) coefficient. "The index should make good intuitive sense, should have a basis in theory, and should be readily computable" [4].
Note that underlined bold face indicates the best result and bold face the second best. In the k-means, k-medoid, and FCM algorithms the predefined input number of clusters k is set to the known number of classes in each dataset.
Tables 4 and 5 contain, respectively, the mean, standard deviation, mode, minimal, and maximal values of the error rate of classification and of the accuracy obtained by the four clustering algorithms (k-means, k-medoid, FCM, and ASClass).
Table 8 contains, respectively, the mean, standard deviation, mode, minimal, and maximal values of the separation index obtained by all fuzzy approaches: FCM, ASClass_3, and F-ASClass.
In general, the comparative results presented in Table 5 also confirm that the ASClass algorithm remains clearly and consistently superior to the other three techniques in terms of clustering accuracy. The only exception is seen on the Pima dataset, where the best value is obtained by the k-means algorithm.
In Tables 6 and 7 we report, respectively, the mean, standard deviation, mode, minimal, and maximal values of the error rate of classification and of the accuracy obtained by the ASClass_1, ASClass_2, ASClass_3, and F-ASClass algorithms. It is clear from these tables that the F-ASClass algorithm minimizes the error classification value and maximizes the accuracy for all datasets. The only exceptions are the Art3 and Pima datasets, for which the best error rate values are given by ASClass_1. The best value for

Conclusion
We have presented in this paper a new fuzzy-swarm algorithm called F-ASClass for data clustering in a knowledge discovery context. F-ASClass introduces a new fuzzy heuristic for the ant colony, inspired by spider web construction.
We have also proposed using several colonies of ants; the number of colonies in F-ASClass depends on the number of clusters in the database to be classified. Fuzzy rules are introduced to find a partition for the unclassified objects left by the work of the artificial ants. The experimental results show that the proposed algorithm achieves interesting results with respect to well-known clustering algorithms.

Figure 2: Results of the ASClass algorithm on the graph used in Figure 1: three clusters are drawn; each of them is created by one colony of ants in the ASClass algorithm.

Figure 7: Two-dimensional projection of the Iris dataset onto its first two principal components.

Table 2: Main characteristics of the artificial and real databases used in our tests.

Table 3: Error classification and accuracy coefficients found on the Iris dataset. Average over 20 trials, 100 iterations per trial.

Table 4 indicates that our proposed algorithm generates more compact clusters as well as lower error rates than the other clustering algorithms. The only exception is the Pima dataset, where ASClass does not achieve the minimum error rate.

Table 8: Mean, standard deviation, mode, min, and max values of the separation index achieved by each clustering algorithm (FCM, ASClass_3, and F-ASClass) over 20 trials on the datasets presented in Table 2.