Clustering Objects Described by Juxtaposition of Binary Data Tables

Recommended by Khosrow Moshirvaziri

This paper develops a method for clustering objects described by 0/1 data matrices, modeled as physical systems, using a Kullback-Leibler distance between probability distributions. The distributions are estimated from the contents of the data matrices. We discuss an ascending hierarchical classification method and a numerical example, and mention an application to survey data concerning the level of development of the departments of a given territory of a country.


Introduction
The automatic classification of the components of a structure of multiple tables remains a vast field of research and investigation. The components are matrix objects, and the difficulty lies in the definition and the delicate choice of an index of distance between these objects; see Lerman and Tallur [1]. If the matrix objects are tables of measurements of the same dimension, we introduced in Rebbouh [2] an index of distance based on the Hilbert-Schmidt inner product and built a classification algorithm of k-means type. In this paper, we are interested in the case where the matrix objects are tables gathering the descriptions of the individuals by nominal variables. These objects are transformed into complete disjunctive tables containing 0/1 data; see Agresti [3]. This is a particular structure of multiple data tables frequently encountered in practice, arising whenever several observations are made on the individuals one wishes to classify; see [4, 5]. Consider, for example, the case where we wish to classify administrative departments according to indices measuring the levels of economic (idE) and human (idH) development, which are weighted averages calculated from measurements of selected parameters. Each department i gathers a number L_i of subregions classified as rural or urban. Each department is thus described by a matrix with 2 columns and L_i lines of positive numbers ranging between 0 and 1. But because the values of the indices do not have the same meaning depending on the geographical position of the subregion and its urban or rural character, the specialists are interested instead in the membership of each subregion in quintile intervals. Thus, each department is described by a matrix object with 10 columns and L_i lines. This matrix object is a juxtaposition of 2 tables of the same dimension, each line containing a single 1, in the column of the class to which the subregion is assigned, with 0 elsewhere.
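As a sketch of this disjunctive coding (the department data, quintile classes, and array layout below are invented for illustration, not taken from the paper), each subregion's quintile class for idE and idH can be turned into one line of the 10-column 0/1 matrix:

```python
import numpy as np

def disjunctive_table(classes, n_classes):
    """One 0/1 row per observation, with a single 1 in the column of its class."""
    delta = np.zeros((len(classes), n_classes), dtype=int)
    delta[np.arange(len(classes)), classes] = 1
    return delta

# A hypothetical department with 4 subregions: quintile class (0-4) of idE and idH.
idE_classes = [0, 2, 2, 4]
idH_classes = [1, 1, 3, 0]

# Juxtaposition of the two 5-column disjunctive tables: a 4 x 10 matrix.
department = np.hstack([disjunctive_table(idE_classes, 5),
                        disjunctive_table(idH_classes, 5)])
print(department.shape)          # (4, 10)
print(department.sum(axis=1))    # exactly one 1 per table per line -> [2 2 2 2]
```

Each line carries exactly two 1s, one per juxtaposed table, matching the structure described above.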
The use of conventional statistical techniques to analyze this kind of data requires a reduction step. Several criteria of good reduction exist; the criterion that gives results most easily usable and interpretable is undoubtedly the least squares criterion (see, e.g., [6]). These methods summarize each table of data describing each object, for each variable, by a vector or a subspace. Several mathematical problems arise at this stage. (1) The choice of the value which summarizes the various observations of the individual for each variable: do we take the most frequent value or another value, for example an interval [7]? Why this choice, and which links exist between variables? (2) The second problem concerns the use of homogeneity analysis or multiple correspondence analysis (MCA) to reduce the data tables. We perform an MCA for each of the n data tables describing, respectively, the n individuals, and obtain n factorial axis systems. To compare elements of the structure, we must seek a common, or compromise, system for the n of them. This issue concerns other mathematical disciplines such as differential geometry (see [8]), and the proposed criteria for the choice of the common space are hardly justified (see Bouroche [9]); this problem is not fully resolved. (3) The third problem is that the number of observations may vary from one individual to another. We can use the following procedure to complete the tables. We assume that L_i > 1 for all i = 1, …, n, where L_i is the number of observations of the individual ω_i, and define L as the least common multiple of the L_i. Hence, there exists r_i such that L = L_i × r_i. Now, duplicating each table r_i times, we obtain a new table with L lines. But if the L_i are large, the least common multiple becomes large itself, and the procedure leads to a structure of very large tables. Moreover, this completion destroys any chronological structure of the data. It is therefore not a good completion process, and it seems more reasonable to carry out the classification without it.
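The completion procedure criticized above can be sketched as follows (the toy tables and observation counts are assumptions for illustration):

```python
from math import lcm

# Hypothetical numbers of observations per individual.
L = [2, 3, 4]
L_common = lcm(*L)                     # least common multiple: 12
r = [L_common // Li for Li in L]       # duplication factors r_i with L = L_i * r_i

# Each toy 0/1 table is stacked r_i times to reach L_common lines.
tables = [[[0, 1]] * Li for Li in L]
completed = [t * ri for t, ri in zip(tables, r)]
assert all(len(t) == L_common for t in completed)
```

Even in this toy case the tables triple to sextuple in length, which illustrates why the paper abandons the completion step.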
To overcome these difficulties, for which the proposed solutions are not rigorously justified, we introduce a formalism borrowed from the theory of signal and communication (see Shannon [10]) which is used to classify the elements of the data structure [11]. Our approach is based on simple statistical tools and on techniques used in information theory (physical system, entropy, conditional entropy, etc.) and requires introducing the concept of discrete physical systems as models for the observations of each individual on the variables which describe them. If we consider an observation as a realization of a random vector, it is reasonable to consider that each value of the variable, or of the random vector, represents a state of the system, characterized by its frequency or its probability. If the variable is discrete, the number of states is finite, and each state is measured by its frequency or its probability. This approach gives a new version, and at the same time an explanation, of the distance introduced by Kullback [12]. This index makes it possible to build an indexed hierarchy on the elements of the structure and can be used even when the matrix objects do not have the same dimension.
In Section 2, we introduce an adapted formalism and the notion of a random physical system as a model for describing the objects. In Section 3 we define a distance between the elements of the structure. A numerical example and an application are presented in Section 4. Concluding remarks are made in Section 5.

Adapted formalism
Let Ω = {ω_1, …, ω_n} be a finite set of n elementary objects and let V_1, …, V_d be d discrete variables defined over Ω, taking a finite number of values in D_1, …, D_d, respectively, with D_j = {m^j_1, …, m^j_{r_j}}, where m^j_t is the tth modality (value) taken by V_j. We suppose that the observations of the individual ω_i for the variable V_j are given in a table with L_i lines, where L_i is the number of observations of the individual ω_i, and V_j(l) = m^j_t if the lth observation of the individual ω_i for the variable V_j is m^j_t, t = 1, …, r_j. E^j_i is the vector with L_i components corresponding to the different observations of ω_i for V_j.
The structure of a juxtaposition of categorical data tables is (E^j_i), i = 1, …, n, j = 1, …, d. For the sake of simplicity, we transform each vector E^j_i into a 0/1 data matrix Δ^j_i whose entry in line l and column t is

Δ^j_i(l, t) = 1 if at the lth observation ω_i takes the modality m^j_t, and 0 otherwise.  (2.3)
The structure of a juxtaposition of 0/1 data tables is (Δ^j_i), i = 1, …, n, j = 1, …, d. The probability of the state m^j_t is estimated by the relative frequency of the value 1 observed in the tth column of the matrix Δ^j_i. Let S^j_i be the random single physical system associated to ω_i for V_j, where the symbol (S = m_t) means that the system lies in the state m_t, and ∧ is the conjunction between events.
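For instance, the state probabilities of a single system S^j_i can be estimated from the column means of Δ^j_i (the toy table below is an invented example):

```python
import numpy as np

# Toy disjunctive table Delta_i^j: 5 observations, 3 modalities.
delta = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 1],
                  [0, 1, 0]])

# Pr(S = m_t) estimated by the relative frequency of 1s in column t.
p = delta.mean(axis=0)
print(p)  # [0.2 0.6 0.2]
```

Since every line contains exactly one 1, the estimated probabilities sum to 1 automatically.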
In the multidimensional case, the associated multiple random physical system S has as states the d-tuples of modalities (m^1_{l_1}, …, m^d_{l_d}), each with its probability. The multiple random physical system associated to the marginal distributions is S^1 ∧ … ∧ S^d, where ∧ is the conjunction between single physical systems and each S^j has states m^j_l with probabilities Pr(S^j = m^j_l).

Entropy as a measure of uncertainty of states of a physical system
For measuring the degree of uncertainty of the states of a physical system, or of a discrete random variable, we use the entropy, a characteristic widely used in information theory.

Shannon's [10] formula for the entropy
The entropy of a system S with states m_1, …, m_r of probabilities p_1, …, p_r is the nonnegative quantity

H(S) = − Σ_{t=1}^{r} p_t log₂ p_t.  (3.1)
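Formula (3.1) translates directly into code (the probability vectors below are illustrative):

```python
import math

def entropy(p):
    """Shannon entropy H(S) = -sum_t p_t log2 p_t (zero terms are ignored)."""
    return -sum(pt * math.log2(pt) for pt in p if pt > 0)

print(entropy([0.5, 0.5]))     # 1.0 bit for two equiprobable states
print(entropy([0.25] * 4))     # 2.0 bits, maximal for 4 equiprobable states
assert entropy([1.0, 0.0]) == 0  # a certain state carries no uncertainty
```

The three cases illustrate the bounds 0 ≤ H(S) ≤ log₂ r stated later in the paper.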
The function H has some elementary properties which justify its use as a characteristic for measuring the uncertainty of a system. The maximum entropy characterization expresses the fact that the probability distribution with maximum entropy is the least biased among those consistent with the information specified by the constraints [10].

Entropy of a multiple random physical system
Let S be a multiple random physical system given by (2.6). If the single physical systems S^j, j = 1, …, d, given by (2.9) are independent, then

H(S) = Σ_{j=1}^{d} H(S^j).  (3.2)

The conditional random physical system S_1/(S_2 = m^2_l) is given by the conditional probabilities p_{j/l}, and the entropy of this system is

H(S_1/(S_2 = m^2_l)) = − Σ_j p_{j/l} log₂ p_{j/l}.  (3.4)

The entropy of the multiple random physical system S_1/S_2 is then

H(S_1/S_2) = − Σ_l p_l Σ_j p_{j/l} log₂ p_{j/l}.  (3.7)
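A small numerical check of the conditional entropy may help; the joint table below is an invented example, and the sketch verifies that conditioning cannot increase entropy:

```python
import numpy as np

def H(p):
    """Shannon entropy of a probability vector (zero entries ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Invented joint table: joint[j, l] = Pr(S1 = m_j, S2 = m_l).
joint = np.array([[0.2, 0.1],
                  [0.1, 0.2],
                  [0.0, 0.4]])
p2 = joint.sum(axis=0)   # marginal distribution of S2

# H(S1/S2): average of the entropies of the conditional distributions p_{j/l}.
H_cond = sum(pl * H(joint[:, l] / pl) for l, pl in enumerate(p2))

# Conditioning cannot increase entropy: H(S1/S2) <= H(S1).
assert H_cond <= H(joint.sum(axis=1)) + 1e-12
```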
The quantity

K(P, Q) = Σ_t p_t log₂ (p_t / q_t)

is nonnegative. It is clear that K(·, ·) is not a symmetric function, so it is not a distance in the classical sense, but it characterizes, from a statistical point of view, the deviation between the distributions P and Q. It should be noted that K(P, Q) + K(Q, P) is symmetric. Kullback [12] explains that the quantity K(P, Q) evaluates the average information lost if we use the distribution P while the actual distribution is Q.

Let S(Π_d) be a set of random physical systems with Π_{j=1}^{d} r_j states.  (3.10)

Let dist be the application defined on pairs of such systems by

dist(S_1, S_2) = K(P_1, P_2) + K(P_2, P_1),  (3.11)

where P_1 and P_2 are the multivariate distributions of order d governing, respectively, the random physical systems S_1 and S_2. We admit that dist(S_1, S_2) = 0 ⇔ S_1 = S_2 ⇔ ω_1 = ω_2. dist measures the similarity between physical systems: the smaller the value of dist, the more similar the two systems are. dist represents the average quantity of information lost if we use one of the distributions P_1, P_2 to manage the system while the other distribution is true. dist is nothing else than the (symmetrized) Kullback-Leibler distance between the multivariate distributions P_1 and P_2. Indeed, Kullback's distance between P_1 and P_2 is given by

J(P_1, P_2) = Σ_x (P_1(x) − P_2(x)) log₂ (P_1(x) / P_2(x)).  (3.13)

Developing this expression gives dist.
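A sketch of K and of the index dist, assuming dist is the symmetrized sum K(P_1, P_2) + K(P_2, P_1), consistent with the remark that K(P, Q) + K(Q, P) is symmetric; the distributions are invented, and each must be strictly positive wherever the other has mass:

```python
import math

def K(p, q):
    """K(P, Q) = sum_t p_t log2(p_t / q_t); requires q_t > 0 where p_t > 0."""
    return sum(pt * math.log2(pt / qt) for pt, qt in zip(p, q) if pt > 0)

def dist(p1, p2):
    # Assumed symmetrized form of the paper's index.
    return K(p1, p2) + K(p2, p1)

p1 = [0.2, 0.6, 0.2]
p2 = [0.3, 0.4, 0.3]
assert K(p1, p2) >= 0 and K(p2, p1) >= 0   # nonnegativity
assert dist(p1, p2) == dist(p2, p1)        # symmetry
assert dist(p1, p1) == 0                   # identical systems are at distance 0
```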

Procedure to estimate the joint distribution
In the case where all variables involved in the description of the individuals are discrete, we give a procedure, taken from classical techniques of factor analysis, to estimate the joint distribution and derive the entropy of the multiple physical system. Let Δ_i = (Δ^1_i, …, Δ^d_i) be a juxtaposition of d 0/1 data tables. For ω_i ∈ Ω fixed, the joint probability of the modalities (m^1_{l_1}, …, m^d_{l_d}) is estimated by the number of their simultaneous occurrences in the lines of Δ_i, divided by L_i.  (4.2)
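With two juxtaposed 0/1 tables (invented here), the joint frequencies of pairs of modalities can be obtained by counting simultaneous occurrences line by line:

```python
import numpy as np

# Toy tables Delta_i^1 (2 modalities) and Delta_i^2 (3 modalities), L_i = 5 lines.
d1 = np.array([[1, 0], [0, 1], [1, 0], [1, 0], [0, 1]])
d2 = np.array([[0, 1, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

# joint[a, b] = (number of lines where modalities m_a^1 and m_b^2 co-occur) / L_i.
joint = d1.T @ d2 / len(d1)
print(joint)
```

The matrix product counts co-occurrences across the L_i lines, so the entries of `joint` sum to 1 and define an empirical joint distribution.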

Algorithm
We use an algorithm for ascending hierarchical classification [13]. We call points either the objects to be classified or the clusters of objects generated by the algorithm.
Step 1. There are n points to classify, which are the n objects.
Step 2. We find the two points x and y that are closest to one another according to the distance dist and cluster them into a new artificial point h.
Table 1: Juxtaposition of the disjunctive data tables describing the 6 objects.
Step 3. We calculate the distances between the new point and the remaining points using the single linkage of Sneath and Sokal [14], defined by

D(ω, h) = min(dist(ω, x), dist(ω, y)), ω ≠ x, y.  (4.3)

We return to Step 1 with only n − 1 points to classify.
Step 4. We again find the two closest points and aggregate them. We calculate the new distances and repeat the process until only one point remains.
In the case of single linkage, the algorithm only uses the distances through the inequalities between them.
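Steps 1-4 can be sketched as follows (the labels and pairwise distances are illustrative assumptions; ties are broken arbitrarily by `min`):

```python
def single_linkage(points, pair_dist):
    """Ascending hierarchical classification with single linkage.
    points: list of labels; pair_dist: dict {(a, b): distance} over all pairs."""
    d = dict(pair_dist)
    merges = []
    points = list(points)
    while len(points) > 1:
        # Steps 1-2: find the two closest remaining points x, y.
        x, y = min(((a, b) for (a, b) in d if a in points and b in points),
                   key=lambda pair: d[pair])
        h = f"({x}+{y})"
        merges.append((x, y, h, d[(x, y)]))
        # Step 3: D(w, h) = min(dist(w, x), dist(w, y)) for the remaining points.
        points = [p for p in points if p not in (x, y)]
        for w in points:
            dwx = d.get((w, x), d.get((x, w)))
            dwy = d.get((w, y), d.get((y, w)))
            d[(w, h)] = min(dwx, dwy)
        points.append(h)   # Step 4: repeat until one point remains
    return merges

pairs = {("w1", "w2"): 2.0, ("w1", "w3"): 0.5, ("w2", "w3"): 1.0}
merges = single_linkage(["w1", "w2", "w3"], pairs)
# The first merge joins the closest pair w1, w3 at height 0.5.
```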

Numerical example
Consider 6 individuals described by 2 qualitative variables with, respectively, 2 and 3 modalities, and 10 observations for each individual. The observations are grouped in Table 1.

Procedure to build a hierarchy on these objects
The empirical distributions which represent the individuals are given in Table 2.
The program was carried out on this numerical example, and we obtain the results in Table 3.

Then, the objects ω_1 and ω_3 are aggregated into the artificial object ω_7, which is placed at the last line, and the rows and columns corresponding to ω_1 and ω_3 are removed from the similarity matrix.

The objects ω_2 and ω_7 are aggregated into the artificial object ω_8.
Finally, the objects ω_4 and ω_8 are aggregated into the artificial object ω_9.
In Figure 1 it can be seen that two separate classes appear in the graph by simply cutting the hierarchy at the level above the individual ω_2. In this algorithm, we started by merging the two closest objects using the index of distance between the corresponding physical systems. The higher we go in the construction of the hierarchy, the more uncertain the states of the merged system become. The example shows that the Kullback-Leibler index, together with the single-linkage (minimum) aggregation index, leads to the construction of a system with maximum entropy, and thus to a system for which all the states are equiprobable.
If the total number of modalities of the various criteria is large compared with the number of observations, the frequency of choosing a given set of modalities becomes small, and many frequencies are zero. The sets of modalities whose frequency is zero are disregarded and do not intervene in the calculation of the distances. This can make the systems impossible to compare.

Classification of the six objects after reduction
If each object is described by its highest-frequency modalities (the "mode"), we obtain the following table. This table contradicts the fact that, in our procedure, the objects ω_1 and ω_2 are very close while ω_1 and ω_5 are not. This shows that classification after reduction, for this type of data, can lead to contradictory results.

Application
The data come from a survey concerning the level of development of n departments E_1, E_2, …, E_n of a country. The aim is to identify the less developed subregions in order to establish adequate development programs. For every i = 1, …, n and l = 1, …, L_i, 0 ≤ idE(C^l_i) ≤ 1 and 0 ≤ idH(C^l_i) ≤ 1. The closer the value of an index is to 1, the more satisfactory the economic or human development is judged to be. However, these indices are not calculated in the same manner: they depend on whether the subregions are classified as farming or urban zones, so ordering the subregions according to each of the indices no longer makes sense. The structure of the data in entry, for every i = 1, …, n, is not exploitable in this form. It is therefore necessary to transform the tables into a more tractable form. The specialists of the development programs cut the observations of each index into quintile intervals and assign each subregion to the corresponding quintile. We thus determine, for the n series of observations of the two indices, the corresponding quintiles: idE → (q^1_{i1}, q^1_{i2}, …, q^1_{i5}), idH → (q^2_{i1}, q^2_{i2}, …, q^2_{i5}).
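The quintile assignment can be sketched like this (the idE values are invented; `np.quantile` boundaries stand in for the survey's quintiles):

```python
import numpy as np

# Hypothetical idE values for the subregions of one department.
idE = np.array([0.12, 0.35, 0.48, 0.61, 0.77, 0.90, 0.25, 0.55])

# The four inner quintile boundaries of the observed series.
bounds = np.quantile(idE, [0.2, 0.4, 0.6, 0.8])

# Assign each subregion to its quintile interval, coded 0..4.
classes = np.searchsorted(bounds, idE, side="left")
```

The resulting class codes are then expanded into the 0/1 disjunctive columns of the department's table, as in Section 2.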

The quintile intervals are [0, q^k_{i1}], (q^k_{i1}, q^k_{i2}], …, (q^k_{i4}, q^k_{i5}], for k = 1, 2.  (4.11)
For every i = 1, …, n, the table E_i is transformed into a table of 0/1 data. The problem is to build a hierarchy on all departments of the territory in order to observe the level of development of each of the subregions according to the two indices and thus to make comparisons. The observations are summarized in tables Δ_1, Δ_2, …, Δ_n, which constitute a structure of juxtaposed 0/1 data matrices. The data presented are from a study of 1541 municipalities in Algeria, gathered in 48 departments. The departments do not have the same number of municipalities, and the municipalities do not have the same characteristics (size, rural or urban) nor the same locations (mountain, plain, coastal, and so forth). We have to build typologies of departments according to their economic level and human development, following the United Nations Organization standards.
The result of the study made it possible to group together the departments of the great cities, which have large, old universities, and the municipalities with a long existence. Another group emerged which includes the relatively new departments from the last administrative division, which develop activities and services of small and middle-sized companies. The other groups are distinguished by great disparities between municipalities in their economic level and human development, and according to their surface area and importance.

Conclusion
In this paper, the definition of entropy is the one stated by Shannon [10], still used in the theory of signal and information. The suggested formalism gives an explanation and a practical use of the Kullback-Leibler distance as an index of distance between representative elements of a structure of tables of categorical data. It is possible to extend these results to the case of a structure of data tables of measurements and to adapt a classification algorithm to the case of functional data.

1. If one of the states is certain, that is, there exists l ∈ {1, …, r} such that p_l = Pr(S = m_l) = 1, then H(S) = 0. The entropy of a physical system with a finite number of states m_1, …, m_r is maximal if all its states are equiprobable: p_t = Pr(S = m_t) = 1/r for all t ∈ {1, …, r}. We also have 0 ≤ H(S) ≤ log₂ r.

Table 3 :
Entropy of the conditional random physical systems associated to the 6 objects.
Every department E_i is constituted of L_i subregions; for each subregion we measured the composite economic development index idE and the composite human development index idH. These two composite indices are weighted means of variables measuring the degree of economic and human development of each subregion C^l_i, for every i = 1, …, n and l = 1, …, L_i.