This paper describes an enhancement of the fuzzy lattice reasoning (FLR) classifier for pattern classification based on a positive valuation function. Fuzzy lattice reasoning (FLR) was recently introduced as a lattice data domain extension of the fuzzy ARTMAP neural classifier based on a lattice inclusion measure function. In this work, we improve the performance of the FLR classifier by defining a new nonlinear positive valuation function. As a consequence, the modified algorithm achieves better classification results. The effectiveness of the modified FLR is demonstrated by examples on several well-known pattern recognition benchmarks.
Much attention has been paid lately to applications of lattice theory [
The original FLR model employs a linear positive valuation function to define an inclusion measure. Liu et al. [
In this work, we apply the FLR algorithm to solve pattern classification problems without feature extraction and improve its performance based on a new nonlinear positive valuation function. As a consequence, the modified algorithm achieves better classification results. The effectiveness of the modified FLR is demonstrated by examples on several well-known benchmarks.
The layout of this paper is as follows. In Section
A lattice
A valuation on a crisp lattice
An inclusion measure
This reveals that an inclusion measure quantifies the degree to which one fuzzy set is contained in another.
A positive valuation function
In our experiments, the data have been normalized in lattice
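A minimal sketch of such a normalization, assuming standard per-attribute min-max scaling into [0, 1] (the exact scheme used in our experiments is not reproduced above):

```python
def minmax_normalize(data):
    """Scale each attribute of `data` (a list of equal-length feature
    vectors) into [0, 1], so every pattern lies in the unit hypercube."""
    lo = [min(col) for col in zip(*data)]
    hi = [max(col) for col in zip(*data)]
    return [[(x - l) / (h - l) if h > l else 0.0
             for x, l, h in zip(row, lo, hi)]
            for row in data]

print(minmax_normalize([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]]))
# → [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```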
The aforementioned valuation function is more flexible than other valuations proposed in the literature. First, the performance of FLR can be optimized by selecting different values of the location parameter. Second, if the first variable
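As a concrete illustration of a nonlinear positive valuation with a tunable location parameter (not necessarily the exact function defined above), consider a logistic curve on [0, 1]:

```python
import math

def v(x, lam=5.0, mu=0.5):
    """A strictly increasing (hence positive) valuation on [0, 1]: a
    logistic curve whose steepness (lam) and location (mu) can be tuned.
    This is an illustrative choice, not the paper's exact function."""
    return 1.0 / (1.0 + math.exp(-lam * (x - mu)))

# positivity: x < y implies v(x) < v(y)
assert v(0.1) < v(0.5) < v(0.9)
# shifting the location parameter mu reshapes the valuation
assert v(0.4, mu=0.2) > v(0.4, mu=0.6)
```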
Figure
The positive valuation functions: (a)
A lattice
A fuzzy lattice is a pair
Consider the set
For lattice
Including a least (the empty) interval, denoted by
An isomorphic function
As a consequence, the degree of inclusion of an interval in another one in lattice
For two
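Under the standard FLR construction on the unit interval, with isomorphic function θ(x) = 1 − x and a positive valuation v, the inclusion of one interval in another can be sketched as σ(x ≤ u) = V(u) / V(x ∨ u); a linear v is used here purely for illustration:

```python
def theta(x):
    # isomorphism between [0, 1] and its dual lattice
    return 1.0 - x

def v(x):
    # positive valuation; linear here purely for illustration
    # (the paper substitutes a nonlinear one)
    return x

def join(p, q):
    # lattice join of two intervals: the smallest interval containing both
    (a, b), (c, d) = p, q
    return (min(a, c), max(b, d))

def size(iv):
    # valuation of an interval [a, b], represented as (theta(a), b)
    a, b = iv
    return v(theta(a)) + v(b)

def sigma(x, u):
    # degree to which interval x is included in interval u
    return size(u) / size(join(x, u))

assert sigma((0.2, 0.3), (0.1, 0.5)) == 1.0  # x lies inside u
assert sigma((0.6, 0.9), (0.1, 0.5)) < 1.0   # x does not lie inside u
```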
This section presents a classifier that extracts rules from the input data based on fuzzy lattices. One of the FLR classifier's important properties is its ability to handle disparate types of data, including real vectors, fuzzy sets, symbols, graphs, images, and waves, as well as any combination of the aforementioned; hence, FLR can combine different types of data. Furthermore, FLR can handle both complete and noncomplete lattices, and it can cope with both points and intervals. Moreover, learning is stable, incremental, and fast, carried out in a single pass through the training data. Some applications involve “missing” or “do not care” data; FLR manages such data by replacing them with the least element
It should be mentioned that an input datum to the FLR classifier (model) is represented as
Suppose a knowledge base
Classes
Store input
Else
Compute
Else
Note that
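The truncated listing above can be read as the following single-pass training sketch. This is an assumed reconstruction: a rule base of (interval, label) pairs, winner selection by maximal inclusion, assimilation when the merged interval's diagonal stays under a threshold ρ, and reset otherwise; the helper functions use a linear valuation purely for illustration.

```python
def flr_train(samples, rho=0.3):
    """Single-pass FLR training sketch (an assumed reconstruction of the
    listing above).  A rule is an (interval, class_label) pair; intervals
    are (a, b) with 0 <= a <= b <= 1."""
    theta = lambda x: 1.0 - x                        # dual isomorphism
    v = lambda x: x                                  # linear valuation (illustrative)
    size = lambda iv: v(theta(iv[0])) + v(iv[1])
    join = lambda p, q: (min(p[0], q[0]), max(p[1], q[1]))
    sigma = lambda x, u: size(u) / size(join(x, u))  # inclusion measure
    diag = lambda iv: iv[1] - iv[0]                  # interval size

    rules = []
    for x, label in samples:
        candidates = list(range(len(rules)))
        learned = False
        while candidates and not learned:
            # winner: the rule that includes x to the largest degree
            j = max(candidates, key=lambda i: sigma(x, rules[i][0]))
            w, w_label = rules[j]
            merged = join(x, w)
            if w_label == label and diag(merged) <= rho:
                rules[j] = (merged, label)   # assimilate: enlarge the winner
                learned = True
            else:
                candidates.remove(j)         # reset the winner, try the next
        if not learned:
            rules.append((x, label))         # commit the input as a new rule
    return rules

rules = flr_train([((0.1, 0.1), 1), ((0.2, 0.2), 1), ((0.8, 0.8), 2)])
# the two class-1 points merge into one interval rule; the class-2 point
# becomes a second rule
```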
The Simpson benchmark is a two-dimensional data set consisting of 24 points, used for testing the performance of clustering algorithms [
Figure
Decision boundaries generated by the modified FLR with four different threshold parameters.
As noted in the previous section, one of FLR's properties is its capacity for knowledge representation. Indeed, FLR is capable of extracting implicit features from the data and representing them as rules. Each rule is represented as
Three induced rules generated by the modified FLR.
Rule 1: IF a1 ∈ [0.18, 0.27] AND a2 ∈ [0.16, 0.30] THEN class 1
Rule 2: IF a1 ∈ [0.34, 0.34] AND a2 ∈ [0.22, 0.22] THEN class 2
Rule 3: IF a1 ∈ [0.43, 0.49] AND a2 ∈ [0.20, 0.28] THEN class 3
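Classification with such induced rules assigns a test point to the rule in which it is included to the largest degree. A sketch using the three rules above, with a linear valuation purely for illustration (the paper's nonlinear valuation would replace it):

```python
# The three induced rules, as (hyperbox, class) pairs; a hyperbox is one
# interval per attribute (a1, a2).
rules = [([(0.18, 0.27), (0.16, 0.30)], 1),
         ([(0.34, 0.34), (0.22, 0.22)], 2),
         ([(0.43, 0.49), (0.20, 0.28)], 3)]

def size(box):
    # valuation of a hyperbox, summed per attribute
    # (linear valuation, purely for illustration)
    return sum((1.0 - a) + b for a, b in box)

def join(p, q):
    # smallest hyperbox containing both p and q
    return [(min(a, c), max(b, d)) for (a, b), (c, d) in zip(p, q)]

def classify(point):
    x = [(t, t) for t in point]          # a point is a trivial interval
    # winner-take-all on the inclusion measure size(u) / size(x v u)
    best = max(rules, key=lambda r: size(r[0]) / size(join(x, r[0])))
    return best[1]

print(classify((0.2, 0.2)))   # → 1 (the point falls inside rule 1's box)
```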
In this section, we evaluate the classification performance of the modified FLR in a series of experiments on six well-known benchmarks.
We evaluate the classification performance of the FLR model using images from the Columbia Image database [
Ten objects used to train the networks.
The image segmentation data set was donated by the Vision Group, University of Massachusetts, and is included in the Machine Learning Repository of the University of California, Irvine [
The Pen-based recognition of handwritten digits data set was taken from the UCI repository of machine learning databases [
Sample digits from handwritten digits datasets.
The letter recognition benchmark was obtained from the UCI repository of machine learning databases [
Examples of the character images.
The Semeion handwritten digit recognition benchmark was taken from the UCI repository of machine learning databases [
The optical recognition of handwritten digits benchmark was obtained from the UCI repository of machine learning databases [
Table
Characteristics of the six data sets used.
Data set  No. of data set elements  No. of training data  No. of testing data  No. of input attributes  No. of classes
Columbia Image  720  60  620  16384  10
Image Segmentation  2310  210  2100  19  7
Pen-based recognition  3558  60  3498  16  10
Letter recognition  20000  2000  18000  16  26
Semeion hand recognition  1593  162  1431  256  10
Optical recognition of handwritten digits  2182  385  1797  64  10
In order to provide a meaningful comparison, all the algorithms have been implemented in the same environment using the C++ object-oriented programming language, with the same partitioning of the data sets into training and testing sets, the same order of input patterns, and a full range of parameters; moreover, we have employed the isomorphic function
Recognition results along with relative ranking by different methods over 6 benchmarks.
[Table: per-benchmark recognition accuracies and relative rankings of SOM, Fuzzy ART, GRNN, and the remaining compared methods on the six data sets (Columbia Image, Image Segmentation, Pen-based recognition, Letter recognition, Semeion hand recognition, Optical recognition of handwritten digits); the numeric entries are not recoverable.]
In all our experiments, in order to achieve the best performance, we have considered GRNN with values of the variance parameter between 0 and 0.5 in steps of 0.001. For fuzzy ART, we have set the choice parameter to 0.01, and the vigilance and learning parameters have been varied between 0 and 1 in steps of 0.01. Computational experiments for the SOM algorithm have been carried out using
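The parameter sweeps described above amount to an exhaustive grid search, which can be sketched generically; here `evaluate` is a stand-in for training and testing a model at one parameter setting:

```python
from itertools import product

def best_over_grid(evaluate, grids):
    """Exhaustive parameter sweep mirroring the protocol above: report
    the best accuracy over every combination of parameter values.
    `evaluate` maps a parameter tuple to an accuracy; `grids` holds one
    list of candidate values per parameter."""
    return max(((evaluate(p), p) for p in product(*grids)),
               key=lambda t: t[0])

# e.g. the fuzzy ART sweep: vigilance and learning rate each from 0 to 1
# in steps of 0.01 (a dummy evaluate peaking at (0.3, 0.7) is used here)
grid = [i / 100 for i in range(101)]
acc, params = best_over_grid(lambda p: 1.0 - abs(p[0] - 0.3) - abs(p[1] - 0.7),
                             [grid, grid])
print(params)   # → (0.3, 0.7)
```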
Table
As can be seen in Table
Average classification accuracy on the entire data sets.
Algorithm  SOM  Fuzzy ART  GRNN  (label lost)  (label lost)
Average  71.84  69.58  86.60  87.50  87.32
In Table
Sum of ranking of Table
[Table: sum of ranks per algorithm (SOM, Fuzzy ART, GRNN, and the remaining compared methods); the numeric entries are not recoverable.]
In this work, we introduced an improvement of the fuzzy lattice reasoning (FLR) classifier using a new nonlinear positive valuation function. We have investigated the performance of the new FLR model on several well-known classification problems. Experimental results demonstrated that the proposed method outperformed established classification models in terms of classification accuracy on the testing data.