Credit Risk Prediction Using Fuzzy Immune Learning

The use of credit has grown considerably in recent years. Banks and financial institutions confront credit risks in conducting their business, and good management of these risks is a key factor in increasing profitability. Therefore, every bank needs to predict the credit risks of its customers. Credit risk prediction has been widely studied in the field of data mining as a classification problem. This paper proposes a new classifier using immune principles and fuzzy rules to predict the credit quality of individual bank customers. The proposed model is combined with fuzzy pattern classification to extract accurate fuzzy if-then rules. In our proposed model, we have used immune memory to remember good B cells during the cloning process. We have designed two forms of memory: simple memory and k-layer memory. Two real-world credit data sets from the UCI machine learning repository are selected as experimental data to show the accuracy of the proposed classifier. We compare the performance of our immune-based learning system with results obtained by several well-known classifiers. Results indicate that the proposed immune-based classification system is accurate in detecting credit risks.


Introduction
Banks and financial agencies employ credit scoring models extensively to distinguish good from bad credits. Loans are usually the most significant source of risk in banks. Using credit scoring reduces the time of the loan approval procedure [1], saves cost per loan, and enhances credit decisions. This enhancement helps lenders guarantee that they apply the same criteria to similar groups of borrowers [2]. In these situations banks can supervise existing loans much more easily than before [3]. Because of the fast growth of auto financing in the last two decades, the use of data mining for credit risk prediction has increased rapidly [4][5][6][7]. Olson and Wu describe the goal of credit scoring as classifying credit applications into good or bad payers [8]. Fair and Isaac presented a credit scoring model in the early 60s [9]. Since then, various models have been developed using traditional statistical methods such as discriminant analysis [10, 11]. Ordinary linear regression has also been used as another traditional statistical method for credit scoring [12, 13]. Recent techniques of credit risk assessment [14][15][16][17][18][19][20] treat the lending decision problem as a binary classification problem [8].
The performance of bioinspired algorithms, like artificial neural networks and evolutionary computation, on various data mining problems has been demonstrated by many previous investigations [21][22][23][24][25]. Many bioinspired algorithms have been proposed for credit scoring [1, 21, 26]. Recently, artificial immune systems (AIS) have been successfully employed in a wide variety of application areas. Artificial immune systems are computational systems inspired by the processes of the natural immune system. This metaheuristic emerged in the 90s as a new computational model in AI. Hunt and Cooke applied AIS to pattern recognition problems in 1996 [27]. Timmis and Knight define AIS as "adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving" [28]. There are various types of AIS, and researchers have worked mostly on the theories of immune networks, clonal selection, and negative selection [29]. In this paper, we propose an AIS-based classification system with a new clonal selection algorithm. Within the proposed AIS, fuzzy logic has been applied to extract interpretable fuzzy rules [30, 31].
The main reason that encouraged us to use the AIS metaheuristic for the credit risk prediction problem is that AIS explores the search space of a problem very efficiently, a capability associated with its hypermutation operator. We selected AIS for the credit scoring problem because previous investigations show that this classification problem has a search space that rewards exploration, and our own experiments confirmed this vividly: the fitness function output changes drastically for very similar inputs. Moreover, AIS has demonstrated high performance on two-class classification problems in previous investigations.
The new classification system proposed in this paper is an improved version of the fuzzy artificial immune system (FAIS) [30] and comprehensible credit scoring-FAIS (CCS-FAIS) [31] classifiers, the two previous versions of our AIS-based classification system for credit risk prediction. In our proposed model, we have employed immune memory to remember good B cells during the cloning process. We have designed two forms of memory: simple memory and k-layer memory. Results demonstrate that our new definition of memory for AIS-based fuzzy rule extraction increases the final classification rate of the credit scoring process considerably. The Weka data mining tool [32] has been used to compare our classifier with several well-known classifiers.
The rest of this paper is organized as follows. Section 2 discusses some algorithms presented for the credit risk prediction problem. In Section 3, we describe immune systems and the concepts we have used in our proposed algorithm. Section 4 describes pattern classification with fuzzy logic. The proposed algorithm is presented in Section 5. Section 6 provides information on the performed experiments and the achieved results. Finally, Section 7 concludes the paper.

Literature Review
SVM is one of the popular learning methods presented for the credit scoring classification problem. Choosing the optimal input feature subset and setting the best kernel parameters are the two problems that must be solved to build an efficient SVM-based classifier [33]. Zhang et al. [3] and Huang et al. [33] used SVM for credit scoring and showed that SVM achieves high and acceptable accuracy for this classification problem.
Hybrid data mining approaches have also been proposed for effective credit scoring. Yao [34] combined neighborhood rough sets and SVM into a hybrid classifier, in which the neighborhood rough set performs feature selection. Zhang et al. [35] proposed a hybrid model based on genetic programming (GP) and SVM: GP extracts if-then rules, and an SVM-based discriminator handles the remaining instances of the dataset. Yi [36] combined decision trees and simulated annealing, joining the local search strategy of decision tree algorithms with the global optimization of the simulated annealing algorithm.
Exploring new techniques for improving credit scoring performance can save a great deal of money. In recent years, many bioinspired algorithms have been presented for classification problems such as credit card fraud detection [37], credit scoring, security, and other applications [38]. Among these approaches, AIS is one of the newest methods applied to credit scoring. Leung et al. [9] proposed a simple AIS (SAIS) algorithm that adopts a few key concepts of AIS (affinity measure, cloning, and mutation) and found SAIS to be a very competitive classifier.
Fuzzy logic has been used extensively for designing classification systems [39, 40]. Its important advantage is its capability in managing uncertainty and vagueness [41]. Most fuzzy classifiers generate a list of fuzzy if-then rules. These rules are represented in linguistic form, which makes them interpretable by users; experts can validate and correct the rules, which increases interaction with users. Lei and Ren-hou [42] proposed a classifier based on immune principles and fuzzy rules, applied it to 15 well-known data sets from the UCI machine learning repository [43], and achieved high accuracy. The fuzzy AIS component of our proposed method is similar to that of Lei and Ren-hou. The major difference lies in the definition of the fitness function: they used a simple function, whereas we have improved the fitness function with extra terms. This function is discussed in detail in Section 5.

Immune Systems
Researchers have long been inspired by biology in solving computational problems, with several techniques built on biological metaphors such as evolutionary algorithms, swarm intelligence, and neural networks [29]. Artificial immune systems are bioinspired algorithms that have been active and prolific over the last decade [44, 45]. The basis of AIS is the human immune system, and AIS exploits the learning and memorizing capabilities of immune systems [46]. The relation between immunology and computation was proposed in the 80s [29]. Hunt and Cooke investigated the nature of learning in the immune system and proposed a learning algorithm [27]. Timmis and Knight [28], de Castro and von Zuben [47], and Dasgupta [48] developed basic models of artificial immune systems, which are the sources of current AIS algorithms. Immune-inspired models have been applied to a wide variety of research fields, ranging from pattern recognition (such as classification and clustering), anomaly detection [49, 50], and optimization [51, 52] to robotics [53, 54] and image processing [55, 56]. The key characteristics of immune systems are learning, adaptability, memory mechanisms, and self-organization, which are desirable in inspired algorithms. Clonal selection, immune networks, and negative selection are the three main immunological theories employed as strongly accepted perspectives in AIS.

Natural Immune System. The human immune system is a distributed pattern detection system with many functional components located in specific parts throughout the body.
The immune system controls its defense mechanism through innate and adaptive responses [57]. Innate responses act against any invader that enters the body, whereas adaptive responses are directed against particular invaders and demonstrate learning, recognition, memory acquisition, and self-regulation. The invaders that infect the body are called antigens, and they provoke the immune responses. The core of the adaptive response is the lymphocytes, which carry receptors to recognize antigens. Lymphocytes are divided into two types, B cells and T cells. In the case of an invasion, appropriate B cells proliferate, producing sufficient proteins (called antibodies) to remove the antigens. A B cell holds antibodies on its surface which can identify the antigens invading the body; the matching between antigen and antibody is complementary, similar to a "lock and key" [58]. T cells do not interact with antigens directly. They circulate through the body and scan the surfaces of body cells for the presence of foreign antigens that have combined with the cells; T cells then bind to these cells and become activated. Activated T cells secrete chemicals as alert signals to others, and B cells that receive these signals become stimulated upon detection of an antigen by their antibodies.

Clonal Selection Theory.
The clonal selection theory describes the basic response of the adaptive immune system to an antigenic stimulus. The idea is that only those cells capable of detecting the antigen proliferate, while the others do not clone. The theory applies to both T cells and B cells. When the receptors of a B cell bind to an antigen, the B cell becomes stimulated, colonies of clones are created, and some cells differentiate into memory cells. During the cloning process, B cells undergo somatic hypermutation, which maintains the diversity of the B cell population against future unfamiliar antigens. After cloning, activated B cells (or memory cells) produce huge amounts of antibodies, which results in the elimination of the antigen. Some memory cells remain within the host to generate a rapid response upon a subsequent encounter with the same or a similar antigen [29]. CLONALG [59] and the B cell algorithm [60] are AIS algorithms based on clonal selection theory; their cloning, mutation, and selection operators make them similar to genetic algorithms.
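As an illustration, the select-clone-hypermutate-reselect cycle just described can be sketched in Python. This is a generic CLONALG-style toy, not our IFAIS implementation; the affinity function, mutation rate, and all constants are illustrative choices.

```python
import random

def clonal_selection_step(population, affinity, rng, clone_factor=3, mutation_rate=0.2):
    """One generation of a CLONALG-style cycle: select, clone, hypermutate, reselect.
    Individuals are lists of floats in [0, 1]; `affinity` is higher for better cells."""
    # Selection: keep the better half of the population by affinity.
    selected = sorted(population, key=affinity, reverse=True)[:max(1, len(population) // 2)]
    clones = []
    for cell in selected:
        for _ in range(clone_factor):
            # Somatic hypermutation: perturb each gene with some probability,
            # clamped back into the unit interval.
            clones.append([min(1.0, max(0.0, g + rng.uniform(-0.1, 0.1)))
                           if rng.random() < mutation_rate else g
                           for g in cell])
    # Reselection: keep the best individuals among parents and clones (elitist).
    return sorted(population + clones, key=affinity, reverse=True)[:len(population)]

# Toy affinity: closeness to the all-ones vector (0 is the optimum).
aff = lambda cell: -sum((1.0 - g) ** 2 for g in cell)
rng = random.Random(42)
pop = [[rng.random() for _ in range(4)] for _ in range(8)]
best0 = max(pop, key=aff)          # remember the initial best cell
for _ in range(30):
    pop = clonal_selection_step(pop, aff, rng)
```

Because the reselection step keeps the best of parents and clones, the best affinity in the population can never decrease between generations.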

Fuzzy Rule-Based Pattern Classification
In this section, we briefly explain the fuzzy rule-based pattern classification method, first proposed by Ishibuchi et al. [61] and used in many investigations [30, 31, 42, 62-65]. This method consists of fuzzy rule generation and fuzzy reasoning procedures.

Fuzzy Rule Generation.
Let us assume that the pattern space is an n-dimensional continuous space with c classes. For simplicity, each dimension is scaled to the unit interval [0, 1]. The training data set consists of m labeled patterns x_p = (x_p1, ..., x_pn), p = 1, ..., m. The purpose is to generate fuzzy if-then rules of the following form:

Rule R_j: if x_1 is A_j1 and ... and x_n is A_jn, then x belongs to Class C_j with CF = CF_j,

where R_j is the label of the jth fuzzy if-then rule, A_j1, ..., A_jn are antecedent fuzzy sets on the unit interval [0, 1], C_j is the consequent class, and CF_j is the certainty factor (or rule weight) of the fuzzy if-then rule R_j, a real number in the unit interval [0, 1] (Figure 1 demonstrates a sample fuzzy if-then rule). Some antecedents may be "don't care" and are usually omitted; therefore, the number of antecedents of a rule is less than or equal to n. Rules with only a few antecedent conditions are more understandable to users.
We have used a typical set of linguistic values as antecedent fuzzy sets. The membership function of each linguistic value is obtained by homogeneously partitioning the domain of each attribute into symmetric triangular fuzzy sets (the membership function in (2)). We use such a simple specification in our experiments to demonstrate the high performance of our fuzzy classifier system even when the membership function of each antecedent fuzzy set is not tailored; any tailored membership function could be used in our fuzzy classifier system for a particular pattern classification problem. Consider

membership_k(x) = max{1 - |x - a_k| / b, 0}, a_k = (k - 1) / (K - 1), b = 1 / (K - 1), (2)

where K is the number of linguistic values and S, MS, M, ML, L, and DC, respectively, stand for small, medium small, medium, medium large, large, and do not care.
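The homogeneous triangular partition of (2) can be sketched in a few lines of Python; the function name and the evaluation at x = 0.6 are illustrative.

```python
def triangular_membership(x, k, K=5):
    """Membership grade of x in the k-th (1-based) of K symmetric triangular
    fuzzy sets that homogeneously partition the unit interval [0, 1]."""
    peak = (k - 1) / (K - 1)   # apex a_k of the k-th triangle
    width = 1.0 / (K - 1)      # b: distance between adjacent apexes
    return max(1.0 - abs(x - peak) / width, 0.0)

# With K = 5 the linguistic values are S, MS, M, ML, L; "don't care" (DC)
# always has membership 1 and can therefore be omitted from the product.
labels = ["S", "MS", "M", "ML", "L"]
grades = {lab: triangular_membership(0.6, k + 1) for k, lab in enumerate(labels)}
```

Note that at any interior point the memberships of the two neighboring triangles sum to 1, which is a convenient property of this symmetric partition.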

Advances in Fuzzy Systems
The winner rule is the rule for which the product of the compatibility grade of the input test instance and the grade of certainty attains the largest value in the rule set.
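The single-winner decision just described can be sketched as follows; the rule representation (a dictionary of membership functions per attribute index) is our own illustrative encoding, not the paper's data structure.

```python
def classify(instance, rules):
    """Single-winner fuzzy reasoning: the winner is the rule maximising the
    product of the compatibility grade mu(x) and the grade of certainty CF.
    A rule is (antecedents, class_label, cf), where `antecedents` maps an
    attribute index to a membership function; missing indices are "don't care"."""
    best_label, best_score = None, 0.0
    for antecedents, label, cf in rules:
        mu = 1.0
        for attr, member in antecedents.items():
            mu *= member(instance[attr])   # compatibility: product of memberships
        if mu * cf > best_score:
            best_label, best_score = label, mu * cf
    return best_label   # None when no rule covers the instance
```

Returning None when every rule has zero compatibility corresponds to an unclassified instance, the case handled by the fallback reasoning technique described later.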

Research Procedure
This section presents the proposed algorithm and discusses each of its steps in detail. Comprehensible credit scoring-FAIS (CCS-FAIS) [31] and fuzzy artificial immune system (FAIS) [30] are two fuzzy classifiers that we proposed earlier using immune principles. These classifiers were based on the clonal selection theory. The clonal selection principle describes the main features of an adaptive immune response to an antigenic stimulus; the main idea is that only those B cells that identify the antigens are selected to proliferate. The selected cells undergo an affinity maturation process, which improves their affinity to the selective antigens. In this paper, no distinction is made between a B cell and its antibody; therefore, each individual in our immune model is called a B cell.
Our previous FAIS and CCS-FAIS classification systems used a population of B cells. In these classifiers, each B cell had a primary age to live in the population. The age of a B cell was increased if its fitness improved during the maturation process; otherwise, B cells whose current ages reached their maximum age thresholds died.
In this paper, we have improved the performance of the FAIS and CCS-FAIS classifiers. The differences between IFAIS (the method of the current paper) and CCS-FAIS/FAIS are as follows.
(1) In our proposed model, we have employed immune memory to remember good B cells during the cloning process.
(2) We have designed two forms of memory to remember good B cells during the cloning process: simple memory and k-layer memory.
(3) The IFAIS benefits from using several diverse selection procedures to develop an efficient clonal selection algorithm.
The goal of the immune model is to obtain a set of rules with high accuracy. Each B cell represents a rule, coded according to Figure 1 as described earlier. In the cloning method, a B cell is changed randomly. Randomness of the modification is a way of exploring the search space, and the balance of exploration and exploitation is a major problem in heuristic search algorithms. In order to exploit the knowledge gained in previous cloning steps, the memory records the changes made to B cells, which enables the algorithm to produce higher-quality B cells. The cloning method with this kind of memory increases the probability of the modifications that have been recorded in memory in former iterations of the algorithm. We call this type of memory simple memory. In each iteration, the contents of the memory degrade slightly, so the effectiveness of the memory decreases gradually over the proliferation procedure. When the memory stops producing high-quality B cells, the number of memory-biased changes decreases accordingly.
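A minimal sketch of the simple memory described above, assuming a terms-by-attributes matrix of accumulated relative affinities; the class name, decay constant, and floor value are our own illustrative choices, not the paper's implementation.

```python
import random

class SimpleMemory:
    """Sketch of the simple memory: a terms x attributes matrix whose entry
    (i, j) accumulates the relative affinity (new minus old) observed when
    attribute j was changed to fuzzy term i."""
    def __init__(self, n_terms, n_attrs, decay=0.95, floor=1e-3):
        self.m = [[1.0] * n_attrs for _ in range(n_terms)]  # uniform prior
        self.decay, self.floor = decay, floor

    def record(self, term, attr, relative_affinity):
        # Reward changes that improved affinity; keep entries positive.
        self.m[term][attr] = max(self.floor, self.m[term][attr] + relative_affinity)

    def age(self):
        # Memory contents degrade slightly in each iteration.
        self.m = [[max(self.floor, v * self.decay) for v in row] for row in self.m]

    def sample_change(self, rng):
        # Choose a (term, attribute) pair with probability proportional to its entry.
        flat = [(t, a, self.m[t][a])
                for t in range(len(self.m)) for a in range(len(self.m[0]))]
        r = rng.random() * sum(w for _, _, w in flat)
        for t, a, w in flat:
            r -= w
            if r <= 0:
                return t, a
        return flat[-1][:2]

mem = SimpleMemory(n_terms=3, n_attrs=2)
mem.record(1, 0, 5.0)   # changing attribute 0 to term 1 improved affinity
```

Sampling is then biased toward the recorded change, while the decay in age() gradually restores the uniform prior when the memory stops paying off.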

Affinity Functions. Equation (4) demonstrates the affinity functions used, which were previously presented in CCS-FAIS and FAIS [30, 31].
During the cloning process, it might be more effective to consider more than one modification of the selected B cell. In a simple memory, all changes are recorded independently; therefore, we define a new type of memory named k-layer memory. Here k is the maximum number of simultaneous changes on a B cell. For example, a 3-layer memory contains 3 kinds of memories: the first records single modifications, the second records 2 simultaneous modifications, and the last records 3 simultaneous modifications. A k-layer memory needs a large amount of physical memory to run efficiently; therefore, it is not a useful method for huge data sets. The detailed implementation of these memories is explained in the next section.
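The layered structure can be sketched as follows; the keying scheme (sorted attribute-term pairs) and decay constant are illustrative assumptions, not the paper's implementation.

```python
class KLayerMemory:
    """Sketch of a k-layer memory: layer r records changes applied r at a time,
    keyed by the (attribute, term) pairs of one simultaneous modification.
    The key space grows combinatorially with r, which is why a k-layer memory
    needs far more storage than a simple memory."""
    def __init__(self, k):
        self.k = k
        self.layers = [dict() for _ in range(k)]  # layers[r-1] holds r-change records

    def record(self, attrs, terms, relative_affinity):
        r = len(attrs)
        assert 1 <= r <= self.k and len(terms) == r
        key = tuple(sorted(zip(attrs, terms)))    # canonical order of the changes
        layer = self.layers[r - 1]
        layer[key] = layer.get(key, 0.0) + relative_affinity

    def age(self, decay=0.95):
        # All layers degrade slightly each iteration, as in the simple memory.
        for layer in self.layers:
            for key in layer:
                layer[key] *= decay

    def best(self, r):
        # Most rewarding r-change combination seen so far (None if layer empty).
        layer = self.layers[r - 1]
        return max(layer, key=layer.get) if layer else None
```

A 1-layer instance behaves like the simple memory; the higher layers let correlated multi-antecedent changes be rewarded as a unit.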

Proposed Classifier. An overview of the proposed classifier is presented in Pseudocode 2 and Algorithm 1. The main loop of the algorithm applies the learning procedure to each class separately. This loop consists of 4 steps: initialization, rule generation, rule learning, and termination test. The rule generation phase employs an AIS-based algorithm to find a single rule based on the initialized population. In the rule learning stage, when a rule is added to the final learned rule set, the learning mechanism reduces the weights of the training instances covered by the new rule; therefore, in the next rule generation round, the AIS-based rule induction procedure focuses on instances that are currently uncovered or misclassified. At the beginning of the learning process, the weights of all training instances are set to 1. Each step of the new proposed AIS-based algorithm is described briefly in Pseudocode 2. The details of our algorithm are presented in Algorithm 1 (IFAIS stands for improved FAIS).
(1) Initialization. In this stage, a population of B cells is generated. The size of the initial population is constant and is controlled by a parameter named initialPopulationSize. To generate a B cell, an instance of the current class is selected randomly from the data set, and the fuzzy terms of the antecedent part of the rule (B cell) are computed from the attribute values of the selected training instance. The consequent part of the generated rule becomes the class of the selected instance. The initial age of a B cell is another parameter, denoted defaultAge. After generation of the initial population, the fitness of each B cell is computed independently.
(2) Rule Generation. In this step, a population of B cells searches iteratively for an optimized rule. At the first step of IFAIS, some B cells are selected to be cloned. This selection is based on the roulette-wheel selection algorithm: B cells with higher fitness have a greater chance of being selected, and the number of selected B cells is constant (selectionSize). The selected B cells then proliferate. Hypermutation occurs during the cloning process. A B cell contains a rule, and the rule has antecedents; hypermutation applies changes to these antecedents and thereby to the corresponding B cell. The maximum number of simultaneous changes to the antecedents of a B cell is determined by a parameter named maxTermChangesNumber. We need to restrict the number of changes because increasing the number of modified antecedents significantly increases the probability of corrupting the rule. A random number (at most maxTermChangesNumber) determines the number of changes to the selected B cell; then the algorithm determines which antecedents to change using the immune memory. In this algorithm, the simple memory is represented by a matrix: rows are fuzzy terms, columns are attributes, and entry (i, j) is the value of changing the jth attribute to the ith fuzzy term. The probability of choosing the jth attribute is the sum of the entries in the jth column divided by the sum of all entries, and the probability of changing to a particular fuzzy term is proportional to the value of that fuzzy term. Determining a correct factor that reveals the progress of a change is critical; in this algorithm, we use the relative affinity (the difference between the new affinity and the old affinity) as the value of a change to a fuzzy term. If the algorithm uses a k-layer memory, note that k equals maxTermChangesNumber. The k-layer memory contains k matrices. The dimensions of the 1-matrix are the same as those of the simple memory; the k-matrix is used when k simultaneous changes occur, so its number of columns equals the number of k-combinations of attributes, C(numAttributes, k), and its number of rows equals numFuzzyTerms^k. If we do not use memory, the probability of changing an attribute value to "don't care" is a parameter called dontCareReplacementRate. The effect of the memory is controlled by a weighted mean of the default probability and the memory probability; this weight is a parameter called memoryWeight. The number of clones produced for each B cell is another parameter, cloneNumber. The age of a generated B cell is calculated using (5), which controls the population size. Consider

Age_new = Age_old + Age_default × affinity, if affinity_new > affinity_old. (5)

After cloning, the B cells of previous generations become older, and B cells whose ages reach 0 are deleted from the main population.
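The roulette-wheel selection at the start of this step can be sketched as follows; the cell names and fitness values are illustrative.

```python
import random

def roulette_select(cells, fitness, n, rng):
    """Fitness-proportionate (roulette-wheel) selection of n B cells, with
    replacement; cells with higher fitness are selected more often."""
    weights = [fitness(c) for c in cells]
    total = sum(weights)
    chosen = []
    for _ in range(n):
        r = rng.random() * total
        acc = 0.0
        for cell, w in zip(cells, weights):
            acc += w
            if acc >= r:           # the wheel stops inside this cell's slice
                chosen.append(cell)
                break
    return chosen

# Hypothetical fitness values for two B cells.
fitness = {"strong": 9.0, "weak": 1.0}.get
selected = roulette_select(["strong", "weak"], fitness, 1000, random.Random(1))
```

With a 9:1 fitness ratio, the stronger cell is drawn roughly nine times as often, which is the selection pressure IFAIS relies on before cloning.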
(3) Rule Learning. When the AIS algorithm finishes, the fittest B cell is selected, and the rule it represents is added to the final rule set. Then, the classification rate of the current rule set is compared with that of the old rule set, which does not contain the new rule. The classification rate is calculated using (6); if the difference is higher than a threshold (accuracyThreshold), the addition is accepted. Consider

classification rate = NCP / (number of patterns), (6)

where NCP is the number of correctly classified patterns.
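The acceptance test of this step amounts to a simple threshold check, sketched below; the helper `evaluate` and the toy rate functions are illustrative assumptions.

```python
def accept_rule(rule_set, candidate, evaluate, accuracy_threshold=0.01):
    """Accept `candidate` into the rule set only if it raises the classification
    rate (NCP / number of patterns, eq. (6)) by more than accuracyThreshold.
    `evaluate` is assumed to return the classification rate of a rule set."""
    old_rate = evaluate(rule_set)
    new_rate = evaluate(rule_set + [candidate])
    if new_rate - old_rate > accuracy_threshold:
        rule_set.append(candidate)
        return True
    return False
```

This guards the rule set against rules that only duplicate the coverage of already-learned rules.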
(4) Termination Test. If a stopping condition is satisfied, the learning of the current class is finished, and the algorithm proceeds to learn the next class. Otherwise, the algorithm tries to learn another rule by initializing a new population for the next execution of the AIS. Any stopping condition can be used to terminate the loop; we limit the number of learned rules for each class with a parameter called maxRuleSetSize.

Classification Reasoning Technique.
After the rule extraction procedure, the classifier employs the learned rules to predict the class of a test record. The usual reasoning method of fuzzy classifiers is based on (3), which is explained in detail in Section 4, and we use (3) to predict the class of an input test instance. Whenever none of the rules is applicable to the input test instance, the algorithm instead finds the most similar rule to this instance. In this method, a rule is created from the instance, just as the primary rules are generated from instances in the initialization phase. The most similar rule is the learned rule with the greatest longest-common-subsequence (LCS) length with respect to the newly generated rule. The length of the common subsequence of the selected rule must reach a minimum length, a parameter named minRuleSimilarityLength. This method decreases the number of unclassified instances.
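The LCS-based fallback can be sketched as follows, treating each rule as a sequence of antecedent fuzzy terms; the rule encoding is an illustrative assumption.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two term sequences,
    by standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def most_similar_rule(instance_rule, rules, min_rule_similarity_length):
    """Fallback reasoning: return the learned rule whose antecedent sequence has
    the greatest LCS length with the rule built from the test instance, provided
    it reaches the minimum similarity length (otherwise None)."""
    best, best_len = None, min_rule_similarity_length - 1
    for rule in rules:
        l = lcs_length(instance_rule, rule)
        if l > best_len:
            best, best_len = rule, l
    return best
```

Returning None keeps the instance unclassified when even the closest rule is not similar enough, mirroring the minRuleSimilarityLength guard.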

Experimental Results
In this section, two credit data sets are used to evaluate the predictive accuracy of the proposed classifier. The Australian credit approval and German credit approval data sets are available from the UCI Machine Learning Repository. In the Australian credit approval data set, all attribute names and values have been changed to meaningless symbols to protect the confidentiality of the data. Table 1 summarizes these data sets. The Australian credit data contains 383 instances for which assigning credit is high risk and 307 instances that are creditworthy applicants. The German credit data is more unbalanced: it consists of 300 instances for which credit should not be assigned and 700 creditworthy applicants.
Each value in the Australian and German data sets is normalized between 0.0 and 1.0 using the min-max transformation method. Table 2 lists the parameter settings used in IFAIS. Simulations have been performed with the Weka data mining tool. Table 2 also reveals an interesting fact about the datasets: the German credit scoring dataset is more complicated than the Australian one, because IFAIS needs a larger initial population size and a larger maximum rule set size when applied to the German dataset.
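The min-max transformation mentioned above can be sketched in a few lines; the column names are illustrative.

```python
def min_max_normalize(columns):
    """Min-max transformation: scale every attribute column into [0.0, 1.0].
    `columns` maps an attribute name to its list of raw values; a constant
    column is mapped to all zeros to avoid division by zero."""
    normalized = {}
    for name, values in columns.items():
        lo, hi = min(values), max(values)
        span = hi - lo
        normalized[name] = [(v - lo) / span if span else 0.0 for v in values]
    return normalized
```

This scaling is what lets the same triangular fuzzy partition of [0, 1] be reused for every attribute.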
Figures 2 and 3 illustrate the progress of the classification rate per rule of the proposed classifier for the Australian and German credit data sets, respectively, and show the role of each evolved fuzzy if-then rule. Our algorithm uses iterative rule learning and employs AIS in each iteration to find a rule (the rule generation phase). After the extraction of each rule, the weights of the instances covered by the rule are decreased; in IFAIS the instances are effectively removed from the data set, that is, their weights are set to zero. According to Figures 2 and 3, the first extracted rules are more general and shorter than later rules. All of the extracted rules participate in the decision-making process, and together the rules classify almost the whole test data.
According to Figures 2 and 3, we can also compare the complexity of the data sets. We extracted 5 rules for each class of the Australian dataset, whereas for the German dataset we obtained 12 and 17 rules for the negative and positive classes, respectively.
The difference in the number of extracted rules shows that the German data set patterns are more complex than the Australian ones (in Figure 3 the increasing rate of the graph for the negative class is higher than for the same class in Figure 2). The negative class in the German data set has a more complicated signature because, compared with the Australian data set, the later learned rules have more effect (the slope of the graph is very gentle). This shows that achieving accurate knowledge is more difficult for the German data set than for the Australian one. In the Australian data set, the first rules are very important because the final classification accuracy is nearly equal to the classification accuracy at those points.
In Table 1, we have demonstrated the distribution of instances over the two classes of the Australian and German credit scoring datasets. In the Australian data set this distribution is approximately equal; Figure 2 shows this fact too, because the accuracies of the two classes are nearly the same. In the German data set the number of negative-class instances exceeds the number of positive-class instances; Figure 3 shows the important role of the negative class in the final classification accuracy, and the extracted rules of the positive class cover very few test records. We followed a 10-fold cross-validation procedure to evaluate the accuracy of our classifier. The classification rate is measured with the Weka machine learning software and compared with well-known classifiers in Weka, including LibSVM, PART, DTNB, KStar, LWL, DMNBtext, SMO with an RBF kernel, and J48. Among these classifiers, DTNB and PART extract rules, J48 uses a decision tree, LWL and KStar are lazy classifiers, LibSVM and SMO are two implementations of SVM, and DMNBtext uses Bayesian decision theory. According to Table 5, our proposed algorithm has the best F-measure in both positive and negative classes (except in the positive class of the Australian data set, where its value is the second best). The F-measure is the harmonic mean of precision and recall. The importance of precision and recall depends on the domain: in some application areas precision is of more interest than recall, and in others recall is more important, but the F-measure considers both measures concurrently. Therefore, IFAIS is a reliable learning algorithm for classification problems.
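For reference, the per-class measures discussed above reduce to a few lines given the confusion-matrix counts; the example counts are illustrative, not taken from our experiments.

```python
def f_measure(tp, fp, fn):
    """Precision, recall, and F-measure (harmonic mean of the two) for one
    class, from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

Because the harmonic mean is dominated by the smaller of the two values, a classifier scores a high F-measure only when precision and recall are both high.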

Conclusion
In this paper, we proposed a fuzzy classification system for credit scoring named IFAIS. The proposed classifier combines fuzzy logic and AIS concepts. The new classification system is an enhanced version of the FAIS and CCS-FAIS classifiers, the two earlier versions of our AIS-based classification system for credit scoring. In IFAIS, we used immune memory to remember good B cells during the cloning process and designed two forms of memory: simple memory and k-layer memory. Results indicated that our new definition of memory for immune-based fuzzy rule extraction increases the final classification rate of credit risk prediction significantly.
According to the promising results we have obtained, immune principles are very effective for credit risk prediction; therefore, as future work we will consider other concepts in artificial immune systems, such as negative selection and immune networks.

Figure 2 :
Figure 2: Progress of classification rate per rule of IFAIS for Australian credit data set.

Figure 3 :
Figure 3: Progress of classification rate per rule of IFAIS for German credit data set.
Figure 1: A sample fuzzy rule: "if Attribute 1 is Small and Attribute 3 is Large and ... and Attribute n is Medium Large, then class is C."

Pseudocode 1 (calculating the grade of certainty CF_j for rule R_j):
Step 1. For each pattern x_p do:
  (1.1) Compatibility: mu_j(x_p) = product over i = 1, ..., n of membership(x_pi).
Step 2. For each class h do:
  (2.1) Calculate the relative sum of compatibility grades: beta_h(R_j) = (sum over x_p in class h of mu_j(x_p)) / N_h, where N_h is the number of patterns of class h.
Step 3. Find the class that has the maximum beta_h(R_j).

The grade of certainty (CF) of each fuzzy rule is determined by Pseudocode 1. The compatibility (mu) of a pattern with a rule is the product of the membership values of the pattern in each dimension; zero compatibility means that the rule does not cover the pattern. After calculating mu for each pattern, the relative sum of compatibilities (beta) is calculated per class, and the certainty factor is obtained as the relative difference between the maximum beta and the sum of the other betas.
Fuzzy Reasoning. When the antecedent fuzzy sets of each rule are given, we can determine the consequent class and the grade of certainty of each rule by the fuzzy rule generation method described in detail in the previous section. The proposed classifier generates a set of fuzzy if-then rules; the achieved rule set is then employed to predict unknown instances. The fuzzy reasoning procedure determines which rule votes for the class of a test instance. We use a single winner rule in our algorithm. Let S be the set of fuzzy rules extracted from the training data set. An input pattern x = (x_1, x_2, ..., x_n) is classified by the single winner rule R_w in S, the rule that maximizes the product mu_j(x) * CF_j over all rules R_j in S.

Pseudocode 2: An overview of the proposed classifier. At initialization, a population of B cells is generated from instances of the current class. In the rule generation phase, some B cells are selected to proliferate, and the population searches iteratively for an optimized rule; the life cycle of B cells is controlled by age. In rule learning, the best B cell (by fitness) of the final population is selected and added to the rule set if the classification rate increases by more than a threshold. Finally, if the termination test is satisfied, the learning of the current class is finished and the algorithm proceeds to the next class.
Immune Memory. The memory cells in the natural immune system are used to eliminate similar foreign substances. In this paper, we have employed immune memory during the cloning process for the selected B cells.

Table 1 :
UCI datasets used in our experiments. Instances of the Australian data set are almost equally distributed between the classes, but the German data set is more unbalanced.

Table 2 :
Parameter specification of IFAIS in our experiments.

Table 3 :
Predictive accuracy of IFAIS and different classifiers. The results have been obtained using the Weka machine learning tool. The order of classifiers is alphabetical. The most accurate result is bold and the second most accurate is italic. The comparison of predictive accuracies illustrates that our proposed algorithm is competitive with the other classifiers.
* marks classifiers that have been manually added to Weka and are available at http://wekaclassalgos.sourceforge.net.

Table 5 :
Comparing precision, recall, and F-measure of IFAIS and selected classifiers. The results are measured by the Weka machine learning software. The order of classifiers is alphabetical. The best result is bold and the second best is italic. Table 3 summarizes the prediction accuracies of the proposed algorithm and other classifiers; other AIS-based algorithms are also shown in that table. Additional performance measures for comparing our classifier with the mentioned classifiers are precision, recall, and F-measure, which can be obtained using (7) and according to Table 4.