The Reputation Evaluation Based on Optimized Hidden Markov Model in E-Commerce

Nowadays, a large number of reputation systems have been deployed in practical applications or investigated in the literature to protect buyers from deception and malicious behaviors in online transactions. As an efficient Bayesian analysis tool, the Hidden Markov Model (HMM) has been used in e-commerce to describe the dynamic behavior of sellers. Traditional solutions adopt the Baum-Welch algorithm to train model parameters, which is unstable due to its inability to find a globally optimal solution. Consequently, this paper presents a reputation evaluation mechanism based on an optimized Hidden Markov Model, called PSOHMM. The algorithm takes full advantage of the search mechanism in the Particle Swarm Optimization (PSO) algorithm to strengthen the learning ability of HMM, and PSO has been modified to guarantee the interval and normalization constraints in HMM. Furthermore, a simplified reputation evaluation framework based on HMM is developed and applied to analyze the specific behaviors of sellers. The simulation experiments demonstrate that the proposed PSOHMM searches for optimal model parameters better than BWHMM, converges faster, and is more stable. Compared with the Average and Beta reputation evaluation mechanisms, PSOHMM reflects the behavior changes of sellers more quickly in e-commerce systems.


Introduction
The rapid growth of computer network technologies has promoted the development of electronic commerce (e-commerce). In e-commerce, all commercial transactions take place over the Internet and other computer networks. Participants play different roles according to their position in a transaction: they can be sellers or buyers, and any seller or buyer can enter and leave the e-commerce setting at any time without restriction. Thus, designing an effective evaluation mechanism to assess the trustworthiness of sellers and guarantee the security of buyers in complex e-commerce environments has become increasingly important. Since the involved participants often have no previous experience with each other, traditional social rules of trust cannot be applied directly to such virtual settings. In these scenarios, reputation systems can be used to assess the service quality and behavior of sellers and to provide references for subsequent buyers [1]. Usually, reputation is the opinion or view of one party about something [2]. This opinion is formed and updated over time according to direct interactions and indirect information about past experience provided by other participants.
Recently, research on reputation systems has attracted considerable interest from scientists in economics and computer science [1,[3][4][5][6][7][8]. In computer science, researchers mostly focus on computational models of the reputation system. Reference [2] classifies reputation systems into six classes according to their mathematical model. Among them, the Bayesian model has been extensively researched due to its flexibility and usability [9,10]. It is based on the assumption that the behavior of a seller can be modeled by a specific probability distribution; the reputation value is then a function of the expected value of that distribution and is updated after each transaction. Currently, two mainstream techniques are based on the Bayesian model: binomial and multinomial Bayesian reputation evaluation mechanisms. Binomial methods have only two states to represent the reputation value, good or bad, and often use the Beta probability density function to estimate it [11][12][13]. Multinomial methods express the reputation value with several different levels and are modeled with the Dirichlet probability distribution [14][15][16]. Examples in the literature using the Beta or Dirichlet model, either implicitly or explicitly, include the Beta reputation system [9], the Dirichlet reputation system [10], TRAVOS [11], and the Regret system [17]. These systems have contributed to different applications of reputation, such as online auctioning [18], peer-to-peer file sharing [17], and mobile ad hoc routing [19].
However, both the Beta and Dirichlet models assume a fixed probabilistic behavior for each participant. In many situations, a participant may change its behavior over time, so the outcome of each transaction does not follow a fixed probability distribution. The Beta and Dirichlet models thus ignore the dynamics of the reputation value and are not well suited to reputation systems [20]. To overcome this drawback, [9,21] propose a decay-based technique: older rating scores are given lower weights than newer ones, so that newer rating scores have greater influence on the reputation value. Reference [22] has indicated that the decay principle is useful only when the behavior of participants is highly stable. Furthermore, it is hard to determine the optimal value of the decay factor from observations alone.
Within an e-commerce society, each transaction happens within a time duration, and reputation is regarded as a prediction of future behavior. As an effective time-sequence analysis tool, the Hidden Markov Model (HMM) has been applied to reputation systems [20,22]. The behavior of a seller is approximated by a finite-state HMM, and each HMM is used to decide whether or not the seller is trusted. The HMM is then updated from observations in the form of rating scores based on direct experience or recommendations of other sellers. Therefore, even if sellers change their behavior, the HMM can track these changes and avoid the fixed-behavior assumption [9,10]. Reference [20] proposes an HMM-based trust model, while [23] compares the HMM trust model with the Beta reputation system and demonstrates that HMM detects changes in seller behavior better. The HMM-based trust model [20] is therefore more realistic than the Beta and Dirichlet models in dynamic environments. However, [20,23] estimate the trustworthiness of sellers with specific model parameters assigned by users, which is unrealistic in real e-commerce settings. To find optimal model parameters, traditional approaches often use the Baum-Welch (BW) algorithm, based on the expectation-maximization (EM) algorithm, but BW often converges to a local optimum. Reference [24] re-estimates model parameters with reinforcement learning but still cannot overcome the local convergence problem. Recently, various intelligent evolutionary algorithms have been introduced to optimize HMM with good performance. Reference [25] optimizes HMM by tabu search; [26,27] propose to train the HMM structure with a genetic algorithm (GA); [28] trains HMM with the Particle Swarm Optimization (PSO) algorithm; [29,30] compare PSO and GA for HMM training and demonstrate that the hybrid of PSO and BW is superior to the BW algorithm and to the hybrid of GA and BW. For HMM, the model parameters need to satisfy statistical characteristics, so the optimization of model parameters in HMM can be considered a constrained problem. Nevertheless, these approaches usually combine evolutionary algorithms with HMM directly and leave the parameter constraints out of consideration.
In this paper, we employ Particle Swarm Optimization to search for optimal model parameters of HMM (PSOHMM) and thereby avoid the local optima of the BW algorithm. We handle the parameter constraints in HMM with remapping and renormalization mechanisms, and we propose a reputation evaluation framework based on HMM that adopts historic rating scores to train the model parameters of HMMs, estimates reputation values from these parameters, and updates the parameters with new rating scores. Furthermore, we use the framework to predict the reputation of sellers with respect to their specific behavior in e-commerce. The simulation experiments demonstrate that PSOHMM achieves better optimization performance than BWHMM and responds quickly to the behavior changes of sellers in e-commerce environments.
The remainder of this paper is organized as follows. Section 2 gives some definitions of HMM and formulates HMM learning as a constrained optimization problem. Section 3 introduces Particle Swarm Optimization into the Hidden Markov Model to enhance its search capability; since PSO only addresses unconstrained optimization, we employ remapping and renormalization methods to settle the interval and normalization constraints in HMM. Section 4 discusses the performance of the proposed algorithm and compares it with related work. Finally, Section 5 concludes the paper and proposes future work.

Hidden Markov Model
Given a set of M observation states V = {V_1, ..., V_M}, an HMM consists of a finite set of N hidden states S = {S_1, ..., S_N} with an associated probability distribution. Suppose that the HMM regularly undergoes a state change after a certain constant period of time according to a set of probabilities associated with its current state. The HMM is then a probabilistic model over a collection of random variables {O_1, ..., O_T, Q_1, ..., Q_T}, where O = {O_1, ..., O_T} is the sequence of observation states with O_t ∈ V, and Q = {Q_1, ..., Q_T} is the sequence of hidden states with Q_t ∈ S.
Generally speaking, two conditional independence assumptions are given to make associated algorithms tractable as follows.
(1) The t-th hidden variable, given the (t − 1)-st hidden variable, is independent of all earlier variables:

P(Q_t | Q_{t−1}, Q_{t−2}, ..., Q_1) = P(Q_t | Q_{t−1}).

(2) The t-th observation depends only on the t-th hidden state:

P(O_t | Q_t, Q_{t−1}, ..., Q_1, O_{t−1}, ..., O_1) = P(O_t | Q_t).

HMM can be formally defined as follows.
In terms of Definition 1, the state transition probability matrix A and the emission probability matrix B are

A = (a_ij), 1 ≤ i, j ≤ N,    B = (b_j(k)), 1 ≤ j ≤ N, 1 ≤ k ≤ M.

Model parameters must satisfy the following constraints:

0 ≤ π_i ≤ 1, (4)    Σ_{i=1}^{N} π_i = 1, (5)
0 ≤ a_ij ≤ 1, (6)    Σ_{j=1}^{N} a_ij = 1, (7)
0 ≤ b_j(k) ≤ 1, (8)    Σ_{k=1}^{M} b_j(k) = 1. (9)

Since HMM is based on Bayesian theory, all model parameters are probabilities and belong to [0, 1], as shown in (4), (6), and (8); these are called the interval constraints. In addition, the probabilities of all initial hidden states sum to 1, the transition probabilities out of each hidden state sum to 1, and the emission probabilities of each hidden state over all observation states sum to 1, as in (5), (7), and (9); these are called the normalization constraints. Three classical problems are associated with HMMs.
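As a concrete illustration, the interval and normalization constraints above can be checked programmatically. The sketch below is a hypothetical helper (not part of the paper) that validates a parameter triple λ = (π, A, B):

```python
import numpy as np

def check_hmm_constraints(pi, A, B, tol=1e-9):
    """Check the interval and normalization constraints on lambda = (pi, A, B).

    pi : (N,) prior probabilities of the hidden states
    A  : (N, N) state transition matrix, rows sum to 1
    B  : (N, M) emission matrix, rows sum to 1
    """
    for theta in (pi, A, B):
        # interval constraints (4), (6), (8): every parameter lies in [0, 1]
        if np.any(theta < 0) or np.any(theta > 1):
            return False
    # normalization constraints (5), (7), (9): each distribution sums to 1
    if abs(pi.sum() - 1) > tol:
        return False
    if np.any(np.abs(A.sum(axis=1) - 1) > tol):
        return False
    if np.any(np.abs(B.sum(axis=1) - 1) > tol):
        return False
    return True
```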
(1) The evaluation problem: given O and λ, how to compute the probability of the observed sequence, that is, P(O | λ).
(2) The decoding problem: given O and λ, how to find the hidden state sequence that most probably generated the observed sequence.
(3) The learning problem: given O, how to adjust the model parameters λ to maximize P(O | λ).
In the context of reputation systems, rating scores from buyers reflect the behavior changes of a seller and form the observable state sequence in HMM. All observable states form a Markov chain in time order. The purpose of HMM is to infer hidden states from these observable states; the hidden states correspond to reputation values. According to the decoding and learning problems, before the most probable hidden state sequence can be found for the observed sequence, it is necessary to compute optimal model parameters λ* that maximize the following log-likelihood objective function:

λ* = argmax_λ log P(O | λ). (10)

Given model parameters λ, the probability of O is computed by summing the joint probability over all possible state sequences Q:

P(O | λ) = Σ_Q P(O, Q | λ). (11)

According to Bayesian theory,

P(O, Q | λ) = P(O | Q, λ) P(Q | λ). (12)

Given a hidden state sequence, the likelihood of an observation sequence equals the product of the emission probabilities computed along that path:

P(O | Q, λ) = Π_{t=1}^{T} b_{q_t}(o_t). (13)

Given model parameters λ, the probability of a state sequence Q = {q_1, ..., q_T} is determined by the initial probability and the product of the transition probabilities from one state to the next:

P(Q | λ) = π_{q_1} Π_{t=2}^{T} a_{q_{t−1} q_t}. (14)

Using (13) and (14), the objective function in (10) becomes

λ* = argmax_λ log Σ_Q [ π_{q_1} Π_{t=2}^{T} a_{q_{t−1} q_t} Π_{t=1}^{T} b_{q_t}(o_t) ]. (15)

The method traditionally used to solve (15) is the Baum-Welch algorithm, also called the forward-backward algorithm. It first makes an initial estimate of all model parameters and then refines them toward a maximum-likelihood solution with the iterative expectation-maximization (EM) algorithm. However, BW is a hill-climbing algorithm: it updates model parameters along only one direction at each iteration. Consequently, BW not only degrades computational efficiency but is also easily trapped in local optima owing to its sensitivity to the initial estimate. It is therefore desirable to seek an efficient method that searches for the optimal solution along multiple directions simultaneously. Inspired by this idea, we employ Particle Swarm Optimization in the next section to search for optimal model parameters more efficiently.
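The log-likelihood log P(O | λ) that the objective function maximizes can be computed efficiently with the scaled forward algorithm rather than the exponential sum over all state sequences. A minimal sketch (function names are illustrative):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """log P(O | lambda) via the scaled forward algorithm.

    obs : sequence of observation indices (0 .. M-1)
    pi  : (N,) prior probabilities
    A   : (N, N) transition matrix; B : (N, M) emission matrix
    """
    alpha = pi * B[:, obs[0]]          # forward variable at t = 1
    log_p = 0.0
    for t in range(len(obs)):
        if t > 0:                      # propagate through A, then emit
            alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()                # scaling factor, avoids underflow
        alpha = alpha / c
        log_p += np.log(c)             # accumulate the log-likelihood
    return log_p
```

With uniform parameters every observation has probability 1/2 per step, so the log-likelihood of a length-T sequence is T·log(0.5).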

A PSOHMM Reputation Evaluation Mechanism
3.1. Particle Swarm Optimization. The Particle Swarm Optimization (PSO) algorithm is an evolutionary method proposed by Kennedy and Eberhart [32]. It imitates the social behavior of bird flocks and fish schools to find the global optimum of optimization problems and has been applied in various scientific fields [33][34][35][36][37].
Similarly to swarm behaviors in nature, n particles are produced initially in PSO. Each particle i is represented by a D-dimensional position vector x_i(t). The velocity vector v_i(t) determines the search direction of particle i at iteration t. During each iteration, pbest_i denotes the best position found so far by particle i, representing its personal experience, and gbest denotes the best position found so far by the swarm, representing the social knowledge of the swarm. The velocity vector v_i(t) is updated at each iteration by the following rule:

v_i(t + 1) = ω v_i(t) + c_1 r_1 (pbest_i − x_i(t)) + c_2 r_2 (gbest − x_i(t)). (16)

The parameter ω, called the inertia weight, scales the velocity from the previous time step; c_1 and c_2 are scalar factors that control the influence of the personal experience and social knowledge, respectively; r_1 and r_2 are random numbers drawn from a uniform distribution, r_1, r_2 ∈ [0, 1]. The new position of the particle is then updated at each iteration:

x_i(t + 1) = x_i(t) + v_i(t + 1). (17)

If the best position gbest is selected from the positions of all particles, the PSO algorithm is called global best PSO; if gbest is selected only from a neighborhood N_i(t) of particle x_i, that is, |N_i(t)| < n, the algorithm is called local best PSO.
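The velocity and position update rules can be sketched as follows. This is a hedged illustration: the parameter values ω, c1, and c2 are typical defaults, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One global-best PSO update for the whole swarm.

    x, v, pbest : (n_particles, dim) positions, velocities, personal bests
    gbest       : (dim,) best position found by the swarm so far
    w, c1, c2   : inertia weight and acceleration coefficients
    """
    r1 = rng.random(x.shape)           # uniform random factors in [0, 1]
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new            # new positions and velocities
```

When a particle sits exactly at both its personal best and the swarm best with zero velocity, the update leaves it in place, as expected from (16) and (17).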

Hidden Markov Model Based on Particle Swarm Optimization.
In order to solve the learning problem in HMM efficiently, this paper introduces PSO into HMM so that n particles search for optimal model parameters simultaneously. The solution space consists of all possible parameters for a given number of observation states and hidden states. For convenience, the position x_i of particle i is expressed by the model parameters λ = (π, A, B) with the data structure described in Figure 1, where both the transition matrix A and the emission matrix B are vectorized. Consequently, the position of each particle is represented by a multidimensional vector.
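The vectorization of λ = (π, A, B) into a particle position, as in Figure 1, can be sketched as follows (hypothetical helper names):

```python
import numpy as np

def encode(pi, A, B):
    """Flatten lambda = (pi, A, B) into one particle position vector."""
    return np.concatenate([pi.ravel(), A.ravel(), B.ravel()])

def decode(x, N, M):
    """Recover (pi, A, B) from a flat position of length N + N*N + N*M."""
    pi = x[:N]
    A = x[N:N + N * N].reshape(N, N)
    B = x[N + N * N:].reshape(N, M)
    return pi, A, B
```

Encoding followed by decoding is lossless, so the swarm can operate on flat vectors while the fitness function works with the structured parameters.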
The fitness function f(λ) is derived from (15) to find optimal model parameters λ that maximize the probability of the observation sequence O:

f(λ) = log P(O | λ). (18)

Since all model parameters represent probabilities, any parameter θ in λ must satisfy θ ∈ [0, 1]. However, the position update rule in (17) can satisfy neither the interval constraints in (4), (6), and (8), that is, it cannot guarantee that model parameters stay within [0, 1], nor the normalization constraints in (5), (7), and (9). The reason is that PSO was originally proposed for unconstrained optimization problems, so it is necessary to modify the PSO algorithm to cope with constrained optimization problems.
In order to apply the unconstrained optimization algorithm PSO to a constrained optimization problem and make the solutions satisfy the constraints in (4)-(9), this paper employs two different methods to guarantee the interval and normalization constraints, respectively. Concerning the interval constraints (4), (6), and (8), when model parameters exceed the bounds we employ a remapping method to adjust them. Let θ_ub = 1 denote the upper bound, θ_lb = 0 the lower bound, and θ a model parameter exceeding the limits. The corrected parameter θ' produced by the remapping method considers the following cases:

θ' = θ_ub − γ (θ − θ_ub), if θ > θ_ub,
θ' = θ_lb + γ (θ_lb − θ), if θ < θ_lb, (19)

where γ ∈ [0, 1] is an adjusting parameter generated randomly. Meanwhile, the velocity vector v_i(t) is also updated:

v_i(t) = −γ v_i(t). (20)

Furthermore, in order to ensure that the prior vector, the transition matrix, and the emission matrix satisfy the normalization constraints in (5), (7), and (9), the following renormalization method is employed for each distribution θ = (θ_1, ..., θ_n):

θ'_i = θ_i / Σ_{j=1}^{n} θ_j. (21)
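The remapping and renormalization steps can be sketched as follows. The exact case analysis of the remapping rule was lost in extraction, so the sketch takes one plausible reading: violating entries are re-drawn inside [0, 1] with a random adjusting factor γ, and each distribution is then divided by its sum.

```python
import numpy as np

rng = np.random.default_rng(1)

def remap(theta, lo=0.0, hi=1.0):
    """Pull out-of-range parameters back into [lo, hi].

    Violating entries are re-drawn inside the bounds using a random
    adjusting factor gamma in [0, 1] (an assumption, not the paper's
    exact rule).
    """
    gamma = rng.random(theta.shape)
    bad = (theta < lo) | (theta > hi)
    out = theta.copy()
    out[bad] = lo + gamma[bad] * (hi - lo)
    return out, bad

def renormalize(pi, A, B):
    """Divide each probability distribution by its sum so that the
    normalization constraints (rows summing to 1) hold again."""
    return (pi / pi.sum(),
            A / A.sum(axis=1, keepdims=True),
            B / B.sum(axis=1, keepdims=True))
```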

The Reputation Model Based on PSOHMM.
In order to optimize HMM with PSO, we propose a reputation evaluation mechanism based on PSOHMM. The optimized PSOHMM algorithm is described in Algorithm 1.
In e-commerce, buyers give feedback to express their satisfaction after a transaction takes place. Based on the PSOHMM algorithm, we propose a simplified reputation system framework, illustrated in Figure 2. For n sellers, n HMMs are trained on historic rating scores from buyers; each HMM corresponds to one seller. According to these trained HMMs, each seller has a corresponding reputation value. When a new rating score for the i-th seller arrives, the i-th HMM is updated, and the updated reputation value of the i-th seller is then predicted from the updated model. It is worth noting that our reputation system framework is based on the assumption that all buyers are honest and always give fair rating scores; we do not consider collusion attacks from buyers.

Simulation Experiments
In this section, we conduct simulation experiments to validate the performance of the proposed algorithm. Since the objective function and convergence speed are the key indicators for assessing optimization algorithms, in the first and second experiments we compare PSOHMM and BWHMM on these two aspects, respectively, to demonstrate that PSOHMM searches for better solutions. In order to verify the performance of PSOHMM in e-commerce, we then compare the PSOHMM, Average, and Beta algorithms under different settings: in the third experiment, we simulate sellers who exploit their good reputation to deceive buyers; in the fourth experiment, we simulate sellers who play tricks by changing their behavior frequently; in the fifth experiment, we test the algorithms with multilevel reputation; and in the last experiment, we validate the algorithms on real data from Amazon.

Experimental Protocol.
For the first and second experiments, two groups of datasets are generated randomly. To improve computational efficiency, we choose 5 as the number of observation states. Some e-commerce communities have expanded from single-dimensional ratings to multidimensional ratings that characterize the behaviors of sellers from various aspects, such as the quality of products, the quality of service, and the delivery time. A feasible reputation model should therefore accommodate multidimensional as well as single-dimensional ratings. Consequently, we generate ten different 1-dimensional observation sequences of length 100 in the first group and ten different 2-dimensional observation sequences of length 100 in the second group. For each dataset, the model parameters of the HMMs are trained using these observation sequences and the number of hidden states is set to 2.
In a real e-commerce scenario, some sellers build their reputation by behaving well for a certain amount of time and then exploit their good reputation by suddenly changing their behavior. A good reputation model should detect this kind of behavior immediately. So, in the third experiment, a binary rating sequence is generated to simulate this change of behavior. The rating sequence consists of 50 good rating scores followed by 50 bad rating scores. A 2-state HMM is trained on the observed sequence, using state 1 to represent the untrusted state and state 2 the trusted state. For PSOHMM, the number of particles is 25 and the number of iterations is 10. For BWHMM, the number of iterations is 50.
In the fourth experiment, we simulate the fluctuating behavior of sellers. In order to maximize benefit, some sellers switch frequently between good and bad behavior. Consequently, we generate a binary rating sequence of length 100, in which 20 good rating scores are followed by 20 bad rating scores, then 20 good rating scores, and so on.
Binary reputation is not enough to express the trustworthiness of sellers, so multilevel reputation is used in the fifth experiment. A rating score sequence with 50 good rating scores followed by 50 neutral and 50 bad rating scores is generated. In this experiment we adopt a three-level reputation, so the number of hidden states is set to 3 for PSOHMM.
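The three simulated rating sequences described above can be generated as follows (hypothetical helper names; 1/0 encode good/bad ratings, and 2/1/0 encode good/neutral/bad):

```python
def deception_sequence():
    """Third experiment: 50 good (1) then 50 bad (0) rating scores."""
    return [1] * 50 + [0] * 50

def fluctuating_sequence():
    """Fourth experiment: behavior switches every 20 transactions, length 100."""
    seq = []
    for block in range(5):
        seq += [1 if block % 2 == 0 else 0] * 20
    return seq

def multilevel_sequence():
    """Fifth experiment: 50 good (2), 50 neutral (1), 50 bad (0) scores."""
    return [2] * 50 + [1] * 50 + [0] * 50
```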
In the last experiment, a real dataset is collected from Amazon. The dataset contains product information and customer feedback. For simplicity, we make the reasonable assumption that each product corresponds to one seller. After buyers purchase a product, they give their rating scores, and popular products obtain the most profit. In this experiment, the rating scores of one product are extracted from the dataset and analyzed. These rating scores were given from 11 September 2000 to 23 June 2006; the number of feedbacks is 375.
To test the responsiveness of different reputation evaluation algorithms, this paper compares PSOHMM with the Average [38] and Beta [9] algorithms in the e-commerce experiments. The Average algorithm computes the mean value of all rating scores as the reputation value; the Beta algorithm takes binary ratings as input and computes the reputation value by statistically updating Beta probability density functions. To forget old feedback gradually, the forgetting principle is introduced into the Beta algorithm. Because the optimal forgetting factor depends on the dataset, we provide three different forgetting factors: 0.2, 0.5, and 0.9.
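For reference, the Average and Beta baselines can be sketched as follows. The Beta update with a forgetting factor follows the standard (r, s) evidence-count formulation, which is our reading of [9] rather than the paper's exact implementation:

```python
def average_reputation(scores):
    """Average algorithm: the mean of all rating scores."""
    return sum(scores) / len(scores)

def beta_reputation(scores, forgetting=0.5):
    """Beta algorithm with a forgetting factor.

    Binary ratings (1 = good, 0 = bad) update the evidence counts (r, s);
    old evidence is decayed by the forgetting factor before each update.
    """
    r = s = 0.0
    for score in scores:
        r = forgetting * r + (1.0 if score == 1 else 0.0)
        s = forgetting * s + (0.0 if score == 1 else 1.0)
    return (r + 1.0) / (r + s + 2.0)  # expected value of Beta(r+1, s+1)
```

A larger forgetting factor keeps more old evidence, which is why the 0.9 variant in Figure 7 reacts to behavior changes more slowly than 0.2 and 0.5.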

Experimental Comparison Based on the Objective Function.
In this section, according to the 1-dimensional and 2-dimensional observation sequences, we obtain five different HMMs. As shown in Section 2, the optimal model parameters should maximize the probability of the observation sequence. So the log-likelihoods of the observation sequences under the optimized HMMs, that is, log P(O | λ), are computed to compare the optimizing ability of PSOHMM and BWHMM, as shown in Figures 3 and 4.
From Figures 3 and 4, we can observe that PSOHMM always achieves larger log-likelihoods of the observation sequences than BWHMM for both the 1-dimensional and 2-dimensional data sets. This means that BWHMM is easily trapped in local optima, while PSOHMM is able to find better solutions and can be applied to observation sequences of any dimension. In PSOHMM, the current best position pbest_i of a particle represents a local optimum, whereas gbest represents the best solution among all local optima found in the current iteration. At each iteration, the position is updated with the velocity vector. Since the velocity update drives particles toward better solutions, gbest moves gradually toward the global optimum. PSOHMM employs multiple particles to search for an optimal solution simultaneously and thus has a larger probability of finding the global optimum, so PSOHMM is more efficient than BWHMM at searching for model parameters. Figure 5 validates this analysis: it shows that the best position pbest_i of every particle converges to gbest as the iteration number increases.

Experimental Comparison Based on the Convergence Speed.
In order to demonstrate the improvement in convergence speed, Figure 6 compares the convergence performance of PSOHMM and BWHMM over different iteration numbers. We use the first group of test data to record the log-likelihood at each iteration for PSOHMM and BWHMM, respectively, and compute the average log-likelihood per iteration. We can note that PSOHMM begins to converge after just one iteration, while BWHMM is still not stable after 25 iterations. It is also worth observing that PSOHMM always has a larger log-likelihood objective function than BWHMM at each iteration, which further evidences the superior objective function performance of PSOHMM throughout the iterative process.

Simulation Experiments in E-Commerce.
We compare the PSOHMM reputation model with the Average and Beta reputation algorithms in the scenario where sellers begin to deceive buyers after accumulating a good reputation in a real e-commerce environment. Figure 7 shows that all algorithms can estimate the reputation values for the first 50 rating scores. But when the seller changes its behavior, PSOHMM responds faster than the Average and Beta algorithms. For the Beta method in particular, a large forgetting factor (0.9) responds more slowly than 0.2 and 0.5, meaning that a large forgetting factor lets old rating scores exert a greater influence on the reputation value. With forgetting factors of 0.2 and 0.5, the Beta and Average methods respond similarly to the behavior change of a seller.
Sellers have realized that behaving badly after building a good reputation is not sensible, since it gradually degrades their reputation no matter which reputation evaluation model is used. For this reason, some sellers play a trick: they switch between good and bad behavior frequently in order to maximize their benefit while keeping a certain good reputation at the same time. Figure 8 compares the estimation performance of PSOHMM with the Average and Beta algorithms. Within 100 transactions, the sellers change their behavior every 20 transactions. PSOHMM reflects these changes immediately, whereas the Average and Beta algorithms always keep the reputation above 0.5. All reputation values belong to [0, 1], so reputation values greater than 0.5 mean that the sellers are considered trustworthy even though they behave badly at times. As a result, the estimated reputation might mislead buyers into transacting with malicious sellers.
However, some buyers want to know more about reputation values than the inflexible binary trusted/untrusted states can offer. In this case, two methods can solve the problem. The first is to provide the probability of trustworthiness of sellers as the reputation value and let buyers decide whether to transact with the seller. The second is to increase the number of hidden states to represent multilevel reputation. Figure 9 illustrates the performance of the PSOHMM, Average, and Beta algorithms using multilevel reputation. As in Figure 7, PSOHMM responds quickly to the behavior changes and the Average algorithm responds slowly, but the Beta algorithm cannot reflect the behavior change from a good rating to a neutral rating until the rating score becomes bad. How to find the optimal number of hidden states remains an open question.
According to the simulation experiments in e-commerce, PSOHMM is superior to the Average and Beta evaluation mechanisms in responding to the behaviors of sellers for both binary and multilevel ratings. Consider the behavior of some sellers in e-commerce: they usually behave well to accumulate a good reputation when they enter the community, and after some transactions they may degrade the quality of products or service, or deceive buyers, to acquire maximum benefit. In this case, the PSOHMM reputation evaluation mechanism better reflects these behavior changes and helps warn buyers who want to transact with such sellers.

The Case Study on Amazon.
Finally, the experiment is carried out on real data from Amazon to further validate the performance of PSOHMM. For this experiment, we collected a public dataset from Amazon containing feedback on the reputation of its products. Each product can be regarded as a seller: sometimes the product is welcomed by buyers and sometimes it is not, and Amazon should adjust its orders for the product according to buyer feedback. First, to further illustrate the advantage of PSOHMM, the objective function values of PSOHMM and BWHMM on the Amazon data are compared in Figure 10. It is obvious that, on real data as well, PSOHMM can search for better model parameters than BWHMM to maximize the objective function.
Next, the predicted reputation of one product is illustrated in Figure 11. In order to remove unfair rating scores from the feedback and enhance the robustness of PSOHMM, the reputation is computed from average rating scores. It can be noticed that the Average and Beta algorithms always consider the product popular, while the reputation predicted by PSOHMM shows that the product is welcomed by customers at the beginning and that negative feedback then increases. PSOHMM can therefore reflect the changes in customer demand and provides useful reference information for the company to adjust its orders for this product.

Conclusion
This paper proposes an optimized Hidden Markov Model based on Particle Swarm Optimization (PSOHMM) to develop a new reputation model. The proposed algorithm takes full advantage of the global searching capability of PSO and avoids the local optima that trap the BW algorithm. To handle the interval and normalization constraints in HMM, this paper employs remapping and renormalization methods within the iterative process. Building on the efficiency of the PSOHMM algorithm, this paper proposes a reputation evaluation framework based on the Hidden Markov Model. The simulation experiments have demonstrated that PSOHMM is superior to the BW algorithm at searching for optimal model parameters and is more stable. Compared with the Average and Beta reputation evaluation algorithms, PSOHMM responds quickly to the behavior changes of sellers in e-commerce.
Although the PSOHMM reputation evaluation model improves on previous work, the algorithm can still be refined. First, we will introduce an offline learning algorithm to enhance computational efficiency and make the model feasible in large e-commerce communities. Second, we will study approaches for choosing the optimal number of hidden states in HMM using optimization algorithms, depending on the application. Meanwhile, collusion attacks by buyers are not considered in our reputation model, so we will develop a more robust PSOHMM reputation model to resist attacks from malicious buyers. Owing to its flexibility, the PSOHMM model is not only suited to e-commerce but can also be applied elsewhere; for instance, it can be used to study the reputation of suppliers in supply chain management and the reputation of products in Product Lifecycle Management. Our further work will therefore also focus on applications of this reputation model and expand it into other industrial areas.

Figure 1: Data structure of a particle.

Figure 2: The reputation evaluation framework based on Hidden Markov Model.

Figure 5: The convergence of pbest_i and gbest with the iteration number.

Figure 7: The reputation evaluation comparison of PSOHMM, Average, and Beta algorithms with binary rating scores.

Figure 8: The reputation evaluation comparison of PSOHMM, Average, and Beta algorithms with binary rating scores, when sellers switch behavior frequently.

Figure 10: The comparison of the objective function values between PSOHMM and BWHMM on Amazon real data.
Definition 1 (Hidden Markov Model). Given a set of M observation states V, an HMM with a finite set of N hidden states S consists of a triple λ = (π, A, B), where:
π = {π_i = P(q_1 = S_i)} is the vector of prior probabilities of S_i being the first state of Q;
A = {a_ij}, 1 ≤ i, j ≤ N, is the state transition probability matrix, where a_ij = P(q_{t+1} = S_j | q_t = S_i) characterizes the transition probability from hidden state S_i to S_j and Σ_j a_ij = 1;
B = {b_j(k)}, 1 ≤ j ≤ N, 1 ≤ k ≤ M, is the emission probability matrix, where b_j(k) = P(o_t = V_k | q_t = S_j) describes the relation between observation V_k and hidden state S_j at time t and Σ_k b_j(k) = 1.

Algorithm 1: Hidden Markov Model algorithm based on Particle Swarm Optimization.
Input: the observation sequence, the number of hidden states
Initialization: initialize n particles x_i and corresponding velocities v_i randomly
while a termination criterion is false do
  Compute the fitness values for all particles in the swarm with (18)
  Update pbest_i of each particle and gbest of the swarm
  Update the velocities and positions with (16) and (17)
  Remap parameters violating the interval constraints with (19)-(20) and renormalize π, A, and B with (21)
end while
Output: the best model parameters λ = gbest
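Putting the pieces together, Algorithm 1 can be sketched in Python. The repair step below uses clipping plus renormalization as a simple stand-in for the paper's remapping rule, and all parameter values (swarm size, inertia weight, acceleration coefficients) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_psohmm(obs, N, M, n_particles=25, iters=10, w=0.7, c1=1.5, c2=1.5):
    """Sketch of Algorithm 1: PSO search over HMM parameters lambda = (pi, A, B)."""
    dim = N + N * N + N * M

    def repair(x):
        # interval constraints: clip into (0, 1]; normalization: rescale rows
        x = np.clip(x, 1e-6, 1.0)
        pi, A, B = x[:N], x[N:N + N * N].reshape(N, N), x[N + N * N:].reshape(N, M)
        pi /= pi.sum()
        A /= A.sum(axis=1, keepdims=True)
        B /= B.sum(axis=1, keepdims=True)
        return np.concatenate([pi, A.ravel(), B.ravel()])

    def fitness(x):
        # fitness (18): log P(O | lambda) via the scaled forward algorithm
        pi, A, B = x[:N], x[N:N + N * N].reshape(N, N), x[N + N * N:].reshape(N, M)
        alpha = pi * B[:, obs[0]]
        logp = 0.0
        for t in range(len(obs)):
            if t > 0:
                alpha = (alpha @ A) * B[:, obs[t]]
            c = alpha.sum()
            alpha = alpha / c
            logp += np.log(c)
        return logp

    X = np.array([repair(rng.random(dim)) for _ in range(n_particles)])
    V = np.zeros_like(X)
    pbest = X.copy()
    pfit = np.array([fitness(x) for x in X])
    gbest = pbest[pfit.argmax()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # rule (16)
        X = np.array([repair(x) for x in X + V])                    # rule (17) + repair
        fit = np.array([fitness(x) for x in X])
        better = fit > pfit
        pbest[better], pfit[better] = X[better], fit[better]
        gbest = pbest[pfit.argmax()].copy()
    return gbest, pfit.max()
```

The returned best position decodes into a valid parameter triple, with the prior summing to 1 and every entry in [0, 1].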