Analysis of Single Channel Blind Source Separation Algorithm for Chaotic Signals

In a wireless sensor network, the signal received by the terminal processor is usually a complex single channel hybrid chaotic signal. Engineering practice requires separating the useful signal from the mixed signal before further transmission analysis. Since chaotic signals are nonlinear and unpredictable, traditional blind separation algorithms cannot effectively separate them. To address these problems, based on the particle filter estimation algorithm, an extended Kalman particle filter algorithm (EPF) and an unscented Kalman particle filter algorithm (UPF) are proposed to solve the single channel blind separation problem of chaotic signals. Blind source separation is performed on mixed chaotic signals of different nonlinear intensities, and the experiments are evaluated and analyzed under several performance indexes. The results show that the proposed algorithms effectively separate the mixed chaotic signals.


Introduction
In recent years, the wireless sensor network has been a crucial research direction for both experts and scholars. It plays a very important role in communications, medical treatment, military affairs, and other industries [1,2]. A wireless sensor network is composed of many sensors connected by wireless communication. In such a network, the signal received by a sensor is often a complex nonlinear chaotic mixture, so the receiving sensor needs to estimate and separate the useful signals from the mixed signal. Since the late twentieth century, the blind source separation (BSS) technique has developed rapidly [3], but single channel blind separation algorithms do not fully solve the nonlinear mixed-signal problem. Kalman filtering effectively estimates and separates linear mixed signals, but for nonlinear hybrid chaotic signals its estimation and separation accuracy is low.
For the blind source separation problem and the ICA algorithm, literature [4] proposes an EEG-based quantitative classification method, which uses independent components to perform source separation through rate-limiting minimization analysis, autoregressive modeling for feature extraction, and Bayesian neural networks for classification; its sensitivity and accuracy remain high. In [5], a single channel blind source separation algorithm based on ensemble empirical mode decomposition is proposed to purify and denoise the arc acoustic signal, and independent component analysis separates the virtual multichannel signal into the target sources; the source signal is effectively separated from the ambient noise signal.
For the blind separation of chaotic signals, literature [6] uses adaptive simulated annealing models to separate the chaotic signals, but the real-time performance of the algorithm is poor. The studies in [7,8] use Kalman filtering and the generating equation of the chaotic signal to construct the objective function; that is, the signal is reconstructed from the product of the mixed signal and the separating matrix to achieve separation. Although the Kalman filter has a good separation effect for linear systems, it is not effective for nonlinear systems. Literature [9] combines the Kalman filter and the particle filter: the particle filter estimates the noise and state variables in the hybrid system and its result is then filtered by the Kalman filter, but the algorithm is highly complex. Based on particle filtering, literature [10] proposes oversampling and sequential parameter estimation to separate two MPSK signals with different sampling frequencies, but the separation error rate for unpredictable chaotic signals is high. On the basis of literature [10], the authors of [11,12] introduce the Rao-Blackwellisation strategy to reduce the residual noise of the separated signal.
In view of the above research, this paper applies two particle-filter-based methods, the extended Kalman particle filter (EPF) and the unscented Kalman particle filter (UPF), to chaotic signals, separating mixtures of general-strength and strong chaotic signals, respectively. The separation performance of EPF and UPF is then compared in the different scenarios and analyzed under different performance indexes.

Chaotic Signals Blind Separation Algorithm
Signal Model. For wireless sensor networks, the receiver receives the weighted sum of the source signals:

Z(k) = \sum_{i=1}^{n} a_i s_i(k) + v(k),   (1)

where a_i is the receiver attenuation coefficient, s = [s_1, s_2, \cdots, s_n]^T, s_i(k) is the i-th source signal, k is the discrete time point, and v(k) is the observed noise at this time.
Since the receiver often receives nonlinear chaotic signals in wireless sensor networks, the received signal in (1) is processed at the receiver, yielding a nonlinear relationship with the previous time step:

S(k) = f(S(k-1)),   (2)

where f(\cdot) is the nonlinear chaotic equation.
According to the Bayes theorem, the filtering problem of this model can be regarded as estimating the joint probability density function p(S(k), v(k) | Z(1:k)) of the state S(k) and the noise signal v(k) from the noisy observed signals Z(1:k).
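The mixing model in (1) can be illustrated with a short numerical sketch. The logistic map as the chaotic source, the unit mixing weights, and the noise level are illustrative assumptions, not the paper's actual signals.

```python
import numpy as np

def logistic_map(x0, r, n):
    """Generate n samples of the logistic map x_{k+1} = r * x_k * (1 - x_k),
    used here only as a stand-in chaotic source signal."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1 - x[k - 1])
    return x

def mix_single_channel(sources, weights, noise_std, rng):
    """Single channel mixture Z(k) = sum_i a_i * s_i(k) + v(k) with
    additive Gaussian observation noise v(k)."""
    z = np.zeros_like(sources[0])
    for a, s in zip(weights, sources):
        z += a * s
    return z + rng.normal(0.0, noise_std, size=z.shape)

rng = np.random.default_rng(0)
s1 = logistic_map(0.3, 3.9, 300)   # assumed source 1
s2 = logistic_map(0.5, 3.7, 300)   # assumed source 2
z = mix_single_channel([s1, s2], weights=[1.0, 1.0], noise_std=0.01, rng=rng)
```

The blind separation task is then to recover s1 and s2 from z alone.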

Observation Equation and State Equation. In the receiving process of the wireless sensor network receiver, the adaptive filter extracts the feature vectors, so the observation equation obtained by the receiver is

Z(k) = h(S(k)) + V(k),   (3)

and the state equation is

S(k) = f(S(k-1)) + W(k),   (4)

where, in (3) and (4), Z(k) is the mixed-signal observation, f(\cdot) and h(\cdot) are the chaotic signal models, and W(k) and V(k) are additive Gaussian white noise with zero mean and variance \sigma^2.
Suggested Function Density. In the particle filter, the most important task is to find an optimal proposal density distribution function q(S_k | S_{k-1}, Z_k), which transfers the sample points from the prior distribution region to the maximum likelihood region, where S_k is the state at time k, S_{k-1} is the state at time k-1, and Z_k is the observation signal. The ideal case is that the peak of the likelihood function and the peak of the prior distribution essentially coincide and their widths are essentially consistent, that is, their overlap is maximal. In the contrary case, the likelihood function lies far from the peak of the prior distribution and the overlap is very small, so the sample particles need to be transferred to the region covered by the likelihood function. This process constructs the proposed density function.
In the prediction stage of the basic particle filter, the predicted value S^{(i)}(k) of the particle S^{(i)} is calculated and the weight calculation equation (6) is used, where R is the measured noise variance and M is the constant coefficient of the likelihood relation. If the weight exceeds the bound M\sqrt{R}, the particle is rejected.
If the likelihood function is bounded, that is, p(Z_k | S_k) < M_k, the accept/reject method can be used to sample from the optimal proposal density distribution q(S_k | S_{k-1}, Z_k).
(1) First, a sample \hat{S} is drawn from the prior distribution, \hat{S} \sim p(S_k | S_{k-1}), together with its variance.
(2) To determine whether the sample is available, draw u \sim U(0,1); if u \le p(Z_k | \hat{S})/M_k, the sample is accepted, otherwise it is rejected and the above method regenerates a sample, iterating until the N samples of the particle set are generated.
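The accept/reject steps (1)-(2) can be sketched as follows. The Gaussian prior, the Gaussian likelihood around a fixed observation z, and the bound M = 1 are toy assumptions chosen so the resulting posterior is known in closed form.

```python
import numpy as np

def rejection_sample(prior_draw, likelihood, M, n_samples, rng):
    """Accept/reject sampling: draw s ~ prior, accept if u <= p(z|s)/M
    with u ~ U(0,1). M must bound the likelihood from above."""
    samples = []
    while len(samples) < n_samples:
        s = prior_draw(rng)
        if rng.uniform() <= likelihood(s) / M:
            samples.append(s)
    return np.array(samples)

# Toy setup (assumed): standard normal prior, Gaussian likelihood around z.
# The exact posterior is then N(0.5, 0.5), which the samples should match.
z = 1.0
prior_draw = lambda rng: rng.normal(0.0, 1.0)
likelihood = lambda s: np.exp(-0.5 * (z - s) ** 2)  # unnormalized, bounded by 1
samples = rejection_sample(prior_draw, likelihood, M=1.0, n_samples=2000,
                           rng=np.random.default_rng(1))
```

Samples land in the high-likelihood region of the prior, which is exactly the transfer of particles described above.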
Next, we describe the particle filter algorithms improved through the suggested density function: the extended Kalman particle filter algorithm (EPF) and the unscented Kalman particle filter algorithm (UPF).

Extended Kalman Particle Filter Blind Separation Algorithm (EPF-BSS). The extended Kalman filter (EKF) uses local linearization to expand the nonlinear functions f(\cdot) and h(\cdot) in Taylor series around the filter value \hat{S} and omits the terms of second order and above to obtain an approximate linearized model. It is a recursive minimum mean square error (MMSE) method.
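The first-order linearization can be checked numerically. The logistic-type transition function and the expansion point are assumptions for illustration; the point is that the neglected remainder is of second order in the perturbation.

```python
import numpy as np

def f(s):
    # Example nonlinear state transition (assumed, logistic-type map)
    return 3.9 * s * (1.0 - s)

def jacobian_f(s):
    # Analytic derivative df/ds, the 1-D analogue of the EKF Jacobian
    return 3.9 * (1.0 - 2.0 * s)

s_hat = 0.3   # filter value around which we linearize
ds = 0.01     # small perturbation of the state

# First-order Taylor approximation around s_hat, as EKF uses
approx = f(s_hat) + jacobian_f(s_hat) * ds
exact = f(s_hat + ds)
# For this quadratic f the dropped remainder is exactly -3.9 * ds**2
```

The approximation error shrinks quadratically as ds decreases, which is why EKF works well only when the state stays close to the linearization point.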
(A) System Initialization. The a priori distribution p(S_0) of the system is used, and the initial states S^{(i)}_0 of the mixed signal are drawn from the prior distribution.
(B) Importance Sampling. The system state equation (6) is expanded in a first-order Taylor series around the filter value \hat{S}^{(i)}(k) using the nonlinear function h(\cdot) for the i-th sample, where \hat{S}^{(i)}(k) is the current updated particle state. Taking the first-order partial derivative of (7) gives the system Jacobian state transfer matrix

F_k^{(i)} = \partial f / \partial S |_{S = \hat{S}^{(i)}(k-1)}.

The Jacobian state transfer matrix F_k^{(i)}, the state noise matrix V_k^{(i)}, and the observed noise matrix W_k^{(i)} are used to update the covariance matrix P_{k|k-1}^{(i)}:

P_{k|k-1}^{(i)} = F_k^{(i)} P_{k-1}^{(i)} (F_k^{(i)})^T + Q_k,

where (\cdot)^T denotes the transposed matrix. Using the covariance update matrix P_{k|k-1}^{(i)}, the observation matrix H_k^{(i)}, and the observation noise matrix W_k^{(i)}, the constant coefficient M of the regulating likelihood relation in (6) is confirmed, and the rejected particles are then determined according to M:

K_k^{(i)} = P_{k|k-1}^{(i)} (H_k^{(i)})^T [H_k^{(i)} P_{k|k-1}^{(i)} (H_k^{(i)})^T + R_k]^{-1},

where R_k is the observed variance vector at moment k.

According to (9) and (10), the mean \hat{S}^{(i)}(k) and the covariance P_k^{(i)} of the sample at moment k can be obtained:

\hat{S}^{(i)}(k) = f(\hat{S}^{(i)}(k-1)) + K_k^{(i)} [Z_k - h(f(\hat{S}^{(i)}(k-1)))],
P_k^{(i)} = (I - K_k^{(i)} H_k^{(i)}) P_{k|k-1}^{(i)},

where Z_k is the latest observation information at this time.

The core of the extended Kalman particle filter is as follows: in the sampling stage, the EKF algorithm computes a mean and a covariance for each particle, and each mean and covariance is then used to guide the system sampling. In the process of computing the mean and the covariance, the EKF uses the posterior filter density function p(S_k^{(i)} | S_{0:k-1}^{(i)}, Z_{1:k}). Within the particle filter framework, the EPF algorithm produces a Gaussian recommended density distribution for each particle and updates the i-th particle by sampling

S_k^{(i)} \sim q(S_k^{(i)} | S_{0:k-1}^{(i)}, Z_{1:k}) = N(\hat{S}^{(i)}(k), P_k^{(i)}),

and with the use of (13), the update of the sample value S and the covariance P of the i-th sample from moment 0 to k can be obtained. The weights are recalculated and normalized for each particle:

w_k^{(i)} \propto p(Z_k | S_k^{(i)}) p(S_k^{(i)} | S_{k-1}^{(i)}) / q(S_k^{(i)} | S_{0:k-1}^{(i)}, Z_{1:k}),   \tilde{w}_k^{(i)} = w_k^{(i)} / \sum_j w_k^{(j)}.

(C) Resampling. The purpose of resampling is to discard the particles with small weights according to the weights of the different particles and to retain and copy the important particles, where \langle \cdot \rangle in (17) denotes the rounding-down operation. The weight value of each particle is then redistributed so that it is the same for all particles after resampling, that is, w = 1/N.
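The resampling step (C) can be sketched with the systematic scheme, one common way to discard low-weight particles and copy important ones while resetting all weights to 1/N. The particle values and weights below are toy inputs.

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Resample particles in proportion to their weights (systematic scheme);
    after resampling every particle carries the equal weight 1/N."""
    n = len(particles)
    # n evenly spaced positions with a single random offset
    positions = (rng.uniform() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(n, 1.0 / n)

particles = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.7, 0.1, 0.1, 0.1])  # one dominant particle
new_particles, new_weights = systematic_resample(
    particles, weights, np.random.default_rng(2))
```

The dominant particle (weight 0.7) is copied several times, while the light particles are likely to be dropped, exactly the retain-and-copy behaviour described above.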
(D) State Vector Update. Combine the optimal estimation at moment k with the weight update to obtain the updated state vector at moment k+1 and restore the source signal.

Unscented Kalman Particle Filter Blind Separation Algorithm (UPF-BSS). The unscented Kalman filter (UKF) abandons the linearization of the nonlinear functions while keeping the Kalman filtering framework; for the one-step prediction equation, the Unscented Transform (UT) is used to solve the nonlinear transfer problem of the mean and covariance. The UKF approximates the probability density distribution of the nonlinear function, using a set of samples to approximate the posterior probability. It neither approximates the nonlinear function nor requires the Jacobian derivation of the state transition matrix, and it does not ignore the higher-order Taylor expansion terms.
(A) Unscented Transform. Sampling points are selected from the original state distribution so that their mean and covariance equal the mean and covariance of the original state distribution; these sampling points are then propagated through the nonlinear function to obtain the corresponding set of transformed points, from which the transformed mean and covariance are computed. The statistical characteristics of y = f(x) are calculated via the Sigma points X^{(i)} and the corresponding weights \omega^{(i)} after the UT transformation.

Calculate 2n + 1 Sigma points:

X^{(0)} = \bar{X},
X^{(i)} = \bar{X} + (\sqrt{(n+\lambda)P})_i,   i = 1, \ldots, n,
X^{(i)} = \bar{X} - (\sqrt{(n+\lambda)P})_{i-n},   i = n+1, \ldots, 2n,

where n is the state dimension, P is the variance, \sqrt{(n+\lambda)P} (\sqrt{(n+\lambda)P})^T = (n+\lambda)P, and (\sqrt{(n+\lambda)P})_i is the i-th column of the matrix square root. The corresponding weights of these sampling points are calculated as follows:

\omega_m^{(0)} = \lambda/(n+\lambda),
\omega_c^{(0)} = \lambda/(n+\lambda) + (1 - \alpha^2 + \beta),
\omega_m^{(i)} = \omega_c^{(i)} = 1/(2(n+\lambda)),   i = 1, \ldots, 2n,

where the subscript m denotes the mean and c the covariance, the superscript indexes the sampling point, \lambda = \alpha^2(n+\kappa) - n is the scaling parameter used to reduce the total prediction error, \alpha is the parameter of the sampling distribution spread, and \beta is a nonnegative weight coefficient that can merge the dynamic differences of the higher-order terms in the equation.
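The sigma-point construction and weights can be verified directly: the weighted mean and weighted covariance of the point set should reproduce the original mean and covariance exactly. The 2-D mean, covariance, and UT parameters below are illustrative choices.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Generate the 2n+1 sigma points and their mean/covariance weights
    (standard unscented transform)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    # lower-triangular L with L @ L.T = (n + lam) * cov
    sqrt_mat = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean,
                     mean + sqrt_mat.T,   # rows are columns of L
                     mean - sqrt_mat.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return pts, wm, wc

mean = np.array([1.0, -1.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts, wm, wc = sigma_points(mean, cov)

recovered_mean = wm @ pts
diff = pts - recovered_mean
recovered_cov = (wc[:, None] * diff).T @ diff
```

This demonstrates the two properties listed below: the points are symmetric about the mean and their weighted sample moments match those of the original distribution.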
The Sigma point set obtained by the unscented transform has the following properties: (1) the Sigma points are symmetrically distributed around the mean, and symmetric points have the same weight.
(2) The sample variance of the Sigma point set is the same as the random vector sample variance.
(B) Initialization. The a priori distribution p(S_0) of the system is used, and the initial states S^{(i)}_0 are drawn from the prior distribution.
(C) Importance Sampling. By (20), the sampling points are selected around the observation mixing equation to obtain the Sigma point set. By (22), a one-step prediction is made for the Sigma point set:

X^{(i)}(k|k-1) = f(X^{(i)}(k-1)),

where V is the process noise vector, and the predicted mean and covariance at time k are obtained by (22) and (23):

\bar{S}(k|k-1) = \sum_i \omega_m^{(i)} X^{(i)}(k|k-1),
P(k|k-1) = \sum_i \omega_c^{(i)} [X^{(i)}(k|k-1) - \bar{S}(k|k-1)][X^{(i)}(k|k-1) - \bar{S}(k|k-1)]^T + Q.

According to the one-step prediction, the UT is used again to produce a new Sigma point set (25); this predicted Sigma point set is substituted into the observation equation to obtain the predicted observation values

Z^{(i)}(k|k-1) = h(X^{(i)}(k|k-1)),   i = 1, 2, \cdots, 2n+1.

From the observed values of the Sigma point set obtained by (26), the mean and covariance of the system prediction are obtained by the weighted sums

\bar{Z}(k|k-1) = \sum_i \omega_m^{(i)} Z^{(i)}(k|k-1),
P_{ZZ} = \sum_i \omega_c^{(i)} [Z^{(i)} - \bar{Z}][Z^{(i)} - \bar{Z}]^T + R,
P_{SZ} = \sum_i \omega_c^{(i)} [X^{(i)} - \bar{S}][Z^{(i)} - \bar{Z}]^T,

where \omega^{(i)} is the weight value of the particle at the current time, obtained by (21). The Kalman gain matrix is obtained by (28):

K(k) = P_{SZ} P_{ZZ}^{-1}.

Finally, through the above derivation, the state update and covariance update of the system are obtained:

S(k) = \bar{S}(k|k-1) + K(k) [Z(k) - \bar{Z}(k|k-1)],
P(k) = P(k|k-1) - K(k) P_{ZZ} K(k)^T.

The sampled updating particles are calculated by (12), (13), and (14), and the weights are recalculated and normalized for each particle by (15) and (16).
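The predict-and-update recursion above can be sketched for a scalar state. The UT parameters, the linear test functions f and h, and the noise variances are illustrative assumptions; with linear f and h the unscented recursion reproduces the ordinary Kalman filter, which gives a convenient check.

```python
import numpy as np

def ukf_step(mean, var, z, f, h, q, r, alpha=1.0, beta=2.0, kappa=2.0):
    """One scalar UKF predict+update step (standard unscented recursion)."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    c = np.sqrt(n + lam)
    wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    wc = wm.copy()
    wc[0] += 1.0 - alpha ** 2 + beta
    # sigma points around the current posterior
    s = np.array([mean, mean + c * np.sqrt(var), mean - c * np.sqrt(var)])
    # one-step prediction through the state equation f
    sf = f(s)
    m_pred = wm @ sf
    p_pred = wc @ (sf - m_pred) ** 2 + q
    # regenerate sigma points around the prediction, propagate through h
    s2 = np.array([m_pred, m_pred + c * np.sqrt(p_pred),
                   m_pred - c * np.sqrt(p_pred)])
    sz = h(s2)
    z_pred = wm @ sz
    p_zz = wc @ (sz - z_pred) ** 2 + r
    p_sz = wc @ ((s2 - m_pred) * (sz - z_pred))
    k_gain = p_sz / p_zz                        # Kalman gain K = P_SZ / P_ZZ
    mean_new = m_pred + k_gain * (z - z_pred)   # state update
    var_new = p_pred - k_gain * p_zz * k_gain   # covariance update
    return mean_new, var_new

# Linear f, h: the result must equal the exact Kalman filter update
m1, v1 = ukf_step(0.0, 1.0, 1.0, lambda s: 0.9 * s, lambda s: s, 0.1, 0.1)
```

For this linear case the exact Kalman answer is mean 0.91/1.01 and variance 0.091/1.01, which the unscented step matches to machine precision.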
(D) Resampling. Through (17) and (18), the weight value of each particle is redistributed using the resampling algorithm; the particle count is unchanged after resampling, and each weight becomes w = 1/N.
(E) State Vector Update. Through (19), the optimal estimate at time k and the weight update are combined to obtain the updated state vector at time k+1, and the source signal is then restored.
Complexity Analysis. The extended Kalman particle filter provides a priori knowledge for sampling based on the extended Kalman filter and the basic particle filter, and it uses this prior knowledge to guide the system sampling, so it does not need to pre-estimate the noise. The unscented Kalman particle filter samples symmetrically around the mean of each particle; the number of sampling points is (2n+1)/n times that of EPF, effectively separating the mixed nonlinear signal. Supposing the number of particles is N_s and the state dimension is n, the complexity of the importance sampling and resampling of the EPF algorithm is

O(N_s \times (n + n + n + n)) = O(N_s \times 4n).   (31)

The complexity of the UPF algorithm is as follows:

O(N_s \times (n + (2n+1) + n + (2n+1))) = O(N_s \times (6n+2)).   (32)
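The operation counts in (31) and (32) can be written as a small cost model; the functions below simply encode the stated per-particle counts and are not a measurement of real runtime.

```python
def epf_ops(num_particles, n):
    """EPF cost model from (31): importance sampling + resampling
    contribute roughly n + n + n + n operations per particle."""
    return num_particles * (n + n + n + n)

def upf_ops(num_particles, n):
    """UPF cost model from (32): the 2n+1 sigma points appear in both
    the prediction and the update stage."""
    return num_particles * (n + (2 * n + 1) + n + (2 * n + 1))
```

For example, with 100 particles and a 3-dimensional state the model gives 1200 operations for EPF versus 2000 for UPF, consistent with UPF trading extra computation for robustness to strong nonlinearity.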

Simulation Experiment and Result Analysis
Evaluation Standard. In order to validate the effectiveness of the algorithms and compare them, 500 Monte Carlo experiments were conducted on the MATLAB 2014a simulation platform. The nonlinear signals are distinguished by the coefficients of the signal state-space expression: coefficients close to 1 correspond to general nonlinear systems, and coefficients close to 0 correspond to strong nonlinear systems. In the experiment, two different mixed signals are selected for the general nonlinear system and the strong nonlinear system, and the noise is standard additive Gaussian white noise. To make the difference between the two algorithms more obvious, we use the sum of the differences \Delta(\hat{\theta}) between the estimated values and the true values, the timeliness T(s), and the correlation coefficient \rho as the evaluation indexes.
where \hat{\theta} is the estimated value and \theta is the true value. The timeliness T(s) is the weighted average run time over all Monte Carlo experiments on the simulation platform. For the correlation coefficient \rho, we have [13]

\rho = |\sum_k \hat{s}(k) s(k)| / \sqrt{\sum_k \hat{s}(k)^2 \cdot \sum_k s(k)^2},   (34)

where \hat{s} is the estimated (separated) signal and s is the source signal.
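The evaluation indexes can be sketched in code. The exact form of (34) is hard to recover from the extraction, so the function below assumes the standard normalized cross-correlation commonly used for separation quality; the test signal is an arbitrary sinusoid.

```python
import numpy as np

def correlation_coefficient(estimate, source):
    """Normalized correlation between the separated estimate and the source,
    assumed form of (34): |sum(est * src)| / sqrt(sum(est^2) * sum(src^2))."""
    num = np.sum(estimate * source)
    den = np.sqrt(np.sum(estimate ** 2) * np.sum(source ** 2))
    return abs(num) / den

def total_deviation(estimate, source):
    """Sum of absolute differences between estimate and truth, Delta(theta)."""
    return np.sum(np.abs(estimate - source))

src = np.sin(np.linspace(0.0, 10.0, 200))  # illustrative source signal
est = src.copy()                           # a perfect separation result
rho = correlation_coefficient(est, src)
```

A perfect separation gives rho = 1 and zero total deviation; values of rho below 1 quantify the distortion reported in Tables 1 and 2.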

Simulation Experiment
(A) Experiment 1. In order to test the two algorithms' signal estimation in the general nonlinear system, the mixed signal x_1(k) = s_1(k) + s_2(k) is used, where the two source signals are given in (35) and the data length is 300; for clarity, we plot a truncated segment of length 50. From Figure 1, we can see that both algorithms estimate the signals well and recover the source signals.
(B) Experiment 2. In order to compare the two algorithms in the strong nonlinear system, Experiment 2 uses the mixed signal x_2(k) = s_3(k) + s_4(k), where the two strong nonlinear signals are given in (36) and the data length is 300.
It can be seen from Figure 2 that, in the strong nonlinear system, the EPF algorithm deviates noticeably from the real state, while UPF estimates and separates the source signal well. Figure 3 shows the timeliness of the two algorithms in nonlinear systems of different intensities, from which we can see that, in terms of timeliness, the calculation speed of UPF is at most 44.32% slower than that of EPF. It can be seen from Figure 4 and Tables 1 and 2 that, under different numbers of sampling points, both algorithms can effectively estimate and separate the source signals in the general nonlinear system; the maximum differences of the correlation coefficients of the two algorithms are 0.23% and 0.46%. However, in the strong nonlinear system, the signal recovered by EPF is severely distorted when the number of sampling points is small. Its correlation coefficient stabilizes from 120 sampling points onward, but the differences of the correlation coefficient relative to the UPF algorithm reach 39.73% and 49.62%. The deviation comparison in Figure 5 shows a large distortion deviation between the EPF-recovered signal and the source signal in the strong nonlinear system, while in the general nonlinear system both algorithms have smaller deviations and separate the signals better.
The above experimental analysis shows that, in a general nonlinear mixed environment, the EPF algorithm can effectively separate the signals and save time, while in a strongly nonlinear mixed environment the UPF must be chosen, at the expense of computation time. EPF achieves a good filtering estimation effect only when the filtering error and the one-step prediction error are low. Although UPF solves these problems, its arithmetic complexity is far greater than that of EPF. Thus, the choice of algorithm should be judged by how the separation can be achieved most effectively in the given environment.

Conclusion
To conclude, we have described the extended Kalman particle filter blind separation algorithm and the unscented Kalman particle filter blind separation algorithm based on the Kalman filter and the particle filter, in which the separation is obtained from the particle filter state estimates to recover the source signals. The advantage of these algorithms is that they both improve the approximation of the posterior probability of the state, from different angles, and provide an effective basis for the selection of a single channel blind separation method. In future work, the particle filter and its extended algorithms can be used to solve more complicated blind separation problems of mixed communication signals.

Figure 1: Estimation result of EPF and UPF for general nonlinear systems.

Figure 2: Estimation result of EPF and UPF for strong nonlinear systems.

Figure 4: Correlation coefficient comparison of EPF and UPF algorithms at different sampling points.

Figure 5: Deviation comparison of EPF and UPF algorithms at different sampling points.

Table 1: Correlation coefficients at different sampling points for general nonlinear systems.

Table 2: Correlation coefficients at different sampling points for strong nonlinear systems.