The learning rate plays an important role in blind source separation (BSS), where an unmixing matrix is trained to separate a set of mixed signals and recover an approximation of the source signals. To improve both the speed and the accuracy of the algorithm, a sampling adaptive learning algorithm is proposed that computes the adaptive learning rate only at sampled points. The sampled optimal points are connected by a smoothing equation. Simulation results show that the proposed algorithm achieves a Mean Square Error (MSE) similar to that of the adaptive learning algorithm while consuming less computation time.

With the rapid development of information and computation technologies, big data analysis and cognitive computing have been widely applied in many research areas such as medical treatment [

Artificial-neural-network-based Independent Component Analysis (ICA) is a widely used method in BSS because it provides powerful tools to capture the structure in data through learning. Based on this theory, the Natural Gradient Algorithm (NGA) is employed to find the appropriate coefficient vector of the artificial neural network [
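As a concrete illustration, a single natural-gradient ICA update has the well-known form W ← W + μ(I − φ(y)yᵀ)W. The sketch below assumes a cubic score function φ(y) = y³, a common choice for sub-Gaussian sources; the specific score function and parameters used in this paper are not reproduced here.

```python
import numpy as np

def nga_step(W, x, mu=0.01):
    """One natural-gradient ICA update: W <- W + mu * (I - phi(y) y^T) W.

    W  : current unmixing matrix (n x n)
    x  : one mixed observation vector (n,)
    mu : learning rate (a small constant here)
    """
    y = W @ x
    phi = y ** 3                       # score function for sub-Gaussian sources (an assumption)
    n = W.shape[0]
    return W + mu * (np.eye(n) - np.outer(phi, y)) @ W

# usage: a single update on a random mixture
rng = np.random.default_rng(0)
W = np.eye(2)
x = rng.standard_normal(2)
W = nga_step(W, x)
```

In practice this step is repeated over the whole observation sequence, which is where the choice of μ becomes critical.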

Most well-known traditional learning algorithms assume that the learning rate is a small positive constant. An inappropriate constant leads to either relatively slow convergence or a large steady-state error. Many studies on the learning rate aim at better performance and higher convergence speed. von Hoff and Lindgren [

The objective of this paper is to find a learning algorithm that provides better performance with less computation time. The proposed sampling learning rate is based on the adaptive learning algorithm but calculates and samples only a few appropriate points. These selected points are connected by the proposed normalized smoothing equation.

In the following, we first review the principle of blind source separation. Then we discuss the adaptive learning algorithm and propose the sampling adaptive learning algorithm. Finally, we present two typical examples with mobile voice signals. Different constant learning rates are compared first to analyze the relationship between convergence speed and steady-state error. A comparison between the adaptive learning algorithm and the sampling adaptive learning algorithm then illustrates that the proposed algorithm has a Mean Square Error (MSE) similar to that of the adaptive learning algorithm but consumes less computation time.

The model considered in this paper is described by Figure

The system model.

BSS separates the mixed signals, through the determination of an unmixing matrix
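The model can be made concrete with a toy setup: sources s are mixed as x = As by an unknown matrix A, and separation seeks W such that y = Wx recovers s (up to permutation and scaling). The two example waveforms and the seed below are illustrative only.

```python
import numpy as np

# Toy BSS setup: two source signals s, a random mixing matrix A,
# and the observed mixtures x = A @ s. The unmixing matrix W is
# trained so that y = W @ x approximates s (up to order/scale).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
s = np.vstack([np.sign(np.sin(2 * np.pi * 5 * t)),   # square-wave source
               np.sin(2 * np.pi * 3 * t)])           # sinusoid source
A = rng.standard_normal((2, 2))                      # unknown mixing matrix
x = A @ s                                            # observed mixed signals

# With a perfect unmixing matrix W = A^{-1}, y recovers s exactly;
# a learning algorithm must reach such a W from the data alone.
W = np.linalg.inv(A)
y = W @ x
print(np.allclose(y, s))
```

In BSS, of course, A is unknown, so W must be estimated iteratively, which is where the learning-rate question arises.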

According to the reference [

The learning rate is a very important factor for the performance of BSS, since it controls the magnitudes of the updates of the estimated parameters. It can be constant or variable. A constant learning rate means that the adaptation in (
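The constant-rate trade-off can be seen even in a scalar stochastic-gradient toy problem (all names and values below are illustrative): a large constant rate converges quickly but fluctuates more at steady state, while a small one converges slowly but settles closer to the optimum.

```python
import numpy as np

def run(mu, steps=2000, seed=0):
    """Scalar stochastic gradient descent toward w* = 1 with noisy
    gradients; returns the error trajectory |w - 1|."""
    rng = np.random.default_rng(seed)
    w, err = 0.0, []
    for _ in range(steps):
        grad = (w - 1.0) + 0.5 * rng.standard_normal()  # noisy gradient
        w -= mu * grad
        err.append(abs(w - 1.0))
    return np.array(err)

fast, slow = run(0.2), run(0.01)
# large mu: faster initial drop; small mu: smaller steady-state error
print(fast[50] < slow[50], fast[-500:].mean() > slow[-500:].mean())
```

This is exactly the dilemma that motivates making the rate adaptive.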

The idea of adaptively changing the step size of the learning rate is called the adaptive learning algorithm. It can balance convergence speed against steady-state error. Here we discuss the adaptive learning algorithm that updates the learning-rate step size through an estimate function. Although the distance between the estimated parameter and its optimal value is not directly available to control the step size, an evaluation function can be developed to estimate this distance, so that the step size is determined by the following recursion:
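A generic sketch of this kind of step-size recursion is shown below. The correlation-driven update rule, the gain ρ, and the clipping bounds are illustrative assumptions standing in for the paper's evaluation function: the rate grows while successive gradient estimates agree and shrinks when they oscillate.

```python
import numpy as np

def adaptive_mu(mu, g_now, g_prev, rho=1e-3, mu_min=1e-5, mu_max=0.1):
    """Generic adaptive step-size recursion (illustrative sketch):
    increase mu when successive gradient estimates are positively
    correlated, decrease it when they oppose, and clip to a safe range.
    """
    mu = mu + rho * float(np.dot(g_now, g_prev))  # correlation-driven update
    return float(np.clip(mu, mu_min, mu_max))

# usage: agreeing gradients increase mu, opposing ones decrease it
mu = 0.01
mu_up = adaptive_mu(mu, np.array([1.0, 1.0]), np.array([1.0, 1.0]))
mu_dn = adaptive_mu(mu, np.array([1.0, 1.0]), np.array([-1.0, -1.0]))
print(mu_up > mu > mu_dn)   # True
```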

However, note that, in the process of obtaining a better learning rate, the adaptive step-size control algorithm actually introduces another recursion. In this recursion, shown in (

On the other hand, the recursion described in (

Performance of CLA and ALA.

| | Convergence time (iterative time) | Mean steady state error | Computation time |
|---|---|---|---|
| CLA | 681 | 2.1 | 55.6 s |
| ALA | 482 | 1.9 | 96.6 s |

The ideal adaptive step size curve for noise-free signal.

An alternative way to use the adaptive strategy is to apply the adaptive step size only at sampled points. In this method, only a few points require the adaptive calculation. For training the

Effect of sample learning rate on updating the matrix

Then the learning rate can be represented as

For the convenience of application, the normalized form for
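The sampling idea can be sketched as follows: the adaptive rate is computed only at iterations 0, K, 2K, …, and the intermediate rates are filled in by a smooth blend between neighboring sampled values. The cosine-eased weight used here is an illustrative stand-in for the paper's normalized smoothing equation.

```python
import numpy as np

def sampled_rate_curve(mu_samples, interval):
    """Connect learning rates computed only at sampled iterations with
    a smooth (cosine-eased) curve, as a sketch of the sampling idea.

    mu_samples : adaptive rates at iterations 0, K, 2K, ...
    interval   : sampling interval K
    Returns the per-iteration learning-rate curve.
    """
    curve = []
    for a, b in zip(mu_samples[:-1], mu_samples[1:]):
        tau = np.arange(interval) / interval
        w = 0.5 * (1 - np.cos(np.pi * tau))      # smooth 0 -> 1 weight
        curve.append((1 - w) * a + w * b)        # smooth blend a -> b
    curve.append(np.array([mu_samples[-1]]))
    return np.concatenate(curve)

# usage: three sampled rates, sampling interval of 4 iterations
mu = sampled_rate_curve([0.05, 0.02, 0.01], 4)
```

Only len(mu_samples) adaptive evaluations are needed instead of one per iteration, which is the source of the computation-time saving, while the smooth connection avoids abrupt jumps in the rate.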

To test the algorithm, five sub-Gaussian source signals commonly studied in the mobile system are employed. The source signals are

Firstly, we employ four fixed learning rate

Mean Square Error (MSE) for different fixed learning rate.

The raw source signals.

The mixed signals.

The Adaptive Learning Algorithm (ALA) can balance the requirements of convergence speed and steady-state error. However, ALA requires more computation time per iteration. To solve this problem, we use the sampling adaptive learning algorithm (SALA), which applies a sampling interval to the adaptive learning algorithm. Figure

The separated signals.

The MSEs of different algorithms.

Figures

Performance of ALA and SALA for mobile signals.

| | Convergence time (samples) | Mean steady state error | Computation time |
|---|---|---|---|
| ALA | 582 | 2.3 | 96.6 s |
| SALA | 586 | 2.0 | 64.2 s |

To verify the effectiveness of the proposed SALA, two music sources recorded in a real environment are tested in simulation. These music sources were mixed by a random Gaussian mixing matrix

SALA for the real signal.

Based on the discussion of the fixed step-size and adaptive step-size algorithms for blind source separation, a sampling adaptive step-size algorithm has been proposed. The algorithm achieves an MSE similar to that of the adaptive step-size algorithm but with less computation time. By smoothly connecting two optimal points, the sampling method also yields a smooth learning-rate curve without introducing an additional recursion.

The authors declare that they have no conflicts of interest.

This work was supported by the Natural Science Foundation of China through Grant 11702016.