The Lyapunov stability theorem is applied to guarantee the convergence and stability of the learning algorithm for several networks. Gradient-descent learning and the algorithms developed from it are among the most widely used methods for training such networks. To guarantee the stability and convergence of the learning process, the upper bound of the learning rates must be investigated. Here, a developed form of the Lyapunov stability theorem is applied to several networks in order to guarantee the stability of the learning algorithm.

Science has evolved from an attempt to understand and predict the behavior of the universe and the systems within it. Much of this progress owes to the development of suitable models that agree with observations. These models are either in the symbolic form that humans use or in a mathematical form derived from physical laws. Most systems are causal and can be categorized as either static, where the output depends only on the current inputs, or dynamic, where the output depends not only on the current inputs but also on past inputs and outputs. Many systems also possess unobservable inputs that cannot be measured but nevertheless affect the system's output; time-series systems are an example. Such inputs are known as disturbances, and they complicate the modeling process.

To cope with the complexity of dynamic systems, there have been significant developments in the field of artificial neural networks during the last three decades, and these have been applied to identification and modeling [

Different methods have been introduced for learning the network parameters based on gradient descent. All learning methods, such as backpropagation-through-time [

Gradient-descent (GD) learning can be achieved by minimizing the performance index
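The GD update can be sketched as follows. This is a minimal illustration, not the paper's specific model: a single linear neuron with performance index E = ½e² is assumed, and all names are illustrative.

```python
import numpy as np

def gd_step(w, x, d, eta):
    """One gradient-descent step on the performance index E = 0.5 * e^2,
    where e = d - w.x for a single linear neuron (illustrative model)."""
    e = d - np.dot(w, x)   # output error
    grad = -e * x          # dE/dw
    return w - eta * grad  # weight update, w <- w - eta * dE/dw

# Toy run: recover w_true from input/target pairs.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(size=2)
    w = gd_step(w, x, np.dot(w_true, x), eta=0.1)
print(np.round(w, 3))
```

With a learning rate inside the stability bound, the weights converge to the true values; too large a rate would make the updates diverge, which is exactly what the Lyapunov analysis below formalizes.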

In the batch-learning scheme employing the

Consider a dynamic system, which satisfies

The equilibrium point

Let

The origin of the system is locally stable (in the sense of Lyapunov) if

The origin of the system is globally uniformly asymptotically stable if
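For reference, the standard discrete-time Lyapunov conditions behind the statements above can be written as follows (textbook form, not reconstructed from the paper's own equation numbering):

```latex
% Stability of the equilibrium x = 0 of x_{k+1} = f(x_k) holds if there
% exists a function V with
V(0) = 0, \qquad V(x) > 0 \;\; \forall x \neq 0, \qquad
\Delta V(x_k) = V(x_{k+1}) - V(x_k) \le 0 .
% Asymptotic stability additionally requires
\Delta V(x_k) < 0 \;\; \forall x_k \neq 0,
% and global results require radial unboundedness:
V(x) \to \infty \ \text{as} \ \|x\| \to \infty .
```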

To prove the stability of networks trained with the GD learning algorithm, we can define a discrete Lyapunov function as

By using (

Therefore

From the Lyapunov stability theorem, the stability is guaranteed if

Because
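The sign condition on ΔV can be checked numerically. The sketch below assumes the illustrative choice V_k = ½e_k² for a single linear neuron with fixed input x, for which e_{k+1} = (1 − η‖x‖²)e_k and the resulting bound is 0 < η < 2/‖x‖²; the paper's own bounds for each network are derived analogously.

```python
import numpy as np

def delta_V(e, x, eta):
    """Delta V = V_{k+1} - V_k with V_k = 0.5 * e_k^2 for GD on a single
    linear neuron with fixed input x (illustrative model)."""
    e_next = (1.0 - eta * np.dot(x, x)) * e
    return 0.5 * e_next**2 - 0.5 * e**2

x = np.array([1.0, 2.0])   # ||x||^2 = 5, so the bound is eta < 2/5 = 0.4
e = 1.0
print(delta_V(e, x, 0.1))  # inside the bound: Delta V < 0 (stable)
print(delta_V(e, x, 0.5))  # outside the bound: Delta V > 0 (unstable)
```

A learning rate inside the bound makes ΔV negative and the error decays; outside the bound ΔV is positive and the error grows.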

In this section, the proposed stability analysis is applied to several networks. The selected networks are neurofuzzy (ANFIS) [

The TSK model has a linear or nonlinear relationship of inputs

The asymptotic learning convergence of the TSK neurofuzzy network is guaranteed if the learning rates for the different learning parameters obey the upper bounds given below:

In equation (

Each neuron in the proposed recurrent neuron models is a summation or multiplication of a Sigmoid Activation Function (SAF) and a Wavelet Activation Function (WAF), as shown in Figure

Summation/product recurrent sigmoid-wavelet neuron model.
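The summation and product combinations of SAF and WAF can be sketched as below. This is an illustrative static version: a Mexican-hat wavelet is assumed for the WAF (the exact form used in the paper's figure is not recoverable here), and the recurrent feedback of the neuron model is omitted for brevity.

```python
import numpy as np

def saf(z):
    """Sigmoid activation function (SAF)."""
    return 1.0 / (1.0 + np.exp(-z))

def waf(z):
    """Wavelet activation function (WAF); a Mexican-hat wavelet is assumed."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def sw_neuron(z, mode="sum"):
    """Sigmoid-wavelet neuron output: summation or product of SAF and WAF."""
    if mode == "sum":
        return saf(z) + waf(z)
    return saf(z) * waf(z)

print(sw_neuron(0.0, "sum"))   # saf(0) = 0.5, waf(0) = 1.0
print(sw_neuron(0.0, "prod"))
```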

Feed-forward neural network.

The output of the feed-forward network is given by the following equation:

The functions

To prove the convergence of the recurrent networks, the following facts are needed:

Fact 1: let

Suppose

From Facts 3 and 4: for parameter

The differential of the model output with respect to another learning parameter is

From Facts 3 and 4, suppose

For parameter

The consequent part of each fuzzy rule corresponds to a sub-WNN consisting of wavelets with the specified dilation values; in the TSK fuzzy model, a linear function of the inputs is used, while

The asymptotic learning convergence is guaranteed if the learning rates for the different learning parameters obey the upper bounds given below:

where

In equation (

From (

In this paper, a developed Lyapunov stability theorem was applied to guarantee the convergence of the gradient-descent learning algorithm in network training. The experimental examples showed that the upper bound of the learning rate can be obtained easily using this theorem, so an adaptive learning algorithm can guarantee a fast and stable learning procedure.
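The adaptive scheme suggested in the conclusion can be sketched as follows. The per-step bound η < 2/‖x‖² used here is the single-linear-neuron form and stands in for the paper's network-specific bounds; the `safety` factor and all names are illustrative.

```python
import numpy as np

def adaptive_gd_step(w, x, d, safety=0.5):
    """One GD step with the learning rate held inside its stability bound.
    For a single linear neuron the Lyapunov bound is eta < 2 / ||x||^2
    (illustrative form; the paper derives analogous per-parameter bounds)."""
    e = d - np.dot(w, x)
    eta = safety * 2.0 / max(np.dot(x, x), 1e-12)  # adapt eta to the bound
    return w + eta * e * x

# Toy run: the adapted rate gives fast, stable convergence to w_true.
rng = np.random.default_rng(1)
w_true = np.array([0.5, 1.5, -1.0])
w = np.zeros(3)
for _ in range(100):
    x = rng.normal(size=3)
    w = adaptive_gd_step(w, x, np.dot(w_true, x))
print(np.round(w, 3))
```

Because the rate is recomputed from the current input at every step, learning stays stable even when the input magnitude varies widely, without hand-tuning a fixed η.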