A neural network method with radial basis functions is used to solve a class of initial-boundary value problems for fractional partial differential equations with variable coefficients on a finite domain. The case where a left-handed or right-handed fractional spatial derivative is present in the partial differential equation is considered. Convergence of the method is discussed in the paper. A numerical example using the RBF neural network method for a two-sided fractional PDE is also presented and compared with other methods.

In this paper, I use a neural network method to solve fractional partial differential equations (FPDEs) of the form:

On a finite domain
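A representative two-sided FPDE of this class (a hedged illustration only; the variable coefficients $c_{\pm}$ and the source term $s$ are assumptions, not necessarily the exact equation studied here) is:

$$
\frac{\partial u(x,t)}{\partial t}
= c_{+}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{+}x^{\alpha}}
+ c_{-}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{-}x^{\alpha}}
+ s(x,t),
$$

on a finite domain $L < x < R$ with $1 < \alpha \le 2$, where $\partial^{\alpha}/\partial_{+}x^{\alpha}$ and $\partial^{\alpha}/\partial_{-}x^{\alpha}$ denote the left- and right-handed fractional spatial derivatives.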

The left-hand

When

Similarly, when

The case

I also note that the left-handed fractional derivative of
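For completeness, the standard Riemann–Liouville definitions of the left- and right-handed fractional derivatives of order $\alpha$ (which I take to be the ones intended above) are:

$$
\frac{d^{\alpha} f(x)}{d_{+}x^{\alpha}}
= \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dx^{n}}
\int_{L}^{x} \frac{f(\xi)\,d\xi}{(x-\xi)^{\alpha-n+1}},
\qquad
\frac{d^{\alpha} f(x)}{d_{-}x^{\alpha}}
= \frac{(-1)^{n}}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dx^{n}}
\int_{x}^{R} \frac{f(\xi)\,d\xi}{(\xi-x)^{\alpha-n+1}},
$$

where $n$ is the smallest integer greater than or equal to $\alpha$.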

Published papers on the numerical solution of fractional partial differential equations are scarce. A different method for solving the fractional partial differential equation (

The Rumelhart–Hinton–Williams multilayer network [

If a multilayer network has

Reference [

Let points of

Let

In this paper, I study approximate

I also give some theorems establishing conditions for the convergence of the approximate solution of (

Let

Let

Since

If I operate the mollifier
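The standard mollifier construction (assumed to be the one used here) is:

$$
\rho(x) =
\begin{cases}
C \exp\!\left(\dfrac{1}{|x|^{2}-1}\right), & |x| < 1,\\[4pt]
0, & |x| \ge 1,
\end{cases}
\qquad
\rho_{\varepsilon}(x) = \frac{1}{\varepsilon}\,\rho\!\left(\frac{x}{\varepsilon}\right),
$$

where $C$ normalizes $\int \rho = 1$. Operating the mollifier on a function $f$ means taking the convolution $(\rho_{\varepsilon} * f)(x) = \int \rho_{\varepsilon}(x-y)\,f(y)\,dy$, which produces a smooth approximation that converges to $f$ as $\varepsilon \to 0$.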

I define

The essential part of the proof of Irie-Miyake’s integral formula [

I will approximate

From (

Let

Let

Now if

Now, I will prove that each basic neighborhood of a fixed point

Consider a feedforward network with an input layer, a single hidden layer, and an output layer consisting of a single unit. I have purposely chosen a single output unit to simplify the exposition without loss of generality.

The network is designed to perform a nonlinear mapping from the input space to the hidden space, followed by a linear mapping from the hidden space to the output space.
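This two-stage mapping can be sketched as follows (a minimal illustration, assuming Gaussian basis functions; the names `rbf_forward`, `centers`, and `sigma` are hypothetical, not from the paper):

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights):
    """Forward pass of a single-hidden-layer RBF network with one output unit.

    Sketch under assumed Gaussian basis functions
    phi_j(x) = exp(-|x - c_j|^2 / (2 sigma^2)).
    """
    # Nonlinear mapping: input space -> hidden space
    phi = np.exp(-np.sum((x[None, :] - centers) ** 2, axis=1) / (2.0 * sigma ** 2))
    # Linear mapping: hidden space -> single output unit
    return float(weights @ phi)
```

Evaluating the network at one of its centers gives the corresponding weight directly, since the Gaussian equals 1 there.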

Given a set of

For strict interpolation as specified here, the interpolating surface (i.e., the function
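Under strict interpolation, the centers are taken to be the data points themselves, and the weights solve an N×N linear system so that the surface passes through every training pair. A minimal sketch (assuming a Gaussian kernel; the kernel width `sigma` is an assumed free parameter):

```python
import numpy as np

def rbf_interpolate(points, targets, sigma=1.0):
    """Strict RBF interpolation: solve Phi w = d so that the interpolating
    surface passes exactly through all N training pairs (x_i, d_i)."""
    # Pairwise squared distances between the N data points (used as centers)
    diff = points[:, None, :] - points[None, :, :]
    Phi = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
    # Gaussian kernel matrices on distinct points are positive definite,
    # so the system has a unique solution.
    return np.linalg.solve(Phi, targets)
```

The resulting weights reproduce the targets exactly at the data points, which is the defining property of strict interpolation.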

Compute the error

Calculate

After that, the best weights

The above method applies in this situation. In the absence of the exact solution, values of the independent variables are substituted into the boundary conditions in order to obtain exact values. Those values are then used in the training phase of the backpropagation neural network under consideration. This approach is shown to yield good weights for (
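The training phase described above, fitting the output weights to target values obtained from the initial/boundary conditions, can be sketched as gradient descent on the squared error (an illustration, not the paper's exact procedure; `Phi` denotes an assumed hidden-layer activation matrix):

```python
import numpy as np

def train_weights(Phi, targets, lr=0.1, epochs=500):
    """Fit output weights by gradient descent on 0.5 * ||Phi w - targets||^2.

    Phi      : (N, M) hidden-layer activations for N training points
    targets  : (N,) exact values taken from the boundary/initial conditions
    """
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        err = Phi @ w - targets                 # compute the error
        w -= lr * Phi.T @ err / len(targets)    # gradient step on the weights
    return w
```

Because the loss is quadratic in the weights, the iteration converges to the least-squares solution for a sufficiently small learning rate.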

The following two-sided fractional partial differential equation

Table

The training data, given by values of the initial and boundary conditions.

0.1 | 0 | 0.1444 | 0.1678 |

0.2 | 0 | 1.0404 | 1.5493 |

0.5 | 0 | 0.2500 | 0.9654 |

1.3 | 0 | 3.3124 | 6.7412 |

1.5 | 0 | 2.25 | 4.7759 |

1.7 | 0 | 1.0404 | 5.7418 |

2 | 0.1 | 0 | 0 |

2 | 0.4 | 0 | 0 |

2 | 0.6 | 0 | 0 |

The comparison between the exact solution and the approximate neural-network solution.

x | t | Exact solution | Approximate solution |

0.2 | 0.1 | 0.4691 | 0.4246 |

0.4 | 0.2 | 1.3414 | 1.3752 |

0.6 | 0.3 | 0.2490 | 0.2831 |

0.8 | 0.4 | 2.0906 | 2.0597 |

1.0 | 0.5 | 2.4711 | 2.4244 |

1.2 | 0.6 | 2.4261 | 2.4997 |

1.4 | 0.7 | 2.0231 | 2.0488 |

1.6 | 0.8 | 0.7362 | 0.7651 |

1.8 | 0.9 | 0.2107 | 0.2239 |

Maximum error | 0.0736 |

Comparison of the maximum errors of the approximate solutions obtained by the finite difference method (FDM) and by neural networks (NNs).

Δx | Δt | FDM maximum error | NNs maximum error |

0.2 | 0.1 | 0.1417 | 0.0736 |

0.1 | 0.05 | 0.0571 | 0.0541 |

0.05 | 0.025 | 0.0249 | 0.0217 |

0.025 | 0.0125 | 0.0113 | 0.0082 |

Referring back to Tables