Neural Network Method for Solving Time-Fractional Telegraph Equation

Recently, the development of neural network methods for solving differential equations has made remarkable progress, including for fractional differential equations. In this paper, a neural network method is employed to solve the time-fractional telegraph equation. A loss function containing the initial/boundary conditions, with adjustable parameters (weights and biases), is constructed, and the time-fractional telegraph equation is thereby formulated as an optimization problem. Numerical examples with known analytic solutions, including numerical results, their graphs, weights, and biases, are discussed to confirm the accuracy of the method, and the graphical and tabular results are analyzed thoroughly. The mean square errors for different choices of neurons and epochs are presented in tables along with graphical presentations.


Introduction
Fractional differential equations can be used to model many real-life problems. Recently, fractional partial differential equations have received much attention from researchers due to their wide applications in the biological sciences and medicine [1][2][3]. Moreover, the studies conducted in [4,5] emphasized properties of the solutions of fractional differential equations, such as stability and existence. In particular, fractional telegraph equations arise in many science and engineering fields, such as signal analysis, random walks, wave propagation, and electrical transmission lines [6,7], but they are hard to solve. Accordingly, many methods have been utilized to find solutions of fractional differential equations, for instance, spectral methods [8][9][10], the finite-element method [11,12], the differential transform method [13], and other methods [14,15]. Moreover, Hosseini et al. [16] discussed the fractional telegraph equation using a radial basis function approach. Furthermore, Zhang [17] and Meerschaert and Tadjeran [18] employed finite-difference approaches for the solution of fractional partial differential equations. Recently, solving fractional differential equations by the neural network method has become an active research area.
A neural network is a type of machine learning algorithm with a remarkable ability to solve large-scale problems. It is based on the idea of minimizing a loss function to best approximate the solution of a mathematical problem. Nowadays, neural networks are becoming a leading solution method for challenging mathematical problems. The continuously rising success of neural network techniques applied to differential equations (ODEs and PDEs) [19,20] has stimulated research in solving fractional differential equations with the neural network method. This study focuses on solving the fractional telegraph equation with the neural network method. As neural network technology is advancing rapidly in terms of both methodological and algorithmic developments, we believe that this is a timely contribution that can benefit researchers across a wide range of scientific domains.
Lagaris et al. [21] solved both ordinary and partial differential equations with neural network approaches. Trial solutions were given which are constructed to satisfy the boundary conditions. Later on, Piscopo et al. [22] extended this trial solution by adding the boundary conditions to a loss function. This paper extends that idea to solving the time-fractional telegraph equation. To the researchers' knowledge, there has been little study on solving fractional partial differential equations with the neural network approach. In [23], the fractional diffusion equation was discussed with a Legendre polynomial-based neural network algorithm. Pang et al. [24] proposed fractional physics-informed neural networks to solve fractional differential equations. The main contribution of this paper is to discuss an artificial neural network algorithm for solving time-fractional telegraph equations. In this paper, we consider the time-fractional telegraph equation given in [25] as equation (1).

The rest of this paper is organized as follows. In Section 2, a short review of fractional calculus is presented. In Section 3, the algorithms for solving equation (1) are given. In Section 4, numerical examples are solved to illustrate the effectiveness of the neural network method. Section 5 gives the conclusion.
In other words, L^p[a, b] (for 1 ≤ p ≤ ∞) is the usual Lebesgue space.
The Caputo fractional derivative of order α > 0 of a function f is defined, for n − 1 < α < n, as

D^α f(t) = (1/Γ(n − α)) ∫₀ᵗ (t − s)^{n−α−1} f^{(n)}(s) ds.

In particular, when 0 < α < 1, we have

D^α f(t) = (1/Γ(1 − α)) ∫₀ᵗ (t − s)^{−α} f′(s) ds.

Definition 6 (see [24]). Grünwald-Letnikov finite-difference schemes: based on the stationary grid x_j = (j − 1)Δx, for j = 1, 2, . . . , N, the shifted GL finite-difference operator for approximating the 1D fractional Laplacian is defined as in [24], where the stencil is shifted by p grid points to guarantee the stability of the scheme.
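The Grünwald-Letnikov coefficients satisfy the recurrence g_0 = 1, g_k = g_{k-1}(1 − (α + 1)/k), which avoids evaluating binomial coefficients directly. The following is a minimal sketch of the unshifted GL approximation on a uniform grid; the function names and the test function f(t) = t² are illustrative choices, not taken from the paper (for f with f(0) = 0 and 0 < α < 1, the GL, Riemann-Liouville, and Caputo derivatives coincide):

```python
import numpy as np
from math import gamma

def gl_coeffs(alpha, n):
    # GL coefficients g_k = (-1)^k C(alpha, k) via the recurrence
    # g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def gl_derivative(f_vals, alpha, h):
    # D^alpha f(t_j) ~ h^{-alpha} * sum_{k=0}^{j} g_k f(t_{j-k})
    n = len(f_vals) - 1
    g = gl_coeffs(alpha, n)
    d = np.empty(n + 1)
    for j in range(n + 1):
        d[j] = h ** (-alpha) * np.dot(g[: j + 1], f_vals[j::-1])
    return d

# Check against the exact derivative of f(t) = t^2:
# D^alpha t^2 = Gamma(3) / Gamma(3 - alpha) * t^{2 - alpha}
alpha, h = 0.5, 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = gl_derivative(t ** 2, alpha, h)
exact = gamma(3) / gamma(3 - alpha) * t ** (2 - alpha)
print(abs(approx[-1] - exact[-1]))  # small first-order discretization error
```

The scheme is first-order accurate, so halving h roughly halves the error.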

Solution Method
We constructed an artificial neural network (NN) with one hidden layer and n neurons, shown in Figure 1. The output of the network, u(x, t), is written in terms of the activation function g: Rⁿ ⟶ Rⁿ and the weights and biases w_i, b_i, i = 1, . . . , n, of the network, where the symbol "*" denotes ordinary scalar multiplication; h and f denote the hidden and final layers, respectively. Initially, we randomly generate the weights and biases of the network and then adjust them during training to minimize the loss function. In short, we find the weights and biases of the neural network that best approximate the solution, u(x, t), of problem (1).
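A forward pass through such a one-hidden-layer network can be sketched as follows; the neuron count, sigmoid activation, and initialization are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # number of hidden neurons (illustrative choice)

# Randomly initialized parameters, to be adjusted during training:
# W, b map the input (x, t) to the hidden layer; v maps hidden to output.
W = rng.normal(size=(n, 2))   # hidden-layer weights
b = rng.normal(size=n)        # hidden-layer biases
v = rng.normal(size=n)        # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def u(x, t):
    # Network output u(x, t) = sum_i v_i * g(w_i1 * x + w_i2 * t + b_i)
    h = sigmoid(W @ np.array([x, t]) + b)  # hidden-layer activations
    return float(v @ h)

print(u(0.5, 0.5))
```

Training then adjusts W, b, and v so that u(x, t) minimizes the loss described below.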

Mathematical Problems in Engineering 3
A numerical solution u(x, t) to problem (1) is one that approximately minimizes the mean square of the left-hand side of equation (11), which coincides with the mean square error (MSE) loss function of a neural network. In references [21,22], trial solutions to ordinary and partial differential equations were presented. Here, we construct the trial solution, u(x, t), for the fractional telegraph equation as the output of the neural network.
If we discretize the domain of the inputs, (x, t) ∈ (0, 1) × (0, 1), into a finite number of training points, say m, chosen from an equally spaced N_m × N_m grid, then u(x, t) can be obtained by determining the weights and biases that minimize the loss function of the network on the training points. A loss function ξ(w, b, x, t), consisting of the mean square residual of equation (11) together with the initial/boundary-condition terms, is used. The problem is then reduced to minimizing ξ(w, b, x, t) by adjusting the weights and biases in the network, for the given choice of hyperparameters (number of neurons, number of hidden layers, learning rate, momentum, and activation function), i.e., min_{w,b} ξ(w, b, x, t).
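The structure of such a loss can be sketched as below, assuming homogeneous boundary conditions and a sinusoidal initial condition purely for illustration; `u_net` and `pde_residual` are placeholders standing in for the trained network output and the left-hand side of equation (11), not the paper's code:

```python
import numpy as np

Nm = 10
xs = np.linspace(0.0, 1.0, Nm)
ts = np.linspace(0.0, 1.0, Nm)
X, T = np.meshgrid(xs, ts)          # equally spaced Nm x Nm training grid

def u_net(x, t):
    # placeholder for the network output u(x, t)
    return np.sin(np.pi * x) * np.exp(-t)

def pde_residual(x, t):
    # placeholder for the left-hand side of equation (11); here it is
    # consistent with u_net by construction, so the residual vanishes
    return u_net(x, t) - np.sin(np.pi * x) * np.exp(-t)

def loss():
    interior = pde_residual(X, T)                   # PDE residual term
    ic = u_net(xs, 0.0) - np.sin(np.pi * xs)        # u(x, 0) mismatch
    bc = np.concatenate([u_net(0.0, ts), u_net(1.0, ts)])  # zero BCs
    return np.mean(interior**2) + np.mean(ic**2) + np.mean(bc**2)

print(loss())  # ~ 0 for this consistent placeholder
```

In an actual run, `loss()` would be evaluated on the network output and driven toward zero by gradient-based updates of the weights and biases.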
To compute the error, we need to calculate the derivatives of the network output, u(x, t), with respect to its inputs. Lagaris et al. [21] presented how to obtain partial derivatives by automatic differentiation. To obtain the fractional derivatives, the Grünwald-Letnikov (GL) method given in [28][29][30] is used. Then, a simple hybrid of the two derivative computations is used.
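This hybrid can be sketched as follows: integer-order spatial derivatives of the network output are obtained by differentiation of the network (automatic differentiation in the paper; a central difference stands in here), while the fractional time derivative is assembled from network evaluations on a time grid via the GL weights. `u_net` is again an illustrative stand-in, not the paper's trained network:

```python
import numpy as np

def u_net(x, t):
    # placeholder for the trained network output
    return np.sin(np.pi * x) * np.exp(-t)

def u_xx(x, t, dx=1e-4):
    # second spatial derivative (obtained by automatic
    # differentiation in the paper; central difference here)
    return (u_net(x + dx, t) - 2 * u_net(x, t) + u_net(x - dx, t)) / dx**2

def gl_time_derivative(x, t, alpha, dt=1e-3):
    # D_t^alpha u(x, t) ~ dt^{-alpha} * sum_k g_k * u(x, t - k*dt),
    # with GL weights g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)
    steps = int(round(t / dt))
    g, total = 1.0, 0.0
    for k in range(steps + 1):
        total += g * u_net(x, t - k * dt)
        g *= 1.0 - (alpha + 1.0) / (k + 1)
    return total / dt**alpha

print(u_xx(0.5, 0.5))                     # ~ -pi^2 * sin(pi/2) * e^{-0.5}
print(gl_time_derivative(0.5, 0.5, 0.8))
```

Both quantities are then substituted into the residual of equation (11) at each training point when evaluating the loss.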

Application
In this section, the neural network method for solving the time-fractional telegraph equation is tested using examples.
This graph is plotted on a 50 × 50 mesh grid. The graph reflects that the approximate solution is in close agreement with the exact solution.

Conclusion
A novel way to solve the time-fractional telegraph equation is proposed.
The method extends the existing neural network approach for differential equations (both ODEs and PDEs) [22] to fractional differential equations.
This approach is applied to time-fractional telegraph equations for which exact solutions are known. Compared with traditional fractional differential equation (FDE) solvers, such as the finite-difference and spectral methods, which obtain approximate solutions only on grid points, the proposed method can approximate the solution at any point in the interval after training on only a few sample points. However, substantial training is needed to reach higher accuracy, especially when 1 < α < 2; this would be an interesting direction for future work.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
Wubshet Ibrahim and Lelisa Kebena Bijiga contributed equally to this study.