Conformable Fractional Models of Stellar Helium Burning via Artificial Neural Networks

The helium burning phase represents the second stage in which a star consumes nuclear fuel in its interior. In this stage, the three elements carbon, oxygen, and neon are synthesized. The present paper is twofold: firstly, it develops an analytical solution to the system of conformable fractional differential equations of the helium burning network, where we used, for this purpose, the series expansion method and obtained recurrence relations for the product abundances, that is, helium, carbon, oxygen, and neon. Using four different initial abundances, we calculated 44 gas models covering the range of the fractional parameter α = 0.5-1 with step Δα = 0.05. We found that the effects of the fractional parameter on the product abundances are small, which coincides with the results obtained by a previous study. Secondly, we introduced the mathematical model of the neural network (NN) and developed a neural network algorithm to simulate the helium burning network using a feed-forward process. A comparison between the NN and the analytical models revealed very good agreement for all gas models. We found that the NN can be considered a powerful tool to solve and model nuclear burning networks and can be applied to other stellar nuclear burning networks.


Introduction
Nowadays, applications of fractional calculus in physics, astrophysics, and related science are widely used [1,2]. Examples of the recent applications of the fractional calculus in physics are found in [3] in which the author has introduced a generalized fractional scale factor and a time-dependent Hubble parameter obeying an "Ornstein-Uhlenbeck-like fractional differential equation" which serves to describe the accelerated expansion of a nonsingular universe, in [4] in which the author extended the idea of fractional spin based on two-order fractional derivative operator and in [5] in which the author has generalized the fractional action integral by using the Saigo-Maeda fractional operators defined in terms of the Appell hypergeometric function.
In astrophysics, many problems have been handled using fractional models. Examples of these studies are in [6], where the author introduced an analytical solution to the fractional white dwarf equation, in [7] in which the authors analyzed the fractional incompressible gas spheres and in [8,9] in which the authors introduced an analytical solution to the first and second types of Lane-Emden equation in the sense of modified Riemann-Liouville fractional derivative. Nouh in [10] solved the fractional helium burning network using a series expansion method. Abdel-Salam and Nouh [11] and Yousif et al. [12] introduced analytical solutions to the conformable polytropic and isothermal gas spheres.
Simulation of ordinary (ODE) and partial differential equations (PDE) using an artificial neural network (ANN) gives very good accuracy when compared with both numerical and analytical methods. Many authors have dealt with this issue and developed neural algorithms to solve ODEs and PDEs. Dissanayake and Phan-Thien [13] first introduced the concept of approximating the solutions of differential equations with neural networks, where training was carried out by minimizing losses based on the satisfaction of the boundary conditions and the differential equations themselves. Lagaris et al. [14] demonstrated that the network form could be chosen by construction to satisfy boundary conditions and that automatic differentiation could be used to determine the derivatives that appear in the loss function. This approach has been extended to irregular boundary systems [15,16] and applied to the resolution of PDEs occurring in fluid mechanics [17], and software packages have been developed to facilitate these applications [18][19][20]. Nouh et al. [21] and Azzam et al. [22] developed neural network algorithms to solve the first and second types of the Lane-Emden equation arising in astrophysics.
The helium burning stage (also known as the triple-alpha process) represents the second stage in which stars transfer nuclear energy from the interior to the surface. In this stage, nuclear energy is almost entirely converted to light when passing through the stellar atmosphere. Helium burning (HB) releases an energy per unit fuel of about 6 × 10^23 MeV/g ≈ 10^18 erg/g. The reaction equations that govern the HB network may be written as follows [10]:

3He4 → C12 + γ,
C12 + He4 → O16 + γ, (1)
O16 + He4 → Ne20 + γ,

where the conversion process from helium to carbon requires temperatures of about 10^8 K.
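As a quick arithmetic check of the quoted energy release, the unit conversion can be sketched in a few lines of Python; the conversion factor 1 MeV ≈ 1.602 × 10^-6 erg is the only external input, and the 6 × 10^23 MeV/g figure is the value quoted above.

```python
# Rough consistency check of the quoted helium-burning energy release:
# 6e23 MeV per gram of fuel, converted to erg per gram.
MEV_TO_ERG = 1.602e-6            # 1 MeV in erg

energy_mev_per_g = 6e23          # energy release quoted in the text, MeV/g
energy_erg_per_g = energy_mev_per_g * MEV_TO_ERG

print(f"{energy_erg_per_g:.2e} erg/g")   # about 1e18 erg/g
```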
Clayton [23] set up a model for the helium burning process by taking into account the above reactions. If the numbers of atoms per unit mass of stellar material for helium, carbon, oxygen, and neon are represented by x, y, z, and r, respectively, then the following four equations (also called the kinetic equations) control the time-dependent change in abundance:

dx/dt = -3ax^3 - bxy - cxz,
dy/dt = ax^3 - bxy,
dz/dt = bxy - cxz, (2)
dr/dt = cxz,

where a, b, and c are the reaction rates. The system of equation (2) represents the integer version of the helium burning network and is solved simultaneously by computational or analytical methods [23][24][25][26]. Appendix A clarifies the derivation of the set of equation (2). The fractional kinetic equation (such as the helium burning network) has been solved by many authors. In terms of H-functions, [27] presented a solution to the fractional generalized kinetic equation. The generalized fractional kinetic equations have been solved by [28]. Chaurasia and Pandey [29] solved the fractional kinetic equations in a series form of the Lorenzo-Hartley function.
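For readers who wish to experiment with the integer-order (α = 1) network, a minimal sketch of equation (2) with a fixed-step fourth-order Runge-Kutta integrator is given below; the rate constants a, b, and c are placeholder values, not the physical reaction rates.

```python
# A minimal sketch of Clayton's kinetic equations (2) for the helium burning
# network, integrated with a fixed-step 4th-order Runge-Kutta scheme.
# The rate constants a, b, c below are placeholders, not physical rates.

def rhs(u, a=1.0, b=1.0, c=1.0):
    """Right-hand sides for the abundances x (He4), y (C12), z (O16), r (Ne20)."""
    x, y, z, r = u
    dx = -3.0 * a * x**3 - b * x * y - c * x * z  # He4 consumed by all three reactions
    dy = a * x**3 - b * x * y                     # C12 made by 3-alpha, burned to O16
    dz = b * x * y - c * x * z                    # O16 made, then burned to Ne20
    dr = c * x * z                                # Ne20 is the end product here
    return [dx, dy, dz, dr]

def rk4_step(u, h):
    k1 = rhs(u)
    k2 = rhs([ui + 0.5 * h * ki for ui, ki in zip(u, k1)])
    k3 = rhs([ui + 0.5 * h * ki for ui, ki in zip(u, k2)])
    k4 = rhs([ui + h * ki for ui, ki in zip(u, k3)])
    return [ui + h / 6.0 * (p + 2*q + 2*s + t)
            for ui, p, q, s, t in zip(u, k1, k2, k3, k4)]

u = [1.0, 0.0, 0.0, 0.0]      # pure helium initial composition
for _ in range(1000):
    u = rk4_step(u, 1e-3)
```

A useful sanity check is mass conservation: the combination 4x + 12y + 16z + 20r stays constant along the integration, since three He4 nuclei make one C12, and so on.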
In the present article, we develop a neural network algorithm to solve the fractional system of differential equations describing the helium burning network. We use the principles of the conformable fractional derivative for the mathematical modeling of the ANN. The ANN architecture used in this research is a feed-forward network with three layers, trained using the backpropagation (BP) algorithm based on the gradient-descent delta rule. The analytical solution is developed using the series expansion method, and a comparison between the ANN and analytical models is performed to demonstrate the efficiency and applicability of the ANN for solving the conformable helium burning network. The paper is organized as follows: Section 2 introduces the details of the analytical solution of the conformable helium burning model using the series expansion method. Section 3 deals with the mathematical modeling of the neural network technique, with its gradient computations and backpropagation training algorithm. Section 4 discusses the results obtained and the comparison between the ANN and analytical models. Section 5 gives the conclusions.

Analytical Solution to the Conformable Helium Burning Model
While certainly valid, numerical integration techniques may provide very accurate models. However, it is surely worthwhile to obtain models of the desired precision from complete analytical formulas. Besides, such analytical formulas usually provide much deeper insight into the essence of a model than numerical integration does. The power series solution, on the other hand, may serve as the analytical representation of the solution in the absence of a closed-form solution for a particular differential equation. The fractional form of equation (2) is given by [10]

D^α x = -3ax^3 - bxy - cxz,
D^α y = ax^3 - bxy,
D^α z = bxy - cxz, (3)
D^α r = cxz,

where D^α denotes the conformable fractional derivative of order α. If T = t^α, then x, y, z, and r can be represented by

x = Σ_{m=0}^∞ X_m T^m,  y = Σ_{m=0}^∞ Y_m T^m,  z = Σ_{m=0}^∞ Z_m T^m,  r = Σ_{m=0}^∞ R_m T^m, (4)

where X_m, Y_m, Z_m, and R_m are constants to be determined. In equation (2), the right-hand sides contain the abundance of helium (x) raised to the power 3. To obtain the fractional derivative of u^n, we apply the fractional derivative of the product of two functions. Using the series expansion method, we obtain the recurrence relation for the term x^3 as follows. Let

u^n = (Σ_{m=0}^∞ A_m T^m)^n = Σ_{m=0}^∞ Q_m T^m. (5)

Performing the fractional derivative of equation (5) k times and putting T = 0 yields the coefficients. Since applying D^α (j + 1) times to a power of T reduces its exponent accordingly, we have

After some manipulations, and putting i = j + 1 and i = k - j in equation (10), we obtain a simplified summation. If m = k + 1, then, adding the zero-valued term -(m - 1)!(m - m)A_m Q_0 to the second summation of the last equation, we can write the coefficients Q_m explicitly as equation (14). Putting n = 3 in equation (14) gives the recurrence for the series coefficients of x^3. Taking the conformable α-derivative of equation (4) and inserting equations (4) and (17) into equation (3), the series coefficients X_{n+1}, Y_{n+1}, Z_{n+1}, and R_{n+1} are obtained from

X_{n+1} = -[3a Q_n + b Σ_{i=0}^n X_i Y_{n-i} + c Σ_{i=0}^n X_i Z_{n-i}] / (α(n + 1)),
Y_{n+1} = [a Q_n - b Σ_{i=0}^n X_i Y_{n-i}] / (α(n + 1)),
Z_{n+1} = [b Σ_{i=0}^n X_i Y_{n-i} - c Σ_{i=0}^n X_i Z_{n-i}] / (α(n + 1)), (18)
R_{n+1} = c Σ_{i=0}^n X_i Z_{n-i} / (α(n + 1)),

where Q_n is the nth series coefficient of x^3. The recurrence relations corresponding to the integer model are obtained by putting α = 1 in the four formulas of equation (18) [26].
At n = 0, with the initial values of the chemical composition, and at n = 1, we obtain the first terms of the series; by applying the same scheme, we can determine the rest of the series terms. The product abundances can then be represented by the series solution of equation (3). It is important to mention that x_0, y_0, z_0, and r_0 are arbitrary initial values that enable us to compute gas models with different chemical compositions, that is, pure helium or rich helium models.
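The recurrence scheme above can be sketched programmatically. The version below assumes that the conformable relation reduces to dividing the nth right-hand-side coefficient by α(n + 1), with Cauchy products supplying the series coefficients of x^3, xy, and xz; the rates a, b, and c are placeholder values.

```python
# Sketch of the series-expansion recurrence for the conformable network:
# writing x = sum_m X_m T^m with T = t**alpha, the conformable derivative
# gives alpha*(m+1)*X_{m+1} = [coefficient of T^m of the right-hand side].
# Rates a, b, c are placeholders, not physical reaction rates.

def conv(p, q, m):
    """Coefficient of T^m in the Cauchy product of two series p and q."""
    return sum(p[i] * q[m - i] for i in range(m + 1))

def series_abundances(x0, y0, z0, r0, alpha, a=1.0, b=1.0, c=1.0, nterms=20):
    X, Y, Z, R = [x0], [y0], [z0], [r0]
    for m in range(nterms - 1):
        x3_m = sum(X[i] * conv(X, X, m - i) for i in range(m + 1))  # coeff of x^3
        xy_m = conv(X, Y, m)
        xz_m = conv(X, Z, m)
        f = 1.0 / (alpha * (m + 1))
        X.append(f * (-3*a*x3_m - b*xy_m - c*xz_m))
        Y.append(f * (a*x3_m - b*xy_m))
        Z.append(f * (b*xy_m - c*xz_m))
        R.append(f * (c*xz_m))
    return X, Y, Z, R

def evaluate(coeffs, t, alpha):
    """Evaluate the truncated series at time t, with T = t**alpha."""
    T = t ** alpha
    return sum(cm * T**m for m, cm in enumerate(coeffs))
```

Because mass is conserved by the network, the combination 4X_m + 12Y_m + 16Z_m + 20R_m vanishes for every m ≥ 1, which is a convenient check on the recurrence.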

Mathematical Modeling of the Problem.
To simulate the conformable fractional helium burning network represented by equation (3), we use the neural network architecture shown in Figure 1.
Considering the initial conditions X_0 = x_0, Y_0 = y_0, Z_0 = z_0, and R_0 = r_0, the neural network solution can be obtained through the following steps [30]. The neural approximate solution of equation (3) has two terms: the first represents the initial values and the second represents the feed-forward neural network output, where x is the input vector and p is the corresponding vector of adjustable weight parameters.
Then, the output of the neural network N(x, p) is given by

N(x, p) = Σ_{i=1}^h v_i σ(z_i),

where z_i = Σ_{j=1}^n w_ij x_j + β_i, w_ij is the weight from input unit j to hidden unit i, v_i is the weight from hidden unit i to the output, β_i represents the bias of the ith hidden unit, and σ(z) is the sigmoid activation function, σ(z) = 1/(1 + e^{-z}). Differentiating the network output N with respect to x_j, and then differentiating equation (24) n times, gives the conformable derivative D^α applied n times to the network output. As a result, the solution of the helium burning network is given by a trial form that fulfills the initial conditions by construction.
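A minimal sketch of this construction for a single output is given below. The trial form u_0 + t^α N(t, p) is one plausible conformable analogue of the Lagaris-style trial solution and is an assumption here, as are the random initial parameters and the single-input layout.

```python
import math
import random

# Sketch of the trial-solution construction: the network output N(t, p) for a
# single hidden layer of sigmoid units, and a trial abundance that satisfies
# the initial condition at t = 0 for any parameter values. The trial form
# u0 + t**alpha * N(t, p) is an assumed conformable analogue, and the layer
# size follows the 20-hidden-unit network used later in the paper.

random.seed(0)
H = 20                                            # hidden units
w = [random.uniform(-1, 1) for _ in range(H)]     # input -> hidden weights
beta = [random.uniform(-1, 1) for _ in range(H)]  # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]     # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def network(t):
    """N(t, p) = sum_i v_i * sigmoid(w_i * t + beta_i)."""
    return sum(v[i] * sigmoid(w[i] * t + beta[i]) for i in range(H))

def trial(t, u0, alpha):
    """Trial solution: equals u0 at t = 0 regardless of the parameters."""
    return u0 + t**alpha * network(t)
```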

Gradient Computations and Parameter Updating.
Using equation (27) to update the network parameters and computing the gradient, the error quantity to be minimized is given by

Figure 1: ANN architecture proposed to simulate the conformable fractional helium burning network.
where D^α_x N(x, p) is given by equation (25). So, the problem is converted into an unconstrained optimization problem.
To update the network parameters, we train the neural network for the optimized parameter values. After the training process, we obtain the network parameters and compute the following. N with one hidden layer is analogous to the conformable fractional derivative: by replacing the hidden-unit transfer function with its nth-order fractional derivative, the gradients of the fractional N with respect to v_i, β_i, and w_ij can be written down, and the network parameter updating rule can be given as follows, where a, b, and c are learning rates, i = 1, 2, ..., n, and j = 1, 2, ..., h.
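The gradient pieces for one hidden sigmoid unit can be sketched as follows; the identity σ'(z) = σ(z)(1 - σ(z)) makes all of them cheap to evaluate. The parameter values used here are illustrative.

```python
import math

# Sketch of the gradient pieces used in the parameter-update rule: the
# derivative of the network output with respect to its input, and the
# gradients with respect to v_i, beta_i, and w_i for one hidden sigmoid unit.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unit_grads(t, v_i, w_i, beta_i):
    s = sigmoid(w_i * t + beta_i)
    ds = s * (1.0 - s)          # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    dN_dt = v_i * w_i * ds      # this unit's contribution to dN/dt
    dN_dv = s                   # gradient with respect to v_i
    dN_dbeta = v_i * ds         # gradient with respect to beta_i
    dN_dw = v_i * t * ds        # gradient with respect to w_i
    return dN_dt, dN_dv, dN_dbeta, dN_dw
```

A finite-difference check against the analytic dN/dt is a quick way to validate such hand-derived gradients before wiring them into the update rule.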
In the stellar helium burning model based on the ANN, the neuron is the fundamental processing unit, which holds a local memory and carries out localized processing. At each neuron, the net input (z) is calculated by summing the weighted inputs and adding a bias (β). The net input (z) is then passed through a nonlinear activation function, which produces the neuron output u_j (as seen in Figure 1) [31].

Training of BP Algorithm.
The backpropagation (BP) training algorithm is a gradient-descent algorithm that aims to minimize the average squared error between the desired output and the actual output of a feed-forward network.
It requires a continuously differentiable nonlinearity. Figure 2 displays a flow chart of an offline backpropagation learning algorithm [32]. The algorithm is recursive, starting at the output units and working back to the first hidden layer. The outputs X_j, Y_j, Z_j, R_j at the output layer are compared with the desired outputs tx, ty, tz, tr using an error function of the following form: For the hidden layer, the error function takes the form: where δ_j is the error term of the output layer, and w_k is the weight between the output and hidden layers. The weight of each connection is updated by propagating the error backward from the output layer to the input layer as follows: The value of the learning rate η is chosen so that it is neither too large, leading to overshooting, nor too small, leading to a slow convergence rate. The momentum term appearing in the last part of equation (36), weighted by a constant c (the momentum), is used to accelerate the error convergence of the backpropagation learning algorithm, to help push the changes of the energy function over local increases, and to drive the weights in the overall downhill direction [33]. This term adds a fraction of the most recent weight changes to the current weight changes. Both η and c are set at the start of the training phase and determine the network speed and stability [31,34]. The process is repeated for each input pattern until the output error of the network decreases below a prespecified threshold value. The final weight values are then frozen and used to obtain the product abundances during the test session.
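The delta-rule update with momentum described above can be sketched in a few lines; the gradient sequence below is an illustrative stand-in, while η = 0.035 and c = 0.5 are the values reported later in the paper (named gamma here to avoid clashing with the reaction rate c).

```python
# Sketch of the delta-rule weight update with a momentum term:
#   delta_w(n) = -eta * dE/dw + gamma * delta_w(n-1)
# The gradient values are placeholders; eta and gamma follow the paper.

eta, gamma = 0.035, 0.5       # learning rate and momentum
w, prev_delta = 0.2, 0.0      # one illustrative weight and its last change

for grad in [0.5, 0.4, 0.3]:  # stand-in gradient sequence dE/dw
    delta = -eta * grad + gamma * prev_delta
    w += delta
    prev_delta = delta
```

The momentum term keeps a fraction of the previous step, which smooths the trajectory and helps the weights ride over small local increases of the error surface.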
The quality and success of ANN training are assessed by calculating the error for the whole batch of training patterns using the normalized RMS error, which is defined as

E_RMS = sqrt[ (1/(J P)) Σ_{p=1}^P Σ_{j=1}^J (t_pj - o_pj)^2 ],

with t_pj the desired output and o_pj the actual output.

Figure 2: Flowchart of an offline backpropagation training algorithm.
where J is the number of output units; P is the number of training patterns; tx_pj, ty_pj, tz_pj, and tr_pj are the desired outputs at unit j, whereas X_pj, Y_pj, Z_pj, and R_pj are the actual outputs at the same unit. A zero error denotes that all the output patterns computed by the stellar helium burning model match the expected values perfectly and that the model is fully trained. Internal unit thresholds are adjusted similarly, by treating them as connection weights on links from an auxiliary input of constant value. The algorithm has been programmed in C++ on Windows 7 on a Core i7 PC.
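A sketch of this batch error measure is given below; the normalization by J · P inside the square root is our reading of the "normalized RMS" definition, with the four abundance outputs treated as the J output units of each pattern.

```python
import math

# Sketch of the batch error measure: RMS over all output units and all
# training patterns, normalized by J * P (an assumption about the paper's
# exact definition of "normalized RMS error").

def rms_error(targets, outputs):
    """targets, outputs: lists of patterns, each a list of J output values."""
    P = len(targets)
    J = len(targets[0])
    sq = sum((t - o) ** 2
             for tp, op in zip(targets, outputs)
             for t, o in zip(tp, op))
    return math.sqrt(sq / (J * P))
```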

Data Preparation.
Based on the recurrence relations (equation (18)), we computed gas models for four sets of initial abundances. The fractional parameter covers the range α = 0.5-1 with a step of 0.05, and the calculations are performed for times up to t = 2100 s. Consequently, we have a total of 44 fractional helium burning models. Figure 3 plots the product abundances of two gas models calculated at α = 0.95, where the solid lines are for the pure helium model with initial abundances X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and the dashed lines are for the rich helium model with initial abundances X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0. The effects of changing the composition of the gas are remarkable, especially for carbon (C12).
In Figure 4, we illustrate the effects of changing the fractional parameter on the product abundances calculated for a gas model with initial abundances X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. It is clear that the effect of changing the fractional parameter on the behavior of the product abundances is small. This result is similar to the results obtained by [10] for models computed in the sense of the modified Riemann-Liouville fractional derivative.

ANN Training.
For the training of the ANN used to simulate the helium burning network, we used part of the data calculated in the previous subsection. The data used for training the ANN are shown in the second column of Table 1.
The neural network (NN) architecture used in this paper for the helium burning network has three layers, as shown in Figure 1: the input layer, hidden layer, and output layer. Configurations of 10, 20, and 40 hidden neurons were tested, and we concluded that 20 neurons in a single hidden layer give the best network model for simulating the helium burning network. This number of hidden neurons was found to give the minimum RMS error of 0.000005 in an almost similar number of training iterations. As a result, the NN configuration we used was 4-20-4: the input layer has four inputs, which are the fractional parameter α, the time t (taking values from 3 to 2100 in steps of 3 seconds), and two of the initial abundances, helium (X_0) and carbon (Y_0). We excluded the other two initial abundances (Z_0 and R_0) because their values are always zero, as indicated in Table 1. The output layer has four nodes, which are the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R).
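How the four-input training patterns might be assembled can be sketched as follows; the training values of α (step 0.1, as stated in the conclusion) and the four compositions come from the text, while the flattening into one list of (α, t, X_0, Y_0) tuples is an illustrative choice.

```python
# Sketch of assembling the 4-input training patterns: each pattern is
# (alpha, t, X0, Y0), with t running from 3 to 2100 s in steps of 3 s.
# The pairing into a single flat list is illustrative, not the paper's code.

train_alphas = [0.5 + 0.1 * k for k in range(6)]   # 0.5, 0.6, ..., 1.0
compositions = [(1.00, 0.00), (0.95, 0.05),        # (X0, Y0); Z0 = R0 = 0
                (0.90, 0.10), (0.85, 0.15)]
times = range(3, 2101, 3)                          # 700 time samples

patterns = [(alpha, t, x0, y0)
            for alpha in train_alphas
            for (x0, y0) in compositions
            for t in times]
```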
During the training of the NN, we used a learning rate η = 0.035 and a momentum c = 0.5.
Those values of η and c were found to speed up the convergence of the backpropagation training algorithm without overshooting the solution. To demonstrate the convergence and stability of the computed weight parameters of the network layers, the convergence behavior of the input layer weights, biases, and output layer weights (w_i, β_i, and v_i) for the helium burning network is displayed in Figure 5. As these figures show, the weight values are initialized to random values and, after a considerable number of iterations, converge to stable values.

Comparison between the NN Model and Analytical Model.
After the end of the NN training phase, we used the final frozen weight values in the test phase to predict the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R). In this test phase, we used values of the fractional parameter α that were not used in the training phase to predict the helium burning network model. These values are shown in the third column of Table 1. The predicted values show very good agreement with the analytical values for the different helium models. A comparison between the predicted NN model and the analytical model for two values of the fractional parameter (α = 0.55 and α = 0.95), for the helium models shown in Table 1, is displayed in Figures 6-9: one pure helium gas model, X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and three rich helium gas models. In all these figures, the very good agreement between the NN model and the analytical model is clear, which establishes the NN as a powerful tool to solve and model nuclear burning networks, one that could be applied to other stellar nuclear burning networks.
From the performed calculations, one can examine the effect of changing the fractional parameter on the four elements. The differences between the fractional product abundances of He4 grow as the time becomes larger, and it is noticed clearly that the abundance of C12 has the same behavior. The behaviors of the fractional product abundances of O16 and Ne20 are different from those of He4 and C12: the differences between the fractional product abundances of O16 are large just after the beginning of the ignition, whereas the differences between the fractional product abundances of Ne20 are very small for t ≤ 100 seconds and increase after that time.


Conclusion
In the current research, we introduced an analytical solution to the conformable fractional helium burning network via a series expansion method, where we obtained the product abundances of the synthesized elements as functions of time.
The calculations are performed for four different initial abundances: (X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0), (X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0), (X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0), and (X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0). The results of the analytical solution revealed that the conformable models have the same behavior as the fractional models computed using the modified Riemann-Liouville fractional derivative. Second, we used the NN in its feed-forward type to simulate the system of differential equations of the HB network. To do that, we performed the mathematical modeling of a NN to simulate the conformable helium burning network. We trained the NN using the backpropagation delta-rule algorithm, with training data for models in the fractional parameter range α = 0.5-1 with step Δα = 0.1, and predicted the fractional models for the range α = 0.55-0.95 with step Δα = 0.1. The comparison with the analytical solutions gives very good agreement for most cases, with a small difference obtained for the model with fractional parameter α = 0.55. The results obtained in this research prove that modeling nuclear burning networks using a NN gives very good results and validates the NN as an accurate, robust, and trustworthy method to solve and model similar networks, one that could be applied to other stellar nuclear burning networks comprising conformable fractional differential equations.

Figure 9: The distribution of the product abundances with time for the pure helium burning network, X_0 = 1, Y_0 = 0, Z_0 = 0, and R_0 = 0.