Computing Topological Invariants of Deep Neural Networks

A deep neural network uses multiple layers to learn complex patterns and is loosely modeled on the activity of the human brain. It currently provides the best solutions to many problems in image recognition, speech recognition, and natural language processing. The present study deals with the topological properties of deep neural networks. A topological index is a numeric quantity associated with the connectivity of a network and is correlated with the efficiency and accuracy of the network's output. Different degree-based topological indices, such as the Zagreb indices, the Randić index, the atom-bond connectivity index, the geometric-arithmetic index, the forgotten index, the multiple Zagreb indices, and the hyper-Zagreb index, of a deep neural network with a finite number of hidden layers are computed in this study.


Introduction
Neural networks are studied not only in artificial intelligence but also have important applications in intrusion detection systems, image processing, localization, medicine, and the chemical and environmental sciences [1][2][3]. Neural networks are used to model and learn complex, nonlinear relationships, which is important in practice because many input-output relationships are nonlinear and complex. Artificial neural networks are the backbone of robotics, defense technology, and neural chemistry. Neural networks are not only widely used as a tool for predictive analysis but have also been trained successfully to model processes in neural chemistry, including crystallization, adsorption, distillation, gasification, dry reforming, and filtration [4][5][6][7][8].
A topological index associates a unique number with a graph or network, which correlates with the physicochemical properties of the network. Degree-based topological indices depend on the connectivity of the network. The first degree-based topological index, the Randić index, was formulated by Milan Randić [9] while analyzing the boiling points of paraffins. Over the last three decades, researchers have formulated hundreds of topological indices, which are helpful in studying properties of chemical graphs such as reactivity, stability, boiling point, enthalpy of formation, and the Kováts constant, and which also reflect physical properties of materials such as stress, strain, elasticity, mechanical strength, and many others.
Bollobás and Erdős [10] introduced the general Randić index given by equation (1). The first and second Zagreb indices were introduced by Gutman and Trinajstić [11] in 1972; they appeared during the analysis of the π-electron energy of molecules. The multiplicative versions of the Zagreb indices (the first multiplicative Zagreb index and the second multiplicative Zagreb index) of a graph were formulated by Ghorbani and Azimi [12]. Shirdel et al. [13] introduced a new version of the Zagreb indices, named the hyper-Zagreb index. The widely used atom-bond connectivity (ABC) index was introduced by Estrada et al. [14]. Zhou and Trinajstić [15] gave the idea of the sum-connectivity index (SCI). The geometric-arithmetic index was introduced by Vukičević and Furtula [16]. Javaid et al. [17] investigated the degree-based topological indices of probabilistic neural networks in 2017. Topological indices of multilayered probabilistic neural networks and recurrent neural networks have also been computed recently [18][19][20][21]. For more work related to the computation and bounds of topological indices, see [22][23][24][25][26][27][28][29].

Consider a graph $G$ with node set $V$ and edge set $E$. The degree of a node $v$, denoted $d_v$, is the number of nodes connected to $v$ by an edge. The degree-based topological indices of a graph $G$ considered in this study are defined as follows:

General Randić index: $R_\alpha(G) = \sum_{uv \in E} (d_u d_v)^{\alpha}$.  (1)

First Zagreb index: $M_1(G) = \sum_{uv \in E} (d_u + d_v)$.

Second Zagreb index: $M_2(G) = \sum_{uv \in E} d_u d_v$.

First multiple Zagreb index: $PM_1(G) = \prod_{uv \in E} (d_u + d_v)$.

Second multiple Zagreb index: $PM_2(G) = \prod_{uv \in E} d_u d_v$.

Hyper-Zagreb index: $HM(G) = \sum_{uv \in E} (d_u + d_v)^2$.

Forgotten index: $F(G) = \sum_{uv \in E} (d_u^2 + d_v^2)$.

Atom-bond connectivity index: $ABC(G) = \sum_{uv \in E} \sqrt{\dfrac{d_u + d_v - 2}{d_u d_v}}$.

Sum-connectivity index: $SCI(G) = \sum_{uv \in E} \dfrac{1}{\sqrt{d_u + d_v}}$.

Geometric-arithmetic index: $GA(G) = \sum_{uv \in E} \dfrac{2\sqrt{d_u d_v}}{d_u + d_v}$.
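The definitions above are straightforward to evaluate on any finite graph. The following sketch (a minimal illustration, not from the paper; the function and variable names are my own) computes several of the indices directly from an edge list:

```python
from collections import Counter
from math import sqrt, prod

def degrees(edges):
    """Degree d_v of each node: the number of incident edges."""
    d = Counter()
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

def general_randic(edges, alpha):
    d = degrees(edges)
    return sum((d[u] * d[v]) ** alpha for u, v in edges)

def first_zagreb(edges):
    d = degrees(edges)
    return sum(d[u] + d[v] for u, v in edges)

def second_zagreb(edges):
    d = degrees(edges)
    return sum(d[u] * d[v] for u, v in edges)

def hyper_zagreb(edges):
    d = degrees(edges)
    return sum((d[u] + d[v]) ** 2 for u, v in edges)

def first_multiple_zagreb(edges):
    d = degrees(edges)
    return prod(d[u] + d[v] for u, v in edges)

def atom_bond_connectivity(edges):
    d = degrees(edges)
    return sum(sqrt((d[u] + d[v] - 2) / (d[u] * d[v])) for u, v in edges)

# Example: the path graph P4 with edges 1-2, 2-3, 3-4 (degrees 1, 2, 2, 1)
p4 = [(1, 2), (2, 3), (3, 4)]
```

For P4 one can check by hand that $M_1 = 3 + 4 + 3 = 10$ and $M_2 = 2 + 4 + 2 = 8$, which the functions above reproduce.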

Methodology
A deep neural network (DNN) can be represented by a graph Z = (V, E), where V denotes the set of nodes of the network and E denotes the set of edges between the nodes. We consider a DNN with an input layer of M nodes and r hidden layers, where the i-th hidden layer has N_i nodes (i = 1, 2, ..., r): the first hidden layer has N_1 nodes, the second has N_2 nodes, and so on up to the r-th layer with N_r nodes. Such a network is denoted DNN(N_1 N_2 ... N_r). The output layer of the DNN has N nodes. Each node of every layer is connected to all nodes of the next layer. For instance, Figure 1 shows a DNN with an input layer of four nodes, an output layer of three nodes, and five hidden layers.
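The layered, fully connected structure just described is easy to materialize as an explicit graph. A small sketch (illustrative only; the naming is mine) builds the edge list of DNN(N_1 ... N_r):

```python
from itertools import product

def dnn_edges(M, hidden, N):
    """Edge list of DNN(N1 ... Nr): every node of a layer is joined to
    every node of the next layer. Nodes are (layer, position) pairs;
    layer 0 is the input layer, layer r + 1 the output layer."""
    sizes = [M] + list(hidden) + [N]
    edges = []
    for i in range(len(sizes) - 1):
        edges += [((i, a), (i + 1, b))
                  for a, b in product(range(sizes[i]), range(sizes[i + 1]))]
    return edges

# A DNN with 4 input nodes, hidden layers of sizes 3 and 2, and 3 output nodes
example = dnn_edges(4, (3, 2), 3)
```

The total number of edges is $M N_1 + \sum_{i=1}^{r-1} N_i N_{i+1} + N_r N$; here $4 \cdot 3 + 3 \cdot 2 + 2 \cdot 3 = 24$.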
We first partition the edges of the graph of the DNN according to the degrees of the end vertices of each edge. We analyze the structure of the graph by considering the connectivity of the vertices of each layer to the next layer. In a DNN, each node of every layer is connected to all nodes of the next layer; this fact is used to count the degree of each vertex. Consider a deep neural network DNN(N_1 N_2 ... N_r). Each node in the input layer has degree N_1, because every input node is connected to each of the N_1 nodes of the first hidden layer. Each of the N_1 nodes of the first hidden layer has the same degree, M + N_2. Nodes of the second hidden layer have degree N_1 + N_3. In general, the nodes of the i-th hidden layer have degree N_{i-1} + N_{i+1} (with the convention N_0 = M), and the nodes of the output layer have degree N_r.
We will compute the topological indices using the edge partition method, classifying the edges by the degrees of their end nodes. The number of edges connecting the input layer to the first hidden layer is M N_1, and the end nodes of each such edge have degrees N_1 and M + N_2. The edges connecting the i-th hidden layer to the (i+1)-st hidden layer (1 ≤ i ≤ r − 1) have end nodes of degrees N_{i-1} + N_{i+1} and N_i + N_{i+2} (with the conventions N_0 = M and N_{r+1} = N), and the number of such edges is N_i N_{i+1}. Similarly, the N_r N edges connecting the last hidden layer to the output layer have end nodes of degrees N_{r-1} + N and N_r. These findings are summarized in Table 1 below, which will be used in computing the topological indices.

Table 1: Edge partition of DNN(N_1 N_2 ... N_r) based on the degrees of end nodes (with N_0 = M and N_{r+1} = N).

(d_u, d_v) | Number of edges
(N_1, M + N_2) | M N_1
(N_{i-1} + N_{i+1}, N_i + N_{i+2}), 1 ≤ i ≤ r − 1 | N_i N_{i+1}
(N_{r-1} + N, N_r) | N_r N
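The edge partition of Table 1 can also be generated programmatically. The sketch below (my own naming, not the authors' code) returns one (degree pair, edge count) row per pair of consecutive layers, using the convention N_0 = M and N_{r+1} = N:

```python
def edge_partition(M, hidden, N):
    """Rows ((d_u, d_v), count) of the edge partition of DNN(N1 ... Nr).

    With s[0] = M and s[r + 1] = N, a node in hidden layer i has degree
    s[i - 1] + s[i + 1]; input nodes have degree N1, output nodes N_r.
    """
    s = [M] + list(hidden) + [N]
    r = len(hidden)
    rows = []
    for i in range(r + 1):                       # edges: layer i -> layer i + 1
        du = s[1] if i == 0 else s[i - 1] + s[i + 1]
        dv = s[r] if i + 1 == r + 1 else s[i] + s[i + 2]
        rows.append(((du, dv), s[i] * s[i + 1]))
    return rows
```

For example, for M = 4, hidden layers (3, 2), and N = 3, the rows are ((3, 6), 12), ((6, 6), 6), and ((6, 2), 6), for 24 edges in total.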

Results and Discussions
In this section, we derive expressions for the topological indices of the deep neural network. These results are related to the connectivity of the nodes of the DNN.
Theorem 1. Let Z = DNN(N_1 N_2 ... N_r) be a deep neural network with M input nodes and N output nodes, with the conventions N_0 = M and N_{r+1} = N. Then the general Randić index $R_\alpha(Z)$ of the DNN is given by

$$R_\alpha(Z) = M N_1 \big[N_1 (M + N_2)\big]^{\alpha} + \sum_{i=1}^{r-1} N_i N_{i+1} \big[(N_{i-1} + N_{i+1})(N_i + N_{i+2})\big]^{\alpha} + N_r N \big[(N_{r-1} + N) N_r\big]^{\alpha},$$

and the Randić index is the special case $\alpha = -1/2$.

Proof. Table 1 lists the degrees of the end nodes of every edge of DNN(N_1 N_2 ... N_r). Substituting these values into the definition of $R_\alpha$, each class of the edge partition contributes its number of edges multiplied by $(d_u d_v)^{\alpha}$, which gives the stated expression. □

Theorem 2. Let Z = DNN(N_1 N_2 ... N_r) be a deep neural network. Then the first Zagreb index $M_1(Z)$, second Zagreb index $M_2(Z)$, first multiplicative Zagreb index $PM_1(Z)$, and second multiplicative Zagreb index $PM_2(Z)$ of the DNN are given by

$$M_1(Z) = M N_1 (N_1 + M + N_2) + \sum_{i=1}^{r-1} N_i N_{i+1} (N_{i-1} + N_i + N_{i+1} + N_{i+2}) + N_r N (N_{r-1} + N_r + N),$$

$$M_2(Z) = M N_1^2 (M + N_2) + \sum_{i=1}^{r-1} N_i N_{i+1} (N_{i-1} + N_{i+1})(N_i + N_{i+2}) + N_r^2 N (N_{r-1} + N),$$

$$PM_1(Z) = (N_1 + M + N_2)^{M N_1} \cdot \prod_{i=1}^{r-1} (N_{i-1} + N_i + N_{i+1} + N_{i+2})^{N_i N_{i+1}} \cdot (N_{r-1} + N_r + N)^{N_r N},$$

$$PM_2(Z) = \big[N_1 (M + N_2)\big]^{M N_1} \cdot \prod_{i=1}^{r-1} \big[(N_{i-1} + N_{i+1})(N_i + N_{i+2})\big]^{N_i N_{i+1}} \cdot \big[(N_{r-1} + N) N_r\big]^{N_r N}.$$

Proof. To compute these indices of the DNN, we again use the edge partition method: substituting the degrees and edge counts from Table 1 into each definition and summing (for $M_1$ and $M_2$) or multiplying (for $PM_1$ and $PM_2$) over the classes of the partition yields the stated expressions. □
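Closed forms of this kind can be sanity-checked against a brute-force computation on the explicit graph. The sketch below (illustrative only; it assumes the convention N_0 = M, N_{r+1} = N, and the helper names are mine) does this for the first Zagreb index:

```python
from collections import Counter
from itertools import product

def dnn_edges(M, hidden, N):
    """Explicit edge list of DNN(N1 ... Nr)."""
    sizes = [M] + list(hidden) + [N]
    edges = []
    for i in range(len(sizes) - 1):
        edges += [((i, a), (i + 1, b))
                  for a, b in product(range(sizes[i]), range(sizes[i + 1]))]
    return edges

def m1_bruteforce(edges):
    """First Zagreb index directly from the definition."""
    d = Counter()
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return sum(d[u] + d[v] for u, v in edges)

def m1_closed_form(M, hidden, N):
    """M1 of DNN(N1 ... Nr) from the edge partition (Table 1)."""
    s = [M] + list(hidden) + [N]
    r = len(hidden)
    total = M * s[1] * (s[1] + M + s[2])                    # input -> hidden 1
    for i in range(1, r):                                   # hidden i -> i + 1
        total += s[i] * s[i + 1] * (s[i - 1] + s[i] + s[i + 1] + s[i + 2])
    total += s[r] * N * (s[r - 1] + s[r] + N)               # hidden r -> output
    return total
```

For M = 4, hidden layers (3, 2), and N = 3, both routes give $12 \cdot 9 + 6 \cdot 12 + 6 \cdot 8 = 228$.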

Theorem 3. Let Z = DNN(N_1 N_2 ... N_r) be a deep neural network. Then the forgotten index $F(Z)$ and the hyper-Zagreb index $HM(Z)$ of the DNN are given by

$$F(Z) = M N_1 \big[N_1^2 + (M + N_2)^2\big] + \sum_{i=1}^{r-1} N_i N_{i+1} \big[(N_{i-1} + N_{i+1})^2 + (N_i + N_{i+2})^2\big] + N_r N \big[(N_{r-1} + N)^2 + N_r^2\big],$$

$$HM(Z) = M N_1 (N_1 + M + N_2)^2 + \sum_{i=1}^{r-1} N_i N_{i+1} (N_{i-1} + N_i + N_{i+1} + N_{i+2})^2 + N_r N (N_{r-1} + N_r + N)^2.$$

Proof. To compute these indices of the DNN, we use the edge partition method. Table 1 lists the degrees of the end nodes of every edge of DNN(N_1 N_2 ... N_r).

Computational Intelligence and Neuroscience
Now, substituting the degrees and edge counts from Table 1 into the definitions of $F$ and $HM$ and summing over the three classes of edges yields the stated expressions. □

Theorem 4. Let Z = DNN(N_1 N_2 ... N_r) be a deep neural network. Then the atom-bond connectivity index $ABC(Z)$, geometric-arithmetic index $GA(Z)$, sum-connectivity index $SCI(Z)$, and augmented Zagreb index $AZI(Z)$ of the DNN are given by

$$ABC(Z) = M N_1 \sqrt{\frac{N_1 + M + N_2 - 2}{N_1 (M + N_2)}} + \sum_{i=1}^{r-1} N_i N_{i+1} \sqrt{\frac{N_{i-1} + N_i + N_{i+1} + N_{i+2} - 2}{(N_{i-1} + N_{i+1})(N_i + N_{i+2})}} + N_r N \sqrt{\frac{N_{r-1} + N_r + N - 2}{(N_{r-1} + N) N_r}},$$

$$GA(Z) = \frac{2 M N_1 \sqrt{N_1 (M + N_2)}}{N_1 + M + N_2} + \sum_{i=1}^{r-1} \frac{2 N_i N_{i+1} \sqrt{(N_{i-1} + N_{i+1})(N_i + N_{i+2})}}{N_{i-1} + N_i + N_{i+1} + N_{i+2}} + \frac{2 N_r N \sqrt{(N_{r-1} + N) N_r}}{N_{r-1} + N_r + N},$$

$$SCI(Z) = \frac{M N_1}{\sqrt{N_1 + M + N_2}} + \sum_{i=1}^{r-1} \frac{N_i N_{i+1}}{\sqrt{N_{i-1} + N_i + N_{i+1} + N_{i+2}}} + \frac{N_r N}{\sqrt{N_{r-1} + N_r + N}},$$

$$AZI(Z) = M N_1 \left[\frac{N_1 (M + N_2)}{N_1 + M + N_2 - 2}\right]^3 + \sum_{i=1}^{r-1} N_i N_{i+1} \left[\frac{(N_{i-1} + N_{i+1})(N_i + N_{i+2})}{N_{i-1} + N_i + N_{i+1} + N_{i+2} - 2}\right]^3 + N_r N \left[\frac{(N_{r-1} + N) N_r}{N_{r-1} + N_r + N - 2}\right]^3.$$

Proof. Substituting the degrees and edge counts from Table 1 into the definitions $ABC(G) = \sum_{uv \in E} \sqrt{(d_u + d_v - 2)/(d_u d_v)}$, $GA(G) = \sum_{uv \in E} 2\sqrt{d_u d_v}/(d_u + d_v)$, $SCI(G) = \sum_{uv \in E} 1/\sqrt{d_u + d_v}$, and $AZI(G) = \sum_{uv \in E} \big[d_u d_v/(d_u + d_v - 2)\big]^3$ and summing over the classes of the edge partition yields the stated expressions. □
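As with the earlier indices, these expressions are just the edge partition of Table 1 substituted into each definition, so they can be checked numerically against the explicit graph. A sketch (my own helper names, not the authors' code):

```python
from collections import Counter
from itertools import product
from math import sqrt, isclose

def dnn_edges(M, hidden, N):
    """Explicit edge list of DNN(N1 ... Nr)."""
    sizes = [M] + list(hidden) + [N]
    edges = []
    for i in range(len(sizes) - 1):
        edges += [((i, a), (i + 1, b))
                  for a, b in product(range(sizes[i]), range(sizes[i + 1]))]
    return edges

def degree_indices(edges):
    """ABC, GA, and SCI evaluated directly from their definitions."""
    d = Counter()
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    abc = sum(sqrt((d[u] + d[v] - 2) / (d[u] * d[v])) for u, v in edges)
    ga = sum(2 * sqrt(d[u] * d[v]) / (d[u] + d[v]) for u, v in edges)
    sci = sum(1 / sqrt(d[u] + d[v]) for u, v in edges)
    return abc, ga, sci

# DNN with M = 4, hidden layers (3, 2), N = 3: the edge classes are
# ((3, 6), 12), ((6, 6), 6), ((6, 2), 6), so for example
# SCI = 12 / sqrt(9) + 6 / sqrt(12) + 6 / sqrt(8)
abc, ga, sci = degree_indices(dnn_edges(4, (3, 2), 3))
```

Agreement between the class-by-class sums and the brute-force values confirms the edge partition is exhaustive.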

Conclusions
The deep neural network is helpful in modeling compounds with desirable physical and chemical properties by employing the structure of the compounds. This paper gives computational insight into the degree-dependent topological indices, including the Randić index, the Zagreb indices, the multiplicative Zagreb indices, the forgotten index, the hyper-Zagreb index, the ABC index, the GA index, and the sum-connectivity index, of a general DNN with r hidden layers. These indices correlate the structure with properties such as boiling point, molar refractivity (MR), molar volume (MV), polar surface area, surface tension, enthalpy of vaporization, flash point, and many others. The results computed in the above theorems are closed formulas that can be used to compute the topological indices of the neural networks under study by assigning specific values to the input parameters. The values of the computed indices grow with the number of hidden layers and also depend on the number of nodes in each layer.
A deep neural network is an important tool in experimental design, data reduction, fault diagnosis, and process control. QSAR studies must be integrated with the neural network approach in order to achieve a more physical understanding of the system. The use of DNNs provides an alternative way of predicting physical properties, and its linkage with topological indices can further enhance theoretical achievements.
This study can be extended further by analyzing distance-based topological indices such as the Wiener index, the Harary index, and the PI index. The computation of spectral invariants of deep neural networks, such as the graph energy, the Estrada index, and the Kirchhoff index, is also open for further research in this area.

Data Availability
No data were used to support the findings of this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.