Automatic Estimation of the Dynamics of Channel Conductance Using a Recurrent Neural Network

In order to simulate neuronal electrical activities, we must estimate the dynamics of channel conductances from physiological experimental data. However, this conventionally requires formulating differential equations that express the time course of the channel conductance. If the dynamics could instead be estimated automatically, neuronal activities could be simulated easily. By using a recurrent neural network (RNN), it is possible to estimate the dynamics of channel conductances without formulating the differential equations. In the present study, we estimated the dynamics of the Na and K conductances of a squid giant axon using two different fully connected RNNs and were able to reproduce various neuronal activities of the axon: an action potential, a threshold, a refractory phenomenon, a rebound action potential, and periodic action potentials under constant stimulation. RNNs can also be trained on channels other than the Na and K channels. Therefore, using our RNN estimation method, the dynamics of channel conductance can be estimated automatically, and neuronal activities can be simulated using the channel RNNs. An RNN can thus be a useful tool for estimating the dynamics of the channel conductance of a neuron, and the method presented here makes it possible to simulate neuronal activities more easily than previous methods do.


Introduction
The electrical activities of a neuron can be studied by simulating it, that is, without performing a neurophysiological experiment on a real neuron. NEURON [1] and GENESIS [2] are well-known simulators of neuronal activities and neuronal network activities. Neuronal activities are induced by the many channels located in the plasma membrane of a neuron. Channel currents flow through these channels, change the neuronal membrane potential, and thereby induce neuronal activities. In order to simulate neuronal activities, it is therefore first necessary to estimate the dynamics of the channel conductances.
There are two methods for estimating the dynamics of channel conductance. One method is to estimate the dynamics from the time course of the membrane potential of a neuron [3]. In this method, it is first assumed that certain putative channels exist in a neuron, and some parameters of the channel dynamics are then estimated from the time course of the membrane potential. The other method is to estimate the dynamics from the time course of the channel conductance at a clamped voltage. The former is called a top-down approach, while the latter is a bottom-up approach. In the top-down approach, the dynamics of the channel conductance of a putative channel are simply estimated from the neuronal activities of a neuron, and it is unknown whether the putative channel actually exists. In the bottom-up approach, on the other hand, the dynamics of the conductance of a real channel are estimated from experimental electrophysiological data. Our ultimate objective is to develop a system for simulating neuronal activities easily. In this system, one can select several channels from a database, implement them in a putative neuron, and simulate the neuronal activities. Hence, we have taken the bottom-up approach as the estimation method for the channel dynamics. In addition, when a new channel is found, our system can automatically estimate its dynamics, and these dynamics can easily be stored in the system's database.
In order to determine the conductance dynamics from the experimentally recorded time course of the channel conductance of a real neuron, it has conventionally been necessary to formulate differential equations that express the relationship between the membrane potential and the channel conductance. One method of estimating the dynamics is to approximate them using Boltzmann and Gaussian functions [4]. However, in this method, some of the parameters are not determined automatically and must therefore be adjusted by hand. In addition, this method is appropriate only for certain channels. The other approach, which neuroscientists usually follow, is to provide channel data to neurocomputational researchers, who formulate the equations and adjust the parameters. However, if the channel dynamics can be estimated automatically from experimental data, neuronal activities can be simulated very easily using the resulting channels. Hence, we have started developing an estimation method for channel dynamics. To estimate the channel dynamics, we have used a recurrent neural network (RNN), because an RNN can learn and reproduce a time sequence. To our knowledge, this method of estimating the dynamics of neuronal channel conductance with an RNN is unique to our group.
In a previous study, we simulated the neuronal activities of a squid giant axon using RNNs that had learned the dynamics of the gate variables of the Na+ and K+ channels, and we were thus able to reproduce the neuronal activities of the axon [5]. However, the time course of the gate variables cannot usually be determined by experiment. Channel conductances, by contrast, can usually be recorded at clamped membrane potentials in a voltage-clamp experiment, in which the membrane potential is clamped (held constant) while the currents across the cell membrane are measured. Hence, we need to develop an automatic estimation method that employs channel conductance data. The purpose of the present study is therefore to develop an RNN estimation method in which the channel dynamics are estimated from the channel conductances determined by a voltage-clamp experiment. Note that our aim in this paper is not to construct a Hodgkin-Huxley model using RNNs.

Squid Giant Axon.
We simulated the neuronal activities of the squid giant axon using the dynamics of the channel conductances estimated by RNNs. The squid giant axon has two channels, namely, Na+ and K+. To train the RNNs, we used data on the conductances of these two channels. However, we could not obtain actual conductance data recorded by neuroscientists; hence, for training the RNNs, we used the channel conductances calculated with the Hodgkin-Huxley (HH) model [6]. The details of the HH model are described in the appendix.

Recurrent Neural Network.
An RNN is a neural network having recurrent connections. By means of these connections, the past outputs of the units contribute to the current outputs, thereby making it possible to estimate the dynamics of a phenomenon [7]. To do so, the RNN is trained using a time series of the phenomenon. There are several types of RNNs, for example, the fully connected RNN (FRNN), the Elman-type RNN (ERNN) [8], and the time-delayed neural network (TDNN) [9]. In this study, we used the FRNN. Figure 1 shows the structure of the RNN.
The present RNN has eleven input units, ten hidden units, one output unit, and a bias unit. The hidden and output units have recurrent connections. The output z_i(t) of unit i is defined by the following equations: the suffix i indicates the unit number, t denotes time, I and H represent the sets of input and hidden units, respectively, B and O represent the bias unit and the output unit, respectively, x_i(t) is the output of an input unit or the bias unit at time t, and y_i(t) is the output of a hidden unit or the output unit. The outputs of hidden unit i and output unit i at time t + 1 are calculated from the following equations: here, s_i(t) is the net input to a hidden unit or the output unit at time t, w_ij denotes the connection weight from unit j to unit i, and f(s_i(t + 1)) is the output function for the hidden and output units; a sigmoid function was adopted as the output function in this study.
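For reference, the relations described in this paragraph can be written in the standard form for a fully connected RNN (a reconstruction consistent with the description above; the exact numbering and indexing of the original equations may differ):

```latex
z_{i}(t) =
\begin{cases}
x_{i}(t), & i \in I \cup B,\\
y_{i}(t), & i \in H \cup O,
\end{cases}
\qquad
s_{i}(t+1) = \sum_{j} w_{ij}\, z_{j}(t),
\qquad
y_{i}(t+1) = f\bigl(s_{i}(t+1)\bigr),
\qquad
f(s) = \frac{1}{1 + e^{-s}}.
```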
Advances in Artificial Neural Systems

2.3. Learning Algorithm.
The algorithm called backpropagation through time [10] was used to train the RNN. The RNN was trained using the time course of the channel conductance as a teacher signal. The error E between the RNN output and the teacher signal between t = 0 and t = T is determined by (3), in which d(t) and y_O(t) denote the teacher signal and the output of the output unit, respectively. In order to decrease the error E, the connection weights were updated. The quantity by which the connection weight w_ij is updated is denoted by Δw_ij and is obtained by partially differentiating (3) with respect to the weight w_ij, as in (4). Here, η is the learning rate, a positive constant, which was set to 0.01 in this study, and δ_i(t) can be calculated recursively, in the reverse order of time, from the remaining equations. The weights were updated using (4) until the average error E/T decreased to a criterion.
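In one standard form consistent with the description above (assuming the conventional factor of 1/2 in the squared-error definition; the original equations may differ in detail), these quantities are:

```latex
E = \frac{1}{2}\sum_{t=0}^{T}\bigl(d(t) - y_{O}(t)\bigr)^{2},
\qquad
\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}}
              = \eta \sum_{t} \delta_{i}(t)\, z_{j}(t-1),
\\[6pt]
\delta_{i}(t) = f'\bigl(s_{i}(t)\bigr)
\Bigl[\varepsilon_{i}(t) + \sum_{k \in H \cup O} w_{ki}\,\delta_{k}(t+1)\Bigr],
\qquad
\varepsilon_{i}(t) =
\begin{cases}
d(t) - y_{O}(t), & i = O,\\
0, & \text{otherwise},
\end{cases}
```

with the boundary condition δ_i(t) = 0 for t > T, so that the δ values are accumulated backward from the end of the training sequence.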

RNN Model of Squid Giant Axon.
The squid giant axon has two channels, namely, Na+ and K+. In our RNN model, the dynamics of the Na+ and K+ conductances were implemented in two different RNNs; the RNNs trained on the dynamics of the Na+ and K+ conductances are called the Na+-RNN and the K+-RNN, respectively. In a physiological experiment, channel conductances are recorded by clamping the membrane potential at different voltages. Our model mimicked this experimental method: the clamped membrane potential was the input, and the channel conductance at that potential was the output of an RNN. The pairs of clamped potentials and the corresponding channel conductances recorded in the previous five time steps were also used as input data to the RNN; thus, eleven values were input to the RNN. After an RNN is trained, inputting a membrane potential to the RNN causes it to produce the time course of the channel conductance at that potential. In this study, the training data for the Na+ and K+ conductances were calculated from (A.2) and (A.3) of the appendix by using the fourth-order Runge-Kutta (RK) method (Figures 2 and 3). The calculation time step was 0.05 millisecond. The input data, the membrane potential and the channel conductances, were scaled to lie between 0.01 and 0.99 so as to be close to the output range of the RNN units. The error criterion for the Na+ conductance was 2 × 10^-4, and that for the K+ conductance was 10^-6; the criterion for the K+ conductance was smaller because the K+ conductance was easier to estimate than the Na+ conductance.
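As an illustration of the input coding described above, the following sketch builds the eleven-element input vector and applies the 0.01-0.99 scaling; the voltage and conductance ranges here are our own illustrative assumptions, not values from the study:

```python
def scale(x, x_min, x_max):
    # Map a physical value into [0.01, 0.99], close to the sigmoid output
    # range of the RNN units, as described in the text.
    return 0.01 + 0.98 * (x - x_min) / (x_max - x_min)

def make_rnn_input(v_clamp, v_hist, g_hist,
                   v_range=(-90.0, 50.0),    # assumed voltage range (mV)
                   g_range=(0.0, 120.0)):    # assumed conductance range (mS/cm2)
    """Eleven inputs: the clamped potential plus the five most recent
    (potential, conductance) pairs, all scaled to [0.01, 0.99]."""
    assert len(v_hist) == 5 and len(g_hist) == 5
    x = [scale(v_clamp, *v_range)]
    for v, g in zip(v_hist, g_hist):
        x.append(scale(v, *v_range))
        x.append(scale(g, *g_range))
    return x
```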
In the conventional method (see the appendix), when we calculate the neuronal activities of the squid giant axon, we first determine the differential equations of the gate variables and channel conductances and then calculate the membrane potential from them. In our RNN model, by contrast, we used the RNNs instead of the differential equations of the gate variables and channel conductances. The membrane potential is calculated using a differential equation ((A.1) in the appendix), as in the conventional method. In order to compare the data calculated using the RNNs with those calculated by the conventional method, we also calculated the neuronal activities of the squid giant axon from the differential equations of the gate variables and channel conductances by using the RK method. The neurons whose activities were calculated using the RNNs and the RK method are called the RNN-neuron and the squid giant (SG-) neuron, respectively. The calculation flows of the SG-neuron and the RNN-neuron are shown in Figure 4.
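The RNN-neuron's calculation cycle can be sketched as follows; the conductance functions are placeholders standing in for the trained Na+-RNN and K+-RNN, and the forward-Euler update is an illustrative simplification of the integration of (A.1):

```python
# Parameters from the appendix of the paper.
C   = 1.0                    # membrane capacitance (uF/cm2)
G_L = 0.3                    # leak conductance (mS/cm2)
V_NA, V_K, V_L = 50.0, -77.0, -54.4   # equilibrium potentials (mV)
DT  = 0.05                   # time step (ms)

def rnn_na(v):
    # Placeholder for the trained Na+-RNN (returns a conductance in mS/cm2).
    return 0.0

def rnn_k(v):
    # Placeholder for the trained K+-RNN.
    return 0.0

def step(v, i_in):
    """One cycle of the RNN-neuron: the RNNs supply the conductances, and
    only the membrane equation C dV/dt = I_In - I_Na - I_K - I_L is
    integrated (forward Euler here for brevity)."""
    i_na = rnn_na(v) * (v - V_NA)
    i_k  = rnn_k(v)  * (v - V_K)
    i_l  = G_L       * (v - V_L)
    return v + DT * (i_in - i_na - i_k - i_l) / C

v = -65.0
for _ in range(2000):        # 100 ms with no injected current
    v = step(v, 0.0)         # v relaxes toward the leak potential V_L
```

With the placeholder conductances set to zero, the membrane simply relaxes to the leak potential; plugging trained RNNs into `rnn_na` and `rnn_k` reproduces the flow of Figure 4(b).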

3.1. Training of the Na+- and K+-RNNs.
An RNN can be trained on the dynamics of the Na+ conductance; in this study, the Na+-RNN was able to approximate the training data, as shown in Figure 5.
In addition to the Na+-RNN, we were able to develop the K+-RNN (Figure 6).

Neuronal Activities of the RNN-Neuron and SG-Neuron.
We will present the results of the RNN-neuron. Initially, a weak stimulus (5 μA/cm²) and a strong stimulus (10 μA/cm²) were provided for 1 millisecond in order to verify the presence of a threshold in the RNN-neuron. The weak stimulus did not induce an action potential, but the strong stimulus did (Figure 7(a)). These results indicate the presence of a threshold in the RNN-neuron and the SG-neuron. Figure 7(c) shows the time courses of the channel conductances of the RNN-neuron; interestingly, these time courses are not included in the training data of the RNNs.
Next, we examined whether the RNN-neuron has a refractory period (Figure 8). The first stimulus was provided at 10 milliseconds with an intensity of 10 μA/cm² for 1 millisecond, and the second stimulus was provided 8 or 12 milliseconds after the first one. The duration of these stimuli was the same as in Figure 7. The first stimulus evoked an action potential. When the second stimulus was provided 8 milliseconds later at an intensity of 10 μA/cm², no action potential was generated in the RNN-neuron (Figure 8(a)). Even at a larger intensity (20 μA/cm²), the second stimulus generated no action potential (Figure 8(b)). Further, when the onset of the second stimulus was 12 milliseconds after that of the first, an action potential was not generated at the small intensity (10 μA/cm²) (Figure 8(c)) but was generated at the larger intensity (20 μA/cm²) (Figure 8(d)). Thus, the RNN-neuron has absolute and relative refractory periods, similar to the SG-neuron.
The squid giant axon exhibits a phenomenon in which a hyperpolarizing pulse of 5-millisecond duration induces a rebound action potential [6]. The RNN-neuron also exhibited a rebound action potential, as shown in Figure 9, though the peak latency of this action potential was slightly longer than that of the SG-neuron.
In the last part of our experiment, a constant stimulus with an intensity of 10 μA/cm² was applied to the neurons. Periodic action potentials were generated in both the RNN- and SG-neurons (Figure 10(a)). When the stimulation intensity was increased to 20 μA/cm², the frequency of the action potentials increased in both neurons (Figure 10(b)). The periods of the action potentials generated at different stimulation intensities are shown in Figure 10(c); the RNN- and SG-neurons had almost the same characteristics.

Dynamics of Another Channel Conductance by an RNN.
We applied our RNN method to a channel other than those of the squid giant axon. The Ca2+ channel plays a role in generating bursts in a hippocampal pyramidal cell and in the entry of extracellular Ca2+ into the neuron. The Ca2+ channel found in a hippocampal pyramidal cell by Kay and Wong [11] is voltage-dependent. An RNN was trained on the dynamics of the Ca2+ channel conductance. The time course of the conductance as a function of the membrane potential was reproduced by the RNN, as shown in Figures 11 and 12.

Discussion
We were able to train the RNNs on the dynamics of the Na+ and K+ channel conductances of the squid giant axon (Figures 2, 3, 4, and 5), and we were able to reproduce the various neuronal activities of the axon: the threshold phenomenon, the refractory period, and the action potentials induced by constant stimulation (Figures 7, 8, 9, and 10). During these neuronal activities, the Na+- and K+-RNNs produced time courses of the channel conductances different from the trained ones. By this method, other channel dynamics could also be reproduced: the dynamics of the Ca2+ channel conductance in the hippocampus could be reproduced by an RNN trained on them (Figures 11 and 12). Hence, our proposed RNN method can potentially serve as a general method for the automatic estimation of the dynamics of channel conductances determined by physiological experiments.
In the present study, we used the mean square error (MSE) between the training data and the output of an RNN to evaluate the training error (3). When the average MSE between the Na+ conductance and the RNN output was high, for example 10^-2, which is larger than the criterion, the neuronal activities could not be reproduced well. The error criterion of the Na+-RNN used in this study was below 2 × 10^-4, and various neuronal activities could then be reproduced. Thus, it is appropriate to evaluate the reproduction level of neuronal activities using the MSE.
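The average error E/T used as the criterion can be sketched as follows (a minimal illustration with our own function name; a constant factor in the error definition does not affect its use as a relative criterion):

```python
def average_mse(teacher, output):
    # Average error E/T between the teacher signal d(t) and the RNN
    # output, used as the stopping criterion during training.
    T = len(teacher)
    return sum((d - y) ** 2 for d, y in zip(teacher, output)) / T
```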
We used an FRNN to estimate the dynamics of the channel conductances in the present study. We also reproduced some neuronal activities using other types of RNNs, namely, an Elman-type RNN (ERNN) [8] and a time-delayed neural network (TDNN) [9]. In both cases, the inputs and outputs were the same as in the present study. To compare the ability of the RNNs to reproduce neuronal activities, we prepared ten Na+-RNNs and ten K+-RNNs, combined them, and made one hundred RNN-neurons of each of the three types: TDNN, ERNN, and FRNN. We set the number of hidden units of the TDNN, the ERNN, and the FRNN to twenty, eleven, and ten, respectively, in order to equalize the total number of connections among the three networks. We then calculated the maximum value of the cross-correlation function (MVCCF) between the outputs of the SG-neuron and those of the three types of RNN-neurons in order to evaluate the reproduction level of the neuronal activities. The stimuli used for this purpose were the same as those shown at the bottom of Figure 7(a). The average MVCCFs of the TDNN, ERNN, and FRNN were 0.154, 0.895, and 0.951, respectively, with that of the FRNN being the maximum. The differences were significant (P < .001; Kruskal-Wallis test with the Steel-Dwass test as a post hoc test). Therefore, the FRNN is the most effective RNN for the automatic estimation of the dynamics of channel conductance.
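The evaluation measure can be sketched as follows; this is our own minimal implementation of a normalized cross-correlation maximum, not the authors' code:

```python
import numpy as np

def mvccf(a, b):
    """Maximum value of the normalized cross-correlation function between
    two traces; a value of 1.0 means one trace is a shifted, scaled copy
    of the other."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    cc = np.correlate(a, b, mode="full")   # correlation at every lag
    return cc.max() / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```

By the Cauchy-Schwarz inequality the returned value never exceeds 1, so it gives a scale-free score for how closely an RNN-neuron trace matches the SG-neuron trace.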
When an RNN was trained using training data other than those used in the present study, the threshold of the SG-neuron could not be reproduced in the RNN-neuron. In this study, we refined the training data, and the characteristics of the neuronal activities could then be reproduced. This result suggests that the reproduction of neuronal activities by an RNN depends on the training data. During training, the error produced by the K+-RNN always became low, while it was difficult to reduce the error produced by the Na+-RNN. The ability to reproduce neuronal activities depends on the reproduction of the dynamics of the Na+ channel conductance: the more correctly the Na+-RNN predicts the time course of the Na+ channel conductance, the more accurately the RNN-neuron can reproduce the neuronal activities.
The results obtained using the RNN-neuron differed somewhat from those obtained using the SG-neuron. We believe there are two causes for this difference. First, it is possible that the RNN could not extrapolate the time course of the channel conductance sufficiently well to simulate the neuronal activities. As described above, the reproduction ability depends on the training data. Thus, in a future study, we will use better training data from which an RNN can learn the dynamics of the channel conductance more correctly. Second, the Na+-RNN could not reproduce the training data as accurately as the K+-RNN could (Figures 4 and 5). The Na+ channel conductance is calculated from two gate variables: an activation gate and an inactivation gate (see the appendix). At some membrane potentials the conductance remains constant while the two gate variables change in different ways. The effect of such changes can be seen, for example, at about 150 and 170 milliseconds, at the two arrows in Figures 2 and 5. Just before the first arrow, the conductance remains constant while the membrane potential changes. At the first arrow, the conductance in the training data was double that at the second arrow (Figure 2); however, the RNN could not reproduce this feature (the arrows in Figure 5). We analyzed the time courses of the two gate variables of the Na+ channel in the period from just before the first arrow to the second one and found that this result is caused by an increase in the value of the inactivation gate variable h of the Na+ channel due to the hyperpolarization of the membrane potential before the first arrow. We intend to revise the RNN structure so that it can learn such behavior.
Our estimation method was used to estimate the conductance dynamics not only of the Na+ and K+ channels of the squid giant axon but also of another channel (Figures 11 and 12). An action potential that cannot be reproduced by the HH model, namely, one with a rapid initiation slope and variable onset times, has recently been reported [12]. It may be possible for an RNN to reproduce such an action potential when the RNN is trained using appropriate training data.
In the future, we intend to find more appropriate training data to enhance the reproducibility of our estimation method and to estimate the dynamics of several other channel conductances using RNNs, so as to simulate many types of neurons.

Conclusion
In order to simulate neuronal activities, it has conventionally been necessary to estimate the dynamics of the neuronal channel conductances and formulate them as differential equations based on experimental data. We developed an automatic method of estimating the dynamics of channel conductance by using a fully connected recurrent neural network. Two RNNs were trained using the Na+ and K+ channel conductance data of the squid giant axon, and by using these trained RNNs, the neuronal activities of the axon could be reproduced. Thus, our RNN estimation method can automatically estimate the conductance dynamics of a new channel from experimental data, and neuronal activities can easily be simulated using the estimated dynamics.

Appendix
The equations of the Hodgkin-Huxley model are as follows. In these equations, V denotes the membrane potential; C = 1.0 μF/cm² is the membrane capacitance; and I_Na, I_K, and I_L denote the Na+, K+, and leak currents, respectively. I_In denotes the injected current; V_Na = 50 mV is the Na+ equilibrium potential; V_K = −77 mV is the K+ equilibrium potential; and V_L = −54.4 mV is the leak equilibrium potential. g_Na is the Na+ conductance, whose activation and inactivation gate variables are referred to as m and h, and g_K is the K+ conductance, whose activation gate variable is n. The gate variables change with time and voltage: in the steady state, an activation gate variable tends to increase with the membrane potential, whereas an inactivation gate variable decreases with the potential. Here, ḡ_Na = 120 mS/cm² is the maximum Na+ conductance; ḡ_K = 36 mS/cm² is the maximum K+ conductance; and g_L = 0.3 mS/cm² is the maximum leak conductance. α_m and β_m are the rate constants for the m gate; α_h and β_h are those for the h gate; and α_n and β_n are those for the n gate.
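In the standard Hodgkin-Huxley formulation consistent with the parameters and variable names given above (the labels follow the references (A.1)-(A.3) in the text; the voltage-dependent rate functions α_q and β_q are omitted here and should be taken from the original model):

```latex
C\,\frac{dV}{dt} = I_{\mathrm{In}} - I_{\mathrm{Na}} - I_{\mathrm{K}} - I_{\mathrm{L}},
\qquad \text{(A.1)}
\\[6pt]
I_{\mathrm{Na}} = g_{\mathrm{Na}}\,(V - V_{\mathrm{Na}}),
\qquad
g_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^{3} h,
\qquad \text{(A.2)}
\\[6pt]
I_{\mathrm{K}} = g_{\mathrm{K}}\,(V - V_{\mathrm{K}}),
\qquad
g_{\mathrm{K}} = \bar{g}_{\mathrm{K}}\, n^{4},
\qquad \text{(A.3)}
\\[6pt]
I_{\mathrm{L}} = g_{\mathrm{L}}\,(V - V_{\mathrm{L}}),
\qquad
\frac{dq}{dt} = \alpha_{q}(V)\,(1 - q) - \beta_{q}(V)\,q,
\quad q \in \{m, h, n\}.
```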

Figure 1: Fully connected RNN used in this study. The open squares, open circles, filled circle, and filled square represent the input units, hidden units, output unit, and bias unit, respectively. The rounded rectangle represents the aggregate of hidden units.

Figure 3: Training data for the K+ conductance. The explanation of (top) and (bottom) is the same as that for Figure 2.

Figure 4: Calculation flows for the SG-neuron (a) and the RNN-neuron (b). Rectangles: calculated variables; circles: use of differential equations. One time step elapses every cycle.

Figure 7: Presence of a threshold and action potential in the RNN-neuron. (a) The solid line represents the output from the RNN-neuron, and the dotted line represents that from the SG-neuron. The weak stimuli did not induce action potentials, and the strong stimuli induced action potentials. The stimulation is represented at the bottom. (b) The relation between stimulation intensities and the peak values of the action potentials; the triangles are those of the SG-neuron, and the squares are those of the RNN-neuron. (c) The time courses of the channel conductances of the RNN-neuron; the solid lines represent the output from the Na+-RNN or the K+-RNN, and the dotted lines represent the dynamics of the Na+ conductance of (A.2) or the K+ conductance of (A.3).

Figure 8: Refractory phenomenon of the action potentials observed in the RNN-neuron.

Figure 10: Periodic action potentials induced by a weak (a) and a strong (b) constant stimulation, and the relationship between the periods and the intensities of the constant stimulation (c). The filled triangles are those of the SG-neuron, and the filled squares are those of the RNN-neuron.