Quasi-Discrete Dynamics of a Neural Net: The Lighthouse Model

This paper studies the features of a net of pulse-coupled model neurons, taking into account the dynamics of dendrites and axons. The axonal pulses are modelled by δ-functions. In the case of small damping of the dendritic currents, the model can be treated exactly and explicitly. Because of the δ-functions, the phase equations can be converted into algebraic equations at discrete times. We first exemplify our procedure for two neurons and then present the results for N neurons. We admit a general dependence of the input and coupling strengths on the neuronal indices.


THE MODEL
In my paper I describe a model that I recently developed [1]. It adopts a middle position between two well-known extreme models. The one widely known model is that of McCulloch and Pitts [2], which assumes that the neurons have only two states, one resting state and one firing state. The firing state is reached when the sum of the inputs from other neurons exceeds a certain level. The other extreme is represented by modelling neurons by means of the Hodgkin-Huxley model. This model, originally devised to understand the properties of axonal pulses, has been applied to the generation of pulse trains by neurons [3]. Further related models are based on the concept of integrate-and-fire neurons [4,5]. While these models deal with phase couplings via pulses, another kind of phase coupling is achieved by models of the Kuramoto type [6,7]. For recent work cf. Tass and Haken [8,9].
We first consider the generation of dendritic currents by means of axonal pulses via the synapses.
H. HAKEN

We formulate the corresponding equation for the dendritic current ψ as follows:

ψ̇(t) = a P(t − τ) − γψ(t) + F_ψ(t),   (1)

where P is the axonal pulse, τ a time delay, γ a decay constant, and F_ψ a fluctuating force. As is known from statistical physics, whenever damping occurs, fluctuating forces are present. As usual, we shall assume that the fluctuating forces are δ-correlated in time. As is known, vesicles that release neurotransmitters and thus eventually give rise to the dendritic current can open spontaneously. This will be the main reason for the fluctuating force F_ψ, but other noise sources may be considered here as well. When a pulse comes in, the opening of a vesicle occurs only with some probability. Thus we have to admit that, in a more appropriate description, a is a randomly fluctuating quantity.
While F_ψ in (1) represents additive noise, a represents multiplicative noise. In order to describe the pulses properly, we introduce a phase angle φ and connect P with φ through a function f. We require the following properties of f: it is periodic,

f(φ + 2π) = f(φ),   (4)

and sharply peaked.   (5)

Finally, we have to establish a relationship between the phase angle φ of the pulse P produced by the neuron under consideration and the dendritic currents. To this end, we write

φ̇(t) = S(X) + F_φ(t),   (6)

where the function S(X) has the following properties: S is equal to zero for X smaller than a threshold Θ; then it increases in a quasi-linear fashion until it saturates. Denoting the dendritic currents of other neurons by ψ_m, we write S in the form

S(X) = S(Σ_m c_m ψ_m(t − τ′) + p_ext(t − τ″)).   (7)

Here, τ′ and τ″ are delay times. p_ext is an external signal that is transferred to the neuron under consideration from sensory neurons. A simple explicit representation of (7), obeying the properties just required for S, is given by

S = Σ_m c_m ψ_m(t − τ′) + p_ext(t − τ″) − Θ for S > 0, and S = 0 otherwise.   (8)

The interpretation of Eq. (6) is based on the functioning of a lighthouse, in which a light beam rotates.
The rotation speed φ̇ depends on S according to (6).
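As a concrete illustration of Eqs. (1) and (6)–(8), the following sketch integrates a single self-excited lighthouse neuron by the Euler method. Everything here is an assumption for demonstration purposes: the parameter values, the unit self-coupling (the neuron's own dendritic current ψ is fed back as X), the vanishing delays and noise terms, and the representation of each δ-pulse as a rectangle of unit area.

```python
import numpy as np

# Illustrative simulation of a single "lighthouse" neuron, Eqs. (1), (6)-(8).
# All parameter values are assumptions; delays and noise are set to zero,
# and the neuron's own dendritic current is fed back with unit coupling.
gamma, a, theta = 0.5, 1.0, 0.2     # dendritic damping, pulse strength, threshold
p_ext = 1.0                         # constant external signal
dt, T = 1e-3, 200.0                 # Euler step and total simulated time
# note: the effective coupling (here a * 1) must stay below 2*pi*gamma,
# otherwise the firing rate runs away

psi, phi = 0.0, 0.0
spike_times = []
n_crossed = 0
for step in range(int(T / dt)):
    t = step * dt
    # axonal pulse train P(t): a delta-like kick each time phi passes 2*pi*n,
    # approximated by a rectangle of height 1/dt and width dt (unit area)
    P = 0.0
    if phi >= 2 * np.pi * (n_crossed + 1):
        P = 1.0 / dt
        n_crossed += 1
        spike_times.append(t)
    psi += dt * (a * P - gamma * psi)       # Eq. (1) without delay and noise
    X = psi + p_ext                         # assumed self-feedback of psi
    phi += dt * max(X - theta, 0.0)         # Eqs. (6), (8): S(X) = X - theta for X > theta

intervals = np.diff(spike_times)
print("last interspike intervals:", intervals[-3:])
```

After a transient, the interspike intervals settle to a constant value, which is exactly the quasi-discrete steady state analysed below.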
We start from (9)–(11) and assume that the system operates in the linear regime of S (cf. (8)). In this section we neglect delays, i.e. we put τ = τ′ = 0.
for both positive and negative A.   (55)

Using (54) and the property of the δ-function, we obtain the corresponding relations at the pulse times. Let us discuss the times t_n and t″_m in more detail. These times are defined by

φ₁(t_n) = 2πn,   φ₂(t″_m) = 2πm.

PHASE RELAXATION AND THE IMPACT OF NOISE
In the preceding section we derived equations for the phase-deviations ξ_j(t) from the phase-locked state. Equation (58) refers to the phase-difference ξ = ξ₂ − ξ₁ and reads (with B = 0)

ξ̇(t) = −a Σ_n δ(t − t_n) ξ(t).   (59)

In the following we again use the abbreviation

a = A/c.   (60)

Because of the δ-functions in (59), ξ is to be taken at the discrete times t_n. Because the phase φ refers to the steady state, a in (60) is a constant. We first study the solution of (59) in the interval

t_{n−1} + ε ≤ t ≤ t_n − ε   (61)

and obtain

ξ(t) = ξ(t_{n−1} + ε) = const.   (62)

At times t_n we integrate (59) over a small interval around t_n and obtain

ξ(t_n + ε) = ξ(t_n − ε) − a ξ(t_n − ε).   (63)

Since ξ undergoes a jump at time t_n, there is an ambiguity with respect to the evaluation of the last term in (63). Instead of t_n − ε we might equally well choose t_n + ε, or an average over both expressions. Since we assume, however, that a is a small quantity, the error is of higher order and we shall therefore choose t_n − ε, as shown in Eq. (63). (Taking the average amounts to replacing (1 − a) by (1 − a/2)/(1 + a/2).) On the r.h.s. of (63) we insert (62) and thus obtain

ξ(t_n + ε) = (1 − a) ξ(t_{n−1} + ε).   (64)

Since the t_n's are equally spaced, we put

t_n − t_{n−1} = Δ.   (65)

For the interval

t_N < t < t_{N+1}   (66)

the solution reads

ξ(t) = (1 − a)^N ξ(t_0 + ε).   (67)

Since the absolute value of 1 − a is smaller than unity, (67) shows that the phase deviation ξ(t) relaxes towards zero in the course of time.
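The geometric relaxation just derived can be checked in a few lines; the values of a, ξ₀ and N below are arbitrary illustrative choices.

```python
# Numerical check of the kick relaxation: each firing time multiplies the
# phase deviation by (1 - a), so it decays geometrically. The values of
# a, xi0 and N are arbitrary illustrative choices.
a, xi0, N = 0.15, 1.0, 50
xi = xi0
for n in range(N):
    xi *= 1.0 - a                       # jump condition at each t_n
closed_form = xi0 * (1.0 - a) ** N      # closed-form solution
# averaging the kick (see text) would replace (1 - a) by (1 - a/2)/(1 + a/2),
# which agrees with (1 - a) to first order in a
print(xi, closed_form)
```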
Introducing the variable x instead of ξ, we can rewrite (73) in an obvious manner by means of x:

x_n = (1 − a){x_{n−1} e^{−γΔ} + B̂_n}.   (74)

To solve the set of Eqs. (74), we make the substitution

x_n = ((1 − a) e^{−γΔ})^n y_n   (75)

and obtain a recursion formula for y_n:

y_n = y_{n−1} + (1 − a)^{−n+1} e^{γΔn} B̂_n.   (76)

Summing up over both sides of (76) and using (75), we obtain the final result in the form (with t_n − t_{n−1} = Δ)

x_N = ((1 − a) e^{−γΔ})^N { y_0 + Σ_{n=1}^{N} (1 − a)^{−n+1} e^{γΔn} B̂_n }.   (79)

In order to evaluate (79), we need the stochastic properties of B. Before we proceed further, we discuss the case in which B(t) is singular, for instance of the form B δ(t − t_{n₀}).
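A quick numerical consistency check of the substitution: iterating the recursion directly and evaluating the summed form give the same x_N. The parameter values and the kick sequence B̂_n below are invented for the test.

```python
import numpy as np

# Consistency check: the substitution x_n = ((1-a) e^{-gamma*Delta})^n y_n
# turns the kicked recursion into a plain sum. Parameters and the kick
# sequence B_n are illustrative assumptions.
rng = np.random.default_rng(0)
a, gamma, Delta, N = 0.1, 0.2, 1.0, 40
B = rng.normal(size=N + 1)

q = (1.0 - a) * np.exp(-gamma * Delta)

# direct iteration of the recursion x_n = (1-a){x_{n-1} e^{-gamma*Delta} + B_n}
x = 0.0
for n in range(1, N + 1):
    x = (1.0 - a) * (x * np.exp(-gamma * Delta) + B[n])

# summed form: x_N = q^N * sum_k (1-a)^{-k+1} e^{gamma*Delta*k} B_k  (y_0 = 0)
y = sum((1.0 - a) ** (-k + 1) * np.exp(gamma * Delta * k) * B[k]
        for k in range(1, N + 1))
x_summed = q ** N * y
print(x, x_summed)
```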
For t < t_{n₀} we can proceed in analogy to Eqs. (59)–(67). For t > t_{n₀}, the integration of Eq. (68) around t_{n₀} yields

ξ(t_{n₀} + ε) = ξ(t_{n₀} − ε) − a ξ(t_{n₀} − ε) + B,   (82)

and in between the singularities we have the corresponding solution of (68). To be still more specific, let us assume that for t < t_{n₀} the solution reads

ξ(t_n) = 0.   (85)

Then instead of (82) we obtain

ξ(t_{n₀} + ε) = B,   (86)

which means that we now proceed as we did following Eq. (59); (86) acts just as an initial condition.
We now treat the case in which B is time independent. The integral in (79) can then be evaluated immediately. It tells us that the effect of the perturbation persists and that x_N eventually acquires a constant value. We now turn to the case in which B is a stochastic function of time, where we shall assume that the statistical average over B vanishes. In the following we shall study the correlation function

⟨x_N x_{N′}⟩ for N large, |N − N′| finite.   (89)

Using (79), the correlation function can be written as a double sum over the correlation functions of the kicks (90). We evaluate (90) in the case

N′ > N   (91)

and assume further that B is δ-correlated with strength Q. Then (90) reduces to a geometric sum (92). The evaluation of the sum in (92) is straightforward and yields

R = (Q/(2γΔ)) (e^{2γΔ} − 1) {(1 − a)^{−2} e^{2γΔ} − 1}^{−1} (1 − a)^{N′−N} e^{−γΔ(N′−N)},   (93)

which for

a ≪ 1   (94)

can be written as

R ≈ Q̃ e^{−(γΔ+a)(N′−N)},   (95)

where Q̃ collects the (N′ − N)-independent prefactor of (93).

The correlation function has the same form as we would expect from a purely continuous treatment of Eq. (68), i.e. one in which the δ-functions are smeared out.
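This exponential decay of correlations can be probed by a small Monte Carlo experiment on the kicked recursion, with the δ-correlated kicks drawn as independent Gaussians; all parameter values below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check: for independent (delta-correlated) kicks B_n, the
# stationary correlation <x_N x_N'> / <x_N^2> decays like
# ((1-a) e^{-gamma*Delta})^{N'-N} ~ e^{-(gamma*Delta + a)(N'-N)} for small a.
# Parameters are illustrative assumptions.
rng = np.random.default_rng(1)
a, gamma, Delta = 0.05, 0.1, 1.0
q = (1.0 - a) * np.exp(-gamma * Delta)
trials, burn, lag = 200_000, 200, 5

x = np.zeros(trials)
samples = []
for n in range(burn + lag + 1):
    x = (1.0 - a) * (x * np.exp(-gamma * Delta) + rng.normal(size=trials))
    if n >= burn:                 # keep samples only after relaxation
        samples.append(x.copy())

corr = np.mean(samples[0] * samples[lag]) / np.mean(samples[0] ** 2)
print(corr, q ** lag)
```

The estimated correlation ratio agrees with q^(N′−N) up to Monte Carlo error.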
TWO NEURONS: EXPLICIT SOLUTION OF THE PHASE-LOCKED STATE

In the preceding section we studied a variety of deviations from the phase-locked state, of which we needed only a few general properties. In this section we wish to construct that state explicitly. Its equation is of the form

φ̈ + γφ̇ = A f(φ) + C.   (96)

By making the transformation

φ = (C/γ)t + χ ≡ ct + χ,   (97)

we can cast (96) into

χ̈ + γχ̇ = A f(χ + ct).   (98)

We note that χ is continuous everywhere,

χ(t + ε) = χ(t − ε).   (99)

On the other hand, because of the singular character of (12), which is explicitly expressed by (14) and (15), integrating (98) over the time-interval (t_{n+1} − ε, t_{n+1} + ε), we immediately obtain

χ̇(t_{n+1} + ε) − χ̇(t_{n+1} − ε) = A for t = t_{n+1}.   (100)

On the other hand, for the time-interval t_n + ε ≤ t ≤ t_{n+1} − ε, we obtain

χ̈ + γχ̇ = 0, i.e. χ̇(t) = χ̇(t_n + ε) e^{−γ(t−t_n)}.   (101)

Using (101) in (100), we obtain the recursive relation

χ̇(t_{n+1} + ε) = χ̇(t_n + ε) e^{−γ(t_{n+1}−t_n)} + A.   (102)

We first assume that the times t_n at which the jumps of the derivative of the phase occur are given quantities. In the following we shall study (102) explicitly. We first introduce the abbreviations (103)–(108). Because of (105), (108) can be cast into the form (109). Because the dependence of the jump-times on the phases φ is not specified, the solution (109) is valid quite generally. Introducing a time T so that

t_N < T < t_{N+1}   (111)

holds, we may write χ̇(T) at that general time in the form

χ̇(T) = e^{−γ(T−t_N)} χ̇(t_N + ε).   (112)

We now study the relationship between the jump-times, or rather their differences

t_{n+1} − t_n,   (113)

and the phase. According to (12) and (13), the jumps occur at time intervals (113) so that

∫_{t_n}^{t_{n+1}} φ̇ dt = 2π   (114)

holds. Because of (97), (101) and (103), we obtain

φ̇(t) = c + χ̇(t_n + ε) e^{−γ(t−t_n)}.   (115)

Inserting this relation into (114), we obtain

γ^{−1}(1 − e^{−γ(t_{n+1}−t_n)}) χ̇(t_n + ε) + c(t_{n+1} − t_n) = 2π,   (116)

which is an equation for (113) provided χ̇(t_n + ε) is known. For a small damping constant of the dendritic currents, we expect

γ(t_{n+1} − t_n) ≪ 1.   (117)

Under this condition, (116) acquires the form

(t_{n+1} − t_n)(c + χ̇(t_n + ε)) = 2π,   (118)

or, because of (115), the form

t_{n+1} − t_n = 2π / φ̇(t_n + ε).   (119)

Equation (119) tells us that the spacing of the jump-times is inversely proportional to the speed of the phase, which is quite a reasonable result.
Let us now consider the steady state, in which

χ̇(t_{n+1} + ε) = χ̇(t_n + ε)   (120)

holds. This implies that even in the general case (116) the jump-times are equidistant,

t_{n+1} − t_n = Δ.   (121)

This allows us to perform the sums that occur in (107) and (109) explicitly, and the solution of Eq. (98) can be written as

χ̇(t_N + ε) = e^{−γNΔ} χ̇(t_0) + A(1 − e^{−γNΔ})(1 − e^{−γΔ})^{−1},   (122)

whereby we use the abbreviation (110). When we ignore transients, i.e. consider the steady state, (122) simplifies to

χ̇ = χ̇(t_N + ε) = A(1 − e^{−γΔ})^{−1}.   (123)

From (115) we then obtain

φ̇(t_n + ε) = c + A(1 − e^{−γΔ})^{−1}.   (124)

Because of the coupling A, the phase velocity is increased. We can now determine Δ explicitly. We insert χ̇(t_n + ε) according to (123) into (116) and obtain

Δ = c^{−1}(2π − A/γ).   (125)

Clearly, the coupling strength A must be sufficiently small, i.e. A < 2πγ.
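The steady state can be verified numerically: iterating the exact recursion (102) together with the interval condition (116) should reproduce Δ = (2π − A/γ)/c. The parameter values below are arbitrary, subject to A < 2πγ.

```python
import math

# Illustrative check of the phase-locked steady state: iterate the exact
# recursion for chi_dot together with the jump-time condition, and verify
# that the interval converges to Delta = (2*pi - A/gamma)/c.
# Parameter values are assumptions; A < 2*pi*gamma must hold.
gamma, c, A = 0.5, 1.0, 0.8          # 2*pi*gamma ~ 3.14 > A

def next_interval(chi_dot):
    """Solve (1/gamma)(1 - e^{-gamma*D}) chi_dot + c*D = 2*pi for D by bisection."""
    lo, hi = 1e-9, 2 * math.pi / c
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        val = (1 - math.exp(-gamma * mid)) * chi_dot / gamma + c * mid - 2 * math.pi
        if val > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

chi_dot, D = 0.0, None
for _ in range(200):
    D = next_interval(chi_dot)
    chi_dot = chi_dot * math.exp(-gamma * D) + A     # exact jump recursion

D_theory = (2 * math.pi - A / gamma) / c
print(D, D_theory)
```

Note that the convergence of the iteration mirrors the relaxation of transients toward the phase-locked state.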
So far we have calculated the time-derivative of χ. It is a simple matter to repeat all the previous steps, so that we are able to derive the results for χ at time t_N and also for φ at t_N. Under the assumption of equidistant jumps, we obtain

χ(t_N) = χ(t_0) + γ^{−1} χ̇(t_0)(1 − e^{−γNΔ}) + (A/γ){N − (1 − e^{−γNΔ})(1 − e^{−γΔ})^{−1}},   (126)

φ(t_N) = cNΔ + χ(t_0) + γ^{−1} χ̇(t_0)(1 − e^{−γNΔ}) + (A/γ){N − (1 − e^{−γNΔ})(1 − e^{−γΔ})^{−1}}.   (127)

Under the choice of the initial time such that

χ(t_0) = χ̇(t_0) = 0,   (128)

we obtain in the limit of time → ∞, i.e. for the steady state,

φ(t_N) = γ^{−1}AN + cNΔ − (A/γ)(1 − e^{−γΔ})^{−1}.   (129)
Let us discuss these equations in two ways: (1) We may prescribe Δ₁ and Δ₂ and determine those C₁, C₂ (which are essentially the neural inputs) that give rise to Δ₁, Δ₂.
(2) We prescribe c₁ and c₂ and determine Δ₁, Δ₂.
Since ω_j = 2π/Δ_j are the axonal pulse frequencies, these results exhibit a number of remarkable features of the coupled neurons: according to (156), their frequency sum, i.e. their activity, is enhanced by positive coupling A. Simultaneously, according to (155), some frequency pulling occurs. According to (153), neuron 1 becomes active even for vanishing or negative c₁ (provided |c₁| 2π < c₂A/γ), if neuron 2 is activated by c₂. This has an important application to the interpretation of the perception of Kanizsa figures, and more generally to associative memory, as we shall demonstrate elsewhere.

MANY COUPLED NEURONS
The case of two neurons can be generalized to many neurons. The corresponding equations read

φ̈_j + γφ̇_j = Σ_k A_jk f(φ_k (mod 2π)) + C_j + F_j(t).
The set of linear differential equations (164) can be solved by the standard procedure. We introduce eigenvectors with components v_j^(ℓ) so that

Σ_j v_j^(ℓ) a_jk = λ^(ℓ) v_k^(ℓ).

Their solution can then be obtained as in Section 3.
The corresponding equations for the ω_ℓ are linear and read

ω_ℓ = c_ℓ + Σ_{ℓ′} (A_{ℓℓ′}/(2πγ)) ω_{ℓ′}.   (181)

They can be solved under the usual conditions.
Depending on the coupling coefficients A_{ℓℓ′}, even those ω_ℓ may become nonzero for which c_ℓ = 0. On the other hand, only those solutions are allowed for which ω_ℓ > 0 for all ℓ. This imposes limitations on c_ℓ and A_{ℓℓ′}.
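As a minimal sketch, assuming the steady-state frequencies obey the linear system ω_ℓ − Σ_{ℓ′} (A_{ℓℓ′}/(2πγ)) ω_{ℓ′} = c_ℓ suggested by (124), one can check that coupling alone activates a neuron without direct input; the matrix and inputs below are invented examples.

```python
import numpy as np

# Illustrative solution of the linear steady-state frequency equations,
# taken here in the form omega_l - sum_k (A_lk / (2*pi*gamma)) omega_k = c_l.
# The coupling matrix and inputs are arbitrary example values.
gamma = 1.0
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])          # symmetric excitatory coupling
c = np.array([0.0, 1.0])            # neuron 1 receives no direct input
M = np.eye(2) - A / (2 * np.pi * gamma)
omega = np.linalg.solve(M, c)
print(omega)
```

Both frequencies come out positive, so the solution is admissible, and the neuron with c = 0 fires purely because of its coupling to the driven neuron.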

CONCLUDING REMARKS AND OUTLOOK
In the above paper I treated a model that is highly nonlinear because of the dependence of the δ-functions on the phases φ. Nevertheless, at least in the limit of small dendritic damping, I could solve it explicitly. This model contains two thresholds. The first threshold is the conventional one: below it the neuron is quiescent, whereas above threshold the neuron fires. It was assumed that the network operates below its second threshold, where we expect pronounced saturation effects on the firing rates. Probably this region has to be explored in more detail. Also, the case in which the dendritic damping is not small might deserve further study. Preliminary considerations show that chaotic firing rates must be expected there.