Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks

It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected and flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that depends on the characteristics of the neurons and the network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum recorded from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and the solutions are not theoretically guaranteed, the estimated parameters appear consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which in turn underlies natural intelligence.

On the other hand, we identified the sequence "1101" within the poststimulus time histogram (PSTH)/instantaneous firing rate (IFR) of a cultured neuronal network in Baljon et al. [15] (Figure 1) as well as in our own network [16,17] (Figure 2). The sequence "1101" is the core portion of the reversal M-sequence "1101000", a typical pseudorandom sequence [18][19][20][21]. Electric circuits generating pseudorandom sequences are well known. Furthermore, it has been shown that, in a cultured neuronal network on a multielectrode array, pseudorandom-like sequences (codes), including "1101", are detected, and they flow with some spatial decay curve [22]. We call the chart of the appearance frequencies of short codes (e.g., length < 8) versus the various codes (by number) the "code spectrum." Each cultured neuronal network is characterized by a specific code spectrum curve. That is, the curve shape seems to depend on the characteristics of the neurons and the network configuration, including the weight distribution in the network. Therefore, we may consider it a "signature" of the network. In the present study, we performed a simulation of the code spectrum from a multielectrode array on a 2D mesh neural network using neurons whose refractory period and output delay time fluctuate from time to time around given intrinsic values; that is, the parameters of each neuron varied both intrinsically and temporally. Connection weights between neurons are randomly generated on the basis of a given statistical property. In this paper, the expression "neuronal network" is mainly used for a natural network, and "neural network" for an artificial one.

(Figure 2 caption: PSTH/IFR in which the sequence "1101" was observed. Since the spike timing becomes dispersed within trials, the peak positions and shapes gradually change as time elapses. In particular, risings and peaks tend to become faster than the regular timing, led by the fastest spike among the dispersed spikes, while the slowest spike is effectively cancelled by the succeeding fastest negative spike (pulling down the tail of the peak), lowering the peak height [16,17].)
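The relation between "1101" and a maximum-length sequence can be made concrete with a short sketch. The following generates the degree-3 m-sequence "1110100" with a Fibonacci linear feedback shift register and recovers "1101000" from it; reading "reversal" as reverse-and-complement, as well as the tap positions and seed, are our illustrative assumptions:

```python
def lfsr_m_sequence(seed, taps, length):
    """Fibonacci LFSR: emit `length` bits; feedback is the XOR of two tapped stages."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])              # output the oldest bit
        fb = state[taps[0]] ^ state[taps[1]]
        state = [fb] + state[:-1]          # shift the feedback bit in
    return out

# Taps chosen to give a maximal-length (period 2**3 - 1 = 7) sequence, seed all ones.
m = lfsr_m_sequence([1, 1, 1], (0, 2), 7)
m_str = "".join(map(str, m))                          # "1110100"

# "Reversal" M-sequence, read here as reverse-and-complement of the m-sequence.
reversal = "".join(str(1 - b) for b in reversed(m))   # "1101000"
assert "1101" in reversal                             # the core code seen in the PSTH/IFR
```

Any hardware or software shift register of this kind yields the same family of period-7 sequences up to cyclic shift.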

Network Configuration.
We performed a computer simulation to observe spike propagation on a 2D network with a 33 × 33 mesh of neurons. Each neuron has connection weights Wij to and from its eight neighboring neurons, as shown in Figure 3.

(Figure 3 caption: Inputs to a neuron in a 2D mesh neural network. Each neuron (red) receives input spikes from eight neighboring neurons (blue) through connection weights Wij ∈ [−1, 1]. The neuron integrates the weighted spikes during its accepting period and outputs a spike after a delay time if the integrated value exceeds zero.)
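The 8-neighbor connectivity of the mesh, with no neurons outside the 33 × 33 lattice as in our configuration, can be enumerated as in the following sketch (the function name and index representation are our assumptions):

```python
def neighbors(i, j, size=33):
    """Return the (row, col) indices of the 8-neighborhood of neuron (i, j)
    on a size x size mesh with no wraparound (no neurons outside the lattice)."""
    result = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue                    # a neuron is not its own neighbor
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                result.append((ni, nj))
    return result

# Interior neurons have 8 neighbors; edge neurons have 5 and corner neurons 3.
```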
In short, as the weight-balance parameter increases, the number of negative weights increases. For example, if the parameter is 1/3, the weights distribute uniformly in [−1/3, 1]; that is, the ratio of positive to negative weights is 3 : 1. This ratio is a typical one in real neuronal networks. In a 2D mesh artificial neural network model with weights to and from all eight neighboring neurons, however, the weight distribution must differ because of the different network shape, and each of the eight neighboring weights is generated randomly, which usually does not result in zero. Of course, one could design a model that first decides whether a connection exists and then, if it does, decides its weight value. For simplicity, we instead designed the network so that the absence of a connection between adjacent neurons (i.e., weight zero) is effectively realized by suppression with negative weights. The weights distribute in [−1, 1] but not uniformly; that is, the ratio of positive to negative weights is 1 : 4. Although this setting of the weights is simple, the result was not much different from that obtained using separate settings of the weight values and their positive or negative signs.

(Figure 4 caption: Integrate-and-fire model without leakage but with fluctuation in the parameters of each neuron. Each neuron has an inherent accepting period and output delay time; these parameters vary with time within certain fluctuation ranges. The neuron integrates weighted input spikes during the accepting period for the k-th firing and, after the refractory period ends, decides at every time point whether the integrated value exceeds zero. If so, it outputs the k-th output spike after the delay time.)
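A minimal sketch of the weight setting described above: drawing each neighbor weight uniformly from [−b, 1], so that a balance parameter b = 1/3 yields roughly a 3 : 1 ratio of positive to negative weights (the names and sampling details are our assumptions):

```python
import random

def generate_weights(n_connections, b, seed=0):
    """Draw connection weights uniformly from [-b, 1]; a larger b yields
    more negative weights (absent connections are emulated by suppression)."""
    rng = random.Random(seed)
    return [rng.uniform(-b, 1.0) for _ in range(n_connections)]

w = generate_weights(100000, 1.0 / 3.0)
positive = sum(1 for x in w if x > 0) / len(w)
# With b = 1/3 the positive fraction is about 3/4, i.e., a 3 : 1 ratio.
```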

Characteristics of Neurons.
The neuron model used here is shown in Figure 4; it is a type of integrate-and-fire model without leak [25]. A neuron accumulates weighted inputs during its accepting period. If the accumulated weighted input becomes positive and the neuron is not in its refractory period, the neuron outputs a spike after a delay time as its k-th firing. Some parameters used in the simulation are given as examples below.
A basic accepting period Ai intrinsic to neuron i is randomly generated in each network within the range Ai ∈ {A0 − 2, A0 − 1, A0, A0 + 1, A0 + 2} with a probability of 1/5 each, where A0 is a fixed parameter common throughout the network and the unit time (bin) is 0.1 ms. The actual instantaneous accepting period of neuron i at its k-th firing is further randomly drawn within a fluctuation range Δ(Ai) around Ai. The instantaneous refractory period is implicitly assumed to be slightly smaller than the accepting period. The accepting period can also be called the input-integration or accumulation period for firing. Roughly speaking, the accepting and refractory periods can be regarded as the same. The software can stimulate neurons with arbitrary spatiotemporal patterns. Typically, a combination of three neurons is stimulated simultaneously such that the states of the neurons are set to "1" at time 1 (bin no. 1). We stimulated three neurons simultaneously instead of only one because, first, in wet experiments many neurons around an electrode seem to be stimulated simultaneously by an electrical stimulation, and second, parallel stimulation of multiple neurons seems to promote stable propagation of excitation according to computer simulations.
Parameters used in the simulation are as follows. The basic accepting period A0 is a preassigned network parameter such as 20, 30, . . . , 100, where the unit time (bin width) is 0.1 ms. The instantaneous accepting period is randomly selected at each firing from Δ(Ai) = {Ai − 1, Ai, Ai + 1} with probabilities p, 1 − 2p, and p, respectively, where p = 1/12. This value defines how much the instantaneous accepting period fluctuates from time to time. If we increase p or expand the range Δ(Ai), the fidelity of communication or information flow decreases. The above value was determined by a trade-off between such considerations and attaining 99% communication fidelity. Details will be presented in a forthcoming paper.
The output delay is treated similarly. The basic output delay time Di is randomly selected from {2, 3, . . . , 8} with a probability of 1/7 each. The instantaneous output delay time is selected from Δ(Di) = {Di − 1, Di, Di + 1} with p = 1/12, as for Δ(Ai). We utilized an integrate-and-fire model without leak but with a fluctuating accepting period. Our model is more stringent than the leaky integrate-and-fire model because it randomly neglects old spikes, whereas the leaky model neglects them gradually. However, our simulations show that ignoring a portion of the spikes, in either manner, is not fatal and not essential but only influences the quality of communication. This robustness is a strong point of neural networks composed of essentially fluctuating neurons. The fluctuating time parameters are concentrated in two parameters: the accepting period, preceding the firing-decision time, and the output delay time, following it. These design choices were determined by balancing the degree of complication, the essential precision of the results, and the computation time.
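The rules above can be summarized in a single-neuron sketch: leakless integration over a fluctuating accepting period, firing when the accumulated input exceeds zero, and output after a fluctuating delay. All names, and the simplification of treating the refractory period as handled outside this class, are our assumptions:

```python
import random

class FluctuatingNeuron:
    """Leakless integrate-and-fire neuron with intrinsic and instantaneous
    fluctuation of the accepting period and output delay (bin = 0.1 ms)."""

    P = 1 / 12  # probability of shifting the instantaneous value by -1 or +1 bin

    def __init__(self, a0, rng=None):
        self.rng = rng or random.Random(0)
        # Intrinsic accepting period: A0 plus one of {-2,...,+2}, prob 1/5 each.
        self.accept = a0 + self.rng.choice([-2, -1, 0, 1, 2])
        # Intrinsic output delay: drawn from {2,...,8}, prob 1/7 each.
        self.delay = self.rng.choice(range(2, 9))

    def _fluctuate(self, base):
        # Instantaneous value: base-1, base, base+1 with prob p, 1-2p, p.
        u = self.rng.random()
        if u < self.P:
            return base - 1
        if u < 2 * self.P:
            return base + 1
        return base

    def fire_time(self, weighted_inputs, t):
        """If the weighted inputs (time, weight) accumulated over the instantaneous
        accepting period ending at bin t exceed zero, return the output spike bin."""
        a = self._fluctuate(self.accept)
        total = sum(w for (tw, w) in weighted_inputs if t - a < tw <= t)
        if total > 0:
            return t + self._fluctuate(self.delay)
        return None
```

For example, a neuron with A0 = 30 that accumulates two positive weighted spikes within its accepting window fires between 1 and 9 bins after the decision time.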
Results of a preliminary experiment stimulating a 9 × 9 mesh neural network are shown in Figure 5, where three neurons were stimulated at time t = 0. We can observe "spike waves" propagating from the stimulation sources, and the waves often behave like a Synfire chain [9,10]. This suggests that the codes are a part of these "spike waves"; that is, the code flow may be regarded as a component of such spike waves. It is possible to acquire codes in wet experiments and compare them with the simulation. In the present study, however, as a preliminary step, we focused on simulating not only the entire spike waves but also the flow of codes. Figure 6 shows the arrangement of a multielectrode array on a 33 × 33 2D mesh neural network. Each neuron has directional connections with its eight neighboring neurons. Each electrode gathers spikes from two to nine surrounding neurons. The major differences from the wet experimental configuration [22] are as follows: (1) there are no neurons and no connections outside the explicit 1089 neurons, (2) the neurons are placed on a regular lattice, unlike an irregularly shaped cultured neuronal network, and (3) the distances between electrodes are smaller than those in the cultured neuronal network. Difference (3) can be regarded as a smaller-scale version of (1), and it corresponds to the fact that we regard the spike data from different electrodes as independent events. That is, in (1) and (3), we neglect the behavior of neurons outside the noted neurons.
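Since each electrode gathers spikes from several surrounding neurons without sorting, its observed train can be modeled as the superposition of those neurons' trains. A sketch with binary bins and a logical-OR merge (our simplification) follows:

```python
def electrode_train(neuron_trains):
    """Merge the binary spike trains (lists of 0/1 per 0.1 ms bin) of the
    neurons around one electrode into the electrode's observed train."""
    length = max(len(t) for t in neuron_trains)
    merged = [0] * length
    for train in neuron_trains:
        for i, s in enumerate(train):
            if s:
                merged[i] = 1     # a spike on any surrounding neuron is recorded
    return merged

# Two neurons around one electrode:
print(electrode_train([[1, 0, 0, 1], [0, 1, 0, 0]]))  # -> [1, 1, 0, 1]
```

This merging is what allows codes to be composed of spikes from different neurons, as discussed later.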

Component Code Spectrum.
By changing the basic accepting period A0 and the positive and negative weight-balance parameter, we can generate spike trains for each type of electrode Em catching spikes from n (n = 2, 3, . . . , 9) surrounding neurons. We then decode the spike trains to obtain a code spectrum for each n, which we call a "component spectrum" Cn. Figure 7 shows an example of simulated spike trains caught by electrode Em. Figure 8 shows examples of component code spectra of simulated spike sequences for several values of A0 (the median of the accepting period) and the connection weight parameter. In practice, the number of neurons emitting spikes to each electrode changes from electrode to electrode according to its statistical distribution. Therefore, the actually observed code spectra will be mixtures according to the probability distribution of the number of neurons around the electrodes. Figure 9(a) shows the average number of codes detected in recorded spike trains over 9 trials for code numbers 1-21 (the code spectrum) from 63 electrodes of Sample A during 200 ms after stimulation, expressed as 2000 time bins of data. Sample A is the same as that presented in [22].
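The decoding step can be sketched as follows. We enumerate the 21 candidate codes as binary words of length 3-8 that begin and end with "1" and contain exactly three "1"s, which is consistent with the examples code 1 = "111" through code 21 = "11000001" (the exact numbering order used in [22] is an assumption), and count their occurrences in a binned spike train:

```python
def candidate_codes():
    """All binary words of length 3-8 that start and end with '1'
    and contain exactly three '1's: 1+2+3+4+5+6 = 21 codes in total."""
    codes = []
    for length in range(3, 9):
        for mid in range(1, length - 1):   # position of the middle '1'
            bits = ["0"] * length
            bits[0] = bits[-1] = "1"
            bits[mid] = "1"
            codes.append("".join(bits))
    return codes

def code_spectrum(spike_train, codes):
    """Count occurrences of each code in a binary spike-train string."""
    return {c: sum(1 for i in range(len(spike_train) - len(c) + 1)
                   if spike_train[i:i + len(c)] == c)
            for c in codes}

codes = candidate_codes()
spectrum = code_spectrum("0110110100010011", codes)
```

Plotting the counts over the 21 codes gives the spectrum curve used as the network "signature."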

Spectrum Fitting with Components.
The blue curve in Figure 9(a) is the code spectrum of the recorded data of Sample A. The orange curve in Figure 9(a) shows the best fit to the number of codes detected in spike trains of 2000 time bins of Sample A, using code spectra of artificially generated spike sequences with different accepting periods A0, connection weight parameters, and probability distributions of n. Although this is an inverse problem and imperfect, estimating the neuronal parameters A0, the weight-balance parameter, and the probability distribution of n is possible to some extent. Table 1 shows the normalized squared error of a simulated code spectrum relative to that of the recorded data of Sample A when changing the probability distribution of n, that is, the combination of component code spectra for various A0 and weight-balance values. Some root mean square (RMS) error data are not shown (marked "-") because the number of detected codes was too small to calculate the component code spectrum, as such spikes disappeared within the given time. Tables 2 and 3 show the same for Sample B in Figure 9(b) and Sample C in Figure 9(c), respectively.
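The fitting step is an instance of approximating the observed spectrum by a nonnegative mixture of component spectra. A sketch using simple multiplicative-update nonnegative least squares follows; the solver choice is ours, and the real fit in the text additionally searches over A0 and the weight parameter:

```python
import numpy as np

def fit_mixture(components, observed, iters=20000):
    """Find nonnegative mixture weights w minimizing ||components @ w - observed||
    via Lee-Seung multiplicative updates (components: codes x n, nonnegative)."""
    A = np.asarray(components, dtype=float)
    y = np.asarray(observed, dtype=float)
    w = np.full(A.shape[1], 1.0 / A.shape[1])
    for _ in range(iters):
        w *= (A.T @ y) / (A.T @ (A @ w) + 1e-12)   # keeps w nonnegative
    return w

def normalized_rms_error(components, w, observed):
    resid = np.asarray(components) @ w - np.asarray(observed)
    return np.sqrt(np.mean(resid ** 2)) / (np.mean(observed) + 1e-12)

# Synthetic check: mix three component spectra with known proportions.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(21, 3))   # 3 component spectra over 21 codes
w_true = np.array([0.5, 0.3, 0.2])
y = A @ w_true
w_est = fit_mixture(A, y)
```

The multiplicative update never produces negative weights, which matches the interpretation of w as a probability distribution over the number of neurons n after normalization.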

Expansion of n.
In some cases, the best estimate of the probability distribution of n had a large value at n = 9, suggesting that there are more than nine neurons around the electrode. Therefore, we increased the maximum number to 16. Figure 10 shows the expanded code spectral components up to C16. Although C2-C9 have various shapes, C9-C16 have similar shapes. When the best-fit process includes a large number of parameters, the computation time becomes long. Therefore, it may be reasonable to regard C9 as the representative component of C9-C16 to decrease the computation time.

Maximum Cross-Correlations.
Because the simulation size was limited, the separation distances between the electrodes were very small (1-4 times the neuron pitch). Nevertheless, using the same method as in [22] for the natural neuronal network, we calculated the maximum cross-correlation Φ of a trial among the eight and 20 neighbors within a time-frame difference of 0.5 ms for C2-C16 and the 14 major codes (Figure 11). However, since the electrodes were located relatively closer together than in [22], and the co-occurrence probability of each code between two electrodes was therefore expected to be higher, normalization of the cross-correlation by code length was not applied. As shown in Figure 11(a), in the case of C4 and C5, the maximum cross-correlation is rather flat in comparison with that of Sample A of the natural neuronal network of [22]. The characteristics of C6-C11 are shown in Figure 11.

(Figure 8 caption: Code spectrum components for several parameters of the accepting period A0 and the positive and negative weight balance. The horizontal axis represents the code numbers (1, 2, . . . , 21) whose number of "1"s in the code is 3; that is, code 1 = "111", code 2 = "1011", code 3 = "1101", code 4 = "10011", code 5 = "10101", . . . , code 21 = "11000001" [22].)

(Figure 9 caption: Codes of length 6-20 bins are detected with 1% time accuracy, although practically several percent because of the 0.1 ms bin width. This curve can be considered the "signature" of the spike trains. The orange curve is the best fit to the code spectrum using a simulated spectrum.)

Table 1: Normalized RMS error for various parameters of neurons used to fit the data in Figure 9(a). Table 2: Normalized RMS errors for various parameters of neurons used to fit the data in Figure 9(b). Table 3: Normalized RMS errors for various parameters of neurons used to fit the data in Figure 9(c).
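The maximum cross-correlation between the code-occurrence trains of two electrodes can be sketched as follows, using binary trains per bin; the lag search and the raw (unnormalized) count mirror the fact that, as noted, normalization by code length was not applied:

```python
def max_cross_correlation(train_a, train_b, max_lag):
    """Return (best_lag, best_value): the lag in [-max_lag, max_lag] maximizing
    the raw cross-correlation (count of co-occurring 1s) of two binary trains."""
    n = len(train_a)
    best_lag, best_value = 0, -1
    for lag in range(-max_lag, max_lag + 1):
        value = sum(train_a[t] * train_b[t + lag]
                    for t in range(n) if 0 <= t + lag < n)
        if value > best_value:
            best_lag, best_value = lag, value
    return best_lag, best_value

# A code flowing from electrode A to electrode B with a 5-bin delay:
a = [0] * 20
b = [0] * 20
for t in (3, 6, 7):       # code-occurrence times at A
    a[t] = 1
    b[t + 5] = 1          # the same occurrences arrive 5 bins later at B
```

A sharp peak of Φ at a nonzero lag is the signature of a code flowing between the two electrodes.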

Discussion and Conclusions
In the present study, we constructed a 2D mesh neural network model and an input/output (I/O) functional model of neurons. A 2D mesh model may differ from a real cultured neuronal network, which will have a nonhomogeneous distribution of neurons [26]. The I/O functional model of neurons has certain intrinsic and instantaneous fluctuations of the refractory period and output delay time [11]. There may be some discrepancies compared with real cultured neurons; however, we have shown that, from the perspective of the code spectrum of spike trains, it is possible to simulate the electrical activity of a neuronal network on a multielectrode plate (MED plate) with a 2D mesh neural network model. Moreover, it was possible to estimate the neuronal network parameters to some extent by recording spike trains with electrodes without spike sorting. For example, although there is no guarantee that they are the true values, the estimated typical accepting periods were 8-9 ms, and the refractory periods were estimated to be almost the same, which is within the textbook range (5-10 ms) [27]. The detected codes were composed of spikes from different neurons because we acquired codes with bit widths ranging from 0.6 to 2.0 ms; in the case of a 4-bit code, the code length is mainly less than 6 ms (= 2 ms × 3 spans of a 4-bit code). These values are generally shorter than the refractory period of the neurons, which is estimated to be more than 8 ms in our cases. This suggests that it is unnecessary to apply sorting to the spikes obtained from each electrode when analyzing codes of length less than 8 ms. That is, the spikes forming the codes under our conditions originate from different neurons, and it may be possible to analyze the circuit states around the electrodes. We need not deal with codes whose spike intervals are more than 8 ms.
In our case, although we statistically analyzed all the 63 electrodes (except for the stimulating electrode) together, it may be possible to analyze individual electrodes, which may give more precise distribution of ; that is, we can determine whether neurons near a certain electrode compose a specific circuit such as a linear feedback shift register (LFSR) [14].
We used simulated code spectrum components to analyze the wet experimental spectrum of a cultured neuronal network. There are, however, two problems: (1) the codes are not always orthogonal, and (2) some of the extracted codes may be false, being composed of spike combinations that overlap by chance [22]. To reduce such effects, it may be effective to improve the statistical background, for example, by increasing the number of spike trains to suppress the statistical variation of the code spectrum components, because the present data were acquired from only one spike train of one trial each, that is, from the 63 electrodes, which are not necessarily perfectly independent.
A 2D mesh neural network can generate spike waves, as shown in Figure 5. Thus, the code flow observed on a cultured neuronal network can be regarded as a fragment of "spike waves." A code not only works as a marker of the spike waves but also provides information or clues about the circuit's shape. In other words, the code spectrum reflects the circuit shape, including the weight distribution, the neuron characteristics, and its role in communication around each electrode; therefore, it may be considered a kind of network signature. Furthermore, spike waves will establish asynchronous multiplex communication links as well as multiplex communication within a synchronous neural network [23,28] where various pseudorandom-like codes are observed. For communication within this network, each neuron or group of neurons cannot receive entire spike waves, but rather a specific part of a wave, that is, a pseudorandom-like code or a spatiotemporal combination of codes. Then, based on the communication links presented in Figure 12 [24,29], information will be processed by an intelligence mechanism in the brain. Under these conditions, information will be radiated as spike waves from source neurons and then widely propagated via the neuronal network. There has been substantial research on the behavior of spike intensity waves as a global macrotask, including cardiac electric propagation [30][31][32]. Although it may be true that most neurons work to relay spikes as part of spike waves, the destination neurons in a communication task are limited in number and must select the signals sent to them from among the various spike waves, based on spike trains containing codes that are fragments of each spike wave or spatiotemporal combinations of these codes. This task is local and should not be treated as conventional "intensity"-level wave information, but rather as "locally phasic"-level wave information incorporated in the spatiotemporal pattern of locally arriving spikes, including codes.
In artificial brain research, the effect of communication through concentrated digital synaptic switching has been shown [33]. The present study serves as a base for communication tasks in natural intelligence, as illustrated in Figure 13. These studies of spike waves will lead to higher-order reasoning/intelligence in the brain via communication [8,34].

(Figure 12 caption: Communication links [23,24]. Red arrows: lag time < 5 ms; green: 5-10 ms; blue: > 10 ms. Red arrows with lag time < 5 ms run within each hemisphere, and blue arrows with lag time > 10 ms run across the callosum.)

(Figure 13 caption: (a) The top three receiving neurons can detect the specific transmitting neuron among the four neurons from the spike-wave fragment as a combination of spike codes (the spatiotemporal spike pattern): which bell is ringing? (b) The same mechanism allows us to perceive where a bell is ringing in our surroundings.)
Our results, including those reported in [22], are summarized as follows: (1) We are investigating an intelligence mechanism while maintaining correspondence between wet experiments and simulations. (2) We identified a pseudorandom sequence (code), "1101", in PSTHs in the literature as well as in our own experiments, which should provide clues about the neuronal circuits around the electrodes.
(3) A code spectrum as an extension of code "1101" was obtained from cultured neuronal networks and simulations.
(4) A code spectrum can be considered as a "signature" of its associated network by which some characteristics can be estimated, including the refractory period and weight distribution.
(5) A code can function as a mark on a spike wave. We have shown that, to some extent, wave propagation preserves the codes; this finding was documented both by movies and quantitatively.
Future studies will address the following: (i) communication based on codes as parts of spike waves and (ii) the organization of communication links to derive intelligence functions.