A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits that decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature classifier layer become orientation selective when they receive bar patterns of different slopes from an input layer. The input layer is mapped and intertwined with the feature classifier layer by self-evolving neuronal microcircuits. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform that maps the input pattern space to a feature-representing output space. The self-learning of the mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced which serve as a guide for applying model-derived repetitive stimulus pattern sets to

It can be said that neuronal networks, whether artificial,

The basic principle of the mathematical Hough transform will be outlined in the section on computational Hough models at the neural level. As the interconnection scheme and interplay of the associated neurons in the microcircuitry of orientation selectivity remain open questions, we additionally describe in particular detail the modeling and execution of the mathematical Hough transform. Several computational models are contrasted, beginning at the microcircuitry level of the interconnected neurons and synapses. A prime example, a neural net composed of cortical columns and microcircuits, is discussed in detail.

Following these sections, we conclude by proposing guidelines and novel 3D microelectrode arrays (MEAs) to be used in future

The sensory organs provide the windows to the world for the brain. Eyes and ears encode photon distributions or sound pressure levels into electrical spike trains which are delivered and distributed by

The sensory organs mediate the information transfer from the outer world to the inner world. This mediated information in the inner world is used to create internal representations of the outer world in distinct brain areas and is structured in the brain into categories, such as color and motion in the visual system [

The hypothesis that the firing of orientation selective cells could be understood by mapping their input stimuli through a mathematical transform intrinsically implemented in cellular microcircuits was formulated by Blasdel. He demonstrated that a mathematical Hough transform model could be adopted to explain principles of the transformational process of bar detection by orientation selective cells. The Hough transform is a coordinate transform in which an input space is mapped to an ordered feature space. Each point in the feature space is given by its coordinates and, numerically, by the accumulated votes of histogram entries in the corresponding grid cells. Blasdel assumes a coordinate transform mapping of the
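To make the accumulation-vote principle concrete, a minimal Hough transform for straight lines can be sketched as follows (the Python function, the discretization, and the grid sizes are illustrative assumptions, not part of Blasdel's model):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=10.0):
    """Accumulate votes in the (theta, rho) feature space for input points."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # Each input point votes for every line passing through it:
        # rho = x*cos(theta) + y*sin(theta)
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[valid], idx[valid]] += 1
    return thetas, acc

# Collinear points on the line y = x all vote into the same feature-space
# grid cell (theta = 135 degrees, rho = 0), producing an accumulator peak.
pts = [(i, i) for i in range(5)]
thetas, acc = hough_lines(pts)
peak_theta, peak_rho = np.unravel_index(acc.argmax(), acc.shape)
```

The peak of the accumulator thus identifies the orientation and offset of the bar formed by the collinear input points, which is the sense in which the feature space "decodes" the input space.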

Indeed, Kawakami and Okamoto propose a cell model for motion detection in which the Hough transform is the essential part in the identification of bars [

Equipped with this model and model predictions, Okamoto et al. conducted a motion-detection experiment with primates. They extensively compared their model predictions to

Although successful in its predictions, Kawakami’s model remains a mathematical model with little indication of how the algorithm is realized at the cellular level. In what way this algorithm might be executed structurally, dynamically, and functionally by a neuron ensemble remains largely unknown. The basic model descriptors are neurons treated as entities, subject to certain model assumptions [

Several authors describe spiking neural network models which execute Hough transforms at the neuron and synapse level [

In the first Hough transform implementation, the authors describe a character recognition study using a biologically plausible neural network of the mammalian visual system [

In their second implementation of the Hough transform, the authors further described a biologically inspired spiking neural network with Hebbian learning for vision processing. The authors wrote that the Hough transform can be used to find simple shapes like lines and circles in images [

In their third implementation, the authors demonstrated the detection of straight lines using the above described spiking neural network model. Based on the receptive field of the Hough transform, the authors showed that a spiking neural network is able to detect straight lines in a visual image [

In their fourth implementation, the authors described a neural net for 2D slope and sinusoidal shape detection [

A feedforward neural network model with an input layer at the bottom and feature classifier output neurons at the right. The network topology is regular with repetitive elementary building block segment microcircuits. (Reprinted with permission from Brückmann et al., [

The neural net learns to detect a set of training elements like bars or sinusoids. It is trained with a set of

The spatiotemporal input patterns are transformed to a time and place code where the firing of an output neuron

The input layer feeds the subsequent layers with the spatiotemporal input patterns and triggers a propagation of the signals through the associated cortical columns. Each spatiotemporal input, such as a bar with a defined slope, activates the propagation of a wave front in the net through its cortical columns. Due to the specific signal propagation velocities in the net, the activation wave front forms a planar wave front at a specific layer
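The compensation of input timing by the delay lines can be illustrated with a toy calculation (the slope and delay values below are hypothetical, chosen only to show the principle):

```python
def arrival_times(slope, delays):
    """Input channel i is activated at time slope*i (one image row per
    clock step); the column's delay line adds delays[i] clock steps."""
    return [slope * i + delays[i] for i in range(len(delays))]

# A column whose delays compensate a stimulus slope of 2:
delays = [8 - 2 * i for i in range(5)]
planar = arrival_times(2, delays)   # equal arrival times -> planar wave front
tilted = arrival_times(1, delays)   # mismatched slope -> staggered arrivals
```

Only when the column's delays match the stimulus slope do all signals arrive simultaneously, producing the planar wave front at a specific layer as described above.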

The signal propagation velocity vector field of the cortical-column delay lines is learned by collectively tuning their individual signal propagation velocities. Each cortical-column delay line adapts to its specific signal propagation velocity in a pattern-induced learning process. The delay times of all delay neurons are equal and set to 1 millisecond (the duration of one clock step).

Each cortical column delay line consists of a signal conducting pathway (Figure

The elementary building block segment microcircuit. Path following is self-learned by the settings of the weights at the signal bifurcation. (Reprinted with permission from Brückmann et al., [

Nine training patterns of bars of different slopes. Each time step one image row is consecutively applied to the input layer. (Reprinted with permission from Brückmann et al., [

The path selection, and therefore the local signal propagation velocity, is regulated by antagonistic weights

The synaptic weights are trained with an unsupervised Hebbian-learning rule and a Boltzmann temperature function which decreases from a starting temperature

An output neuron spikes only if all selected signal paths are activated, because the threshold of each output neuron is set to the number of signal paths minus 1. The thresholds of the output neurons can be adjusted to lower values (e.g., the output neuron spikes if more than
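This threshold rule can be sketched minimally as follows (the function and values are illustrative, not taken from the model):

```python
def output_neuron_spikes(path_active, threshold=None):
    """An output neuron fires when the number of active selected signal
    paths exceeds its threshold; with threshold = n_paths - 1 this is
    strict coincidence detection over all paths."""
    n_paths = len(path_active)
    if threshold is None:
        threshold = n_paths - 1  # default: all paths must be active
    return sum(path_active) > threshold

# Strict coincidence over four paths:
all_active = output_neuron_spikes([1, 1, 1, 1])            # spikes
one_missing = output_neuron_spikes([1, 1, 1, 0])           # silent
# Relaxed threshold: more than 2 of 4 active paths suffice:
relaxed = output_neuron_spikes([1, 1, 1, 0], threshold=2)  # spikes
```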

If an output neuron spikes in a layer, the weights of the selected paths are collectively changed by

For each layer, the weight setting has to be learned, and this learning is a time-evolving process. The weights of the first layer settle first and converge to their 0 or 1 end states. Only after the weights of the first layer have settled do the weights of the second layer begin to settle, and so on consecutively through the subsequent layers until, at last, the weights of the final layer settle. Learning in each layer thus depends on the preconditioned setting of the weights in the previous layers. Learning finishes when all weights have converged to their final states 0 or 1.
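The smooth convergence of a weight to a binary end state under a decaying temperature can be caricatured by a toy annealing rule (a loose illustration under assumed parameters, not the authors' learning rule):

```python
import random

def anneal_weight(w, n_steps=200, t0=1.0, step=0.1, seed=0):
    """Toy sketch: a weight drifts toward its nearer binary end state
    0 or 1, perturbed by noise whose amplitude shrinks with a linearly
    decaying Boltzmann-style temperature."""
    rng = random.Random(seed)
    for k in range(n_steps):
        temp = t0 * (1.0 - k / n_steps)       # decaying temperature
        drift = step if w >= 0.5 else -step   # pull toward 1 or 0
        w += drift + temp * rng.uniform(-0.05, 0.05)
        w = min(1.0, max(0.0, w))             # weights stay in [0, 1]
    return w
```

Because the noise amplitude is always smaller than the drift in this sketch, every weight ends in a pure gating (1) or blocking (0) state, mirroring the converged binary weights described above.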

In summary, a neural net with velocity-tuned cortical columns self-learns to detect bars of different slopes and sinusoids of different frequencies, depending on the applied training set. Self-learning has been examined for different sizes of the neural net. The neural net executes a coordinate transform which maps the spatiotemporal input patterns through cortical columns and microcircuits to a feature vector. The solution found by the neural net is compared to mathematically derived solutions computed from the Hough transform space-time equations for straight lines and sinusoids on a numerical grid. The authors drew the conclusion that the weight settings are either analytically derivable from the Hough transform equations or self-learned by the neural net through pattern-induced learning.

Based on its plausibility, the neuronal network model serves as a partial system model for functional aspects and dynamic properties of real neuron ensembles.

The signal propagation of temporally ordered sequences through the net incorporates synfire chains [

A Boltzmann temperature has been introduced in the model so that the synapses converge smoothly to their final gating-on or blocking-off state, which is consolidated by modeling synapses as binary open and closed gates [

Unsupervised Hebbian learning has been assumed in the model which is in concordance with STDP [
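For reference, a textbook exponential STDP window can be sketched as follows (the amplitudes and time constant are generic literature-style values, not taken from the model):

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Textbook exponential STDP window: potentiation when the
    presynaptic spike precedes the postsynaptic spike (dt_ms > 0),
    depression otherwise; the effect decays with the spike-time gap."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

Under such a window, repeatedly pairing pre- and postsynaptic spikes at a fixed interval drives the synaptic weight persistently up or down, which is the unsupervised Hebbian mechanism assumed in the model.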

This model, together with its foundation in basic neurobiological assumptions, will serve as a subject in information processing and as a guide for system identification in further

Using the new protocol presented here, the Hubel-Wiesel experiment is planned to be revisited

The assumption is that receptive field areas will functionally assemble by synaptogenesis due to pattern induced stimulation in the NGN [

The NGN is not developed according to a brain-like protocol nor does it resemble an explant neuron ensemble in its macro- and microstructure [

Astrocytes and neurons combine in a homeostatic relationship to create the so-called tripartite or tetrapartite synapse [

Several

Activity can be induced by electrode stimulation of a single electrode in near-distant neurons [

Repetitive paired-pulse stimulation of a single electrode for brief periods induces persistent strengthening or weakening of specific polysynaptic pathways depending on the inter-pulse interval [

Cultured cortical networks

Micro-CT image of a microtower with integrated electrodes, which forms a 2D MEA (a) and, when placed in series with a baseplate compatible with commercial amplifier systems, a 3D MEA (b).

Contrary to

Two spatiotemporal stimulation sequences to be provided by the microelectrodes of the 3D MEA.

The patterns will be presented by the 3D MEAs to the cultured cortical network over long stimulation times. Neurons in the vicinity of the electrodes will be activated consecutively in time, and the computations will remain local due to the limited axonal and dendritic outspread. Some of the questions to be addressed are as follows. Are input stimuli mediated through cortical columns and microcircuits to feature neurons? Are the networks able to discriminate between the presented patterns? Are the feature detector neurons arranged in an ordered fashion according to ascending feature sets? And are all intermediate neurons intrinsically localized as part of the cortical microcircuit, so that an identification of the system can be realized?

The cortical column model is adopted for system inspection and essentially models aspects associated with the evolving NGN. It is a numerical model and as such has some unitary parameters which are not related to physical entities, such as absolute timing in measurable quantities. Time is dimensionless in the model and specified in clock steps. In the experiment, the time interval for STDP learning will be set to the millisecond range according to literature values. The inter-pulse interval will be set in the stimulator. With the help of the experiment, we plan to fine-tune the model and the parameters of the physical NGN. In variations of the experiment, the control parameters will be tuned to find the optimal ranges and set points of the various controllable parameters. These parameters and set points include stimulus length, STDP timing intervals, repetitive pattern burst modes, number of training cycles, and day of stimulus activation. A crucial role is played by the adaptation of the weights to their minimal blocking values or maximal conducting values, respectively.

Thus far, our computer simulation of the cortical column model requires the weight incrementing learning curve value

An early indicator of the success of the experiment will be that, over many stimulus test patterns, some of the recording electrodes show a high output value relative to the trigger time of the applied stimulus. In a subsequent observation, the first two delay neurons in the cortical column can be disregarded by directly connecting the axonal branches to the dendrites of output neuron 1, which detects the coincident firing of the two input neurons. As a result, these neurons will fire. The crucial learning occurs in the two delay neurons of cortical column 2. Over the sustained training period, it is necessary and sufficient that the synaptic branch tends towards delay neuron 2 and the direct branch connectivity is cut off. In a coevolutionary manner, the upper signaling path should establish a direct axonal connection to the dendrite of output neuron 2, so that neuron 2 can sense the synchronous coincidence of the upper and lower signaling paths.

The risks of the experiment are clearly identified. The cortical microcircuit elements are very few and it is not

A thorough description at different levels of abstraction has been given to reproduce the Hubel-Wiesel experiment

The Hubel-Wiesel experiment has been revisited and the contributions of several authors have been listed, which indicate some plausibility for the mathematical Hough transform as a substrate of information processing in biological maps for orientation selectivity. Several computational Hough models at the neural level have been compared, and one model has been selected as a guide for further experiments. The proposed

This research has been supported by the 3DNeuroN project in the European Union’s Seventh Framework Programme, Future and Emerging Technologies (Grant agreement no. 296590).