Constraints of Biological Neural Networks and Their Consideration in AI Applications

Biological organisms do not evolve to perfection, but to outcompete others in their ecological niche, and therefore to survive and reproduce. This paper reviews the constraints imposed on imperfect organisms, particularly on their neural systems and their ability to capture and process information accurately. By understanding the biological constraints imposed by the physical properties of neurons, simpler and more efficient artificial neural networks can be made (e.g., spiking networks transmit less information than graded potential networks; spikes occur in nature only because of the limitations of carrying electrical charge over large distances). Furthermore, understanding the behavioural and ecological constraints on animals clarifies not only the limitations of bio-inspired solutions, but also why such solutions may fail and how these failures can be corrected.


Introduction
A common misconception of evolutionary biology is that natural selection acts to produce organisms perfectly adapted to their environment. This notion has perhaps best been challenged by Gould and Lewontin [1] with their analogy of "the spandrels of San Marco," where they claim that evolution normally acts on existing structures and body plans, making the best possible use of them. In general, new structures are not produced, and large changes, such as changes in the body plan of arthropods, do not occur (the analogy being that the highly decorated spandrels have the primary function of supporting the structure of the cathedral, and only the secondary function of aesthetic beauty; nevertheless, many visitors assume the spandrels are there for the artwork they display).
While the structures of neurons are flexible, for example in terms of their length or number of dendrites or synaptic connections, many biochemical, physiological, behavioural, and ecological constraints still apply to their form and function, and to the ability of their form and function to adapt through evolution.
Understanding these constraints in a holistic manner is essential for the effective design of artificial neural networks.
In some cases, somewhat paradoxically, the constraints can produce more efficient solutions than fully flexible artificial networks might (see section on energy constraints below). However, in many cases, rigidly following the constraints of biological networks is neither productive nor necessary for artificial intelligence applications (see sections on information transfer and ecological constraints below). This brief review gives an overview of the biological constraints of neural processing and where and when they might be important to consider in the design of artificial neural networks.

The Evolutionary Origins of Neurons
All animals with the exception of the phylum Porifera (the sponges) possess some sort of neurons, from the loose neural nets of the Cnidaria to the highly developed brains of the cephalopod molluscs and vertebrates [2]. Neurons are specialised forms of cells, and share many common features with other cells in the body. In particular, all cells have numerous proteins that allow transport of particles across the cell membrane, either through (facilitated) diffusion or by active transport against a concentration gradient.
In neurons, charged ions are the particles that move across the cell membrane, altering the voltage between the inside and outside of the cell [2].
The evolution of neurons from "normal" cells gives an insight into one of their major constraints: diffusion occurs across cell membranes (and the proteins embedded in them), and therefore ions essentially "leak" out of neurons. Neurons are not electrical wires, and a voltage or membrane potential at one end of a neuron will not travel far before degrading. In fact, information (in terms of voltage) rapidly degrades in these "graded potential neurons" over distances of less than 1 mm [3]. Essentially, unmodified cells are not a good way of conducting electricity (through charged ions), and neurons have evolved mechanisms, such as spiking, to cope with this. These mechanisms have constraints of their own and are discussed further below.
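The scale of this "leak" can be illustrated with the standard passive cable approximation, in which a steady graded potential attenuates exponentially with distance, V(x) = V0 * exp(-x / lambda). A minimal sketch, using a purely illustrative length constant (real values depend on the membrane and axial resistance of the neurite):

```python
import math

def graded_potential(v0_mv, distance_mm, length_constant_mm=0.3):
    """Steady-state passive (cable-theory) attenuation:
    V(x) = V0 * exp(-x / lambda).

    The 0.3 mm length constant is purely illustrative; real values
    depend on the membrane and axial resistance of the neurite.
    """
    return v0_mv * math.exp(-distance_mm / length_constant_mm)

# A 10 mV graded signal after 1 mm of passive spread:
print(round(graded_potential(10.0, 1.0), 2))  # ~0.36 mV -- most of the signal is lost
```

With these (assumed) parameters, over 96% of the signal amplitude is lost within 1 mm, consistent with the degradation distances cited above.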
While neurons transmit electrical information, synapses are where the processing of information occurs [2]. At first glance, the conversion of electrical information into chemical information in the synapse, and the reconversion back to electrical information in the postsynaptic neuron, appears a chaotic process, yet the effects of neurotransmitters on the postsynaptic neuron can be highly varied (producing various excitatory and inhibitory postsynaptic responses) (reviewed by [4]). Synaptic transfer, however, is a slow process relative to the speed at which information passes through the nervous system [2]. Pathways with many synapses will therefore be unable to process information very rapidly, as might be needed for escape behaviours, and classic examples of escape behaviours generally involve, at most, a moderate number of synapses (see reviews in [5,6]). While the functional evolutionary origins of synapses are unclear, recent studies have demonstrated that the genes required to produce proteins necessary for synaptic transmission are found in the genomes of sponges, which lack nervous systems [7]. It is therefore probable that synaptic transmission has its origins in exploiting proteins produced for another purpose. While synaptic processing is responsible for the successful functioning of animal nervous systems, it developed through evolutionary modification of a "best available" solution, and in some cases may be constrained by the slow transmission rates of the process.
Artificial neural networks, however, do not need to model the complexity of synaptic transmission, unless their goal is to model and understand the biological processes that occur. Neither the "leak" of charge nor the time taken for synaptic processing in real neurons needs to be a constraint of these networks, although the computational time of processing an artificial synapse may still effectively limit the size of such a network.

Information Transfer by Neurons
As stated above, information rapidly degrades over short distances in simple graded potential neurons. The action potential, or spike, is therefore the common means of transferring electrical information in neurons [2]. Essentially, the spike rate or the number of spikes in a given time period arriving at a synapse relates to the information available, and this information can, in some cases, be directly correlated with observable behaviours [6, 8-10]. Some studies have indicated that information is carried not just by the rate and duration of spikes but also by their temporal patterns [11]. However, most studies suggest that spike patterns are not detected and carry no transferable information across synapses [9,10,12].
The conversion of membrane potential to spikes is analogous to the difference between analogue and digital signals, and in the same manner that an analogue channel can carry more information, graded potential neurons can transmit up to five times more information than spiking neurons [13,14]. Essentially, this loss of information through spiking is due to the constraints of "leaky" neurons and is a biochemical/physiological constraint of information processing. Artificial neural networks (whether realised in hardware or software) are not subject to the same constraints; nevertheless, numerous studies have developed artificial neural networks consisting of spiking neurons, when it would very likely be simpler, and allow more information to be transmitted, to create artificial networks of graded potential neurons. While studies investigating the information limits of spiking networks, or the greater amount of information that could be passed across synapses if patterns of spikes could be recognised, should be encouraged, neural networks designed for a particular application should take heed of the simpler and more effective approach of analogue, or graded potential, processing.
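The difference can be made concrete with a back-of-the-envelope capacity comparison (all numbers below are illustrative, not measured values): a pure spike-count code over a window holding at most n spikes can distinguish only n + 1 symbols, i.e. log2(n + 1) bits, whereas an analogue channel carries roughly 0.5 * log2(1 + SNR) bits per independent sample and can deliver several samples per window.

```python
import math

def spike_count_bits(max_rate_hz, window_s):
    """Upper bound for a pure spike-count code: a window holding at most
    n spikes can represent n + 1 distinguishable symbols."""
    n = int(max_rate_hz * window_s)
    return math.log2(n + 1)

def graded_bits_per_sample(snr):
    """Shannon capacity per independent sample of an analogue (graded) signal."""
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters: 200 Hz maximum rate, 50 ms window, SNR of 100,
# and ~5 independent analogue samples per window.
spiking = spike_count_bits(200, 0.05)      # log2(11) ~ 3.5 bits per window
graded = 5 * graded_bits_per_sample(100)   # ~16.6 bits per window
print(round(spiking, 2), round(graded, 2))
```

Under these assumed parameters the graded channel carries several times more information per window, in line with the factor-of-five figure cited above.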

Energetic Constraints
In order to transfer electrical information in neurons, ions need to be moved across cell membranes to create a voltage or potential difference across the membrane, and to restore the neuron to its resting potential [2]. Much of this movement of ions is the work of ion pumps, such as the sodium-potassium pump, which use energy to maintain the ion balance. Complex biological neural networks used to achieve complex tasks therefore use more energy than simple neural networks. These energetic constraints are not negligible. Larger brains and more neurons incur a higher metabolic cost than smaller brains [15] and have been shown to result in shorter life spans and lower fecundity in late life (e.g., in Drosophila, [16]).
In nature, evolutionary trade-offs have developed. Essentially, these occur where animals clearly show suboptimal behaviours, for example in terms of sex allocation, where constraints on perception may mean that optimal male to female ratios are not always produced [17]. While there is currently no firm evidence to indicate that these imperfect behaviours are related to neuronal constraints (but see evidence in [17,18]), it is known that neural networks are biased in the type of environmental information they obtain and do not provide the full knowledge of the environment predicted by classic optimality models [19,20]. Work is currently ongoing to determine the precise neural constraints in operation during the determination of sex ratios, but they are likely to be related to the higher energy costs of developing a larger neural network capable of processing this information. Evolutionary trade-offs therefore seem to occur between the fitness gains from optimal behaviours and the developmental costs of establishing the neural pathways needed to sense and act on various environmental stimuli. This indicates that the energetic costs of neural processing are important in nature [21-24].
Energetic constraints are an example of where artificial neural networks could learn from biology. While the resultant behaviours of neurally constrained animals are not optimal, they are "good enough" for individuals to survive and reproduce. Most software-based neural networks will not be run on computers capable of massively parallel processing, so the processing time for any given task will increase with the size of the network. Furthermore, for hardware-embodied artificial networks, a direct analogy with power consumption can be drawn, which could affect the operating range of, for example, autonomously navigating robots. In these situations, designing a perfect solution, over something that is simply "good enough", may ultimately be disadvantageous. However, the seriousness of the task may play the deciding role in the size of the network implemented. For instance, an analogue VLSI neural network deployed in cars to visually detect potentially colliding objects [25] would greatly benefit from overengineering to create perfect behaviour, even if such behaviour is not present in the real biological system (see below). The power consumption of such a device would be minimal in comparison with the overall power consumption of the car, and suboptimal behaviours (i.e., not detecting a collision, or falsely detecting a collision when no colliding objects are present) would have serious consequences.
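The "good enough" argument can be caricatured as a cost-benefit calculation. In the toy model below, the saturating accuracy curve and linear energy cost are hypothetical stand-ins (not measurements), chosen only to show why the optimum network size can fall well short of the maximum:

```python
def net_benefit(n_units, acc_gain=1.0, energy_per_unit=0.02):
    """Toy trade-off: task accuracy saturates with network size
    (diminishing returns) while energy/processing cost grows linearly.
    Both curves are hypothetical, purely for illustration."""
    accuracy = acc_gain * (1 - 0.95 ** n_units)  # saturating benefit
    cost = energy_per_unit * n_units             # linear energy cost
    return accuracy - cost

# The best "good enough" size under these toy parameters:
best = max(range(1, 201), key=net_benefit)
print(best)  # 18 -- far smaller than the largest network considered
```

The exact optimum is an artefact of the invented parameters; the qualitative point is that beyond some size, each extra unit costs more energy than the accuracy it buys.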

Ecological Constraints
The ideas above have concentrated on the bottom-up effects of neural processing; that is, the actual constraints on information transfer by neurons. One aspect that is often forgotten in considering information processing is the top-down constraints. These are the processes that determine the effectiveness of neural processing in keeping an organism in an optimal habitat to survive and reproduce.
In the above discussion on sex ratios, I discussed how the energetic costs of neural processing might outweigh the benefits obtained from capturing and acting on perfect data. In some cases, the costs of deviating from an optimal behaviour may not be large and a trade-off may be reached. In other cases, further evolutionary constraints might remove any benefit an optimal neural system would provide. For example, there is no need for an organism such as a snail to develop sensory systems to tell when it is about to be crushed by a human foot. Such an event will cause death to the snail, so the costs of being crushed are very high. However, even if an information capture and processing system did exist, it would not aid the snail: its locomotion is so slow that, however well it captured the information, it would not be able to avoid being crushed. This example seems obvious, but in terms of bio-inspired artificial neural networks, considering the behavioural ecology of the organism whose neural network you are trying to copy is something frequently forgotten [26] (see below). Especially when considering invertebrate neural networks and behaviours, if the animal has no need to behave in a certain way or capture a certain type of information, then it is unlikely to possess neural circuits to do so.
A key example of not considering ecology in sufficient detail can be given by work on the locust Lobula Giant Movement Detector (LGMD) neural network. Locusts possess a pair of neurons known as the Descending Contralateral Movement Detectors (DCMDs), which spike in a one-to-one ratio with the presynaptic LGMD [27]. Initial work was conducted on the neuron largely because it was easy to record from and responded well to visual movement stimuli [27-29]. It was found that it responded most vigorously to looming stimuli (objects approaching on a direct collision course) and was therefore considered a suitable neuron for detecting and avoiding collisions [30]. However, little consideration was given to what locusts might need to avoid colliding with. Insects, as anyone who has seen a fly trapped in a room with many windows can confirm, are perfectly capable of colliding at high speed with objects and suffering little in the way of injury.
Eventually, behavioural observations showed that flying locusts would briefly stop flying and perform a "gliding" manoeuvre in response to looming objects that best resembled the speed and size of predatory birds [8,31]. These glides resulted in a rapid drop in height and would prevent the locust being eaten by the bird. The glides also occurred at peak DCMD spike frequencies [8,9]. Objects that were larger or slower moving (in terms of their size:speed ratio) than predatory birds did not trigger these gliding behaviours. Thus, the "collision sensor" was in fact a predator avoidance mechanism, with a clear evolutionary and ecological advantage, rather than the collision detection mechanism previously proposed.
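The geometry behind this size:speed signature is simple to state: an object of size l approaching at constant speed v subtends an angle theta(t) = 2 * atan((l/2) / (v * t)) at time t before collision, so the expansion profile depends only on the ratio l/|v|. A sketch (the object sizes and speeds are invented for illustration):

```python
import math

def angular_size_deg(half_size_m, speed_m_s, t_before_collision_s):
    """Angular size of an object approaching at constant speed:
    theta(t) = 2 * atan((l/2) / (v * t)), with t the time remaining
    before collision. The expansion profile depends only on l/|v|."""
    distance = speed_m_s * t_before_collision_s
    return math.degrees(2 * math.atan(half_size_m / distance))

# Objects sharing the same size:speed ratio produce identical expansion
# profiles, regardless of absolute size -- the basis of the "signature".
for t in (1.0, 0.5, 0.1):
    small_fast = angular_size_deg(0.15, 8.0, t)   # small, bird-like object
    big_faster = angular_size_deg(1.5, 80.0, t)   # 10x size, 10x speed
    assert abs(small_fast - big_faster) < 1e-9
    print(t, round(small_fast, 1))
```

This is why small, fast predators produce a distinctive signal while larger or slower objects, with very different l/|v| values, do not drive the same response.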
While the above example is an understandable progression of science, many bio-inspired neural networks were based on the LGMD and its supposed ability to operate as a collision detector [32-34]. The LGMD, in fact, exploits a unique property of the predator: small, fast-moving bird predators that catch locusts on the wing produce a mathematically unique signature of image expansion over the eye of the locust [26]. Larger or slower-moving objects are much harder to detect using image expansion over the eye [26]. Thus, unmodified models of the LGMD neural network never produced reliable collision detectors: the sensitivity of the system normally needed to be increased, by altering the synaptic weights, before collisions could be detected, and this caused major problems with false detections [34]. Overengineering the network, to include many other bio-inspired neural networks operating together, did, however, result in more reliable collision detection systems, generally able to predict car collisions without responding falsely to noncollision events [25,35,36]. Another warning from behavioural ecology for the application of artificial, but bio-inspired, neural networks is that behaviour is often more complex than first thought. To some extent, this situation has been brought about by the strict "cause and effect" of manipulative experiments used in behavioural ecology, which can be used to show that behaviour X arises because of stimulus Y. In practice, although behaviour X cannot occur in the absence of stimulus Y, it may also depend on stimuli U and W. Therefore, designing a neural network to pick up stimulus Y will not reproduce behaviour X in an efficient manner. An example of this may be homing behaviour, where an individual animal forages away from its home, but returns to the home after each foraging trip [37,38].
Many animals (as diverse as desert ants, limpets, and fiddler crabs) use a "path integration" technique to find their way home, essentially calculating the net distance and angle travelled and moving home in a direct line. Experiments in which foraging fiddler crabs are displaced (by moving the substratum) show that they use path integration to find their way back to their home and cannot find it without this mechanism [39]. However, in many species, path integration is not foolproof, and it is backed up by techniques such as trail following (following outgoing trails from the home position) and using features of the landscape to "reset" the path integration mechanism [38,40,41]. As such, designing a path integration neural network is useful, but may not produce an application that behaves exactly as expected.
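The path-integration computation itself is compact: accumulate the outbound displacement vectors and return the home vector (distance and heading back to the start). A minimal sketch, with an illustrative step format of (distance, heading in radians):

```python
import math

def home_vector(steps):
    """Path integration: sum outbound displacements given as
    (distance, heading-in-radians) pairs, then return the straight-line
    distance and heading pointing back to the starting position."""
    x = y = 0.0
    for dist, heading in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    home_dist = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)  # point back toward the origin
    return home_dist, home_heading

# Out 3 m east, then 4 m north: home is 5 m away on a south-westerly heading.
dist, heading = home_vector([(3.0, 0.0), (4.0, math.pi / 2)])
print(round(dist, 2), round(math.degrees(heading), 1))  # 5.0 -126.9
```

Note that any error in a single step corrupts the accumulated vector, which is exactly why real animals back this mechanism up with trail following and landmark resets.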

The testing and development of neural networks for creating robotic "behaviour" are also often conducted in simplified environments, either in laboratories with few "distracting" visual features or in simulation environments such as Webots. In the above example of homing behaviour, it can be seen how simplified environments can be problematic: if there are no visual landscape features to detect, then landmark-based corrections cannot occur and the path integration mechanism cannot be reset. In fact, recent work into aggregation in intertidal snails shows that simulations of the snails' behaviour need to contain complex information about the environment (particularly the persistence of mucus trails) to accurately mimic what occurs in real situations. Neither simulations nor real behavioural experiments conducted in simplified environments produced such effective aggregation behaviour [42].

Constraints on Human Cognition
The evolutionary origins of neural networks and the constraints imposed by neuronal processing are not only present in the invertebrates considered in the above examples. Constraints of neural processing also affect human cognition. An example of human neural processing constraints can be given by psychological tests on human perception, many of which indicate that rapid decision-making is prone to errors. For example, tests in which participants were shown images of soldiers carrying either a machine gun or an umbrella, and asked to rapidly assess the risk of the situation, often produced inaccurate responses [43]. Essentially, this is an evolutionary trade-off between the speed and accuracy of information processing, no different from the neural constraints faced by many nonhuman animals [44].
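The speed-accuracy trade-off is commonly formalised as sequential evidence accumulation to a decision threshold (a drift-diffusion-style model). In the toy simulation below (all parameters arbitrary), lowering the threshold produces faster but more error-prone decisions:

```python
import random

def decide(threshold, drift=0.1, noise=1.0, rng=random):
    """Accumulate noisy evidence until it crosses +threshold or -threshold.
    Positive drift means the correct answer is the positive boundary."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return evidence > 0, steps  # (was the decision correct?, decision time)

rng = random.Random(42)
for threshold in (2.0, 10.0):
    trials = [decide(threshold, rng=rng) for _ in range(2000)]
    accuracy = sum(c for c, _ in trials) / len(trials)
    mean_time = sum(t for _, t in trials) / len(trials)
    print(threshold, round(accuracy, 2), round(mean_time, 1))
```

With these invented parameters, the low threshold decides in a few steps but errs far more often, while the high threshold is slower and markedly more accurate, mirroring the trade-off described above.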
Equally, the concept of evolutionary adaptation of neural networks is present in humans. Stress responses of the brain are thought to have arisen through the "fight or flight" mechanism to avoid predators and capture prey [45], although alternative theories do exist (e.g., [46]). The rapid change in the human ecological niche over the past few centuries has had negative implications for these stress responses. This has strong analogies with the problems of artificial neural networks operating in slightly different ecological niches from those in which they originally evolved (see the above example of locust collision detection networks). If the "ecology" of the artificial network is not identical to that of the mimicked network, it may not perform as predicted.
Although humans are far more cognitively complex than most of the invertebrates discussed in this paper, their neural systems still show the same constraints as discussed elsewhere, and artificial intelligence applications designed to mimic human intelligence should still consider constraints such as information transfer in spiking neurons and the evolutionary origins of the aspect of intelligence or behaviour being mimicked.

Conclusions
This paper has reviewed some key biological constraints of neurons (Table 1), and indicated that many of these do not need to be constraints for developing artificial neural networks. Furthermore, it has reviewed ecological and behavioural constraints of some animals, which are unique to their particular ecological niches. These ecological and behavioural constraints will shape the neural networks used by animals to collect and process information, and are therefore vital to consider if developing artificial networks to try to mimic some or all of an animal's behaviour.