Consistent Recovery of Sensory Stimuli Encoded with MIMO Neural Circuits

We consider the problem of reconstructing finite energy stimuli encoded with a population of spiking leaky integrate-and-fire neurons. The reconstructed signal satisfies a consistency condition: when passed through the same neuron, it triggers the same spike train as the original stimulus. The recovered stimulus must also minimize a quadratic smoothness criterion. We formulate the reconstruction as a spline interpolation problem for scalar as well as vector-valued stimuli and show that the recovery has a unique solution. We provide explicit reconstruction algorithms for stimuli encoded with a single integrate-and-fire neuron as well as with a population of such neurons. We demonstrate how our reconstruction algorithms can be applied to stimuli encoded with ON-OFF neural circuits with feedback. Finally, we extend the formalism to multi-input multi-output neural circuits and demonstrate that vector-valued finite energy signals can be efficiently encoded by a neural population provided that its size is beyond a threshold value. Examples are given that demonstrate the potential applications of our methodology to systems neuroscience and neuromorphic engineering.


Introduction
Formal spiking neuron models, such as integrate-and-fire (IAF) neurons, encode information in the time domain [1]. Assuming that the input is bandlimited with a known bandwidth, a perfect recovery of the stimulus from the train of spikes is possible provided that the spike density is above the Nyquist rate [2]. Using results from frame theory [3] and statistics [4], these findings were extended to (i) bandlimited stimuli encoded with a population of IAF neurons with receptive fields modeled as linear filterbanks [5], (ii) multivariate (e.g., space-time) bandlimited stimuli encoded with a population of IAF neurons with Gabor spatiotemporal receptive fields [6], and (iii) sensory stimuli encoded with a population of leaky integrate-and-fire (LIF) neurons with random thresholds [7].
These results are based on the key insight that neural encoding of a stimulus with a population of LIF neurons is akin to taking a set of measurements on the stimulus. These measurements or encodings can be represented as projections (inner products) of the stimulus on a set of sampling functions. Stimulus recovery therefore calls for the reconstruction of the encoded stimuli from these inner products. These findings have shown that sensory information can be faithfully encoded into the spike trains of a neural ensemble and can serve as a theoretical basis for modeling of sensory systems (e.g., auditory, vision) [8].
In this paper we investigate the problem of reconstructing scalar and vector stimuli from a population of spike trains on a finite time horizon. The encoding circuits considered are either single-input multi-output or multi-input multi-output (MIMO). The increasing availability of multineuron population recordings has led to a paradigm shift towards population-centric approaches of neural coding and processing. Examples of MIMO models in systems neuroscience include extensive investigations of spike train transformations between neuron populations [9] as well as the analysis of the causal relationships between neurons in a population [10]. In neuromorphic engineering MIMO models have been used for brain-machine interfaces [11], as well as silicon retinas and related hardware applications [12].

Computational Intelligence and Neuroscience
The stimuli considered in this paper have finite energy and are defined on a finite time horizon. Even though restricted to finite time intervals, finite energy signals have infinite degrees of freedom. Consequently, the formal stimulus recovery is ill-defined. We cast the stimulus reconstruction problem in the abstract spline theory [13] and recover the stimulus as the unique solution to an interpolation spline problem. Splines serve as a valuable mathematical tool for interpolation problems, and their applications arise in many areas such as data smoothing in statistics [4], computer graphics [14], and digital signal processing [15].
Through the formulation of the interpolation spline problem, the reconstructed signal will give the same measurements as the original one. We show that this leads to a signal recovery that is consistent in the sense that the reconstructed signal triggers exactly the same spike train when passed through the same neuron as the original stimulus. The reconstructed signal is also required to achieve a maximum degree of smoothness gauged by a quadratic criterion. This condition ensures that the problem has a unique optimal solution.
A preliminary version of some of the ideas presented here appears in [16]. That analysis was based on results arising in generalized sampling [17]. Here the theory is presented in a more general setting using the spline theoretic framework, and all proofs are included. We apply our theoretical results to stimuli encoded with a number of spiking neural circuits of interest. These include populations of integrate-and-fire neurons with linear receptive fields that arise in hearing, ON-OFF neural circuits with feedback that arise in vision, and multi-input multi-output (MIMO) neural circuits that arise in olfaction.
Formally, MIMO neural circuits encode M-dimensional vector-valued finite energy stimuli into the spike trains of a population of N neurons. Their architecture consists of an N × M linear, time invariant filtering kernel that feeds into an ensemble of N neurons. For this novel neural circuit we formulate and solve the problem of optimal consistent recovery and also discuss some of the key conditions that the filtering kernel has to satisfy in order to get a good reconstruction.
The paper is organized as follows. Section 2 formulates the problem of consistent reconstruction on a finite time horizon as a spline interpolation problem and presents its general solution. In Section 3 the reconstruction problem is addressed for stimuli encoded with a population of LIF neurons. Section 4 presents general MIMO neural encoding circuits and the corresponding optimal consistent stimulus reconstruction. A neuroscience-inspired example is presented where the filtering kernel performs arbitrary (but known) delays and scalings on the input stimuli, akin to simple synaptic models. Finally, Section 5 concludes our work.

Encoding with LIF Neurons and Consistent Stimulus Recovery
In this section we formulate and solve the problem of optimal consistent reconstruction for the simple case of a stimulus encoded with a single LIF neuron. We show how the spiking of an LIF neuron can be associated with a series of projections in the space L²([0, T]). We impose intuitive constraints on the desired reconstructed signal and show that the reconstruction algorithm can be reduced to a spline interpolation problem.

Neural Encoding with Single LIF Neurons
Let u = u(t), t ∈ [0, T], be a signal (or stimulus) of finite length and energy, that is, u ∈ L²([0, T]). In what follows we assume that the stimulus u is the input to a Leaky Integrate-and-Fire (LIF) neuron. Throughout this paper (t_k), k = 1, 2, . . . , n, denotes the set of recorded spikes. As in the case of bandlimited signals [5], neural encoding can be associated with the projection (measurement) of the stimulus on a set of functions. By applying the t-transform [2], we can determine both the sampling functions and the projections of the stimulus on these functions using only the knowledge of the spike times.

Assume that the encoder is an LIF neuron with threshold δ, capacitance C, resistance R, and constant bias b. The membrane potential of the LIF neuron is governed by the differential equation

C (dV/dt)(t) = −V(t)/R + b + u(t),

for all t ∈ [0, T], with the initial condition V(0) = 0 and resetting conditions V(t_k⁺) = 0, k = 1, 2, . . . , n. By solving the differential equation in each interspike interval, the t-transform of the LIF neuron is given by

∫_{t_k}^{t_{k+1}} (u(s) + b) e^{−(t_{k+1}−s)/(RC)} ds = Cδ,

for all k, k = 1, 2, . . . , n − 1. The t-transform can be rewritten as L_k u = q_k, where L_k : L²([0, T]) → ℝ is a linear functional given by

L_k u = ∫_{t_k}^{t_{k+1}} u(s) e^{−(t_{k+1}−s)/(RC)} ds,    q_k = Cδ − bRC(1 − e^{−(t_{k+1}−t_k)/(RC)}),

for all k = 1, 2, . . . , n − 1. Therefore, we have the following result.

Lemma 1. The t-transform of the LIF neuron can be written in inner-product form as

⟨u, φ_k⟩ = q_k, k = 1, 2, . . . , n − 1,

where φ_k(t) = e^{−(t_{k+1}−t)/(RC)} 1_{[t_k, t_{k+1}]}(t).

Remark 2. The inner products or projections ⟨u, φ_k⟩, k = 1, 2, . . . , n − 1, in (7) represent a set of measurements or encodings of the signal u on [0, T]. Since (φ_k) and (q_k), k = 1, 2, . . . , n − 1, in (6) can be readily derived from the knowledge of the spike times and the neuron parameters, the signal encodings are available to an observer reading the spike times (t_k), k = 1, 2, . . . , n.
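The quantities appearing in Lemma 1 depend only on the spike times and the neuron parameters, so a decoder can compute them directly. The following sketch discretizes the sampling functions φ_k and evaluates the measurements q_k on a time grid; the function name and the discretization are our assumptions, not the paper's implementation.

```python
import numpy as np

def lif_sampling_functions(spikes, delta, C, R, b, T, dt=1e-4):
    """Discretize the sampling functions phi_k(t) = exp(-(t_{k+1}-t)/RC)
    on [t_k, t_{k+1}] and compute the measurements q_k of an LIF neuron."""
    t = np.arange(0.0, T, dt)
    RC = R * C
    phis, qs = [], []
    for tk, tk1 in zip(spikes[:-1], spikes[1:]):
        # phi_k is supported on [t_k, t_{k+1}] and vanishes elsewhere
        phi = np.where((t >= tk) & (t < tk1), np.exp(-(tk1 - t) / RC), 0.0)
        # q_k = C*delta - b*RC*(1 - exp(-(t_{k+1}-t_k)/RC))
        q = C * delta - b * RC * (1.0 - np.exp(-(tk1 - tk) / RC))
        phis.append(phi)
        qs.append(q)
    return np.array(phis), np.array(qs)
```

An observer holding only the spike times and parameters can thus recover all the (φ_k, q_k) pairs needed for reconstruction.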

Consistent Stimulus Recovery
The problem of stimulus reconstruction calls for estimating the signal u given the set of spikes (t_k), k = 1, 2, . . . , n. For the class of stimuli u ∈ L²([0, T]) this problem is ill-posed. (Signals that lie in an L² space have, in general, infinitely many degrees of freedom and thus cannot be perfectly recovered from a finite number of observations.) A remedy is provided by introducing a set of constraints on the recovery. The first constraint considered here requires the reconstructed signal û to trigger the same spikes as the original stimulus. The second constraint requires choosing, among all consistent reconstructions, the one with the maximum degree of smoothness. The latter is formulated as an optimization criterion.

Definition 3.
A reconstruction û of u based on the spike times (t_k), k = 1, 2, . . . , n, is said to be consistent if û triggers exactly the same spike train as the original stimulus u.

Remark 4.
As before, assume that at time 0 the membrane potential of the LIF neuron is set to the resting potential 0. Then the consistency condition above is equivalent to

⟨û, φ_k⟩ = q_k,

for all k, k = 1, 2, . . . , n − 1.
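Consistency can also be checked numerically: encode a candidate reconstruction with the same neuron model and compare the spike trains. Below is a minimal forward-Euler sketch; the integration scheme, function name, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_encode(u, dt, delta, C, R, b):
    """Forward-Euler simulation of an LIF neuron; returns spike indices."""
    V, spikes = 0.0, []
    for i, ui in enumerate(u):
        V += dt * (-V / (R * C) + (ui + b) / C)  # C dV/dt = -V/R + b + u
        if V >= delta:
            spikes.append(i)
            V = 0.0  # reset to the resting potential
    return spikes

dt = 1e-5
t = np.arange(0.0, 0.2, dt)
u = np.sin(2 * np.pi * 20 * t)
s_orig = lif_encode(u, dt, delta=0.8, C=0.01, R=50, b=3)
s_rec = lif_encode(u.copy(), dt, delta=0.8, C=0.01, R=50, b=3)
assert s_orig == s_rec  # a consistent reconstruction triggers the same spikes
```

Here the "reconstruction" is trivially the stimulus itself; in practice one would pass the spline reconstruction û through the same encoder and verify the spike trains coincide.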

Definition 5.
A consistent reconstruction û that minimizes the quadratic criterion ‖Jû‖², where Jû = d²û/dt², is called the optimal consistent reconstruction of u.

Remark 6.
The quantity ‖Jû‖ is the L²-norm of the second derivative of the reconstructed stimulus; it penalizes curvature and thus gauges smoothness.

Lemma 7.
The optimal consistent reconstruction û solves the spline interpolation problem

û = arg min { ‖d²u/dt²‖² : ⟨u, φ_k⟩ = q_k, k = 1, 2, . . . , n − 1 },

where ‖·‖ is the standard L²-norm restricted to the interval [0, T].
Proof. It follows directly from Definitions 3 and 5.
Remark 8. An introduction to splines and the general solution to spline interpolation problems is presented in Appendix A.

Theorem 9. The optimal consistent reconstruction is unique and is given by

û(t) = Σ_{k=1}^{n−1} c_k ψ_k(t) + d_1 + d_2 t,

where

ψ_k(t) = (φ_k ∗ |·|³)(t),

where ∗ denotes the convolution and |·| denotes the absolute value. With q = [q_1, q_2, . . . , q_{n−1}]ᵀ and c = [c_1, c_2, . . . , c_{n−1}]ᵀ, the

coefficients c and d = [d_1, d_2]ᵀ satisfy the matrix equations

Gc + d_1 p + d_2 r = q,    pᵀc = 0,    rᵀc = 0.

Moreover, G is an (n − 1) × (n − 1) matrix, and p and r are column vectors with entries given by

[G]_{kl} = ⟨ψ_l, φ_k⟩,    [p]_k = ⟨1, φ_k⟩,    [r]_k = ⟨t, φ_k⟩,

where all the inner products are restricted to the interval [0, T].
Proof. The proof follows from Theorem 4 in Appendix A. Note that the function |·|³ is, up to a multiplicative constant, the Green's function of the second-order iterated Laplacian (see Lemma 5 in Appendix B).
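Once G, p, r, and q have been assembled, the coefficients of Theorem 9 follow from a small bordered linear system. The sketch below solves that system; the function name and the pseudoinverse-based solve are our choices for numerical robustness, not the paper's prescription.

```python
import numpy as np

def spline_coefficients(G, p, r, q):
    """Solve the bordered system  [G p r; p' 0 0; r' 0 0][c; d] = [q; 0; 0]
    for the spline coefficients c and the linear part d = (d1, d2)."""
    n = G.shape[0]
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = G
    A[:n, n], A[:n, n + 1] = p, r
    A[n, :n], A[n + 1, :n] = p, r
    rhs = np.concatenate([q, [0.0, 0.0]])
    sol = np.linalg.pinv(A) @ rhs  # pseudoinverse handles rank deficiency
    return sol[:n], sol[n:]
```

The last two rows enforce the side conditions pᵀc = 0 and rᵀc = 0, which guarantee that the representation-function part of û is orthogonal to the kernel of J.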

The representation functions (13) can be given explicitly in analytical form; the entries of the matrix G are expressed in terms of the function g(x) = x³ + 6x.

Remark 10. By letting R → ∞, one obtains the representation of the optimal consistent reconstruction for stimuli encoded with the ideal IAF neuron; the parameters and representation functions then take a particularly simple form.

2.3. Example. The input to an LIF neuron is a bandlimited signal with bandwidth 100 Hz. The neuron encodes the stimulus during the time interval [0, 0.2] second. A bias b = 3 is added to the input. The parameters of the LIF neuron are δ = 0.8, C = 0.01, and R = 50. Under these conditions the neuron generated 78 spikes. The recovered signal is shown in Figure 1. In order to quantify the quality of the recovery, we use the signal-to-noise ratio (SNR) defined by

SNR = 10 log₁₀ ( ‖u‖² / ‖u − û‖² ) dB.

In this definition the noise is the error between the original and the reconstructed signal. The SNR was equal to 47.53 dB.
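The SNR figure of merit used throughout the examples can be computed as follows. The signals below are synthetic stand-ins for the stimulus and its reconstruction, chosen only to exercise the formula.

```python
import numpy as np

def snr_db(u, u_hat):
    """SNR = 10 log10(||u||^2 / ||u - u_hat||^2), error treated as noise."""
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_hat) ** 2))

t = np.linspace(0.0, 0.2, 2000)
u = np.sin(2 * np.pi * 100 * t)
u_hat = u + 1e-3 * np.cos(2 * np.pi * 40 * t)  # hypothetical reconstruction
print(f"SNR = {snr_db(u, u_hat):.2f} dB")
```

A reconstruction error three orders of magnitude below the signal amplitude corresponds to an SNR on the order of 60 dB, comparable in scale to the 47.53 dB reported in the example above.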

Single-Input Multi-Output Encoding and Consistent Stimulus Recovery
In this section we consider the reconstruction of a stimulus encoded with a population of LIF neurons. We demonstrate that the consistent recovery can again be formulated as a spline interpolation problem and provide the reconstruction algorithm. We also show how the methodology developed in this section can be applied to a simple encoding circuit consisting of two coupled ON-OFF neurons with feedback.

Encoding with a Population of LIF Neurons.
In what follows we consider a neural encoding circuit consisting of N leaky integrate-and-fire (LIF) neurons. Neuron j has threshold δ j , bias b j , resistance R j , and capacitance C j for all j = 1, 2, . . . , N. After each spike every neuron resets its membrane potential to 0. Let t j k denote the kth spike of neuron j, with k = 1, 2, . . . , n j , where n j is the number of spikes that the neuron j generates, j = 1, 2, . . . , N.
The t-transform of the population of N LIF neurons is given by

⟨u, φ_k^j⟩ = q_k^j, k = 1, 2, . . . , n_j − 1,

with

φ_k^j(t) = e^{−(t_{k+1}^j − t)/(R_j C_j)} 1_{[t_k^j, t_{k+1}^j]}(t),    q_k^j = C_j δ_j − b_j R_j C_j (1 − e^{−(t_{k+1}^j − t_k^j)/(R_j C_j)}),

for all j = 1, 2, . . . , N. As in the previous section, we have the following.

Consistent Stimulus Recovery.
Let q be a column vector defined as q = [q¹; q²; . . . ; q^N], where q^j = [q_1^j, q_2^j, . . . , q_{n_j−1}^j]ᵀ, j = 1, 2, . . . , N.
The vectors p, r, and c have the same dimension and are similarly defined. The matrix G is a block square matrix whose (j, l) block has entries ⟨ψ_{k′}^l, φ_k^j⟩, k = 1, . . . , n_j − 1, k′ = 1, . . . , n_l − 1. The following theorem first appeared in [16]; its proof is presented here for the first time.
Theorem 12. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction û is unique and can be written as

û(t) = Σ_{j=1}^N Σ_{k=1}^{n_j−1} c_k^j ψ_k^j(t) + d_1 + d_2 t,

where ψ_k^j = φ_k^j ∗ |·|³ (up to a multiplicative constant that can be absorbed into the coefficients). The reconstruction coefficients are given in matrix form by

[c; d] = [G p r; pᵀ 0 0; rᵀ 0 0]⁺ [q; 0; 0],

where [·]⁺ denotes the pseudoinverse.

Proof. The proof is notationally more complex but closely follows the proof of Theorem 9. The representation functions (27) can be computed analytically in terms of the function f(x) = x³ − 3x² + 6x − 6.

Example: Encoding with an ON-OFF Neuron Pair.
We consider an encoding circuit consisting of two interconnected integrate-and-fire neurons with feedback. For simplicity we assume that the IAF neurons are ideal, that is, R_j → ∞. Figure 2 depicts the circuit. Whenever a spike is generated, the firing neuron is reset and feedback is added to the membrane potential. In addition, the firing of each spike is communicated to the other neuron through cross-feedback. The two neurons in Figure 2 arise as models of ON and OFF bipolar cells in the retina and their connections through the nonspiking horizontal cells [18].

Let t_k^j denote the kth spike of the jth neuron, k = 1, . . . , n_j and j = 1, 2. The t-transform of the neural circuit amounts to (31) and can be written in inner product form as ⟨u, φ_k^j⟩ = q_k^j, with q_k^j the right-hand side of (31) and φ_k^j derived from the circuit parameters, for all k, k = 1, 2, . . . , n_j − 1, and j, j = 1, 2. With the t-transform in inner product form, the optimal consistent reconstruction is given by Theorem 12.

A simple example consisting of two symmetric neurons with parameters δ_1 = −δ_2 = δ, κ_1 = κ_2 = κ, and b_1 = −b_2 = b is considered here. The cross-feedback filters are parameterized by a rate α and a gain c; no other feedback is present, that is, h_11 = h_22 = 0. The neuron parameters are δ = 0.75, κ = 0.01, and b = 3. In addition, α = 1/0.015 sec⁻¹ and c = 1/3. Note that the impulse response of the cross-feedback filter has mean value zero; if the mean value were nonzero, the spike density of the ideal IAF neurons could be driven to zero or infinity.

The input was chosen to be the temporal contrast of an artificial (positive) input photocurrent. With v denoting the input photocurrent, the temporal contrast u is defined as u = (dv/dt)/v. Clearly, even when the bandwidth of the photocurrent v is known, the effective bandwidth of the actual neuron input u cannot be analytically estimated. The input photocurrent was bandlimited with bandwidth 100 Hz and had duration 200 milliseconds. Each neuron generated 75 spikes. The result of the recovery is shown in Figure 3. The SNR is equal to 28.75 dB.
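The temporal contrast transformation u = v′/v is easy to sketch numerically. The photocurrent below is an illustrative stand-in (a positive sinusoid), not the bandlimited signal used in the example.

```python
import numpy as np

def temporal_contrast(v, dt):
    """Temporal contrast u(t) = (dv/dt)(t) / v(t) of a positive photocurrent."""
    return np.gradient(v, dt) / v

dt = 1e-4
t = np.arange(0.0, 0.2, dt)
v = 2.0 + np.sin(2 * np.pi * 10 * t)  # positive, bandlimited photocurrent
u = temporal_contrast(v, dt)
```

Because the division by v is pointwise and nonlinear, the bandwidth of u is not determined by the bandwidth of v, which is exactly why a finite-horizon, bandwidth-free recovery method is needed here.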

Multi-Input Multi-Output Encoding and Consistent Stimulus Recovery
In this section we present our model of consistent information representation of M-dimensional vector signals using an N × M-dimensional filtering kernel and an ensemble of N integrate-and-fire neurons (see Figure 4). We assume without loss of generality that the neurons are ideal (nonleaky); the M filtered stimulus components feeding each neuron are additively coupled into a single input. Finally, we describe an algorithm for stimulus reconstruction that is based on spline interpolation.

MIMO Model for Neural Encoding. Let u = [u_1, u_2, . . . , u_M]ᵀ be a vector-valued stimulus with components u_i ∈ L²([0, T]), i = 1, 2, . . . , M. The space L²_M = (L²([0, T]))^M of such stimuli, endowed with the componentwise inner product and norm, is a Hilbert space. Let H : ℝ → ℝ^{N×M} be a filtering kernel defined as H(t) = [h_{ji}(t)], j = 1, 2, . . . , N, i = 1, 2, . . . , M. Filtering the signal u with the kernel H leads to an N-dimensional signal v with components

v_j(t) = Σ_{i=1}^M (h_{ji} ∗ u_i)(t).

Equation (37) can also be written in vector notation as v_j = h_jᵀ ∗ u, where h_j = [h_{j1}, h_{j2}, . . . , h_{jM}]ᵀ is the filtering vector of neuron j, j = 1, 2, . . . , N. A bias b_j is added to the component v_j of the signal v, and the sum is passed through an integrate-and-fire neuron with integration constant (capacitance) κ_j and threshold δ_j, for all j, j = 1, 2, . . . , N (see Figure 4). For simplicity we assume that the IAF neurons are ideal, that is, R_j → ∞. Let t_k^j denote the kth spike of neuron j, with k = 1, 2, . . . , n_j, where n_j is the number of spikes generated by neuron j, j = 1, 2, . . . , N. The Time Encoding Machine in Figure 4 therefore maps the input vector u into the vector time sequence (t_k^j), j = 1, 2, . . . , N, k = 1, 2, . . . , n_j.
The t-transform for the jth neuron can be written as

∫_{t_k^j}^{t_{k+1}^j} (v_j(s) + b_j) ds = κ_j δ_j,

with q_k^j = κ_j δ_j − b_j (t_{k+1}^j − t_k^j), for all k, k = 1, 2, . . . , n_j − 1, and all j, j = 1, 2, . . . , N. Note that, without any loss of generality, after firing all neurons are reset to the zero state. The t-transform (40) can be written in an inner product form as

⟨u, φ_k^j⟩ = q_k^j,

where the ith component of the vector-valued sampling function φ_k^j is given by φ_{k,i}^j(t) = ∫_{t_k^j}^{t_{k+1}^j} h_{ji}(s − t) ds.

Remark 13. An implicit assumption in writing the t-transform in the inner product form (41) is that the sampling functions φ_k^j belong to L²_M. A sufficient condition for the latter is that all filters are bounded-input bounded-output (BIBO) stable, that is, ∫_ℝ |h_{ji}(s)| ds < ∞ for all i, i = 1, 2, . . . , M, and all j, j = 1, 2, . . . , N.
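The filtering stage v_j = Σ_i h_ji ∗ u_i of the MIMO circuit can be approximated with discrete convolutions. The array shapes and function name below are our conventions for the sketch.

```python
import numpy as np

def filter_stimulus(H, u, dt):
    """Apply an N x M filtering kernel to an M-dimensional stimulus:
    v_j = sum_i h_ji * u_i.  H: (N, M, L) impulse responses sampled at dt,
    u: (M, K) stimulus samples.  Returns v with shape (N, K)."""
    N, M, L = H.shape
    K = u.shape[1]
    v = np.zeros((N, K))
    for j in range(N):
        for i in range(M):
            # Riemann-sum approximation of the continuous convolution
            v[j] += np.convolve(H[j, i], u[i])[:K] * dt
    return v
```

Each output v_j is then offset by the bias b_j and integrated by the jth IAF neuron to produce the spike sequence (t_k^j).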

Consistent Stimulus Recovery.
The optimal consistent reconstruction is given by the solution of the following spline interpolation problem:

û = arg min { Σ_{i=1}^M ‖d²u_i/dt²‖² : ⟨u, φ_k^j⟩ = q_k^j, k = 1, 2, . . . , n_j − 1, j = 1, 2, . . . , N }.

We have the following result.
Theorem 14. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction û is unique and can be written as

û(t) = Σ_{j=1}^N Σ_{k=1}^{n_j−1} c_k^j ψ_k^j(t) + d_1 + d_2 t,

with vector-valued representation functions ψ_k^j = φ_k^j ∗ |·|³ (componentwise, up to a multiplicative constant) and coefficients c_k^j ∈ ℝ, d_1, d_2 ∈ ℝ^M. With p = [p¹; p²; . . . ; p^N]ᵀ, p^j ∈ ℝ^{(n_j−1)×M}, and r similarly defined, the reconstruction coefficients are given in matrix form by

[c; d] = [G p r; pᵀ 0 0; rᵀ 0 0]⁺ [q; 0; 0],

with entries [G]^{jl}_{kk′} = ⟨ψ_{k′}^l, φ_k^j⟩, [p^j]_{ki} = ⟨1, φ_{k,i}^j⟩, and [r^j]_{ki} = ⟨t, φ_{k,i}^j⟩, where the inner products are restricted to the interval [0, T].
Proof. The proof is based on Theorem 4.

Remark 15.
Note that since the signal reconstruction is set up as a spline interpolation problem, the algorithm presented in Theorem 14 above produces a solution that depends on both the filtering kernel H and the spiking mechanism of the population of neurons. We briefly mention here conditions for no information loss due to filtering. If F denotes the Fourier transform, we have Fv(ω) = (FH)(ω) Fu(ω). The requirement of no information loss implies that FH, the filtering kernel in the Fourier domain, has rank M for all frequencies of interest (here for all ω ∈ ℝ). A trivial necessary condition that follows from the rank condition is that N ≥ M; that is, the number of neurons that encode the stimulus must be at least equal to the number of its components. This intuitive argument has important ramifications in experimental neuroscience, as it shows that, in general, multivariate stimuli (e.g., video sequences) cannot be faithfully represented by the spike train of a single neuron or a small neural population. Rather, the spike trains from a larger population of neurons that encode the same stimulus need to be used.
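For a sampled kernel, the rank condition on FH can be checked numerically by taking the DFT along the time axis and computing the rank of the resulting N × M matrix at each discrete frequency. This is a sketch under an assumed discretization; it checks DFT bins rather than all ω ∈ ℝ.

```python
import numpy as np

def min_rank_over_frequencies(H):
    """Minimum rank of the Fourier-transformed N x M kernel over all DFT
    frequencies; no information loss requires this to equal M."""
    FH = np.fft.fft(H, axis=-1)  # shape (N, M, L): one matrix per frequency
    return min(np.linalg.matrix_rank(FH[:, :, w]) for w in range(FH.shape[-1]))
```

A kernel whose columns become linearly dependent at some frequency loses stimulus information at that frequency, regardless of how many neurons fire.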

Example: Delay Filter Bank.
We present the realization of the recovery algorithm for a filtering kernel that induces arbitrary, but known, delays and weights on the stimulus. The kernel models dendritic tree latencies in sensory neurons (motor, olfactory) [8] or, more generally, delays and synaptic weights between groups of pre- and postsynaptic neurons. In order to incorporate these delays, we assume that the stimuli are defined on a time window larger than [0, T]; the inner product, however, is restricted to the time interval [0, T].
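A discretized delay-and-weight kernel can be sketched as follows. The impulse form h_ji(t) = w_ji δ(t − τ_ji), along with all names here, is an illustrative assumption consistent with the description above, not the paper's exact parameterization.

```python
import numpy as np

def delay_scale_kernel(weights, delays, dt, L):
    """Build an (N, M, L) kernel with h_ji(t) = w_ji * delta(t - tau_ji),
    discretized as unit-area impulses on a grid with step dt."""
    N, M = weights.shape
    H = np.zeros((N, M, L))
    for j in range(N):
        for i in range(M):
            k = int(round(delays[j, i] / dt))  # delay tau_ji in samples
            H[j, i, k] = weights[j, i] / dt    # area w_ji after * dt
    return H
```

Convolving a stimulus component with such a kernel yields a delayed, scaled copy, which is why the stimuli must be defined slightly beyond [0, T].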
The recovered stimuli using the spikes from 3, 6, and 9 neurons, respectively, are depicted from top to bottom in Figure 5. For each component, the recovered signal converges to the original one. Figure 6 shows the SNR corresponding to the recovery of each stimulus component when 3, 4, . . . , 9 neurons are used, and demonstrates that, overall, the SNR of all stimulus components increases as more neurons are added to the representation of the stimulus. An exception is observed in the recovery of the component u_3: going from three to four neurons leads to a decrease of the SNR. Note, however, that the SNR for the recovery of the entire vector-valued stimulus u increases with the addition of the fourth neuron, from 7.71 dB to 12.23 dB (data not shown).

Discussion
The methodology of interpolating splines presented here applies to the deterministic case where the input stimulus and the LIF neurons have low noise levels. It ties in naturally with theoretical results that show that neural encoding of bandlimited signals leads to perfect signal reconstruction if Nyquist-type rate conditions are satisfied [5].
In neuromorphic engineering applications the noise levels can be kept low. Neuronal spike trains, however, often exhibit strong variability in response to identical inputs due to various noise sources. For stimuli encoded with noisy neural circuits the problem of optimal reconstruction can be formulated as a smoothing spline problem [4]. This case is treated analytically in [7] for a slightly less general setup, where the signals belong to a Reproducing Kernel Hilbert Space [19]. Reconstructing stimuli encoded with LIF neurons using both smoothing and interpolating splines offers a further alternative. Thus, spline theory provides a general framework for the optimal reconstruction of signals on a finite time horizon.
The methodology presented here can be applied to the reconstruction of stimuli encoded with neurons that belong to other model classes of interest. An example is provided by neuron models with level-crossing spiking mechanisms and feedback that have been investigated in [16]. More generally, the t-transform of any neuron model with piecewise linear dynamics can be described by a set of linear projections. Neurons with linear dynamics have been shown to express complex spiking behaviors [20-22].
The MIMO architecture presented here consists of a linear, time-invariant filtering kernel that is separated from the neural spiking mechanism. By relaxing the time-invariance property and embedding spike-triggered resetting mechanisms at the level of the filtering kernel, more complex transformations can be modeled. Consequently, dendritic trees incorporating compartmental neuron models and spike backpropagation [23] can be analyzed with the methodology advanced in this paper. These architectures will be the subject of future research.

A. Interpolation Splines in Hilbert Spaces
We assume throughout that stimuli u belong to the space of functions of finite energy over a domain T, that is, u ∈ L²(T). The information available to a decoder is a set of measurements

q_k = ⟨u, φ_k⟩, k = 1, 2, . . . , n,

where φ_k ∈ L²(T) are known functions.
The inner products can be written in operator form as q = Lu, where q = [q_1, q_2, . . . , q_n]ᵀ and L : L²(T) → ℝⁿ is a linear operator defined by (Lu)_k = ⟨u, φ_k⟩. Finding u by inverting (A.2) is, in general, an ill-posed problem, and additional "smoothness" conditions are needed. We introduce these by requiring that the reconstructed signal minimize a quadratic criterion ‖Jû‖, where J : L²(T) → Y is a bounded linear operator, Y is the range of J, and ‖·‖ denotes the standard L²-norm over T. A signal û that satisfies the measurements and minimizes this criterion is called an interpolation spline corresponding to the initial data q, the measurement operator L, and the energy operator J.
We restrict ourselves to the case where the operator J has a finite-dimensional kernel of dimension m. The following standard theorem establishes conditions for the existence and uniqueness of the interpolation spline; for a proof see [13]. In order to derive the general form of the interpolation spline, we introduce the notion of a reproducing kernel for a Hilbert space with respect to the energy operator J. This notion generalizes the reproducing kernels associated with Reproducing Kernel Hilbert Spaces [19]. The representation (A.10) then follows by applying the reproducing-kernel property to any functional L : L²(T) → ℝ that vanishes on the kernel of J, with F defined as in (A.11), and combining (A.12) with (A.17). For more information see [13].

B. Reproducing Kernels for MIMO Signal Reconstruction
Let L²_M = (L²(T))^M be the space of M-dimensional vector-valued functions defined over the domain T. The space equipped with the inner product and norm given by (35) is a Hilbert space. Suppose that we seek a consistent reconstruction that also minimizes the energy criterion

‖Ju‖² = Σ_{i=1}^M ‖d²u_i/dt²‖².

In the general representation result (A.8), the scalar factor can be absorbed into the coefficients.