Channel Identification Machines

We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover, loss-free, a projection of the filter(s) onto a space of input signals, for both scalar- and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.


Introduction
Signal distortions introduced by a communication channel can severely affect the reliability of communication systems. If properly utilized, knowledge of the channel response can lead to a dramatic improvement in the performance of a communication link. In practice, however, information about the channel is rarely available a priori and the channel needs to be identified at the receiver. A number of channel identification methods [1] have been proposed for traditional clock-based systems that rely on the classical sampling theorem [2,3]. However, there is a growing need to develop channel identification methods for asynchronous nonlinear systems, of which time encoding machines (TEMs) [4] are a prime example.
TEMs naturally arise as models of early sensory systems in neuroscience [5,6] as well as models of nonlinear samplers in signal processing and analog-to-discrete (A/D) converters in communication systems [4,6]. Unlike traditional clock-based amplitude-domain devices, TEMs encode analog signals as a strictly increasing sequence of irregularly spaced times (t_k)_{k∈Z}. As such, they are closely related to irregular (amplitude) samplers [4,7] and, due to their asynchronous nature, are inherently low-power devices [8]. TEMs are also readily amenable to massive parallelization [9]. Furthermore, under certain conditions, TEMs faithfully represent analog signals in the time domain; given the parameters of the TEM and the time sequence at its output, a time decoding machine (TDM) can recover the encoded signal loss-free [4,5].
A general TEM of interest is shown in Figure 1. An analog multidimensional signal u is passed through a channel with memory that models physical communication links. We assume that the effect of this channel on the signal u can be described by a linear multidimensional filter. The output v of the channel is then mapped, or encoded, by a nonlinear asynchronous sampler into the time sequence (t_k)_{k∈Z}. A few examples of samplers include asynchronous A/D converters such as the one based on an asynchronous sigma/delta modulator (ASDM) [4], nonlinear oscillators such as the van der Pol oscillator in cascade with a zero-crossing detector (ZCD) [6], and spiking neurons such as the integrate-and-fire (IAF) or the threshold-and-fire (TAF) neurons [9]. The above-mentioned asynchronous samplers incorporate the temporal dynamics of spike (pulse) generation and allow one to consider, in particular for neuroscience applications, more biologically plausible nonlinear spike generation (sampling) mechanisms.
In this paper, we investigate the following nonlinear identification problem: given both the input signal u and the time sequence (t k ) k∈Z at the output of a TEM, what is the channel filter? System identification problems of this kind are key to understanding the nature of neural encoding and processing [10][11][12][13][14], process modeling and control [15], and, more generally, methods for constructing mathematical models of dynamical systems [16].
Identification of the channel from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience (see [10] for an extensive review). In such models the output of the system is taken to be its instantaneous response rate and the nonlinear generation of a time sequence is not explicitly modeled. Furthermore, in order to fit model parameters, identification methods for such models typically require the response rate to be known [17]. This is often difficult in practice since the same experiment needs to be repeated a large number of times to estimate the response rate. Moreover, the use of the same stimulus typically introduces a systematic bias during the identification procedure [10].
The channel identification methodology presented in this paper employs test signals that are neither white nor stationary (e.g., Gaussian with a fixed mean/variance). This is a radical departure from the widely employed nonlinear system identification methods [10], including the spike-triggered average [18] and the spike-triggered covariance [19] methods. We carry out the channel identification using input signals that belong to reproducing kernel Hilbert spaces (RKHSs), and, in particular, spaces of bandlimited functions, that is, functions that have a finite support in the frequency domain. The latter signals are extensively used to describe sensory stimuli in biological systems and to model signals in communications. We show that for such signals the channel identification problem becomes mathematically tractable. Furthermore, we demonstrate that the choice of the input signal space profoundly affects the type of identification results that can be achieved.
The paper is organized as follows. In Section 2, we introduce three application-driven examples of the system in Figure 1 and formally state the channel identification problem. In Section 3, we present the single-input single-output (SISO) channel identification machine (CIM) for the finite-dimensional input signal space of trigonometric polynomials. Using analytical methods and simulations, we demonstrate that it is possible to identify the projection of the filter onto the input space loss-free and show that the SISO CIM algorithm can recover the original filter with arbitrary precision, provided that both the bandwidth and the order of the input space are sufficiently high. Then, in Section 4, we extend our methodology to multidimensional systems and present multi-input single-output (MISO) CIM algorithms for the identification of vector-valued filters modeling the channel. We generalize our methods to classes of RKHSs of input signals in Section 5.1 and work out in detail channel identification algorithms for infinite-dimensional Paley-Wiener spaces. In Section 5.2 we discuss extensions of our identification results to noisy systems, where additive noise is introduced either by the channel or the asynchronous sampler. Finally, Section 6 concludes our work.

The Channel Identification Problem
We investigate a general I/O system comprised of a filter or a bank of filters (i.e., a linear operator) in cascade with an asynchronous (nonlinear) sampler (Figure 1). The I/O circuit belongs to the class of [Filter]-[Asynchronous Sampler] circuits. In general terms, the input to such a system is a vector-valued analog signal u = [u_1(t), u_2(t), . . . , u_M(t)]^T, t ∈ R, M ∈ N, and the output is a time sequence (t_k)_{k∈Z} generated by its asynchronous sampling mechanism. In the neural coding literature, such a system is called a time encoding machine (TEM) [4], as it encodes an unknown signal u into an observable time sequence (t_k)_{k∈Z}.

2.1. Examples of Asynchronous SISO and MISO Systems.
An instance of the TEM in Figure 1 is the SISO [Filter]-[Ideal IAF] neural circuit depicted in Figure 2(a). Here the filter is used to model the aggregate processing of a stimulus performed by the dendritic tree of a sensory neuron. The output v of the filter is encoded into the sequence of spike times (t_k)_{k∈Z} by an ideal integrate-and-fire neuron. Identification of dendritic processing in such a circuit is an important problem in systems neuroscience and was first investigated in [20]. Another instance of the system in Figure 1 is the SISO [Filter]-[Nonlinear Oscillator-ZCD] circuit shown in Figure 2(b). In contrast to the first example, where the input was coupled additively, in this circuit the biased filter output v is coupled multiplicatively into a nonlinear oscillator. The zero-crossing detector then generates a time sequence (t_k)_{k∈Z} by extracting zeros from the observable modulated waveform at the output of the oscillator. Called a TEM with multiplicative coupling [6], this circuit is encountered in generalized frequency modulation [21].
An example of a MISO system is the [Filter]-[ASDM-ZCD] circuit shown in Figure 2(c). Similar circuits arise practically in all modern-day A/D converters and constitute important front-end components of measurement and communication systems. The aggregate channel output v(t) = Σ_{m=1}^{M} (u_m * h_m)(t), where u_m * h_m denotes the convolution of u_m with h_m, is additively coupled into an ASDM. Specifically, v(t) is passed through an integrator and a noninverting Schmitt trigger to produce a binary output z(t) ∈ {−b, b}, t ∈ R. A zero-crossing detector is then used to extract the sequence of zero-crossing times (t_k)_{k∈Z} from z(t). Thus, the output of this [Filter]-[ASDM-ZCD] circuit is the time sequence (t_k)_{k∈Z}.

2.2. Modeling the Input Space.
We model channel input signals u = u(t), t ∈ R, as elements of the space of trigonometric polynomials H (see Section 5.1 for more general input spaces).

Definition 1. The space of trigonometric polynomials H is a Hilbert space of complex-valued functions

u(t) = Σ_{l=−L}^{L} u_l e_l(t),

where u_l ∈ C, e_l(t) = exp(jlΩt/L)/√T, Ω is the bandwidth, L is the order, and T = 2πL/Ω, endowed with the inner product ⟨·, ·⟩ : H × H → C,

⟨u, w⟩ = ∫_0^T u(t) w^*(t) dt.

Given the inner product in (2), the set of elements {e_l}_{l=−L}^{L} forms an orthonormal basis in H. Thus, any element u ∈ H and any inner product ⟨u, w⟩ can be compactly written as u = Σ_{l=−L}^{L} u_l e_l and ⟨u, w⟩ = Σ_{l=−L}^{L} u_l w_l^*. Moreover, H is a reproducing kernel Hilbert space (RKHS) with a reproducing kernel (RK) given by

K(s, t) = Σ_{l=−L}^{L} e_l(s) e_l^*(t),

also known as a Dirichlet kernel [22].
We note that a function u ∈ H satisfies u(0) = u(T). There is a natural connection between functions on an interval of length T that take on the same values at the interval end-points and T-periodic functions on R: both provide equivalent descriptions of the same mathematical object, namely a function on a circle. By abuse of notation, in what follows u will denote both a function defined on an interval of length T and a function defined on the entire real line. In the latter case, the function u is simultaneously periodic with period T and bandlimited with bandwidth Ω, that is, it has a finite spectral support supp(F u) ⊆ [−Ω, Ω], where F denotes the Fourier transform. In what follows we will assume that u_l ≠ 0 for all l = −L, −L + 1, . . . , L, that is, a signal u ∈ H contains all 2L + 1 frequency components.
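The reproducing property ⟨u, K(·, t_0)⟩ = u(t_0) of the Dirichlet kernel can be verified numerically. The sketch below is a minimal check under the assumption that e_l(t) = exp(jlΩt/L)/√T as in Definition 1; the parameters Ω, L and the evaluation point t_0 are illustrative.

```python
import numpy as np

# Numerical check of the reproducing property <u, K(., t0)> = u(t0)
# in the space H of trigonometric polynomials (illustrative parameters).
Om = 2 * np.pi * 25.0            # bandwidth Omega [rad/s]
L = 5                            # order of the space
T = 2 * np.pi * L / Om           # period T = 2*pi*L/Omega
N = 20000
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

def e(l, tt):                    # orthonormal basis e_l(t) = exp(j*l*Om*t/L)/sqrt(T)
    return np.exp(1j * l * Om * tt / L) / np.sqrt(T)

rng = np.random.default_rng(1)
ls = np.arange(-L, L + 1)
c = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)
u = sum(c[i] * e(l, t) for i, l in enumerate(ls))            # an element u of H

t0 = 0.0731                                                  # arbitrary point in [0, T)
K_t0 = sum(e(l, t) * np.conj(e(l, t0)) for l in ls)          # Dirichlet kernel K(., t0)
lhs = np.sum(u * np.conj(K_t0)) * dt                         # <u, K(., t0)>
rhs = sum(c[i] * e(l, t0) for i, l in enumerate(ls))         # u(t0)
print(abs(lhs - rhs))                                        # ~ 0 (machine precision)
```

Because the quadrature grid is uniform over one full period, the discrete inner product of the basis exponentials is exact, so the check holds to machine precision.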

2.3. Modeling the Channel and Channel Identification.
The channel is modeled as a bank of M filters with impulse responses h_m, m = 1, 2, . . . , M. We assume that each filter is linear, causal, BIBO-stable and has a finite temporal support of length S ≤ T, that is, it belongs to the space of filters H = {h ∈ L¹(R) : supp(h) ⊆ [0, S]}. Since the length of the filter support is smaller than or equal to the period of an input signal, we effectively require that, for a given S and a fixed input signal bandwidth Ω, the order L of the space H satisfies L ≥ S · Ω/(2π). The aggregate channel output is given by v(t) = Σ_{m=1}^{M} (u_m * h_m)(t). The asynchronous sampler maps the signal v into the output time sequence (t_k)_{k=1}^{n}, where n denotes the total number of spikes produced on the interval t ∈ [0, T]. We are now in a position to define the channel identification problem.

SISO Channel Identification Machines
As already mentioned, the circuits under investigation consist of a channel and an asynchronous sampler. Throughout this paper, we will assume that the structure and the parameters of the asynchronous sampler are known. We start by formally describing asynchronous channel measurements in Section 3.1. Channel identification algorithms from asynchronous measurements are given in Section 3.2. Examples characterizing the performance of the identification algorithms are discussed in Section 3.3.

3.1. Asynchronous Channel Measurements.
Consider the SISO [Filter]-[Ideal IAF] neural circuit in Figure 2(a). In this circuit, an input signal u ∈ H is passed through a filter with an impulse response (or kernel) h ∈ H and then encoded by an ideal IAF neuron with a bias b ∈ R+, a capacitance C ∈ R+, and a threshold δ ∈ R+. The output of the circuit is a sequence of spike times (t_k)_{k=1}^{n} on the time interval [0, T] that is available to an observer. This neural circuit is an instance of a TEM and its operation can be described by a set of equations

∫_{t_k}^{t_{k+1}} (u * h)(s) ds = q_k,  k = 1, 2, . . . , n − 1,

where q_k = Cδ − b(t_{k+1} − t_k). Intuitively, at every spike time t_{k+1} the ideal IAF neuron provides a measurement q_k of the signal v(t) = (u * h)(t) on the time interval [t_k, t_{k+1}).
Definition 5. The mapping of an analog signal u(t), t ∈ R, into an increasing sequence of times (t k ) k∈Z (as in (5)) is called the t-transform [4].
Definition 6. The operator P : H → H given by

(P h)(t) = ⟨h, K(·, t)⟩,  t ∈ [0, T],

is called the projection operator.

Proposition 7. On the interval t ∈ [0, T], the t-transform (5) of the [Filter]-[Ideal IAF] circuit can be rewritten as

∫_{t_k}^{t_{k+1}} ((P h) * u)(s) ds = q_k,

for all k = 1, 2, . . . , n − 1.

Proof. Since u ∈ H, u(t) = ⟨u(·), K(·, t)⟩ by the reproducing property of the kernel K(s, t). Hence, using the commutativity of convolution, the reproducing property of the kernel K, the definition of the projection operator in (6), and the definition of convolution for periodic functions [23], we obtain (u * h)(t) = ((P h) * u)(t) on [0, T]. It follows that on the interval t ∈ [0, T], (5) can be rewritten as (7). The right-hand side of (7) is the t-transform of a [Filter]-[Ideal IAF] TEM with an input P h and a filter that has an impulse response u. Hence, a TDM can identify P h, given a filter-output pair (u, T).
The conditional duality between time encoding and channel identification is visualized in Figure 3. First, we note the conditional I/O equivalence between the circuit in Figure 3(a) and the original circuit in Figure 2(a). The equivalence is conditional since P h is a projection onto a particular space H and the two circuits are I/O-equivalent only for input signals in that space. Second, identifying the filter of the circuit in Figure 3(a) is the same as decoding the signal encoded with the circuit in Figure 3(b). Note that the filter projection P h is now treated as the input to the [Filter]-[Ideal IAF] circuit and the signal u appears as the impulse response of the filter. Effectively, we have transformed the channel identification problem into a time decoding problem and we can use the TDM machinery of [5] to identify the filter projection P h.

3.2. Channel Identification from Asynchronous Measurements.
Given the parameters of the asynchronous sampler, the measurements q_k of the channel output v can be readily computed from the spike times (t_k)_{k=1}^{n} using the definition of q_k ((5) for the IAF neuron). Furthermore, as we will now show, for a known input signal these measurements can be reinterpreted as measurements of the channel itself.

Lemma 8. The t-transform (7) of the [Filter]-[Ideal IAF] neuron can be written as

⟨P h, φ_k⟩ = q_k,  k = 1, 2, . . . , n − 1,

where the sampling functions are given by φ_k(t) = L_k(K(·, t)), with L_k : H → R defined in the proof below.

Proof. The linear functional L_k : H → R defined by

L_k(w) = ∫_{t_k}^{t_{k+1}} (w * u)(s) ds,

where w ∈ H, is bounded. Thus, by the Riesz representation theorem [22], there exists a function φ_k ∈ H such that L_k(w) = ⟨w, φ_k⟩ for all w ∈ H. Since H is an RKHS with kernel K, the sampling functions can be computed as φ_k(t) = L_k(K(·, t)). Setting w = P h in the t-transform (7) yields ⟨P h, φ_k⟩ = q_k.

Assuming that u is known and there are enough measurements available, P h can be obtained by first recovering v from these measurements and then deconvolving it with u. However, this two-step procedure does not work when the circuit is not producing enough measurements and one cannot recover v. A more direct route is suggested by Lemma 8, since the measurements (q_k)_{k=1}^{n−1} can also be interpreted as the projections of P h onto the φ_k, that is, ⟨P h, φ_k⟩, k = 1, 2, . . . , n − 1. A natural question then is how to identify P h directly from the latter projections.

Lemma 9. Let u ∈ H be the input to a [Filter]-[Ideal IAF] circuit with h ∈ H. If the number of spikes n generated by the neuron in a time interval of length T satisfies n ≥ 2L + 2, then the filter projection P h can be perfectly identified from the I/O pair (u, T) as

(P h)(t) = Σ_{l=−L}^{L} h_l e_l(t),

where h = [h_{−L}, h_{−L+1}, . . . , h_L]^T = Φ⁺q, q = [q_1, q_2, . . . , q_{n−1}]^T, and Φ⁺ denotes the pseudoinverse of Φ. The matrix Φ is of size (n − 1) × (2L + 1) and its elements are given by [Φ]_{kl} = L_k(e_l).

Proof. Since P h ∈ H, it can be written as (P h)(t) = Σ_{l=−L}^{L} h_l e_l(t). Then from (8) we have q_k = ⟨P h, φ_k⟩ = Σ_{l=−L}^{L} h_l ⟨e_l, φ_k⟩ = Σ_{l=−L}^{L} h_l L_k(e_l). Writing (11) for all k = 1, 2, . . . , n − 1, we obtain q = Φh. This system of linear equations can be solved for h, provided that the rank r(Φ) of the matrix Φ satisfies r(Φ) = 2L + 1. A necessary condition for the latter is that the number of measurements q_k is at least 2L + 1, or, equivalently, that the number of spikes n ≥ 2L + 2. Under this condition, the solution can be computed as h = Φ⁺q.
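The identification procedure of Lemma 9 can be sketched end-to-end in NumPy. The sketch below uses hypothetical circuit parameters and an assumed causal filter kernel (stand-ins, not the values of the paper's figures): a random test signal u ∈ H is filtered, encoded by an ideal IAF neuron, the matrix Φ is assembled from the spike times, and the coefficients of P h are recovered by a pseudoinverse. It exploits the identity (u * e_l)(t) = √T u_l e_l(t), which follows from the orthonormality of the basis under periodic convolution.

```python
import numpy as np

# SISO CIM sketch (Lemma 9) with assumed, illustrative parameters.
Om = 2 * np.pi * 25.0              # bandwidth Omega [rad/s]
L = 5                              # order of the space H
T = 2 * np.pi * L / Om             # period T = 2*pi*L/Omega
N = 20000
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

def e(l, tt):                      # orthonormal basis e_l(t) = exp(j*l*Om*t/L)/sqrt(T)
    return np.exp(1j * l * Om * tt / L) / np.sqrt(T)

rng = np.random.default_rng(0)
ls = np.arange(-L, L + 1)
c = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)
u = sum((c[i] * e(l, t)).real for i, l in enumerate(ls))    # real test signal in H

S = 0.1                            # hypothetical causal filter, support [0, S]
h = np.where(t <= S, 3.0 * np.exp(-200 * t) * (200 * t) ** 3, 0.0)
v = np.fft.ifft(np.fft.fft(u) * np.fft.fft(h)).real * dt    # v = u*h (u is T-periodic)

# Ideal IAF neuron: integrate (v + b)/C up to threshold delta, then reset.
b, C, delta = 2.0, 1.0, 0.003
tk, y = [], 0.0
for i in range(N):
    y += (v[i] + b) * dt / C
    if y >= delta:
        tk.append(t[i] - (y - delta) * C / (v[i] + b))      # interpolate crossing time
        y -= delta
tk = np.asarray(tk)
n = len(tk)                                                 # need n >= 2L + 2

q = C * delta - b * np.diff(tk)                             # t-transform measurements

# [Phi]_{kl} = int_{t_k}^{t_{k+1}} (u * e_l)(s) ds, with (u * e_l)(t) = sqrt(T) u_l e_l(t).
ul = np.array([np.sum(u * np.conj(e(l, t))) * dt for l in ls])   # coefficients u_l
Phi = np.empty((n - 1, 2 * L + 1), dtype=complex)
for j, l in enumerate(ls):
    if l == 0:
        Phi[:, j] = ul[j] * np.diff(tk)
    else:
        w = 1j * l * Om / L
        Phi[:, j] = ul[j] * (np.exp(w * tk[1:]) - np.exp(w * tk[:-1])) / w

h_hat = np.linalg.pinv(Phi) @ q                             # coefficients of P h*
h_l = np.array([np.sum(h * np.conj(e(l, t))) * dt for l in ls])  # true <h, e_l>

Ph_star = sum(h_hat[i] * e(l, t) for i, l in enumerate(ls)).real
Ph = sum(h_l[i] * e(l, t) for i, l in enumerate(ls)).real
mse_db = 10 * np.log10(np.mean((Ph_star - Ph) ** 2) / np.mean(Ph ** 2))
print(n, mse_db)                   # expect n well above 2L+2 and a small MSE
```

With the bias chosen here the neuron fires densely, so a single I/O pair supplies far more than the 2L + 1 measurements required; the residual MSE between P h* and P h is limited only by the discretization of the simulation.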

Remark 10. If the signal u is fed directly into the neuron, then h(t) = δ(t) and P h = δ * K(·, 0) = K(·, 0). In other words, if there is no processing on the input signal u, then the kernel K(t, 0) in H is identified as the filter projection. This is also illustrated in Figure 7.
In order to ensure that the neuron produces at least 2L + 1 measurements (n ≥ 2L + 2 spikes) in a time interval of length T, it suffices to choose Cδ ≤ (b − c)T/(2L + 2), where c = max_{t∈[0,T]} |v(t)|. Using the definition of T = 2πL/Ω and taking the limit as L → ∞, we obtain the familiar Nyquist-type criterion Cδ < π(b − c)/Ω for a bandlimited stimulus u ∈ Ξ [4, 20] (see also Section 5.1).
Ideally, we would like to identify the impulse response h of the filter. Note that, unlike h, the projection P h belongs to the space of input signals H. Nevertheless, under quite natural conditions on h (see Section 3.4), P h approximates h arbitrarily closely on t ∈ [0, T], provided that both the bandwidth and the order of the signal u are sufficiently large (see also Figure 9).
The requirement of Lemma 9 that the number of spikes n produced by the system in Figure 2(a) has to satisfy n ≥ 2L + 2 is quite stringent and may be hard to meet in practice, especially if the order L of the space H is high. In that case we have the following result.

Theorem 11 (SISO CIM). Let {u^i | u^i ∈ H}_{i=1}^{N} be a collection of N linearly independent stimuli at the input to a [Filter]-[Ideal IAF] circuit with h ∈ H. If the total number of spikes n = Σ_{i=1}^{N} n_i generated by the neuron satisfies n ≥ 2L + N + 1, then the filter projection P h can be perfectly identified from the collection of I/O pairs {(u^i, T^i)}_{i=1}^{N} as

(P h)(t) = Σ_{l=−L}^{L} h_l e_l(t),

where h = [h_{−L}, . . . , h_L]^T = Φ⁺q, Φ = [Φ^1; Φ^2; . . . ; Φ^N], q = [q^1; q^2; . . . ; q^N], and the elements of each matrix Φ^i are given by [Φ^i]_{kl} = φ_{l,k}^i = L_k^i(e_l), for all k = 1, 2, . . . , n_i − 1, l = −L, −L + 1, . . . , L, and i = 1, 2, . . . , N.

Proof. Since P h ∈ H, it can be written as (P h)(t) = Σ_{l=−L}^{L} h_l e_l(t). Furthermore, since the stimuli are linearly independent, the measurements (q_k^i)_{k=1}^{n_i−1} provided by the IAF neuron are distinct. Writing (5) for each stimulus u^i, we obtain q^i = Φ^i h and, stacking over all stimuli, q = Φh with Φ = [Φ^1; Φ^2; . . . ; Φ^N] and q = [q^1; q^2; . . . ; q^N]. This system of linear equations can be solved for h, provided that the rank r(Φ) of the matrix Φ satisfies r(Φ) = 2L + 1. A necessary condition for the latter is that the total number n = Σ_{i=1}^{N} n_i of spikes generated in response to all N signals satisfies n ≥ 2L + N + 1. Then, the solution can be computed as h = Φ⁺q.
To find the coefficients φ_{l,k}^i, we note that φ_{l,k}^i = L_k^i(e_l) (see Lemma 8). Hence, the result follows.

The time-encoding interpretation of the channel identification problem for the SISO circuit is shown in Figure 4(a). The block diagram of the SISO CIM in Theorem 11 is shown in Figure 4(b). Note that the key idea behind the SISO CIM is the introduction of multiple linearly independent test signals u^i ∈ H, i = 1, 2, . . . , N. When the [Filter]-[Ideal IAF] circuit produces very few measurements of P h in response to any given test signal u^i, we use more signals to obtain additional measurements. We can do so and identify P h because P h ∈ H is fixed. In contrast, identifying P h in a two-step deconvolving procedure requires reconstructing at least one v^i. This is an ill-posed problem since each v^i is signal-dependent and has a small number of associated measurements.

3.3. Examples.
We now demonstrate the performance of the identification algorithms in Lemma 9 and Theorem 11. First, we identify a filter in the SISO [Filter]-[Ideal IAF] circuit (Figure 2(a)) from a single I/O pair when this circuit produces a sufficient number of measurements in an interval of length T. Second, we identify the filter using multiple I/O pairs for the case when the number of measurements produced in response to any given input signal is small. Finally, we consider the SISO [Filter]-[Nonlinear Oscillator-ZCD] circuit with multiplicative coupling (Figure 2(b)) and identify its filter from multiple I/O pairs.

SISO [Filter]-[Ideal IAF] Circuit, Single I/O Pair.
We model the dendritic processing filter using the causal linear kernel

h(t) = c e^{−αt} ((αt)³/3! − (αt)⁵/5!),  t ∈ [0, 0.1] s,

with c = 3 and α = 200. The general form of this kernel was suggested in [24] as a plausible approximation to the temporal structure of a visual receptive field. Since the length of the filter support is S = 0.1 s, we need to use a signal with a period T ≥ 0.1 s. In Figure 5(a), we apply a signal u that is bandlimited to 25 Hz and has a period of T = 0.2 s, that is, the order of the space is L = T · Ω/(2π) = 5. The biased output of the filter v = (u * h) + b is then fed into an ideal integrate-and-fire neuron (Figure 5(b)). Here the bias b guarantees that the output of the integrator reaches the threshold value in finite time. Whenever the biased filter output is above zero (Figure 5(b)), the membrane potential is increasing (Figure 5(c)). If the membrane potential ∫_{t_k}^{t} [(u * h)(s) + b] ds reaches the threshold δ, a spike is generated by the neuron at a time t_{k+1} and the potential is reset to zero (Figure 5(c)). The resulting spike train (t_k)_{k=1}^{n} at the output of the [Filter]-[Ideal IAF] circuit is shown in Figure 5(d). Note that the circuit generated a total of n = 13 spikes in an interval of length T = 0.2 s. According to Lemma 9, we need at least n = 2L + 2 = 12 spikes, corresponding to 2L + 1 = 11 measurements, in order to identify the projection P h of the filter h loss-free. Hence, for this particular example, it suffices to use a single I/O pair (u, T).
In Figure 5(e), we plot the original impulse response h of the filter, the filter projection P h, and the identified filter P h*. The latter was identified using the algorithm in Lemma 9. Notice that the identified impulse response P h* (red) is quite different from h (dashed black). In contrast, and as expected, the blue and red curves corresponding, respectively, to P h and P h* are indistinguishable. The mean squared error (MSE) between P h* and P h amounts to −77.5 dB.
The difference between P h and h is further evaluated in Figures 5(f)-5(h). By the definition of P h in (6), P h = h * K(·, 0), or F(P h) = F(h) F(K(·, 0)), since K(s, t) = K(s − t, 0). Hence both the projection P h and the identified filter P h* contain only those frequencies that are present in the reproducing kernel K, or equivalently in the input signal u. In Figure 5(f) we show the double-sided Fourier amplitude spectrum of K(t, 0). As expected, we see that the kernel is bandlimited to 25 Hz and contains 2L + 1 = 11 distinct frequencies.
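The claim that K(t, 0) contains exactly 2L + 1 spectral lines can be checked directly. The sketch below builds the Dirichlet kernel for the illustrative parameters Ω = 2π · 25 rad/s and L = 5 and counts the nonzero bins of its DFT.

```python
import numpy as np

# Spectral content of the Dirichlet kernel K(t, 0): exactly 2L+1 lines
# at the frequencies l*Om/L, l = -L, ..., L (illustrative parameters).
Om, L = 2 * np.pi * 25.0, 5
T = 2 * np.pi * L / Om
N = 20000
t = np.linspace(0.0, T, N, endpoint=False)

ls = np.arange(-L, L + 1)
K0 = sum(np.exp(1j * l * Om * t / L) / T for l in ls).real   # K(t,0) = sum_l e_l(t) e_l*(0)
X = np.fft.fft(K0) / N                                       # DFT bin m <-> frequency m/T Hz
mag = np.abs(X)
nonzero = int(np.sum(mag > 1e-9 * mag.max()))
print(nonzero)                                               # 2L + 1 = 11 spectral lines
```

Each line has the same amplitude 1/T, which is why the kernel's double-sided amplitude spectrum in Figure 5(f) is flat over [−Ω, Ω].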
On the other hand, as shown in Figure 5(g), the original filter h is not bandlimited (since it has a finite temporal support). As a result, the input signal u explores h in a limited spectrum of [−Ω, Ω] rad/s, effectively projecting h onto the space H with Ω = 2π · 25 rad/s and L = 5. The Fourier amplitude spectrum of the identified projection P h * is shown in Figure 5(h).

SISO [Filter]-[Ideal IAF] Circuit, Multiple I/O Pairs.
Next, we identify the projection of h onto the space of functions that are bandlimited to 100 Hz and have the same period T = 0.2 s as in the first example. This means that the order of the space of input signals H is L = T · Ω/(2π) = 20. In order to identify the projection P h loss-free, the neuron has to generate at least 2L + 1 = 41 measurements. If the neuron produces about 13 spikes (12 measurements) on an interval of length T, as in the previous example, a single I/O pair will not suffice. However, we can still recover the projection P h if we use multiple I/O pairs.
In Figure 6 we illustrate identification of the filter using the algorithm in Theorem 11. A total of 48 spikes were produced by the neuron in response to four different signals u^1, . . . , u^4. Since 48 > 2L + N + 1 = 45, the projection can be identified loss-free; the MSE between the identified filter P h* (red) and the projection P h (blue) is −73.3 dB.

SISO [Filter]-[Ideal IAF] Circuit, h(t) = δ(t).
Now we consider the special case in which the channel does not alter the input signal, that is, when h(t) = δ(t), t ∈ R, is the Dirac delta function. As explained in Remark 10, the CIM should identify the projection of δ(t) onto H, that is, it should identify the kernel K(t, 0). This is indeed the case, as shown in Figure 7.

SISO [Filter]-[Nonlinear Oscillator-ZCD] Circuit, Multiple I/O Pairs.
Next we consider a SISO circuit consisting of a channel in cascade with a nonlinear dynamical system that has a stable limit cycle. We assume that the (positive) output of the channel v(t) + b is multiplicatively coupled into the dynamical system (Figure 2(b)), so that the circuit is governed by a set of equations

dy/dt = f(y) [v(t) + b],

where y denotes the state of the oscillator and f its vector field. A system (16) followed by a zero-crossing detector is an example of a TEM with multiplicative coupling and has been previously investigated in [6]. It can be shown that such a TEM is input/output-equivalent to an IAF neuron with a threshold δ equal to the period of the dynamical system on its stable limit cycle [6].
As an example, we consider a [Filter]-[van der Pol-ZCD] TEM with the van der Pol oscillator described, in accordance with (16), by the set of equations

dy_1/dt = [v(t) + b] μ (y_1 − y_1³/3 − y_2),
dy_2/dt = [v(t) + b] y_1/μ,

where μ is the damping coefficient [6]. We assume that y_1 is the only observable state of the oscillator and, without loss of generality, we choose the zero phase of the limit cycle to be the peak of y_1.
[Figure 7: K(t, 0) for H with Ω = 2π · 10 rad/s and L = 10. Also shown are the original filter h = δ (dashed black) and its projection P h = δ * K(·, 0) = K(·, 0) (blue). The MSE between P h* and P h is −87.6 dB. Panels (f)-(h) show the Fourier amplitude spectra of K, h and P h*. As before, P h* ∈ H but h ∉ H.]

In Figure 8, we show the results of a simulation in which a SISO CIM was used to identify the channel. Input signals (Figure 8(a)) were bandlimited to 50 Hz and had a period T = 0.5 s, that is, L = 25. In the absence of an input, that is, when u = 0, a constant bias b = 1 (Figure 8(b)) resulted in a period of 34.7 ms on the stable limit cycle (Figure 8(e)). As seen in Figures 8(b) and 8(c), downward/upward deviations of v^1(t) + b in response to u^1 resulted in the slowing-down/speeding-up of the oscillator. In order to identify the filter projection onto a space of order L = 25 loss-free, we used a total of n = 56 zeros at the output of the zero-crossing detector (Figure 8(d)). This is one more zero than the rank requirement of 2L + N + 1 = 55 zeros, or, equivalently, 2L + 1 = 51 measurements. The MSE between the identified filter P h* (red) and the projection P h (blue) is −66.6 dB.

3.4. Convergence of the SISO CIM Estimate.
Recall that the original problem of interest is that of recovering the impulse response h of the filter. The CIM lets us identify the projection P h of that filter onto the input space. A natural question to ask is whether P h converges to h and, if so, how and under what conditions. We formalize this below.

Proposition 12. If h ∈ L^p([0, T]), 1 < p < ∞, then P h → h as L → ∞ (with T = 2πL/Ω held fixed) in the L^p norm and almost everywhere by Hunt's theorem [23].

Proof. By the definition of the projection operator P in (6), we have

(P h)(t) = (S_L h)(t) = Σ_{l=−L}^{L} ĥ(l) e_l(t),

where S_L h is the Lth partial sum of the Fourier series of h and ĥ(l) is the lth Fourier coefficient. Hence the problem of convergence of P h to h is the same as that of the convergence of the Fourier series of h. For p = 2 we thus have convergence in the L² norm, and convergence almost everywhere follows from Carleson's theorem [23]; the general case 1 < p < ∞ follows from Hunt's theorem [23].
It follows from Proposition 12 that P h approximates h arbitrarily closely (in the L 2 norm, or MSE sense), given an appropriate choice of Ω and L. Since the number of measurements needed to identify the projection P h increases linearly with L, a single channel identification problem leads us to consider a countably infinite number of time encoding problems in order to identify the impulse response of the filter with arbitrary precision. To provide further intuition about the relationship between h and P h, we compare the two in time and frequency domains for multiple values of Ω and L in Figure 9.
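The convergence statement can be illustrated numerically. The sketch below uses a hypothetical causal kernel (an assumed stand-in) on a fixed interval [0, T] and computes the normalized MSE of the partial Fourier sum S_L h, which equals P h on [0, T], for increasing order L.

```python
import numpy as np

# Convergence of P h to h: on [0, T], P h is the L-th partial sum of the
# Fourier series of h, so the error decreases as L (and Om = 2*pi*L/T) grows.
# The filter kernel below is hypothetical, for illustration only.
T, N = 0.2, 8000
t = np.linspace(0.0, T, N, endpoint=False)
h = np.where(t <= 0.1, 3.0 * np.exp(-200 * t) * (200 * t) ** 3, 0.0)

def proj_mse_db(L):
    ls = np.arange(-L, L + 1)
    E = np.exp(2j * np.pi * np.outer(ls, t) / T) / np.sqrt(T)   # rows are e_l(t)
    hl = (E.conj() * h).sum(axis=1) * (T / N)                   # coefficients <h, e_l>
    Ph = (hl[:, None] * E).sum(axis=0).real                     # P h = S_L h
    return 10 * np.log10(np.mean((Ph - h) ** 2) / np.mean(h ** 2))

errs = [proj_mse_db(L) for L in (5, 20, 80)]
print(errs)     # monotonically decreasing normalized MSE (in dB)
```

Because P h is an orthogonal projection onto nested spaces, the squared error is guaranteed to be nonincreasing in L; for this kernel it decreases strictly, mirroring the behavior shown in Figure 9.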

MISO Channel Identification Machines
In this section we consider the identification of a bank of M filters with impulse responses h_m, m = 1, 2, . . . , M. We present a MISO CIM algorithm in Section 4.1, followed by an example demonstrating its performance in Section 4.2.

4.1. An Identification Algorithm for MISO Channels.
Consider now the MISO ASDM-based circuit in Figure 2(c). This circuit is also an instance of a TEM and (assuming z(t_1) = b) its t-transform is given by

∫_{t_k}^{t_{k+1}} v(s) ds = q_k,

where q_k = (−1)^k [2Cδ − b(t_{k+1} − t_k)]. One simple way to identify the filters h_m, m = 1, 2, . . . , M, is to identify them one by one as in Theorem 11. For instance, this can be achieved by applying signals of the form u = [0, . . . , 0, u_m, 0, . . . , 0] when identifying the filter h_m. In a number of applications, most notably in early olfaction [25], this mode of system identification cannot be applied. An alternative procedure that allows one to identify all the filters at once is given below.

Theorem 14 (MISO CIM). Let {u^i = [u^{i1}, u^{i2}, . . . , u^{iM}]^T}_{i=1}^{N} be a collection of N linearly independent vector-valued signals at the input of a MISO [Filter]-[ASDM-ZCD] circuit with filters h_m ∈ H, m = 1, 2, . . . , M. If the total number of trigger times n = Σ_{i=1}^{N} n_i generated by the circuit satisfies n ≥ M(2L + 1) + N, then the filter projections P h_m, m = 1, 2, . . . , M, can be perfectly identified as

(P h_m)(t) = Σ_{l=−L}^{L} h_l^m e_l(t),

where h = [h^1; h^2; . . . ; h^M] = Φ⁺q, h^m = [h_{−L}^m, . . . , h_L^m]^T, Φ = [Φ^1; Φ^2; . . . ; Φ^N], q = [q^1; q^2; . . . ; q^N], and u_l^i = [u_l^{i1}, u_l^{i2}, . . . , u_l^{iM}], i = 1, 2, . . . , N. Finally, the elements of each matrix Φ^i are given by [Φ^i]_{k,(m,l)} = ∫_{t_k^i}^{t_{k+1}^i} (u^{im} * e_l)(s) ds, with columns indexed by the pairs (m, l).

Proof. Since each P h_m ∈ H, it can be written as (P h_m)(t) = Σ_{l=−L}^{L} h_l^m e_l(t). Using the definition of φ_k^i = Σ_{l=−L}^{L} φ_{l,k}^i e_l(t) and substituting (23) into the t-transform (19), we obtain q^i = Φ^i h. Repeating for all stimuli u^i, i = 1, . . . , N, we obtain q = Φh with Φ as specified in (21). This system of linear equations can be solved for h, provided that the rank of Φ satisfies the condition r(Φ) = M(2L + 1). To find the coefficients φ_{l,k}^i, we note that φ_{l,k}^i = L_k^i(e_l). Hence, the result follows.
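The linear-algebraic core of the MISO identification step can be sketched as follows. Note the idealizations: the trigger times are drawn at random rather than generated by an ASDM, and the measurements q are computed directly from the t-transform integrals; all parameters are illustrative. The sketch also makes the N ≥ M requirement tangible: within a single trial, the columns of Φ^i that correspond to different filters but the same basis index l are collinear, so multiple independent stimuli are needed to reach full rank.

```python
import numpy as np

# MISO CIM linear algebra (cf. Theorem 14) with synthetic, idealized data.
rng = np.random.default_rng(3)
M, L, N = 3, 5, 4                    # filters, space order, test stimuli (N >= M)
Om = 2 * np.pi * 25.0
T = 2 * np.pi * L / Om
ls = np.arange(-L, L + 1)

def basis_integrals(tk):
    """Integrals int_{t_k}^{t_{k+1}} e_l(s) ds for all l (up to the 1/sqrt(T) factor)."""
    out = np.empty((len(tk) - 1, 2 * L + 1), dtype=complex)
    for j, l in enumerate(ls):
        if l == 0:
            out[:, j] = np.diff(tk)
        else:
            w = 1j * l * Om / L
            out[:, j] = (np.exp(w * tk[1:]) - np.exp(w * tk[:-1])) / w
    return out

# Coefficients of the M filter projections (the unknown h) and of the N*M inputs.
h_true = rng.standard_normal(M * (2 * L + 1)) + 1j * rng.standard_normal(M * (2 * L + 1))
U = rng.standard_normal((N, M, 2 * L + 1)) + 1j * rng.standard_normal((N, M, 2 * L + 1))

blocks, qs = [], []
for i in range(N):
    tk = np.sort(rng.uniform(0.0, T, 25))     # 25 synthetic trigger times in trial i
    E = basis_integrals(tk)                   # shape (24, 2L+1)
    # Phi^i[k, (m,l)] = sqrt(T) * u_l^{im} * int e_l, since (u*e_l)(t) = sqrt(T) u_l e_l(t)
    Phi_i = np.hstack([np.sqrt(T) * U[i, m] * E for m in range(M)])
    blocks.append(Phi_i)
    qs.append(Phi_i @ h_true)                 # idealized (noiseless) measurements

Phi = np.vstack(blocks)                       # stack the N trials: q = Phi h
q = np.concatenate(qs)
h_hat = np.linalg.pinv(Phi) @ q
print(np.max(np.abs(h_hat - h_true)))         # near machine precision: perfect recovery
```

With N = 4 ≥ M = 3 linearly independent stimuli, Φ generically has full column rank M(2L + 1) and the pseudoinverse recovers all 3 × 11 coefficients exactly; with N < M it cannot.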
The MIMO time-encoding interpretation of the channel identification problem for a MISO [Filter]-[ASDM-ZCD] circuit is shown in Figure 10(a). The block diagram of the MISO CIM in Theorem 14 is shown in Figure 10(b). In order to identify the multidimensional channel, this system of equations must have a solution for every l. A necessary condition for the latter is that N ≥ M, that is, that the number N of test signals u^i is at least the number M of signal components.

4.2. Example: Identifying a MISO Channel.
We now identify the filters of the MISO [Filter]-[ASDM-ZCD] circuit of Figure 2(c) with M = 3. The three filters h^1, h^2, h^3 are derived from the causal kernel of Section 3.3, with t ∈ [0, 0.1] s, c = 3, α = 200, and a temporal shift parameter β = 20 ms. All N = 5 signals are bandlimited to 100 Hz and have a period of T = 0.2 s, that is, the order of the space is L = 20. According to Theorem 14, the ASDM has to generate a total of at least M(2L + 1) + N = 128 trigger times in order to identify the projections P h^1, P h^2 and P h^3 loss-free. We use all five triplets u^i = [u^{i1}, u^{i2}, u^{i3}], i = 1, . . . , 5, to produce 131 trigger times. A single such triplet u^1 is shown in Figure 11(a). The corresponding biased aggregate channel output v^1(t) − z^1(t) is shown in Figure 11(b). Since the Schmitt trigger output z(t) switches between +b and −b (Figure 11(d)), the signal v^1(t) − z^1(t) is piecewise continuous. Figure 11(c) shows the integrator output. Note that when z(t) = −b, the channel output is positively biased and the integrator output ∫_{t_k}^{t} [v^1(s) − z(s)] ds is compared against the threshold +δ. As soon as that threshold is reached, the Schmitt trigger output switches to z(t) = b and the negatively-biased channel output is compared to the threshold −δ. Passing the ASDM output z^1(t) through a zero-crossing device (Figure 11(d)), we obtain a corresponding sequence of trigger times (t_k^1)_{k=1}^{22}. The set of all 131 trigger times is shown in Figure 11(e). The three identified filters P h^1*, P h^2* and P h^3* are plotted in Figures 11(f)-11(h). The MSE between the filter projections and the filters recovered by the algorithm in Theorem 14 is on the order of −60 dB.

Generalizations
We shall briefly generalize the results presented in the previous sections in two important directions. First, we consider a general class of signal spaces for test signals in Section 5.1. Then we discuss channel models with noisy observations in Section 5.2.

Hilbert Spaces and RKHSs for Input Signals.
Until now we have presented channel identification results for a particular space of input signals, namely the space of trigonometric polynomials. The finite dimensionality of this space and the simplicity of the associated inner product make it an attractive space to work with when implementing a SISO or a MISO CIM algorithm. However, fundamentally the identification methodology relied on the geometry of the Hilbert space of test signals [5,26]; computational tractability was based on kernel representations in an RKHS.

Let {u^i}_{i=1}^{N} be a collection of N linearly independent and bounded stimuli at the input of a [Filter]-[Asynchronous Sampler] circuit with a linear processing filter h ∈ H and the t-transform q^i_k = L^i_k(Ph),
where L^i_k : H → R is a bounded linear functional mapping Ph into a measurement q^i_k. Then there is a set of sampling functions {(φ^i_k)_{k∈Z}}^N_{i=1}, φ^i_k ∈ H, such that q^i_k = ⟨Ph, φ^i_k⟩ for all k ∈ Z, i = 1, 2, . . . , N. Furthermore, if H is an RKHS with a kernel K(s, t), s, t ∈ I, then φ^i_k(t) = L^i_k(K(·, t)). Let the set of representation functions {(ψ^i_k)_{k∈Z}}^N_{i=1} span the Hilbert space H. If these functions form an orthogonal basis or a frame for H, then the filter coefficients amount to h = Φ⁺q, where Φ⁺ denotes the Moore-Penrose pseudoinverse.

Proof. By the Riesz representation theorem, since the linear functional L^i_k : H → R is bounded, there is a set of sampling functions φ^i_k ∈ H such that L^i_k(Ph) = ⟨Ph, φ^i_k⟩. If H is an RKHS, a sampling function φ^i_k can be computed using the reproducing property of the kernel K as φ^i_k(t) = ⟨φ^i_k, K(·, t)⟩ = L^i_k(K(·, t)). Finally, writing out all inner products ⟨Ph, φ^i_k⟩ = q^i_k yields, with the notation above, a system of linear equations Φh = q, and the filter coefficients amount to h = Φ⁺q.
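The sampling functions can be computed explicitly once the sampler is fixed. For an ideal IAF sampler, for example, the functional is L_k(u) = ∫_{t_k}^{t_{k+1}} u(s) ds, so the reproducing property gives φ_k(t) = L_k(K(·, t)) = ∫_{t_k}^{t_{k+1}} K(s, t) ds. The identity ⟨u, φ_k⟩ = L_k(u) can be checked numerically with the Dirichlet kernel of the space of trigonometric polynomials (all parameter values and the interval [t_k, t_{k+1}] are illustrative assumptions):

```python
import numpy as np

L_ord, T = 5, 0.2                    # order and period of the space (illustrative)
Om = 2 * np.pi * L_ord / T           # bandwidth Omega = 2*pi*L/T

def trap(f, x):
    # simple trapezoid rule on a sampled grid
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

def K(s, t):
    # Dirichlet kernel of the space of trigonometric polynomials
    l = np.arange(-L_ord, L_ord + 1)
    return np.real(np.exp(1j * np.multiply.outer(s - t, l) * Om / L_ord).sum(-1)) / T

t = np.linspace(0.0, T, 4001)
t0, t1 = 0.02, 0.05                  # one inter-trigger interval of an ideal IAF
mask = (t >= t0) & (t <= t1)

u = K(t, 0.03)                       # test signal u = K(., 0.03), an element of H

# Sampling function phi_k(t) = L_k(K(., t)) = int_{t0}^{t1} K(s, t) ds
phi_k = np.array([trap(K(t[mask], ti), t[mask]) for ti in t])

lhs = trap(u * phi_k, t)             # <u, phi_k> in H
rhs = trap(u[mask], t[mask])         # L_k(u) = int_{t0}^{t1} u(s) ds
print(np.isclose(lhs, rhs, atol=1e-6))
```

The agreement illustrates that the t-transform measurement q_k is exactly an inner product of the filter projection with a computable element of H.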

Example: Paley-Wiener Space.
As an example, we consider the Paley-Wiener space, which is closely related to the space of trigonometric polynomials. Specifically, the finite-dimensional space H can be thought of as a discretized version of the infinite-dimensional Paley-Wiener space in the frequency domain. An element u ∈ H has a line spectrum at frequencies lΩ/L, l = −L, −L + 1, . . . , L. This spectrum becomes dense in [−Ω, Ω] as L → ∞. The Paley-Wiener space Ξ, endowed with the inner product ⟨·, ·⟩ : Ξ × Ξ → R given by ⟨u, w⟩ = ∫_R u(t)w(t) dt, is an RKHS with the sinc kernel K(s, t) = sin(Ω(t − s))/(π(t − s)).

Channel Identification with Noisy Observations.

If either the channel or the sampler introduces an error, we can model it by adding a noise term ε_k to the t-transform [9]: ⟨Ph, φ_k⟩ = q_k + ε_k.
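As a brief numerical aside to the Paley-Wiener example above, the limit L → ∞ can be visualized directly: with Ω held fixed, the Dirichlet kernel of the trigonometric-polynomial space converges pointwise to the sinc kernel of the Paley-Wiener space. A small sketch (the bandwidth value is an illustrative assumption):

```python
import numpy as np

Om = 2 * np.pi * 25                  # fixed bandwidth Omega (illustrative)

def dirichlet(tau, L):
    # Dirichlet kernel of the trigonometric-polynomial space of order L,
    # evaluated at tau = s - t, with period T = 2*pi*L/Om
    l = np.arange(-L, L + 1)
    T = 2 * np.pi * L / Om
    return np.real(np.exp(1j * np.outer(tau, l) * Om / L).sum(1)) / T

def sinc_kernel(tau):
    # reproducing kernel of the Paley-Wiener space: sin(Om*tau)/(pi*tau)
    return Om / np.pi * np.sinc(Om * tau / np.pi)

tau = np.linspace(-0.05, 0.05, 501)
errs = [np.max(np.abs(dirichlet(tau, L) - sinc_kernel(tau))) for L in (5, 20, 100)]
print(errs)  # maximum deviation shrinks as the order L grows
```

The shrinking deviation mirrors the line spectrum becoming dense in [−Ω, Ω] as L → ∞.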
In the presence of noise it is not possible to identify the projection Ph loss-free. However, we can still identify an estimate P̂h of Ph that is optimal for an appropriately defined cost function. For example, we can formulate a bicriterion Tikhonov regularization problem in which the scalar λ > 0 provides a trade-off between the faithfulness of the identified filter projection P̂h to the measurements (q_k)_{k=1}^{n−1} and its norm ‖P̂h‖_H.

Proof. Since the minimizer P̂h is in H, it is of the form given in (38). Substituting this into (37) yields a finite-dimensional problem whose solution gives the coefficients of P̂h.

Remark 20. In Section 3.2, identification of the projection (Ph)(t) = Σ_{l=−L}^{L} h_l e_l(t) amounted to finding Ph ∈ H such that the sum of the residuals (⟨Ph, φ_k⟩ − q_k)² was minimized [9]. In other words, we were solving an unconstrained convex optimization problem of the form minimize_h ‖Φh − q‖²_2, where h = [h_{−L}, . . . , h_L] and Φ = [Φ¹; Φ²; . . . ; Φ^N] with Φ^i, i = 1, 2, . . . , N, as defined in (13).
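With an orthonormal basis, so that ‖Ph‖²_H = ‖h‖²₂, the regularized problem reduces to ridge regression in the coefficients: minimize ‖Φh − q‖²₂ + λ‖h‖²₂, solved by h = (ΦᵀΦ + λI)⁻¹Φᵀq. A sketch with synthetic data (all dimensions and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 80, 33                          # measurements and coefficients (illustrative)
Phi = rng.standard_normal((n, m))      # stand-in measurement matrix
h_true = rng.standard_normal(m)
q = Phi @ h_true + 0.5 * rng.standard_normal(n)   # noisy measurements q_k + eps_k

def tikhonov(Phi, q, lam):
    # minimize ||Phi h - q||^2 + lam * ||h||^2  (Tikhonov / ridge regularization)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ q)

norms = [np.linalg.norm(tikhonov(Phi, q, lam)) for lam in (0.0, 1.0, 10.0)]
print(norms)  # larger lambda trades fidelity to q for a smaller norm of the estimate
```

Setting λ = 0 recovers the unregularized least-squares problem of Remark 20; increasing λ shrinks the norm of the estimate, which is the trade-off the bicriterion formulation makes explicit.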

Example: Noisy SISO [Filter]-[Ideal IAF] Circuit.
In the following example, we assume that noise is added to the measurements (q^i_k)_{k=1}^{n−1}, i = 1, 2, by the neuron, and we model that noise by introducing random thresholds that are normally distributed with mean δ and standard deviation 0.1δ, that is, δ_k ∼ N(δ, (0.1δ)²). Thus random thresholds result in additive noise ε^i_k ∼ N(0, (0.1Cδ)²), i = 1, 2. In Figure 13(a) we show the two stimuli that were used to probe the [Filter]-[Ideal IAF] circuit. Both stimuli are bandlimited to 25 Hz and have a period of T = 0.2 s, that is, the order of the space is L = 5. The response of the neuron to the biased filter output v¹(t) + b (Figure 13(b)) is shown in Figure 13(c). Note the significant deviations of the thresholds δ_k around the mean value δ = 0.05. Although a significant amount of noise is introduced into the system, we can identify an optimal estimate Ph* that is still quite close to the true projection Ph. The MSE of identification is −31.8 dB.
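The equivalence between threshold jitter and additive measurement noise can be illustrated with a quick Monte Carlo sketch (C = 1 and the sample size are illustrative assumptions; only the noise term of the t-transform is simulated, not the full circuit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random thresholds delta_k ~ N(delta, (0.1*delta)^2) of an ideal IAF neuron.
# In the t-transform <P h, phi_k> = q_k + eps_k, the jitter appears as
# additive noise eps_k = C * (delta_k - delta) ~ N(0, (0.1*C*delta)^2).
C, delta = 1.0, 0.05
delta_k = rng.normal(delta, 0.1 * delta, 100_000)
eps = C * (delta_k - delta)

print(eps.mean())                    # close to 0
print(eps.std(), 0.1 * C * delta)    # empirical std close to 0.1*C*delta
```

The empirical statistics match the additive-noise model used in the Tikhonov formulation above.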

Conclusion
In this paper we presented a class of channel identification problems arising in the context of communication channels in [Filter]-[Asynchronous Sampler] circuits. Our results are based on a key structural result: a conditional duality between time decoding and channel identification. This duality shows that, given a class of test signals, the projection of the filter onto the space of input signals can be recovered loss-free; moreover, the channel identification problem can be converted into a time decoding problem. We considered a number of channel identification problems that arise both in communications and in neuroscience, presented CIM algorithms that recover projections of both one-dimensional and multidimensional filters, and demonstrated their performance through numerical simulations. Furthermore, we showed that, under natural conditions on the impulse response of the filter, the filter projection converges to the original filter almost everywhere and in the mean-squared sense (L^2 norm) with increasing bandwidth and order of the space. Thus, in order to identify the impulse response of the filter with arbitrary precision, we are led to consider a countably infinite number of time encoding problems. Finally, we generalized our results to a large class of test signal spaces and to channel models with noisy observations.