Tuesday, June 17, 2008

Spiking Neural Model

For detailed PDF document please mail to sekharstuff@gmail.com
Formal spiking neuron models
The second stream of research in computational neuroscience is oriented towards modeling the spiking nature of neurons, retaining the essential elements of the behavior being modeled while trying to simplify the complexity of the resulting description (Gerstner, 1991; Maass, 1995; Maass, 1997; Rieke et al., 1997). The principal motivation for the creation of simplified models is that they allow studying more easily the computational and functional principles of neural systems (Koch, 1999).
The reduction of the detailed neuron models to formal models requires simplifications in at least two respects. First, the non–linear dynamics of spike generation must be reduced to a single ordinary differential equation and, second, the spatial structure of the neuron (i.e., the dendritic tree) is neglected and reduced to an input (Gerstner and Kistler, 2002). To support the validity of the former simplification, Kistler et al. (1997) demonstrated that spike generation in the Hodgkin–Huxley model can be reproduced to a high degree of accuracy (i.e., up to 90%) by a single variable model. The authors pointed out that the Hodgkin–Huxley model shows a sharp, threshold–like transition between an action potential for a strong stimulus and a graded response (no spike) for slightly weaker stimuli. This suggests that the emission of an action potential can be described by a threshold process.
Several simplified neural models have been proposed in the last decades. The leaky integrate–and–fire neuron is probably the best–known example of a formal neural model (Tuckwell, 1988; Bugmann, 1991; Stevens and Zador, 1998). It simulates the dynamics of the neuron membrane potential in response to a synaptic current by implementing an equivalent electrical circuit. The function of the integrate–and–fire circuit is to accumulate the input currents and, when the membrane potential reaches the threshold value, to generate a spike. Immediately after emitting a pulse, the potential is reset and maintained there for an absolute refractory period.
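To make the mechanics concrete, here is a minimal Python sketch of such a leaky integrate–and–fire update. The parameter values (threshold, reset value, membrane time constant, refractory period) and the function name simulate_lif are illustrative choices for this post, not values or code taken from the models cited above.

def simulate_lif(input_current, dt=1.0, tau_m=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=-0.5, t_refract=2.0):
    """Leaky integrate-and-fire sketch: accumulate the input current and
    emit a spike whenever the membrane potential reaches the threshold."""
    v = v_rest
    refract_left = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        t = step * dt
        if refract_left > 0.0:                     # absolute refractory period
            refract_left -= dt
            v = v_reset
            continue
        v += dt * (-(v - v_rest) / tau_m + i_in)   # leaky integration of the input
        if v >= v_thresh:                          # threshold crossing: emit a spike
            spike_times.append(t)
            v = v_reset                            # reset immediately after the pulse
            refract_left = t_refract
    return spike_times

# A constant supra-threshold current produces a regular spike train.
print(simulate_lif([0.15] * 100))

With these toy values the neuron settles into periodic firing, which is the behavior the integrate–and–fire circuit is meant to capture.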
The simplified mathematical models for spiking neurons cannot account for the entire range of computational functions of the biological neuron. Rather, they try to abstract a number of essential computational aspects of the real cell function. The essential features implemented can differ between models, as a function of what the modeler considers to be relevant and crucial for the domain of study. Thus, the integrate–and–fire model focuses upon the temporal summation function of the neuron (Bugmann and Taylor, 1997). The spike response model proposed by Gerstner (1999) simplifies the action potential generation to a threshold process. The resonate–and–fire model (Izhikevich, 2001) focuses upon the operation of the neuron in a resonating regime. By contrast with the detailed neural models, the computational strength of the spiking neurons arises from the way they interact with each other, when they work cooperatively in large networks.
Neural communication with spikes
Neurons communicate by producing sequences of fixed-size electrical impulses called action potentials or spikes (Adrian, 1926). As Rieke and colleagues put it:
Spike sequences are the language for which the brain is listening, the language the brain uses for its internal musings, and the language it speaks as it talks to the outside world.
In the theory of neural information processing, there are two main hypotheses with respect to where in the spike train the neural information is encoded: in the neural firing rate or in the precise timing of the spikes. These hypotheses are introduced in turn, below.
Rate coding
Adrian (1926) introduced the concept of rate coding, by which the number of spikes in a fixed time window following the onset of a static stimulus codes for the intensity of the stimulus. Since Adrian's studies, the rate coding hypothesis has been dominant in the neural computational field (see Recce, 1999, for a review). The definition of the rate has been applied to the discovery of the properties of many types of neurons in the sensory, motor, and central nervous systems, by searching for those stimuli that make neurons fire maximally. Recent observations on the behavior of cortical visual neurons demonstrated a temporal precision in brain function that is higher than would be predicted from frequency coding. This suggests that firing rate alone cannot account for all of the encoding of information in spike trains. Consequently, in the last decade, the focus of attention in experimental and computational neuroscience has shifted towards the exploration of how the timing of single spikes is used by the nervous system.

It is important to understand that pulse coding represents an extension of the way neurons code information, rather than a replacement of the firing rate code. Panzeri and Schultz (2001) proposed such a unified approach to the study of temporal, correlation and rate coding. They suggest that a spike count coding phase exists for narrow time windows (i.e., shorter than the timescale of the stimulus-induced response fluctuations), while for time windows much longer than the stimulus characteristic timescale, there is additional timing information, leading to a temporal coding phase.
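As a toy illustration of the spike-count definition of a rate code, the following Python fragment counts spikes in a fixed window after stimulus onset; the spike times, onset and window length are made-up numbers used only to show the computation.

# Rate code: count the spikes in a fixed window after stimulus onset.
spike_times = [12.0, 19.5, 27.0, 33.5, 41.0, 48.5]   # ms, a hypothetical spike train
onset, window = 10.0, 40.0                           # ms

count = sum(onset <= t < onset + window for t in spike_times)
rate = count / (window / 1000.0)                     # spikes per second
print(f"{count} spikes in {window:.0f} ms -> {rate:.0f} spikes/s")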
Temporal coding by relative latencies
In a temporal code, information can be contained in the temporal pattern of spikes (inter–spike interval codes) or in the time–of–arrival of the spike (relative spike timings) (Cariani, 1997). In the following, we discuss the latter coding scheme, which is implemented in SpikeNNS. Neurobiological studies of sensory coding of stimuli in the auditory and visual systems revealed that the latency of transmission is a potential candidate for coding the stimulus features (Bugmann, 1991; Heil and Irvine, 1996; Heil, 1997). An example is the study by Gawne et al. (1996), who showed that the response latency of neurons in the striate cortex is a function of the stimulus contrast and that synchronization based on spike latencies can make an important contribution to binding contrast-related information. The coding scheme which represents analog information through differences in the firing times of different neurons is referred to as delay coding or latency coding (Hopfield, 1995; Gawne et al., 1996; Maass, 1997; Thorpe and Gautrais, 1998).


Figure 2: Coding by relative delay. The neurons in the figure emit spikes at different moments in time t_j^(f). The most strongly activated neuron fires first (i.e., the second from the left). Its spike travels a considerable distance along the axon until the last neuron fires (i.e., the fourth from the left). The latencies x_j are computed with respect to a reference time T.

According to Hopfield (1995) and Maass (1997), a vector of real numbers (x_1, ..., x_n) with x_j in [0, 1] can be encoded in the firing times t_j of n neurons, such that

t_j = T - c · x_j
where T is some reference time and c · x_j represents the transmission delay. The timing can be defined relative to some other spike produced by the same neuron or to the onset of a stimulus. If, for each neuron, we consider only the latency of the first spike after the stimulus onset, then we obtain a coding scheme based on the time–to–first–spike. According to Van Rullen and Thorpe (2001), cells can act as 'analog–to–delay converters'. That is, the most strongly activated cells will tend to fire first and will signal a strong stimulation, whereas more weakly activated units will fire later and signal a weak stimulation. This coding scheme was proposed by Thorpe and Gautrais (1998), who argued that during visual object recognition the brain does not have time to evaluate more than one spike from each neuron per processing step. The idea is supported by other experimental studies (Tovee et al., 1993) and was used to implement learning in a number of neural network models, based on the timing or the order of single spike events (Ruff and Schmitt, 1998; Van Rullen et al., 1998; see also Section 6.1).
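A small Python sketch of this latency code is given below; the reference time T and the scale constant c are arbitrary example values.

# Latency (time-to-first-spike) code: stronger inputs fire earlier.
T = 20.0   # reference time, ms
c = 10.0   # scale of the transmission delay, ms

def encode(xs):
    """Map analog values x_j in [0, 1] to firing times t_j = T - c * x_j."""
    return [T - c * x for x in xs]

def decode(ts):
    """Recover the analog vector from the observed firing times."""
    return [(T - t) / c for t in ts]

x = [0.9, 0.2, 0.6]
t = encode(x)
print(t)          # [11.0, 18.0, 14.0]: the most strongly activated unit fires first
print(decode(t))  # recovers [0.9, 0.2, 0.6]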

Computational properties of spiking neurons
Spiking neural models can account for different types of computations, ranging from linear temporal summation of inputs and coincidence detection to multiplexing, nonlinear operations and preferential resonance (Koch, 1999; Maass, 1999). Several recent studies employing rigorous mathematical tools have demonstrated that through the use of temporal coding, a pulsed neural network may gain more computational power than a traditional network (i.e., consisting of rate coding neurons) of comparable size (Maass and Schmitt, 1996; Maass, 1997).
A simple spiking neural model can carry out computations over the input spike trains in several different modes (Maass, 1999). Thus, spiking neurons compute when the input is encoded in temporal patterns, firing rates, firing rates and temporal correlations, and space–rate codes. An essential feature of spiking neurons is that they can act as coincidence detectors for the incoming pulses, by detecting whether they arrive at almost the same time (Abeles, 1982; Softky and Koch, 1993; Kempter et al., 1998). When operating in the integration mode (see the integrate-and-fire model described above), the output rate changes as a function of the mean input rate and is independent of the fine structure of the input spike trains (Gerstner, 1999). By contrast, when the neuron is functioning as a coincidence detector, the output firing rate is higher if the spikes arrive simultaneously, as opposed to random spike arrival. More precisely, the neuron fires (e.g., signals a detection) if any two presynaptic neurons have fired within a temporal distance smaller than an arbitrary constant c1, and does not fire if all presynaptic neurons fire at intervals larger than another constant c2 (Maass, 1999).
For a neuron to work as a coincidence detector, two constraints have to be satisfied: (1) the postsynaptic potential has to evolve in time according to an exponential decay function and (2) the transmission delays must have similar values, so that the simultaneous arrival of the postsynaptic potentials which cause the neuron to fire reflects the coincidence of presynaptic spikes (Maass, 1999). Note that not every spiking neural model can detect coincidences. For instance, the resonator neuron fires if the input train of spikes has the same phase as its own oscillation, but has a low chance of spiking if the inputs arrive coincidentally (Izhikevich, 2000).
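The c1/c2 criterion quoted above can be sketched in a few lines of Python; the concrete values chosen for c1 and c2 are free parameters of the example, not constants from the cited work.

def fires_as_coincidence_detector(arrival_times, c1=2.0, c2=8.0):
    """Idealized coincidence detector following the c1/c2 criterion:
    fire if some pair of presynaptic spikes is closer than c1 ms,
    stay silent if all spikes are separated by more than c2 ms."""
    ts = sorted(arrival_times)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    if any(g < c1 for g in gaps):
        return True          # near-simultaneous arrival: detection
    if all(g > c2 for g in gaps):
        return False         # dispersed arrival: no spike
    return None              # intermediate case, left open by the criterion

print(fires_as_coincidence_detector([10.0, 10.5, 11.2]))   # True
print(fires_as_coincidence_detector([10.0, 20.0, 31.0]))   # False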
In SpikeNNS, neurons can compute in two regimes: coincidence detection and threshold–and–fire. Acting as coincidence detectors is more likely for hidden units when they compute over pulses coming from the input layers. That is because, in our implementation, the input spikes arriving on the afferent connections are affected by similar delays, with only a small noise factor. The latency of the spikes is given by the firing times of the input nodes. The operation in this computing domain also depends on the neural threshold and on the value of the membrane time constant that describes how fast the postsynaptic potential decays.
In the threshold–and–fire mode, neurons perform a linear summation of the inputs in a manner similar to the integrate–and–fire model. The integration of pulses over a larger time interval is particularly required in the case of spikes arriving on the lateral synapses, which are affected by a large range of delays (e.g., from 1 to 10 ms).
Neural model in SpikeNNS
The neural model implemented in SpikeNNS is a simplified version of the Spike Response Model (Gerstner, 1991; Gerstner et al., 1993; Gerstner, 1999), referred to as SRM0. The Spike Response Model (SRM) represents an alternative formulation to the well–known integrate–and–fire model. Instead of defining the evolution of the neuron membrane potential by a differential equation, SRM uses a kernel-based method. In doing so, the Spike Response Model is slightly more general than the integrate–and–fire models, because the response kernels can be chosen arbitrarily, whereas for the integrate–and–fire model they are fixed (Gerstner, 1999). According to Kistler et al. (1997), the Spike Response Model can reproduce correctly up to 90% of the spike times of the Hodgkin–Huxley model. The model can also be used to simulate the dynamics of linear dendritic trees, as well as non–linear effects at the synapses (Gerstner, 1999). The Spike Response Model offers us a powerful computational framework that captures the essential effects during spiking and has the advantage of a simple and elegant mathematical formalization.
Spike Response Model
The Spike Response Model describes the state of a neuron by a single variable, the membrane potential V_i. Figure 3a shows the time evolution of the membrane potential of neuron i as a function of time t. Before any input spike has arrived at the postsynaptic neuron i, the variable V_i(t) has the value 0. The firing of a presynaptic neuron j at time t_j^(f) evokes a postsynaptic potential in neuron i, modeled by the response kernel ε_ij. Each incoming spike perturbs the value of V_i and if, after the summation of the inputs, the membrane potential V_i reaches the threshold θ, then an output spike is generated. The firing time is given by the condition V_i(t_i^(f)) = θ. After the neuron has fired, the membrane potential returns to a low value, a course described by the refractory kernel η. After firing, the evolution of V_i is given by the equation:

V_i(t) = η_i(t - t_i^(f)) + Σ_{j in Γ_i} w_ij Σ_{t_j^(f) in F_j} ε_ij(t - t_i^(f), t - t_j^(f)) + ∫_0^∞ κ(t - t_i^(f), s) I_ext(t - s) ds        (1)

The first term in the equation (i.e., the kernel η_i) accounts for refractoriness in the neuron's behavior. The second term represents the contribution of all previous spikes t_j^(f) of presynaptic neurons j to the membrane potential of neuron i: Γ_i denotes the set of neurons presynaptic to i, F_j is the set of all firing times of neuron j, and w_ij are the synaptic strengths between cells (see Figure 5). The kernel ε_ij, as a function of s = t - t_j^(f), represents the time course of the postsynaptic potential evoked by the firing of the presynaptic neuron j at time t_j^(f) (see Figure 4). The evolution of the postsynaptic potential function also depends on the time t - t_i^(f) that has passed since the last spike of the postsynaptic neuron: if the neuron is in a refractory period, its response to an input spike is smaller than if the neuron is fully responsive. The last term represents the effect on the neuron of an external driving current I_ext; the kernel κ(t - t_i^(f), s) is the linear response of the membrane potential to the input current and depends on the time that has passed since the last output spike was emitted at t_i^(f) (Gerstner, 1999).
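A rough Python sketch of how Equation 1 could be evaluated is shown below. The exponential shapes chosen for the kernels η, ε and κ, the discretization of the integral, and all parameter values are illustrative assumptions made for this sketch; the Spike Response Model itself leaves the kernel shapes free.

import math

def eta(s, tau_refr=4.0, v_reset=-5.0):
    """Refractory kernel: a negative after-potential following the neuron's own spike."""
    return v_reset * math.exp(-s / tau_refr) if s >= 0 else 0.0

def eps(s_own, s_pre, tau_m=5.0):
    """PSP kernel eps_ij(t - t_i, t - t_j): response to a presynaptic spike,
    attenuated when the postsynaptic neuron has fired recently (small s_own)."""
    if s_pre < 0:
        return 0.0
    attenuation = 1.0 - math.exp(-s_own / tau_m) if s_own >= 0 else 1.0
    return attenuation * math.exp(-s_pre / tau_m)

def kappa(s_own, s, tau_m=5.0):
    """Linear response of the membrane potential to an external driving current."""
    return math.exp(-s / tau_m) if s >= 0 else 0.0

def srm_potential(t, t_i, presyn, i_ext, dt=0.1, horizon=20.0):
    """Discretized evaluation of Equation 1:
    V_i(t) = eta(t - t_i) + sum_j w_ij sum_f eps(t - t_i, t - t_j^(f))
             + integral_0^inf kappa(t - t_i, s) I_ext(t - s) ds."""
    v = eta(t - t_i)
    for w_ij, firing_times in presyn:
        for t_f in firing_times:
            v += w_ij * eps(t - t_i, t - t_f)
    steps = int(horizon / dt)
    v += sum(kappa(t - t_i, k * dt) * i_ext(t - k * dt) * dt for k in range(steps))
    return v

# Two presynaptic neurons; the postsynaptic cell last fired at t_i = 3 ms.
presyn = [(0.6, [2.0, 7.0]), (0.4, [6.5])]   # (w_ij, firing times of neuron j)
print(srm_potential(t=10.0, t_i=3.0, presyn=presyn, i_ext=lambda u: 0.1))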
SRM0
A simpler version of the Spike Response Model can be obtained by neglecting the dependence of the ε_ij kernel on the term t - t_i^(f) (i.e., the effect of the neuron's last spike on the postsynaptic potential function) and by considering a null external current. Consequently, Equation 1 can be rewritten as:

V_i(t) = η_i(t - t_i^(f)) + Σ_{j in Γ_i} w_ij Σ_{t_j^(f) in F_j} ε_ij(t - t_j^(f))        (2)


Figure 3: (a) Spike Response Model: the membrane potential V_i of neuron i as a function of time t. (b) SpikeNNS neural model: the time course of the membrane potential V_i; θ is the neural threshold. See the text for more details on the time evolution of the membrane potential.
This version of the Spike Response Model has been entitled SRM0 and has been applied to the analysis of computations with spiking neurons by Maass (1999). The neural model implemented in SpikeNNS is completely specified by Equations 2 and 3, together with the refractory kernel η, and accounts for several important aspects of neural behavior: the spiking nature of the neuron, the attenuation of the response at the soma resulting from synaptic input, and the absolute and relative refractory periods. The model also accounts for spike latency and noise in the neural response (see the description in the sections below).

Figure 3b shows the time evolution of the membrane potential V_i in the simplified neural model implemented in SpikeNNS. Compared with the membrane potential in the SRM model represented in Figure 3a, the shape of the action potential is reduced to a formal event, captured by a delta pulse (the vertical line). After the spike emission, the membrane voltage is reset to a negative value and is kept there for 1 ms. The ascending curve of the ε function also reduces to a pulse, followed by an exponential decay of the postsynaptic potential.
Figure 4: The postsynaptic potential function in SpikeNNS. The decay of the curve with time is plotted for different values of the decay rate, given by the membrane time constant τ_m. Note that the slope of the curve is negatively correlated with the value of the membrane time constant.

In SpikeNNS, Equation 2 is implemented by the activation function ACT Spike, which expresses how the membrane potential V_i of a node i is calculated at a given time t. Let us now consider the mathematical description of the two kernels ε and η required in Equation 2.
Postsynaptic potential function
In SpikeNNS, each hidden neuron is connected to a number of other neurons, either from the same layer or from an input or another hidden layer. The firing of any node, i.e., input or hidden, is transmitted to all of its postsynaptic units, where it evokes a postsynaptic potential of some standard form (see Figure 4). The spike transmission is affected by a noisy delay d, which in our implementation is proportional to the Euclidean distance between the presynaptic and the postsynaptic node. This delay corresponds to the axonal and dendritic transmission delays of real neurons (Koch, 1999). When the presynaptic spike reaches the postsynaptic unit, the postsynaptic potential (PSP) jumps to a maximum value; in our simulation this value is set to 1. Afterwards, it decays exponentially towards the resting value, at a rate given by the time constant τ_m. In our model, the postsynaptic potential ε_ij is described as a function of the difference s = t - t_j^(f) - d:

ε_ij(s) = exp(-s / τ_m) · H(s)        (3)

where t is the time of consideration, t_j^(f) is the time of the presynaptic node's firing, d is the delay on the connection, and H is the Heaviside step function. The Heaviside function sets the postsynaptic potential to a null value for any time moment t that precedes the arrival of the presynaptic spike, that is, for t < t_j^(f) + d.
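Under the exponential-decay assumption, Equation 3 amounts to a few lines of Python; the membrane time constant, firing time and delay used below are example values only, not the actual SpikeNNS settings.

import math

def psp(t, t_f, d, tau_m=5.0):
    """eps_ij(s) = exp(-s / tau_m) * H(s), with s = t - t_f - d: zero before the
    delayed spike arrives, a jump to 1 on arrival, then an exponential decay
    governed by the membrane time constant."""
    s = t - t_f - d
    return math.exp(-s / tau_m) if s >= 0 else 0.0   # the if-test plays the role of H(s)

# A spike emitted at t_f = 2 ms over a connection with a 3 ms delay:
for t in (4.0, 5.0, 7.0, 10.0):
    print(t, round(psp(t, t_f=2.0, d=3.0), 3))       # 0 before 5 ms, 1 at 5 ms, decaying after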

Temporal summation of postsynaptic potentials

A single synaptic input is rarely sufficient to generate an action potential. The response of a neural cell is usually determined by the way it integrates multiple synaptic inputs. The basic arithmetic that the dendrites of a real neuron compute is still a matter of controversy (Poirazi and Mel, 2000). Both linear and nonlinear interactions between synaptic inputs in the brain have been described in neurophysiological experiments (Cash and Yuste, 1999; Koch, 1999) and explored computationally with different formalisms (Rall, 1977; Mel, 1992). In SpikeNNS, we consider that both excitatory and inhibitory inputs accumulate linearly in time. The total synaptic input to a hidden neuron i at some moment t is given by the contribution of all previous spikes of the presynaptic neurons (see Figure 5). The set of neurons presynaptic to node i is Γ_i = {j | j is presynaptic to i}.


Figure 5: The presynaptic contribution to the membrane voltage V_i(t) of neuron i. Each presynaptic neuron j in Γ_i emits a series of spikes F_j with firing times t_j^(f). Neuron i computes the weighted sum of the decayed PSPs. If the sum exceeds the threshold θ, then an output spike is generated.

The set of all firing times of a presynaptic node j is given by F_j = {t_j^(f) | V_j(t_j^(f)) = θ}. In SpikeNNS, a limited number of spikes per neuron are stored (e.g., a maximum of 10 spikes per neuron were stored for the simulation run in Section 6.1). In our model, the slope of the postsynaptic potential curve is negatively correlated with the value of the membrane time constant τ_m (see Figure 4). That is, for large values of τ_m, the postsynaptic potential persists longer, allowing the temporal summation of inputs to produce an aggregate PSP larger than would be elicited by any individual input. The neural threshold and the membrane time constant τ_m are the principal parameters determining how many excitatory inputs are needed for a neuron to fire. For example, the choice of a membrane time constant τ_m = 5 ms (the blue graph in Figure 4) causes an exponential decay of the postsynaptic potential from the maximum value to 0 in about 10 ms. This relatively slow decay of the postsynaptic potential curve favors the significant summation of synaptic inputs which arrive in a time window no larger than 4-5 ms. For instance, given the above value of τ_m, the threshold values were set so that at least three synaptic inputs (most commonly 4 or 5) were necessary for a postsynaptic spike to be emitted.
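A compact Python sketch of this summation-and-threshold step is given below; the weights, delays, threshold and membrane time constant are example values chosen so that roughly three near-coincident inputs are needed for a spike, and are not the actual SpikeNNS parameters.

import math

def total_input(t, presyn, tau_m=5.0):
    """Weighted linear summation of the decayed PSPs of all stored presynaptic
    spikes, as in Figure 5; each spike contributes w_ij * exp(-s / tau_m)
    once s = t - t_f - d is non-negative."""
    total = 0.0
    for w_ij, d, firing_times in presyn:
        for t_f in firing_times:
            s = t - t_f - d
            if s >= 0:
                total += w_ij * math.exp(-s / tau_m)
    return total

# Four afferents with similar delays; with a threshold around 2.5,
# roughly three near-coincident inputs are needed to reach it.
presyn = [(0.9, 1.0, [4.0]), (0.9, 1.2, [4.1]), (0.9, 1.1, [4.3]), (0.9, 1.0, [9.0])]
theta = 2.5
v = total_input(t=5.5, presyn=presyn)
print(round(v, 2), "spike" if v >= theta else "no spike")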

