NEUROFUZZY

Saturday, June 28, 2008

The McCulloch-Pitts Model of Neuron

The early model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, I3, ... and one output y. The linear threshold gate simply classifies the set of inputs into two different classes; thus the output y is binary. Such a function can be described mathematically using these equations:
SUM = W1 I1 + W2 I2 + W3 I3 + ...                      (2.1)

y = f(SUM) = 1 if SUM >= T, 0 if SUM < T               (2.2)
W1, W2, W3, ... are weight values normalized in the range of either (0, 1) or (-1, 1) and associated with each input line, SUM is the weighted sum, and T is a threshold constant. The function f is a linear step function at threshold T, as shown in Figure 2.3. The symbolic representation of the linear threshold gate is shown in Figure 2.4.



Figure 2.3: Linear Threshold Function




Figure 2.4: Symbolic Illustration of Linear Threshold Gate






The McCulloch-Pitts model of a neuron is simple yet has substantial computing potential. It also has a precise mathematical definition. However, this model is so simplistic that it only generates a binary output, and the weight and threshold values are fixed. The neural computing algorithm has diverse features for various applications [Zur92]. Thus, we need to obtain a neural model with more flexible computational features.
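For concreteness, here is a minimal Python sketch of the linear threshold gate described above; the weight and threshold values and the function name are illustrative choices, not part of the original formulation.

def linear_threshold_gate(inputs, weights, threshold):
    """McCulloch-Pitts linear threshold gate: weighted sum followed by a step at T."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Illustrative example: unit weights and threshold T = 2, so the gate fires
# whenever at least two of the three binary inputs are active.
weights = [1, 1, 1]
threshold = 2
for inputs in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(inputs, "->", linear_threshold_gate(inputs, weights, threshold))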

Thursday, June 19, 2008

Applications of neural networks

sample applications
Medicine
One of the areas that has gained attention is cardiopulmonary diagnostics. The way neural networks work in this and other areas of medical diagnosis is by comparing many different models. A patient may have regular checkups in a particular area, increasing the possibility of detecting a disease or dysfunction.
The data fed to the different models may include heart rate, blood pressure, breathing rate, etc. The models may include variations for age, sex, and level of physical activity. Each individual's physiological data is compared to previous physiological data and/or to data of the various generic models. The deviations from the norm are compared to the known causes of deviations for each medical condition. The neural network can learn by studying the different conditions and models, merging them to form a complete conceptual picture, and then diagnose a patient's condition based upon the models.
Electronic Noses
The idea of a chemical nose may seem a bit absurd, but it has several real-world applications. The electronic nose is composed of a chemical sensing system (such as a spectrometer) and an artificial neural network, which recognizes certain patterns of chemicals. An odor is passed over the chemical sensor array, these chemicals are then translated into a format that the computer can understand, and the artificial neural network identifies the chemical.
A list compiled at the Pacific Northwest Laboratory gives several different applications in the environmental, medical, and food industries.
Environment: identification of toxic wastes, analysis of fuel mixtures (7-11 example), detection of oil leaks, identification of household odors, monitoring air quality, monitoring factory emission, and testing ground water for odors.
Medical: The idea of using these in the medical field is to examine odors from the body to identify and diagnose problems. Odors in the breath, infected wounds, and body fluids all can indicate problems. Artificial neural networks have even been used to detect tuberculosis.
Food: The food industry is perhaps the biggest practical market for electronic noses, assisting or entirely replacing humans. Applications include inspection of food, grading quality of food, fish inspection, fermentation control, checking mayonnaise for rancidity, automated flavor control, monitoring cheese ripening, verifying whether orange juice is natural, beverage container inspection, and grading whiskey.
Security
One program that has already been started is the CATCH program. CATCH is an acronym for Computer Aided Tracking and Characterization of Homicides. It learns about an existing crime, the location of the crime, and the particular characteristics of the offense. The program is subdivided into different tools, each of which places an emphasis on a certain characteristic or group of characteristics. This allows the user to remove certain characteristics which humans determine are unrelated.
Loans and credit cards
Loan granting is one area in which neural networks can aid humans: the decision is not based on predetermined, pre-weighted criteria, and the answers are instead nebulous. Banks want to make as much money as they can, and one way to do this is to lower the failure rate by using neural networks to decide whether the bank should approve the loan. Neural networks are particularly useful in this area since no process will guarantee 100% accuracy. Even 85-90% accuracy would be an improvement over the methods humans use.
In fact, in some banks, the failure rate of loans approved using neural networks is lower than that of some of their best traditional methods. Some credit card companies are now beginning to use neural networks in deciding whether to grant an application.
The process works by analyzing past failures and making current decisions based upon past experience. Nonetheless, this creates its own problems. For example, the bank or credit company must justify their decision to the applicant. The reason "my neural network computer recommended against it" simply isn't enough for people to accept. The process of explaining how the network learned and on what characteristics the neural network made its decision is difficult. As we alluded to earlier in the history of neural networks, self-modifying code is very difficult to debug and thus difficult to trace. Recording the steps it went through isn't enough, as it might be using conventional computing, because even the individual steps the neural network went through have to be analyzed by human beings, or possibly the network itself, to determine that a particular piece of data was crucial in the decision-making process.

Neural Network Follies(funny story)

In the 1980s, the Pentagon wanted to harness computer technology to make their tanks harder to attack.
The Plan
The preliminary plan was to fit each tank with a digital camera hooked up to a computer. The computer would continually scan the environment outside for possible threats (such as an enemy tank hiding behind a tree), and alert the tank crew to anything suspicious. Computers are really good at doing repetitive tasks without taking a break, but they are generally bad at interpreting images. The only possible way to solve the problem was to employ a neural network.
The Implementation
The research team went out and took 100 photographs of tanks hiding behind trees, and then took 100 photographs of trees - with no tanks. They took half the photos from each group and put them in a vault for safe-keeping, then scanned the other half into their mainframe computer. The huge neural network was fed each photo one at a time and asked if there was a tank hiding behind the trees. Of course at the beginning its answers were completely random since the network didn't know what was going on or what it was supposed to do. But each time it was fed a photo and it generated an answer, the scientists told it if it was right or wrong. If it was wrong it would randomly change the weightings in its network until it gave the correct answer. Over time it got better and better until eventually it was getting each photo correct. It could correctly determine if there was a tank hiding behind the trees in any one of the photos.
Verification
But the scientists were worried: had it actually found a way to recognize if there was a tank in the photo, or had it merely memorized which photos had tanks and which did not? This is a big problem with neural networks: after they have been trained you have no idea how they arrive at their answers, they just do. The question was: did it understand the concept of tanks vs. no tanks, or had it merely memorized the answers? So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before -- this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one.
Independent testing
The Pentagon was very pleased with this, but a little bit suspicious. They commissioned another set of photos (half with tanks and half without) and scanned them into the computer and through the neural network. The results were completely random. For a long time nobody could figure out why. After all, nobody understood how the neural network had trained itself.
Grey skies for the US military
Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it - not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the colour of the sky. The military was now the proud owner of a multi-million dollar mainframe computer that could tell you if it was sunny or not.
This story might be apocryphal, but it doesn't really matter. It is a perfect illustration of the biggest problem behind neural networks. Any automatically trained net with more than a few dozen neurons is virtually impossible to analyze and understand. One can't tell if a net has memorized inputs, or is 'cheating' in some other way. A promising use for neural nets these days is to predict the stock market. Even though initial results are extremely good, investors are leery of trusting their money to a system that nobody understands.

Tuesday, June 17, 2008

Spiking Neural model

For detailed PDF document please mail to sekharstuff@gmail.com
Formal spiking neuron models
The second stream of research in computational neuroscience is oriented towards modeling the spiking nature of the neurons and retaining the essential elements of the behavior being modeled, while trying to simplify the complexity of the resulting description (Gerstner, 1991; Maass, 1995; Maass, 1997; Rieke et al., 1997). The principal motivation for the creation of simplified models is that they make it easier to study the computational and functional principles of neural systems (Koch, 1999).
The reduction of the detailed neuron models to formal models requires simplifications in at least two respects. First, the non-linear dynamics of spike generation must be reduced to a single ordinary differential equation and, second, the spatial structure of the neuron (i.e., the dendritic tree) is neglected and reduced to an input (Gerstner and Kistler, 2002). To support the validity of the former simplification, Kistler et al. (1997) demonstrated that spike generation in the Hodgkin-Huxley model can be reproduced to a high degree of accuracy (i.e., up to 90%) by a single-variable model. The authors pointed out that the Hodgkin-Huxley model shows a sharp, threshold-like transition between an action potential for a strong stimulus and a graded response (no spike) for slightly weaker stimuli. This suggests that the emission of an action potential can be described by a threshold process.
Several simplified neural models have been proposed in the last decades. The leaky integrate-and-fire neuron is probably the best-known example of a formal neural model (Tuckwell, 1988; Bugmann, 1991; Stevens and Zador, 1998). It simulates the dynamics of the neuron membrane potential in response to a synaptic current by implementing an equivalent electrical circuit. The function of the integrate-and-fire circuit is to accumulate the input currents and, when the membrane potential reaches the threshold value, to generate a spike. Immediately after emitting a pulse, the potential is reset and maintained there for an absolute refractory period.
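A minimal Python sketch of the leaky integrate-and-fire behaviour just described (accumulate input, fire at threshold, reset and hold during an absolute refractory period); all parameter values and names here are illustrative assumptions, not taken from any of the cited models.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=0.0,
                 v_reset=-5.0, threshold=1.0, t_refrac=2.0):
    """Leaky integrate-and-fire: integrate the input current, fire at threshold,
    then reset and hold the potential for an absolute refractory period."""
    v = v_rest
    refrac_left = 0.0
    spikes, trace = [], []
    for step, i_in in enumerate(input_current):
        t = step * dt
        if refrac_left > 0:
            refrac_left -= dt              # clamp during the refractory period
            v = v_reset
        else:
            v += (-(v - v_rest) + i_in) * (dt / tau_m)   # leaky integration
            if v >= threshold:             # threshold crossing -> spike
                spikes.append(t)
                v = v_reset
                refrac_left = t_refrac
        trace.append(v)
    return spikes, trace

# A constant supra-threshold current produces a regular spike train.
current = np.full(1000, 1.5)
spike_times, _ = simulate_lif(current)
print("first spike times (ms):", spike_times[:5])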
The simplified mathematical models for spiking neurons cannot account for the entire range of computational functions of the biological neuron. Rather, they try to abstract a number of essential computational aspects of the real cell function. The essential features implemented can differ between models, as a function of what the modeler considers to be relevant and crucial for the domain of study. Thus, the integrate-and-fire model focuses upon the temporal summation function of the neuron (Bugmann and Taylor, 1997). The spike response model proposed by Gerstner (1999) simplifies the action potential generation to a threshold process. The resonate-and-fire model (Izhikevich, 2001) focuses upon the operation of the neuron in a resonating regime. By contrast with the detailed neural models, the computational strength of the spiking neurons arises from the way they interact with each other, when they work cooperatively in large networks.
Neural communication with spikes
Neurons communicate by producing sequences of fixed-size electrical impulses called action potentials or spikes (Adrian, 1926). As Rieke and colleagues put it:
Spike sequences are the language for which the brain is listening, the language the brain uses for its internal musings, and the language it speaks as it talks to the outside world.
In the theory of neural information processing, there are two main hypotheses with respect to where in the spike train the neural information is encoded: in the neural firing rate or in the precise timing of the spikes. These hypotheses are introduced in turn, below.
Rate coding
Adrian (1926) introduced the concept of rate coding, by which the number of spikes in a fixed time window following the onset of a static stimulus codes for the intensity of the stimulus. Since Adrian's studies, the rate coding hypothesis has been dominant in the neural computational field (see Recce, 1999, for a review). The definition of the rate has been applied to the discovery of the properties of many types of neurons in the sensory, motor, and central nervous systems, by searching for those stimuli that make neurons fire maximally. Recent observations on the behavior of cortical visual neurons demonstrated a temporal precision in brain function that is higher than would be predicted from frequency coding. This suggests that firing rate alone cannot account for all of the encoding of information in spike trains. Consequently, in the last decade, the focus of attention in experimental and computational neuroscience has shifted towards the exploration of how the timing of single spikes is used by the nervous system. It is important to understand that pulse coding represents an extension of the way neurons code information, rather than a replacement of the firing rate code. Panzeri and Schultz (2001) proposed such a unified approach to the study of temporal, correlation and rate coding. They suggest that a spike count coding phase exists for narrow time windows (i.e., shorter than the timescale of the stimulus-induced response fluctuations), while for time windows much longer than the stimulus characteristic timescale, there is additional timing information, leading to a temporal coding phase.
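As a toy illustration of the rate-coding readout described above, the snippet below counts spikes in a fixed window after stimulus onset; the spike times and window length are invented for the example.

def firing_rate(spike_times, t_start, window):
    """Rate code readout: count spikes in a fixed window after stimulus onset
    and divide by the window length (spikes per second)."""
    count = sum(1 for t in spike_times if t_start <= t < t_start + window)
    return count / window

# Illustrative spike train (times in seconds) and a 0.5 s counting window.
spikes = [0.02, 0.09, 0.15, 0.23, 0.31, 0.44, 0.61]
print(firing_rate(spikes, t_start=0.0, window=0.5), "spikes/s")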
Temporal coding by relative latencies
In a temporal code, information can be contained in the temporal pattern of spikes (inter-spike interval codes) or in the time-of-arrival of the spike (relative spike timings) (Cariani, 1997). In the following, we discuss the latter coding scheme, which is implemented in SpikeNNS. Neurobiological studies of sensory coding of stimuli in the auditory and visual systems revealed that the latency of transmission is a potential candidate for coding the stimulus features (Bugmann, 1991; Heil and Irvine, 1996; Heil, 1997). An example is the study by Gawne et al. (1996), who showed that the latency of neuron responses in the striate cortex is a function of the stimulus contrast and that synchronization based on spike latencies can make an important contribution to binding contrast-related information. The coding scheme which represents analog information through differences in the firing times of different neurons is referred to as delay coding or latency coding (Hopfield, 1995; Gawne et al., 1996; Maass, 1997; Thorpe and Gautrais, 1998).


Figure 2: Coding by relative delay. The neurons in the figure emit spikes at different moments in time t(f)j. The most strongly activated neuron fires first (i.e., the second from left). Its spike travels a considerable distance along the axon until the last neuron fires (i.e., the fourth from left). The latencies xj are computed with respect to a reference time T.

According to Hopfield (1995) and Maass (1997), a vector of real numbers (x1, ..., xn) with xj in [0, 1] can be encoded in the firing times tj of n neurons, such that

tj = T + c xj,

where T is some reference time and c xj represents the transmission delay. The timing can be defined relative to some other spike produced by the same neuron or to the onset of a stimulus. If, for each neuron, we consider only the latency of the first spike after the stimulus onset, then we obtain a coding scheme based on the time-to-first-spike. According to Van Rullen and Thorpe (2001), cells can act as 'analog-to-delay converters'. That is, the most strongly activated cells will tend to fire first and will signal a strong stimulation, whereas more weakly activated units will fire later and signal a weak stimulation. This coding scheme was proposed by Thorpe and Gautrais (1998), who argued that during visual object recognition the brain does not have time to evaluate more than one spike from each neuron per processing step. The idea is supported by other experimental studies (Tovee et al., 1993) and was used to implement learning in a number of neural network models, based on the timing or the order of single spike events (Ruff and Schmitt, 1998; Van Rullen et al., 1998; see also Section 6.1).
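A small sketch of this latency (delay) code, assuming the encoding tj = T + c xj given above; the constants T and c and the helper names are illustrative.

def encode_latencies(values, T=0.0, c=10.0):
    """Delay coding: each analog value x_j in [0, 1] becomes a firing time
    t_j = T + c * x_j, so smaller latencies signal stronger activation."""
    return [T + c * x for x in values]

def decode_latencies(firing_times, T=0.0, c=10.0):
    """Invert the code: recover x_j from the observed firing time."""
    return [(t - T) / c for t in firing_times]

activations = [0.1, 0.8, 0.35]            # illustrative analog values
times = encode_latencies(activations)      # firing times relative to T (ms)
print(times, decode_latencies(times))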

Computational properties of spiking neurons
Spiking neural models can account for different types of computations, ranging from linear temporal summation of inputs and coincidence detection to multiplexing, nonlinear operations and preferential resonance (Koch, 1999; Maass, 1999). Several recent studies employing rigorous mathematical tools have demonstrated that through the use of temporal coding, a pulsed neural network may gain more computational power than a traditional network (i.e., consisting of rate coding neurons) of comparable size (Maass and Schmitt, 1996; Maass, 1997).
A simple spiking neural model can carry out computations over the input spike trains under several different modes (Maass, 1999). Thus, spiking neurons compute when the input is encoded in temporal patterns, firing rates, firing rates and temporal correlations, and space-rate codes. An essential feature of the spiking neurons is that they can act as coincidence detectors for the incoming pulses, by detecting if they arrive at almost the same time (Abeles, 1982; Softky and Koch, 1993; Kempter et al., 1998). When operating in the integration mode (see the integrate-and-fire model in Section 4.2.1), the output rate changes as a function of the mean input rate and is independent of the fine structure of input spike trains (Gerstner, 1999). By contrast, when the neuron is functioning as a coincidence detector, the output firing rate is higher if the spikes arrive simultaneously, as opposed to random spike arrival. More precisely, the neuron fires (e.g., signals a detection) if any two presynaptic neurons have fired within a temporal distance smaller than an arbitrary constant c1, and does not fire if all presynaptic neurons fire in a time interval larger than another constant c2 (Maass, 1999).
For a neuron to work as a coincidence detector, two constraints have to be satisfied: (1) the postsynaptic potential has to evolve in time according to an exponential decay function and (2) the transmission delays must have similar values, so that the simultaneous arrival of the postsynaptic potentials which cause the neuron to fire will reflect the coincidence of presynaptic spikes (Maass, 1999). Note that not every spiking neural model can detect coincidences. For instance, the resonator neuron fires if the input train of spikes has the same phase as its own oscillation, but has a low chance of spiking if the inputs arrive coincidentally (Izhikevich, 2000).
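The coincidence-detection criterion quoted from Maass (1999) can be sketched as follows; the window constants c1 and c2 are arbitrary placeholders, as in the text.

def fires_as_coincidence_detector(presyn_spike_times, c1=1.0, c2=5.0):
    """Signal a detection if any two presynaptic spikes arrive within c1 ms of
    each other; stay silent if all arrivals are separated by more than c2 ms."""
    times = sorted(presyn_spike_times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if any(gap <= c1 for gap in gaps):
        return True      # near-coincident arrival -> fire
    if all(gap >= c2 for gap in gaps):
        return False     # spikes too spread out -> no spike
    return None          # intermediate case, left unspecified by the criterion

print(fires_as_coincidence_detector([10.0, 10.4, 17.0]))   # True: two spikes within 1 ms
print(fires_as_coincidence_detector([10.0, 20.0, 30.0]))   # False: all gaps >= 5 ms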
In SpikeNNS, neurons can compute in two regimes: coincidence detection and threshold-and-fire. Acting as coincidence detectors is more likely for hidden units when they compute over pulses coming from the input layers. That is because, in our implementation, the input spikes arriving on the afferent connections are affected by similar delays, with a small noise factor. The latency of the spikes is given by the firing times of the input nodes. The operation in this computing domain also depends on the neural threshold and on the value of the membrane time constant that describes how fast the postsynaptic potentials decay.
In the threshold-and-fire mode, neurons perform a linear summation of the inputs in a manner similar to the integrate-and-fire model. The integration of pulses over a larger time interval is particularly required in the case of spikes arriving on the lateral synapses, which are affected by a large range of delays (e.g., from 1 to 10 ms).
Neural model in SpikeNNS
The neural model implemented in SpikeNNS is a simplified version of the Spike Response Model (Gerstner, 1991; Gerstner et al., 1993; Gerstner, 1999), referred to as SRM0. The Spike Response Model (SRM) represents an alternative formulation to the well-known integrate-and-fire model. Instead of defining the evolution of the neuron membrane potential by a differential equation, SRM uses a kernel-based method. By doing this, the Spike Response Model is slightly more general than the integrate-and-fire models, because the response kernels can be chosen arbitrarily, whereas for the integrate-and-fire model they are fixed (Gerstner, 1999). According to Kistler et al. (1997), the Spike Response Model can reproduce correctly up to 90% of the spike times of the Hodgkin-Huxley model. The model can also be used to simulate the dynamics of linear dendritic trees, as well as non-linear effects at the synapses (Gerstner, 1999). The Spike Response Model offers us a powerful computational framework that captures the essential effects during spiking and has the advantage of a simple and elegant mathematical formalization.
Spike Response Model
The Spike Response Model describes the state of a neuron by a single variable, the membrane potential Vi. Figure 3a shows the time evolution of the membrane potential of neuron i as a function of time t. Before any input spike has arrived at the postsynaptic neuron i, the variable Vi(t) has the value 0. The firing of a presynaptic neuron j at time t(f)j evokes a postsynaptic potential in neuron i, modeled by the response kernel Eij. Each incoming spike will perturb the value of Vi and if, after the summation of the inputs, the membrane potential Vi reaches the threshold theta, then an output spike is generated. The firing time is given by the condition Vi(t(f)i) = theta. After the neuron has fired, the membrane potential returns to a low value, which is described by the refractory function n. After firing, the evolution of Vi is given by the equation:

Vi(t) = ni(t - ti) + Σ_{j in Ti} Σ_{t(f)j in Fj} wij Eij(t - ti, s) + ∫ ~E(t - ti, s) Iext(t - s) ds     (4.1)

with s = t - t(f)j and ti denoting the last firing time of neuron i. The first term in the equation (i.e., the kernel ni) accounts for refractoriness in the neuron's behavior. The second term represents the contribution of all previous spikes t(f)j of the presynaptic neurons j to the membrane potential of neuron i. Ti denotes the set of neurons presynaptic to i, Fj is the set of all firing times of neuron j, and wij are the synaptic strengths between the cells (see Figure 5). The kernel Eij, as a function of t - t(f)j, represents the time course of the postsynaptic potential evoked by the firing of the presynaptic neuron j at time t(f)j (see Figure 4). The evolution of the postsynaptic potential function also depends on the time t - ti that has passed since the last spike of the postsynaptic neuron. That is because, if the neuron is in a refractory period, its response to an input spike is smaller than if the neuron were fully responsive. The last term represents the effect on the neuron of an external driving current Iext; the kernel ~E(t - ti, s) is the linear response of the membrane potential to the input current and depends on the time that has passed since the last output spike was emitted at ti (Gerstner, 1999).
SRM0
A simpler version of the Spike Response Model can be obtained by neglecting the dependence of the Eij kernel on the term t - ti (i.e., the effect of the neuron's last spike on the postsynaptic potential function) and by considering a null external current. Consequently, Equation 4.1 can be rewritten as:

Vi(t) = ni(t - ti) + Σ_{j in Ti} Σ_{t(f)j in Fj} wij Eij(s),  with s = t - t(f)j     (4.2)

This version of the Spike Response Model has been entitled SRM0 and has been applied to the analysis of computations with spiking neurons by Maass (1999). The neural model implemented in SpikeNNS is completely specified by the set of Equations 2, 3, 5, which account for several important aspects of neural behavior: the spiking nature of the neuron, the attenuation of the response at the soma resulting from synaptic input, and the absolute and relative refractory periods. The model also accounts for spike latency and noise in the neural response (see the description in the sections below).


Figure 3: (a) Spike Response Model: the membrane potential V of neuron i as a function of time t. (b) SpikeNNS neural model: the time course of the membrane potential Vi; theta is the neural threshold. See the text for more details on the time evolution of the membrane potential.


Figure 3b shows the time evolution of the membrane potential Vi in the simplified neural model implemented in SpikeNNS. Compared with the membrane potential in the SRM model represented in Figure 3a, the shape of the action potential is reduced to a formal event, captured by a delta pulse (the vertical line). After the spike emission, the membrane voltage is reset to a negative value and is kept there for 1 ms. The ascending curve of the E function also reduces to a pulse, followed by an exponential decay of the postsynaptic potential. In SpikeNNS, Equation 4.2 is implemented by the activation function ACT_Spike, which expresses how the membrane potential V of a node i is calculated at a given time t.


Figure 4: Postsynaptic potential function in SpikeNNS. The decay of the curve with time is plotted for different values of the decay rate, given by the membrane time constant Tm. Note that the slope of the curve is negatively correlated with the value of the membrane time constant.


Let us now consider the mathematical description of the two kernels E and n required in Equation 4.2.
Postsynaptic potential function
In SpikeNNS, each hidden neuron is connected to a number of other neurons, either from the same layer or from an input or another hidden layer. The firing of any node, i.e., input or hidden, is transmitted to all its postsynaptic units, where it evokes a postsynaptic potential of some standard form (see Figure 4). The spike transmission is affected by a noisy delay d, which in our implementation is proportional to the Euclidean distance between the presynaptic and the postsynaptic node. This delay corresponds to the axonal and dendritic transmission delays of real neurons (Koch, 1999). When the presynaptic spike reaches the postsynaptic unit, the postsynaptic potential (PSP) jumps to a maximum value; in our simulation this value is set to 1. Afterwards, it decays exponentially towards the resting value, at a rate given by the time constant Tm. In our model, the postsynaptic potential Eij is described as a function of the difference
s = t - t(f)j - d:

Eij(s) = H(s) exp(-s / Tm),

where t is the time of consideration, t(f)j is the time of the presynaptic node firing, and d is the delay on the connection. The Heaviside function H sets the postsynaptic potential to a null value for any time moment t that precedes the arrival of the presynaptic spike, that is, for t < t(f)j + d.

Temporal summation of postsynaptic potentials

A single synaptic input is rarely sufficient to generate an action potential. The response of a neural cell is usually determined by the way it integrates multiple synaptic inputs. The basic arithmetic that dendrites of a real neuron compute is still a matter of controversy (Poirazi and Mel, 2000). Both linear and nonlinear interactions between synaptic inputs in the brain have been described by neurophysiological experiments (Cash and Yuste, 1999; Koch, 1999) and explored computationally with different formalisms (Rall, 1977; Mel, 1992). In SpikeNNS, we consider that both excitatory and inhibitory inputs accumulate linearly in time. The total synaptic input to a hidden neuron i at some moment t is given by the contribution of all previous spikes of the presynaptic neurons (see Figure 5). The set of neurons presynaptic to node i is Ti = {j | j is presynaptic to i}. The set of all firing times of a presynaptic node j is given by Fj = {t(f)j | Vj(t(f)j) = theta}. In SpikeNNS, a limited number of spikes per neuron are stored (e.g., a maximum of 10 spikes/neuron were stored for the simulation run in Section 6.1).


Figure 5: The presynaptic contribution to the membrane voltage V(t) of neuron i. Each presynaptic neuron j in Ti emits a series of spikes Fj with firing times t(f)j. Neuron i computes the weighted sum of the decayed PSPs. If the sum exceeds the threshold then an output spike is generated.


In our model, the slope of the postsynaptic potential curve is negatively correlated with the value of the membrane time constant Tm (see Figure 4). That is, for large values of Tm, the postsynaptic potential persists longer, which allows the temporal summation of inputs to produce an aggregate PSP larger than would be elicited by an individual input. The neural threshold and the membrane time constant Tm are the principal parameters in determining how many excitatory inputs are needed for a neuron to fire. For example, the choice of a membrane time constant Tm = 5 (the blue graph in Figure 4) causes an exponential decay of the postsynaptic potential from the maximum value to 0 in about 10 ms. This relatively slow decay of the postsynaptic potential curve favors significant summation of the synaptic inputs that arrive in a time window no larger than 4-5 ms. For instance, given the above value of Tm, the threshold values were set so that at least three synaptic inputs (most commonly 4 or 5) were necessary for a postsynaptic spike to be emitted.
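To make the SRM0-style summation concrete, here is a condensed Python sketch of a hidden node summing decayed, delayed PSPs against a threshold; the parameter values and function names are our own illustrations, not the actual SpikeNNS code.

import math

def psp(s, tau_m=5.0):
    """Postsynaptic potential kernel: jumps to 1 when the spike arrives (s = 0)
    and then decays exponentially with membrane time constant tau_m.
    The Heaviside condition returns 0 before the spike has arrived (s < 0)."""
    return math.exp(-s / tau_m) if s >= 0 else 0.0

def membrane_potential(t, presyn_spikes, weights, delays, tau_m=5.0):
    """SRM0-style summation: weighted sum of the decayed PSPs evoked by all
    stored presynaptic firing times, shifted by the connection delays."""
    v = 0.0
    for j, firing_times in presyn_spikes.items():
        for t_f in firing_times:
            v += weights[j] * psp(t - t_f - delays[j], tau_m)
    return v

# Illustrative example: three presynaptic nodes firing near-coincidently.
presyn_spikes = {0: [2.0], 1: [2.5], 2: [3.0]}
weights = {0: 0.4, 1: 0.5, 2: 0.4}
delays = {0: 1.0, 1: 1.0, 2: 1.0}
theta = 1.0
v = membrane_potential(t=4.0, presyn_spikes=presyn_spikes, weights=weights, delays=delays)
print(round(v, 3), "-> spike" if v >= theta else "-> no spike")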

For detailed PDF document please mail to sekharstuff@gmail.com

Wednesday, May 21, 2008

Integrate and Fire neuron model

For information about the Integrate and Fire neuron model please visit
http://icwww.epfl.ch/~gerstner//SPNM/node26.html

Hodgkin-Huxley neuron model

The standard Hodgkin-Huxley model of an excitatory neuron consists of the equation for the total membrane current, IM, obtained from Ohm's law:

IM = IK + INa + IL,

where V denotes the membrane voltage, IK is the potassium current, INa is the sodium current and IL is the leakage current carried by other ions that move passively through the membrane. This equation is derived by modeling the potassium, sodium and leakage currents using a simple electrical circuit model of the membrane. We think of a gate in the membrane as having an intrinsic resistance and the cell membrane itself as having an intrinsic capacitance, as shown in Figure 2.1:

Figure 2.1: The Membrane and Gate Circuit Model

Here we show an idealized cell with a small portion of the membrane blown up into an idealized circuit. We see a small piece of the lipid membrane with an inserted gate. We think of the gate as having some intrinsic resistance and capacitance. Now, for our simple Hodgkin-Huxley model here, we want to model a sodium and a potassium gate as well as the cell capacitance. So we will have a resistance for both the sodium and the potassium. In addition, we know that other ions move across the membrane due to pumps, other gates and so forth. We will temporarily model this additional ion current as a leakage current with its own resistance. We also know that each ion has its own equilibrium potential, which is determined by applying the Nernst equation. The driving electromotive force, or driving emf, is the difference between the ion equilibrium potential and the voltage across the membrane itself. Hence, if Ec is the equilibrium potential due to ion c and Vm is the membrane potential, the driving force is Ec - Vm. In Figure 2.2, we see an electric schematic that summarizes what we have just said. We model the membrane as a parallel circuit with a branch for the sodium and potassium ions, a branch for the leakage current and a branch for the membrane capacitance.

Figure 2.2: The Simple Hodgkin - Huxley Membrane Circuit Model
From circuit theory, we know that the charge q across a capacitor is q = C E, where C is the capacitance and E is the voltage across the capacitor. Hence, if the capacitance C is a constant, we see that the current through the capacitor is given by the time rate of change of the charge:

I = C dE/dt.

If the voltage E were also space dependent, then we would write E(z,t) to indicate its dependence on both a space variable z and the time t. Then the capacitive current would be

I = C ∂E(z,t)/∂t.
From Ohm's law, we know that voltage is current times resistance; hence for each ion c we can write

Vc = Ic Rc,

where we label the voltage, current and resistance due to this ion with the subscript c. This implies

Ic = gc Vc,

where gc is the reciprocal resistance, or conductance, of ion c. Hence, we can model all of our ionic currents using a conductance equation of the form above. Of course, the potassium and sodium conductances are nonlinear functions of the membrane voltage V and time t. This reflects the fact that the amount of current that flows through the membrane for these ions depends on the voltage differential across the membrane, which in turn is also time dependent. The general functional form for an ion c is thus

Ic = gc(V, t) (V - Ec),

where, as we mentioned previously, the driving force, V - Ec, is the difference between the voltage across the membrane and the equilibrium value for the ion in question, Ec. Note, the ion battery voltage Ec itself might also change in time (for example, the extracellular potassium concentration changes over time). Hence, the driving force is time dependent. The conductance is modeled as the product of an activation term, m, and an inactivation term, h, that are essentially sigmoid nonlinearities. The activation and inactivation are functions of V and t also. The conductance is assumed to have the form

gc(V, t) = gc_max m^p h^q,

where appropriate powers p and q are found to match known data for a given ion conductance. We model the leakage current, IL, as

IL = gL (V - EL),
where the leakage battery voltage, EL, and the conductance gL are constants that are data driven. Hence, our full model would be

C dV/dt + gK(V, t) (V - EK) + gNa(V, t) (V - ENa) + gL (V - EL) = 0.
Activation and Inactivation Variables: We assume that the voltage dependence of our activation and inactivation variables has been fitted from data. Hodgkin and Huxley modeled the time dependence of these variables using first order kinetics. They assumed a typical variable of this type, say m, satisfies, for each value of voltage V:

dm/dt = alpha_m(V) (1 - m) - beta_m(V) m,

where alpha_m(V) and beta_m(V) are voltage-dependent opening and closing rates fitted from data.
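A compact forward-Euler Python sketch of the resulting Hodgkin-Huxley system; the rate functions and constants below are the commonly quoted squid-axon values (voltages in mV, resting potential near -65 mV), which may differ from the parameter set a particular text fits to data.

import math

# Commonly quoted Hodgkin-Huxley squid-axon constants (illustrative).
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of C_M dV/dt = I_ext - I_Na - I_K - I_L,
    with first-order kinetics dx/dt = alpha_x(V) (1 - x) - beta_x(V) x."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate_hh()
print("peak membrane potential (mV):", round(max(trace), 1))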





Tuesday, May 20, 2008

Artificial Neural Model

Artificial Neuron Model
As mentioned in the previous section, the transmission of a signal from one neuron to another through synapses is a complex chemical process in which specific transmitter substances are released from the sending side of the junction. The effect is to raise or lower the electrical potential inside the body of the receiving cell. If this graded potential reaches a threshold, the neuron fires. It is this characteristic that the artificial neuron model proposed by McCulloch and Pitts attempts to reproduce. The neuron model shown in Figure 6 is the one that is widely used in artificial neural networks, with some minor modifications.
Figure 6. Artificial Neuron
The artificial neuron given in this figure has N inputs, denoted u1, u2, ..., uN. Each line connecting these inputs to the neuron is assigned a weight, denoted w1, w2, ..., wN respectively. Weights in the artificial model correspond to the synaptic connections in biological neurons. The threshold in the artificial neuron is usually represented by θ, and the activation corresponding to the graded potential is given by the formula:

a = w1 u1 + w2 u2 + ... + wN uN + θ

The inputs and the weights are real values. A negative value for a weight indicates an inhibitory connection, while a positive value indicates an excitatory one. Although in biological neurons θ has a negative value, it may be assigned a positive value in artificial neuron models. If θ is positive, it is usually referred to as bias. For its mathematical convenience we will use the (+) sign in the activation formula. Sometimes, the threshold is combined for simplicity into the summation part by assuming an imaginary input u0 = +1 and a connection weight w0 = θ. Hence the activation formula becomes:

a = w0 u0 + w1 u1 + ... + wN uN
The output value of the neuron is a function of its activation, in an analogy to the firing frequency of biological neurons:

x = f(a)

Furthermore, the vector notation

a = w^T u + θ

is useful for expressing the activation of a neuron. Here, the jth element of the input vector u is uj and the jth element of the weight vector w is wj. Both of these vectors are of size N. Notice that w^T u is the inner product of the vectors w and u, resulting in a scalar value. The inner product is an operation defined on equal-sized vectors. If these vectors have unit length, the inner product is a measure of the similarity of the two vectors.
Originally, the neuron output function f(a) in the McCulloch-Pitts model was proposed as a threshold function; however, linear, ramp and sigmoid functions (Figure 7) are also widely used output functions:





Figure 7. Some neuron output functions
Despite its simple structure, the McCulloch-Pitts neuron is a powerful computational device. McCulloch and Pitts proved that a synchronous assembly of such neurons is capable, in principle, of performing any computation that an ordinary digital computer can, though not necessarily as rapidly or conveniently.
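To tie the pieces together, here is a small Python sketch of the activation formula with a few of the output functions from Figure 7; the input, weight and bias values are arbitrary examples.

import math

def activation(u, w, theta):
    """Weighted sum of the inputs plus the bias/threshold term: a = w.u + theta."""
    return sum(wi * ui for wi, ui in zip(w, u)) + theta

# A few of the output functions mentioned above (threshold, linear, ramp, sigmoid).
def threshold_output(a):  return 1.0 if a >= 0.0 else 0.0
def linear_output(a):     return a
def ramp_output(a):       return max(0.0, min(1.0, a))
def sigmoid_output(a):    return 1.0 / (1.0 + math.exp(-a))

u = [0.5, -1.0, 2.0]            # illustrative inputs
w = [0.8, 0.2, 0.5]             # illustrative weights
theta = 0.1                     # bias
a = activation(u, w, theta)
for f in (threshold_output, linear_output, ramp_output, sigmoid_output):
    print(f.__name__, "->", round(f(a), 3))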