Saturday, June 28, 2008

The McCulloch-Pitts Model of Neuron

The early model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, I3, ..., In and one output y. The linear threshold gate simply classifies the set of inputs into two different classes, so the output y is binary. Such a function can be described mathematically using these equations:
SUM = I1*W1 + I2*W2 + I3*W3 + ... + In*Wn        (2.1)

y = f(SUM) = 1 if SUM >= T, and 0 if SUM < T     (2.2)
W1, W2, W3, ..., Wn are weight values normalized in the range of either (0,1) or (-1,1) and associated with each input line, SUM is the weighted sum, and T is a threshold constant. The function f is a linear step function at threshold T, as shown in Figure 2.3. The symbolic representation of the linear threshold gate is shown in Figure 2.4.



Figure 2.3: Linear Threshold Function




Figure 2.4: Symbolic Illustration of Linear Threshold Gate






The McCulloch-Pitts model of a neuron is simple yet has substantial computing potential. It also has a precise mathematical definition. However, this model is so simplistic that it only generates a binary output, and the weight and threshold values are fixed. The neural computing algorithm has diverse features for various applications [Zur92]. Thus, we need to obtain a neural model with more flexible computational features.
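As a concrete illustration of equations (2.1) and (2.2), the sketch below implements a linear threshold gate in Python; the example inputs, weights and threshold are arbitrary values chosen for illustration, not values from the text.

    # Minimal sketch of a McCulloch-Pitts linear threshold gate.
    # The weight and threshold values below are arbitrary examples.

    def linear_threshold_gate(inputs, weights, threshold):
        """Return 1 if the weighted sum of the inputs reaches the threshold, else 0."""
        weighted_sum = sum(w * x for w, x in zip(weights, inputs))   # SUM in equation (2.1)
        return 1 if weighted_sum >= threshold else 0                 # y = f(SUM) in equation (2.2)

    # Example: a 2-input AND gate realized with fixed weights and threshold T = 2.
    for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pattern, linear_threshold_gate(pattern, weights=[1, 1], threshold=2))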

Thursday, June 19, 2008

Applications of neural networks

Sample applications
Medicine
One of the areas that has gained attention is cardiopulmonary diagnostics. The way neural networks work in this and other areas of medical diagnosis is by comparing many different models. A patient may have regular checkups in a particular area, increasing the possibility of detecting a disease or dysfunction.
The data fed to the different models may include heart rate, blood pressure, breathing rate, and so on. The models may include variations for age, sex, and level of physical activity. Each individual's physiological data is compared to previous physiological data and/or to data from the various generic models. Deviations from the norm are compared to the known causes of deviations for each medical condition. The neural network can learn by studying the different conditions and models, merging them to form a complete conceptual picture, and then diagnosing a patient's condition based upon the models.
Electronic Noses
The idea of a chemical nose may seem a bit absurd, but it has several real-world applications. The electronic nose is composed of a chemical sensing system (such as a spectrometer) and an artificial neural network, which recognizes certain patterns of chemicals. An odor is passed over the chemical sensor array, these chemicals are then translated into a format that the computer can understand, and the artificial neural network identifies the chemical.
A list at the Pacific Northwest Laboratory has several different applications in the environment, medical, and food industries.
Environment: identification of toxic wastes, analysis of fuel mixtures (7-11 example), detection of oil leaks, identification of household odors, monitoring air quality, monitoring factory emission, and testing ground water for odors.
Medical: The idea of using these in the medical field is to examine odors from the body to identify and diagnose problems. Odors in the breath, infected wounds, and body fluids all can indicate problems. Artificial neural networks have even been used to detect tuberculosis.
Food: The food industry is perhaps the biggest practical market for electronic noses, assisting or replacing entirely humans. Inspection of food, grading quality of food, fish inspection, fermentation control, checking mayonnaise for rancidity, automated flavor control, monitoring cheese ripening, verifying if orange juice is natural, beverage container inspection, and grading whiskey.
Security
One program that has already been started is the CATCH program. CATCH is an acronym for Computer Aided Tracking and Characterization of Homicides. It learns about an existing crime: the location of the crime and the particular characteristics of the offense. The program is subdivided into different tools, each of which places an emphasis on a certain characteristic or group of characteristics. This allows the user to remove certain characteristics which humans determine are unrelated.
Loans and credit cards
Loan granting is one area in which neural networks can aid humans, as it is an area not based on predetermined and pre-weighted criteria; the answers are instead nebulous. Banks want to make as much money as they can, and one way to do this is to lower the failure rate by using neural networks to help decide whether to approve a loan. Neural networks are particularly useful in this area since no process will guarantee 100% accuracy. Even 85-90% accuracy would be an improvement over the methods humans use.
In fact, in some banks, the failure rate of loans approved using neural networks is lower than that of some of their best traditional methods. Some credit card companies are now beginning to use neural networks in deciding whether to grant an application.
The process works by analyzing past failures and making current decisions based upon past experience. Nonetheless, this creates its own problems. For example, the bank or credit company must justify their decision to the applicant. The reason "my neural network computer recommended against it" simply isn't enough for people to accept. The process of explaining how the network learned and on what characteristics the neural network made its decision is difficult. As we alluded to earlier in the history of neural networks, self-modifying code is very difficult to debug and thus difficult to trace. Recording the steps it went through isn't enough, as it might be using conventional computing, because even the individual steps the neural network went through have to be analyzed by human beings, or possibly the network itself, to determine that a particular piece of data was crucial in the decision-making process.

Neural Network Follies(funny story)

In the 1980s, the Pentagon wanted to harness computer technology to make their tanks harder to attack.
The Plan
The preliminary plan was to fit each tank with a digital camera hooked up to a computer. The computer would continually scan the environment outside for possible threats (such as an enemy tank hiding behind a tree), and alert the tank crew to anything suspicious. Computers are really good at doing repetitive tasks without taking a break, but they are generally bad at interpreting images. The only possible way to solve the problem was to employ a neural network.
The Implementation
The research team went out and took 100 photographs of tanks hiding behind trees, and then took 100 photographs of trees - with no tanks. They took half the photos from each group and put them in a vault for safe-keeping, then scanned the other half into their mainframe computer. The huge neural network was fed each photo one at a time and asked if there was a tank hiding behind the trees. Of course at the beginning its answers were completely random since the network didn't know what was going on or what it was supposed to do. But each time it was fed a photo and it generated an answer, the scientists told it if it was right or wrong. If it was wrong it would randomly change the weightings in its network until it gave the correct answer. Over time it got better and better until eventually it was getting each photo correct. It could correctly determine if there was a tank hiding behind the trees in any one of the photos.
Verification
But the scientists were worried: had it actually found a way to recognize if there was a tank in the photo, or had it merely memorized which photos had tanks and which did not? This is a big problem with neural networks: after they have been trained, you have no idea how they arrive at their answers; they just do. The question was, did it understand the concept of tanks vs. no tanks, or had it merely memorized the answers? So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before -- this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one.
Independent testing
The Pentagon was very pleased with this, but a little bit suspicious. They commissioned another set of photos (half with tanks and half without), scanned them into the computer and ran them through the neural network. The results were completely random. For a long time nobody could figure out why. After all, nobody understood how the neural network had trained itself.
Grey skies for the US military
Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it - not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the colour of the sky. The military was now the proud owner of a multi-million dollar mainframe computer that could tell you if it was sunny or not.
This story might be apocryphal, but it doesn't really matter. It is a perfect illustration of the biggest problem behind neural networks. Any automatically trained net with more than a few dozen neurons is virtually impossible to analyze and understand. One can't tell if a net has memorized inputs, or is 'cheating' in some other way. A promising use for neural nets these days is to predict the stock market. Even though initial results are extremely good, investors are leery of trusting their money to a system that nobody understands.

Tuesday, June 17, 2008

Spiking Neural model

For detailed PDF document please mail to sekharstuff@gmail.com
Formal spiking neuron models
The second stream of research in computational neuroscience is oriented towards modeling the spiking nature of the neurons, retaining the essential elements of the behavior being modeled while trying to simplify the complexity of the resulting description (Gerstner, 1991; Maass, 1995; Maass, 1997; Rieke et al., 1997). The principal motivation for the creation of simplified models is that they allow studying more easily the computational and functional principles of neural systems (Koch, 1999).
The reduction of the detailed neuron models to formal models requires simplifications in at least two respects. First, the non–linear dynamics of spike generation must be reduced to a single ordinary differential equation and, second, the spatial structure of the neuron (i.e., the dendritic tree) is neglected and reduced to an input (Gerstner and Kistler, 2002). To support the validity of the former simplification, Kistler et al. (1997) demonstrated that spike generation in the Hodgkin–Huxley model can be reproduced to a high degree of accuracy (i.e., up to 90%) by a single variable model. The authors pointed out that the Hodgkin–Huxley model shows a sharp, threshold–like transition between an action potential for a strong stimulus and a graded response (no spike) for slightly weaker stimuli. This suggests that the emission of an action potential can be described by a threshold process.
Several simplified neural models have been proposed in the last decades. The leaky integrate–and–fire neuron is probably the best–known example of a formal neural model (Tuckwell, 1988; Bugmann, 1991; Stevens and Zador, 1998). It simulates the dynamics of the neuron membrane potential in response to a synaptic current by implementing an equivalent electrical circuit. The function of the integrate–and–fire circuit is to accumulate the input currents and, when the membrane potential reaches the threshold value, to generate a spike. Immediately after emitting a pulse, the potential is reset and maintained there for an absolute refractory period.
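As a rough illustration of the mechanism just described, here is a minimal leaky integrate-and-fire sketch in Python; the time constant, threshold, reset value and input current are illustrative choices only, not parameters taken from the cited models.

    # Rough sketch of a leaky integrate-and-fire neuron (illustrative parameters).

    def simulate_lif(input_current, dt=1.0, tau_m=10.0, v_rest=0.0,
                     v_threshold=1.0, v_reset=0.0, refractory=2.0):
        """Integrate the input current step by step and return the spike times (in ms)."""
        v = v_rest
        spike_times = []
        refractory_until = -1.0
        for step, i_in in enumerate(input_current):
            t = step * dt
            if t < refractory_until:
                v = v_reset                           # held at reset during the absolute refractory period
                continue
            # leaky integration: the potential decays towards rest and accumulates the input
            v += dt * (-(v - v_rest) / tau_m + i_in)
            if v >= v_threshold:                      # threshold reached: emit a spike and reset
                spike_times.append(t)
                v = v_reset
                refractory_until = t + refractory
        return spike_times

    print(simulate_lif([0.15] * 100))                 # a constant input produces a regular spike train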
The simplified mathematical models for spiking neurons cannot account for the entire range of computational functions of the biological neuron. Rather, they try to abstract a number of essential computational aspects of the real cell function. The essential features implemented can differ between models, as a function of what the modeler considers to be relevant and crucial for the domain of study. Thus, the integrate–and–fire model focuses upon the temporal summation function of the neuron (Bugmann and Taylor, 1997). The spike response model proposed by Gerstner (1999) simplifies the action potential generation to a threshold process. The resonate–and–fire model (Izhikevich, 2001) focuses upon the operation of the neuron in a resonating regime. In contrast with the detailed neural models, the computational strength of the spiking neurons arises from the way they interact with each other when they work cooperatively in large networks.
Neural communication with spikes
Neurons communicate by producing sequences of fixed-size electrical impulses called action potentials or spikes (Adrian, 1926). As Rieke and colleagues put it:
Spike sequences are the language for which the brain is listening, the language the brain uses for its internal musings, and the language it speaks as it talks to the outside world.
In the theory of neural information processing, there are two main hypotheses with respect to where in the spike train the neural information is encoded: in the neural firing rate or in the precise timing of the spikes. These hypotheses are introduced in turn, below.
Rate coding
Adrian (1926) introduced the concept of rate coding, by which the number of spikes in a fixed time window following the onset of a static stimulus codes for the intensity of the stimulus. Since Adrian's studies, the rate coding hypothesis has been dominant in the neural computational field (see Recce, 1999, for a review). The definition of the rate has been applied to the discovery of the properties of many types of neurons in the sensory, motor, and central nervous system, by searching for those stimuli that make neurons fire maximally.
Recent observations on the behavior of cortical visual neurons demonstrated a temporal precision in brain function that is higher than would be predicted from frequency coding. This suggests that firing rate alone cannot account for all of the encoding of information in spike trains. Consequently, in the last decade, the focus of attention in experimental and computational neuroscience has shifted towards the exploration of how the timing of single spikes is used by the nervous system. It is important to understand that pulse coding represents an extension of the way neurons code information, rather than a replacement of the firing rate code. Panzeri and Schultz (2001) proposed such a unified approach to the study of temporal, correlation and rate coding. They suggest that a spike count coding phase exists for narrow time windows (i.e., shorter than the timescale of the stimulus-induced response fluctuations), while for time windows much longer than the stimulus characteristic timescale, there is additional timing information, leading to a temporal coding phase.
Temporal coding by relative latencies
In a temporal code, information can be contained in the temporal pattern of spikes (inter–spike interval codes) or in the time–of–arrival of the spikes (relative spike timings) (Cariani, 1997). In the following, we discuss the latter coding scheme, which is implemented in SpikeNNS. Neurobiological studies of the sensory coding of stimuli in the auditory and visual systems revealed that the latency of transmission is a potential candidate for coding the stimulus features (Bugmann, 1991; Heil and Irvine, 1996; Heil, 1997). An example is the study by Gawne et al. (1996), who showed that the latency of neurons' responses in the striate cortex is a function of the stimulus contrast and that synchronization based on spike latencies can make an important contribution to binding contrast-related information. The coding scheme which represents analog information through differences in the firing times of different neurons is referred to as delay coding or latency coding (Hopfield, 1995; Gawne et al., 1996; Maass, 1997; Thorpe and Gautrais, 1998).


Figure 2: Coding by relative delay. The neurons in the figure emit spikes at different moments in time t(f)j. The most strongly activated neuron fires first (i.e., the second from left). Its spike travels a considerable distance along the axon until the last neuron fires (i.e., the fourth from left). The latencies xj are computed with respect to a reference time T.

According to Hopfield (1995) and Maass (1997), a vector of real numbers (x1, ..., xn) with xj in [0, 1] can be encoded in the firing times t(f)j of n neurons, such that t(f)j = T - c*xj, where T is some reference time and c*xj represents the transmission delay. The timing can be defined relative to some other spike produced by the same neuron or to the onset of a stimulus. If, for each neuron, we consider only the latency of the first spike after the stimulus onset, then we obtain a coding scheme based on the time–to–first–spike. According to Van Rullen and Thorpe (2001), cells can act as 'analog–to–delay converters'. That is, the most strongly activated cells will tend to fire first and will signal a strong stimulation, whereas more weakly activated units will fire later and signal a weak stimulation. This coding scheme was proposed by Thorpe and Gautrais (1998), who argued that during visual object recognition the brain does not have time to evaluate more than one spike from each neuron per processing step. The idea is supported by other experimental studies (Tovee et al., 1993) and was used to implement learning in a number of neural network models, based on the timing or the order of single spike events (Ruff and Schmitt, 1998; Van Rullen et al., 1998; see also Section 6.1).
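One possible reading of this delay coding scheme in code, assuming the linear rule t(f)j = T - c*xj mentioned above; the values of T and c are arbitrary choices for illustration.

    # Sketch of latency (time-to-first-spike) coding: stronger activations fire earlier.
    # T (reference time) and c (delay scaling) are arbitrary illustrative constants.

    def encode_latencies(activations, T=10.0, c=8.0):
        """Map analog values x_j in [0, 1] to firing times t_j = T - c * x_j."""
        return [T - c * x for x in activations]

    def decode_latencies(firing_times, T=10.0, c=8.0):
        """Recover the analog values from the firing times."""
        return [(T - t) / c for t in firing_times]

    x = [0.9, 0.2, 0.6, 0.1]
    times = encode_latencies(x)
    print(times)                    # the most strongly activated unit (0.9) fires first
    print(decode_latencies(times))  # recovers the original vector (up to rounding)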

Computational properties of spiking neurons
Spiking neural models can account for different types of computations, ranging from linear temporal summation of inputs and coincidence detection to multiplexing, nonlinear operations and preferential resonance (Koch, 1999; Maass, 1999). Several recent studies employing rigorous mathematical tools have demonstrated that through the use of temporal coding, a pulsed neural network may gain more computational power than a traditional network (i.e., consisting of rate coding neurons) of comparable size (Maass and Schmitt, 1996; Maass, 1997).
A simple spiking neural model can carry out computations over the input spike trains under several different modes (Maass, 1999). Thus, spiking neurons can compute when the input is encoded in temporal patterns, firing rates, firing rates and temporal correlations, or space–rate codes. An essential feature of spiking neurons is that they can act as coincidence detectors for the incoming pulses, by detecting whether they arrive at almost the same time (Abeles, 1982; Softky and Koch, 1993; Kempter et al., 1998). When operating in the integration mode (see the integrate-and-fire model in Section 4.2.1), the output rate changes as a function of the mean input rate and is independent of the fine structure of the input spike trains (Gerstner, 1999). By contrast, when the neuron is functioning as a coincidence detector, the output firing rate is higher if the spikes arrive simultaneously, as opposed to random spike arrival. More precisely, the neuron fires (e.g., signals a detection) if any two presynaptic neurons have fired within a temporal distance smaller than an arbitrary constant c1, and does not fire if all presynaptic neurons fire in a time interval larger than another constant c2 (Maass, 1999).
For a neuron to work as a coincidence detector, two constraints have to be satisfied: (1) the postsynaptic potential has to evolve in time according to an exponential decay function, and (2) the transmission delays must have similar values, so that the simultaneous arrival of the postsynaptic potentials which cause the neuron to fire reflects the coincidence of presynaptic spikes (Maass, 1999). Note that not every spiking neural model can detect coincidences. For instance, the resonator neuron fires if the input train of spikes has the same phase as its own oscillation, but has a low chance of spiking if the inputs arrive coincidentally (Izhikevich, 2000).
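The coincidence-detection criterion described above can be paraphrased as a small predicate; the constant c1 below is an arbitrary illustrative value.

    # Sketch of the coincidence-detection criterion (illustrative constant c1, in ms).

    def fires_as_coincidence_detector(presynaptic_spike_times, c1=2.0):
        """Fire if at least two presynaptic spikes arrive within c1 milliseconds of each other."""
        times = sorted(presynaptic_spike_times)
        return any(later - earlier <= c1 for earlier, later in zip(times, times[1:]))

    print(fires_as_coincidence_detector([3.0, 3.5, 9.0]))   # True: two spikes only 0.5 ms apart
    print(fires_as_coincidence_detector([1.0, 6.0, 12.0]))  # False: all spikes widely separated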
In SpikeNNS, neurons can compute in two regimes: coincidence detection and threshold–and–fire. Acting as coincidence detectors is more likely for hidden units when they compute over pulses coming from the input layers. That is because, in our implementation, the input spikes arriving on the afferent connections are affected by similar delays with a small noise factor. The latency of the spikes is given by the firing times of the input nodes. The operation in this computing domain also depends on the neural threshold and on the value of the membrane time constant that describes how fast the postsynaptic potentials decay.
In the threshold–and–fire mode, neurons perform a linear summation of the inputs in a manner similar to the integrate–and–fire model. The integration of pulses over a larger time interval is particularly required in the case of spikes arriving on the lateral synapses, which are affected by a large range of delays (e.g., from 1 to 10 ms).
Neural model in SpikeNNS
The neural model implemented in SpikeNNS is a simplified version of the Spike Response Model (Gerstner, 1991; Gerstner et al., 1993; Gerstner, 1999) referred to as SRM0. The Spike Response Model (SRM) represents an alternative formulation to the well–known integrate– and–fire model. Instead of defining the evolution of the neuron membrane potential by a differential equation, SRM uses a kernel-based method. By doing this, the Spike Response Model is slightly more general than the integrate–and–fire models because the response kernels can be chosen arbitrarily, whereas for the integrate–and–fire model they are fixed (Gerstner, 1999). According to Kistler et al. (1997) the Spike Response Model can reproduce correctly up to 90% of the spike times of the Hodgkin-Huxley model. The model can also be used to simulate the dynamics of linear dendritic trees, as well as non–linear effects at the synapses (Gerstner, 1999). The Spike Response Model offers us a powerful computational framework that captures the essential effects during spiking and has the advantages of a simple and elegant mathematical formalization.
1. Spike Response Model
The Spike Response Model describes the state of a neuron by a single variable, the membrane potential Vi. Figure 4.3a shows the time evolution of the membrane potential of neuron i as a function of time t. Before any input spike has arrived at the postsynaptic neuron i, the variable Vi(t) has the value 0. The firing of a presynaptic neuron j at time t(f)j evokes a postsynaptic potential in neuron i, modeled by the response kernel Eij. Each incoming spike will perturb the value of Vi and if, after the summation of the inputs, the membrane potential Vi reaches the threshold theta, then an output spike is generated. The firing time is given by the condition Vi(t(f)i) = theta. After the neuron has fired, the membrane potential returns to a low value, which is described by the refractory period function ni. After firing, the evolution of Vi is given by the equation:

Vi(t) = ni(t - ti) + SUM over j in Ti, SUM over t(f)j in Fj of wij * Eij(t - ti, s) + INTEGRAL of ~E(t - ti, s) * Iext(t - s) ds        (4.1)

with s = t - t(f)j. The first term in the equation (i.e., the kernel ni) accounts for refractoriness in the neuron behavior. The second term represents the contribution of all previous spikes t(f)j of presynaptic neurons j to the membrane potential of neuron i. Ti denotes the set of neurons presynaptic to i, Fj is the set of all firing times of neuron j, and wij are the synaptic strengths between cells (see Figure 5). The kernel Eij, as a function of t - t(f)j, represents the time course of the postsynaptic potential evoked by the firing of the presynaptic neuron j at time t(f)j (see Figure 4.4a). The evolution of the postsynaptic potential function also depends on the time t - ti that has passed since the last spike of the postsynaptic neuron. That is because, if the neuron is in a refractory period, its response to an input spike is smaller than if the neuron is fully responsive. The last term represents the effect on the neuron of an external driving current Iext, and the kernel ~E(t - ti, s) is the linear response of the membrane potential to the input current; it depends on the time that has passed since the last output spike was emitted at ti (Gerstner, 1999).
SRM0
A simpler version of the Spike Response Model can be obtained by neglecting the dependence of the Eij kernel on the term t - ti (i.e., the effect of the neuron's last spike on the postsynaptic potential function) and by considering a null external current. Consequently, equation 4.1 can be rewritten:

Vi(t) = ni(t - ti) + SUM over j in Ti, SUM over t(f)j in Fj of wij * Eij(t - t(f)j)        (4.2)

This version of the Spike Response Model has been entitled SRM0 and has been applied to the analysis of computations with spiking neurons by Maass (1999). The neural model implemented in SpikeNNS is completely specified by the set of Equations 2, 3, 5, which account for several important aspects of neural behavior: the spiking nature of the neuron, the attenuation of the response at the soma resulting from synaptic input, and the absolute and relative refractory periods. The model also accounts for spike latency and noise in the neural response (see the description in the sections below).

Figure 3: (a) Spike Response Model, the membrane potential V of neuron i as a function of time t. (b) SpikeNNS neural model, the time course of the membrane potential Vi; theta is the neural threshold. See the text for more details on the time evolution of the membrane potential.

Figure 3b shows the time evolution of the membrane potential Vi in the simplified neural model implemented in SpikeNNS. Compared with the membrane potential in the SRM model represented in Figure 4.3a, the shape of the action potential is reduced to a formal event, captured by a delta pulse (the vertical line). After the spike emission, the membrane voltage is reset to a negative value and is kept there for 1 ms.
The ascending curve of the E function also reduces to a pulse, followed by an exponential decay of the postsynaptic potential. In SpikeNNS, Equation 4.2 is implemented by the activation function ACT Spike, which expresses how the membrane potential V of a node i is calculated at a given time t. Let us now consider the mathematical description of the two kernels, E and n, required in Equation 2.

Figure 4: Postsynaptic potential function in SpikeNNS. The decay of the curve with time is plotted for different values of the decay rate, given by the membrane time constant Tm. Note that the slope of the curve is negatively correlated with the value of the membrane time constant.
4.3.2 Postsynaptic potential function
In SpikeNNS, each hidden neuron is connected to a number of other neurons, either from the same layer or from an input or another hidden layer. The firing of any node, i.e., input or hidden, is transmitted to all its postsynaptic units, where it evokes a postsynaptic potential of some standard form (see Figure 4). The spike transmission is affected by a noisy delay d, which in our implementation is proportional to the Euclidean distance between the presynaptic and the postsynaptic node. This delay corresponds to the axonal and dendritic transmission delay of real neurons (Koch, 1999). When the presynaptic spike reaches the postsynaptic unit, the postsynaptic potential (PSP) jumps to a maximum value; in our simulation this value is set to 1. Afterwards, it decays exponentially towards the resting value, with the rate given by the time constant Tm. In our model, the postsynaptic potential Eij is described as a function of the difference s = t - t(f)j - d:

Eij(s) = exp(-s / Tm) * H(s)

where t is the time of consideration, t(f)j is the time of the presynaptic node firing, d is the delay on the connection, and H is the Heaviside step function. The Heaviside function sets the postsynaptic potential to a null value for any time moment t that precedes the arrival of the presynaptic spike, that is, for t < t(f)j + d.
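Under the description above (a jump to a maximum of 1 when the delayed spike arrives, followed by an exponential decay with time constant Tm, and a Heaviside gate before arrival), the kernel can be sketched as follows; this is a paraphrase of the text, not the actual SpikeNNS source.

    # Sketch of the SpikeNNS-style postsynaptic potential kernel, as described in the text.
    import math

    def psp(t, t_fire, delay, tau_m=5.0):
        """PSP evoked by a presynaptic spike at t_fire, arriving after a transmission delay."""
        s = t - t_fire - delay
        if s < 0:                      # Heaviside gate: no effect before the spike arrives
            return 0.0
        return math.exp(-s / tau_m)    # jumps to 1 on arrival, then decays with time constant tau_m

    print([round(psp(t, t_fire=2.0, delay=1.0), 3) for t in range(10)])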

Temporal summation of postsynaptic potentials
A single synaptic input is rarely sufficient to generate an action potential. The response of a neural cell is usually determined by the way it integrates multiple synaptic inputs. The basic arithmetic that the dendrites of a real neuron compute is still a matter of controversy (Poirazi and Mel, 2000). Both linear and nonlinear interactions between synaptic inputs in the brain have been described in neurophysiological experiments (Cash and Yuste, 1999; Koch, 1999) and explored computationally with different formalisms (Rall, 1977; Mel, 1992). In SpikeNNS, we consider that both excitatory and inhibitory inputs accumulate linearly in time. The total synaptic input to a hidden neuron i at some moment t is given by the contribution of all previous spikes of the presynaptic neurons (see Figure 4.5). The set of neurons presynaptic to node i is Ti = {j | j is presynaptic to i}.


Figure 5: The presynaptic contribution to the membrane voltage V (t) of neuron i. Each presynaptic neuron j in Ti emits a series of spikes Fj with firing times t(f) j . Neuron i computes the weighted sum of the decayed PSPs. If the sum exceeds the threshold then an output spike is generated.

The set of all firing times of the presynaptic node j is given by Fj = {t(f)j | Vj(t(f)j) = theta}. In SpikeNNS, a limited number of spikes per neuron are stored (e.g., a maximum of 10 spikes/neuron were stored for the simulation run in Section 6.1). In our model, the slope of the postsynaptic potential curve is negatively correlated with the value of the membrane time constant Tm (see Figure 4a). That is, for large values of Tm, the postsynaptic potential persists longer, which allows the temporal summation of inputs to produce an aggregate PSP larger than would be elicited by an individual input. The neural threshold and the membrane time constant Tm are the principal parameters in determining how many excitatory inputs are needed for a neuron to fire. For example, the choice of a membrane time constant Tm = 5 (the blue graph in Figure 4a) causes an exponential decay of the postsynaptic potential from the maximum value to 0 in about 10 ms. This relatively slow decay of the postsynaptic potential curve favors the significant summation of synaptic inputs which arrive in a time window no larger than 4 - 5 ms. For instance, given the above value of Tm, the threshold values were set so that at least three synaptic inputs (e.g., most commonly 4 or 5 inputs) were necessary for a postsynaptic spike to be emitted.
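Putting the kernel together with the linear summation and threshold test sketched in Figure 5, a hidden unit's firing decision can be paraphrased as below; the firing times, weights, delays and threshold are illustrative values only.

    # Sketch of linear temporal summation of PSPs followed by a threshold test (cf. Figure 5).
    import math

    def psp(s, tau_m=5.0):
        """PSP kernel: 1 at arrival (s = 0), exponential decay afterwards, 0 before arrival."""
        return math.exp(-s / tau_m) if s >= 0 else 0.0

    def membrane_potential(t, presynaptic_spikes, weights, delays, tau_m=5.0):
        """Weighted sum of the decayed PSPs over all stored presynaptic firing times."""
        return sum(weights[j] * psp(t - t_f - delays[j], tau_m)
                   for j, firing_times in enumerate(presynaptic_spikes)
                   for t_f in firing_times)

    def fires(t, presynaptic_spikes, weights, delays, threshold=2.5):
        """Emit a spike at time t if the summed potential reaches the threshold."""
        return membrane_potential(t, presynaptic_spikes, weights, delays) >= threshold

    spikes = [[1.0], [1.5], [2.0], [30.0]]             # firing times of four presynaptic nodes
    w, d = [1.0, 1.0, 1.0, 1.0], [0.5, 0.5, 0.5, 0.5]  # illustrative weights and delays
    print(fires(2.5, spikes, w, d))                    # True: three near-coincident PSPs sum above threshold
    print(fires(31.0, spikes, w, d))                   # False: one late input on its own is not enough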

For detailed PDF document please mail to sekharstuff@gmail.com

Wednesday, May 21, 2008

Integrate and Fire neuron model

Information about the Integrate and Fire neuron model can be found at
http://icwww.epfl.ch/~gerstner//SPNM/node26.html

Hodgkin -Huxley neuron model

The standard Hodgkin - Huxley model of an excitatory neuron consists of the equation for the total membrane current, IM, obtained from Ohm's law:
where V denotes the membrane voltage, IK is the potassium current, INa is the sodium current and IL is the leakage current carried by other ions that move passively through the membrane. This equation is derived by modeling the potassium, sodium and leakage currents using a simple electrical circuit model of the membrane. We think of a gate in the membrane as having an intrinsic resistance and the cell membrane itself as having an intrinsic capacitance, as shown in Figure 2.1:

Figure 2.1: The Membrane and Gate Circuit Model

Here we show an idealized cell with a small portion of the membrane blown up into an idealized circuit. We see a small piece of the lipid membrane with an inserted gate. We think of the gate as having some intrinsic resistance and capacitance. Now, for our simple Hodgkin-Huxley model here, we want to model a sodium and a potassium gate as well as the cell capacitance. So we will have a resistance for both the sodium and the potassium. In addition, we know that other ions move across the membrane due to pumps, other gates and so forth. We will temporarily model this additional ion current as a leakage current with its own resistance. We also know that each ion has its own equilibrium potential, which is determined by applying the Nernst equation. The driving electromotive force, or driving emf, is the difference between the ion equilibrium potential and the voltage across the membrane itself. Hence, if Ec is the equilibrium potential due to ion c and Vm is the membrane potential, the driving force is Ec - Vm. In Figure 2.2, we see an electric schematic that summarizes what we have just said. We model the membrane as a parallel circuit with a branch for the sodium and potassium ions, a branch for the leakage current and a branch for the membrane capacitance.

Figure 2.2: The Simple Hodgkin - Huxley Membrane Circuit Model
From circuit theory, we know that the charge q across a capacitor is q = C E, where C is the capacitance and E is the voltage across the capacitor. Hence, if the capacitance C is a constant, we see that the current through the capacitor is given by the time rate of change of the charge,

iC = C dE/dt

If the voltage E were also space dependent, then we would write E(z,t) to indicate its dependence on both a space variable z and the time t. Then the capacitive current would be

iC = C ∂E(z,t)/∂t
From Ohm's law, we know that voltage is current times resistance; hence for each ion c, we can say

Vc = Ic Rc

where we label the voltage, current and resistance due to this ion with the subscript c. This implies

Ic = gc Vc

where gc is the reciprocal resistance, or conductance, of ion c. Hence, we can model all of our ionic currents using a conductance equation of the form above. Of course, the potassium and sodium conductances are nonlinear functions of the membrane voltage V and time t. This reflects the fact that the amount of current that flows through the membrane for these ions depends on the voltage differential across the membrane, which in turn is also time dependent. The general functional form for an ion c is thus

Ic = gc(V, t) (V - Ec)

where, as we mentioned previously, the driving force, V - Ec, is the difference between the voltage across the membrane and the equilibrium value for the ion in question, Ec. Note that the ion battery voltage Ec itself might also change in time (for example, the extracellular potassium concentration changes over time); hence the driving force is time dependent. The conductance is modeled as the product of an activation term, m, and an inactivation term, h, which are essentially sigmoidal nonlinearities. The activation and inactivation are functions of V and t also. The conductance is assumed to have the form

gc(V, t) = gc_max m^p h^q

where appropriate powers p and q are found to match known data for a given ion conductance. We model the leakage current, IL, as

IL = gL (V - EL)

where the leakage battery voltage, EL, and the conductance gL are constants that are data driven. Hence, our full model would be

C dV/dt + gK(V, t) (V - EK) + gNa(V, t) (V - ENa) + gL (V - EL) = IM

Activation and Inactivation Variables: We assume that the voltage dependence of our activation and inactivation has been fitted from data. Hodgkin and Huxley modeled the time dependence of these variables using first order kinetics. They assumed a typical variable of this type, say m, satisfies, for each value of the voltage V,

dm/dt = alpha_m(V) (1 - m) - beta_m(V) m

where alpha_m(V) and beta_m(V) are voltage-dependent rate functions fitted from the data.
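As a rough illustration of the full model, the sketch below integrates the Hodgkin-Huxley equations with a simple forward-Euler scheme; the rate functions and constants are the commonly quoted squid-axon values (voltages in mV, conductances in mS/cm^2, capacitance in uF/cm^2), not necessarily the fits used in the text, so treat it as a generic sketch.

    # Forward-Euler sketch of the Hodgkin-Huxley equations with standard textbook parameters.
    import math

    C_M = 1.0
    G_NA, G_K, G_L = 120.0, 36.0, 0.3
    E_NA, E_K, E_L = 50.0, -77.0, -54.4

    def rates(v):
        """Voltage-dependent opening (alpha) and closing (beta) rates for n, m and h."""
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        return (a_n, b_n), (a_m, b_m), (a_h, b_h)

    def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
        v, n, m, h = -65.0, 0.32, 0.05, 0.6            # approximate resting state
        trace = []
        for _ in range(int(t_max / dt)):
            (a_n, b_n), (a_m, b_m), (a_h, b_h) = rates(v)
            # first order kinetics for each gating variable: dx/dt = alpha*(1 - x) - beta*x
            n += dt * (a_n * (1.0 - n) - b_n * n)
            m += dt * (a_m * (1.0 - m) - b_m * m)
            h += dt * (a_h * (1.0 - h) - b_h * h)
            # membrane equation: the capacitive current balances the ionic and external currents
            i_k = G_K * n**4 * (v - E_K)
            i_na = G_NA * m**3 * h * (v - E_NA)
            i_l = G_L * (v - E_L)
            v += dt * (i_ext - i_k - i_na - i_l) / C_M
            trace.append(v)
        return trace

    print(max(simulate()))    # peak of the action potential, roughly +40 mV for this stimulus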





Tuesday, May 20, 2008

Artificial Neural Model

Artificial Neuron Model
As mentioned in the previous section, the transmission of a signal from one neuron to another through synapses is a complex chemical process in which specific transmitter substances are released from the sending side of the junction. The effect is to raise or lower the electrical potential inside the body of the receiving cell. If this graded potential reaches a threshold, the neuron fires. It is this characteristic that the artificial neuron model proposed by McCulloch and Pitts attempts to reproduce. The neuron model shown in Figure 6 is the one that is widely used in artificial neural networks, with some minor modifications.
Figure 6. Artificial Neuron
The artificial neuron given in this figure has N inputs, denoted u1, u2, ..., uN. Each line connecting these inputs to the neuron is assigned a weight, denoted w1, w2, ..., wN respectively. Weights in the artificial model correspond to the synaptic connections in biological neurons. The threshold in an artificial neuron is usually represented by θ, and the activation corresponding to the graded potential is given by the formula:

a = w1*u1 + w2*u2 + ... + wN*uN + θ

The inputs and the weights are real values. A negative value for a weight indicates an inhibitory connection, while a positive value indicates an excitatory one. Although in biological neurons θ has a negative value, it may be assigned a positive value in artificial neuron models. If θ is positive, it is usually referred to as a bias. For mathematical convenience we will use the (+) sign in the activation formula. Sometimes, the threshold is combined for simplicity into the summation part by assuming an imaginary input u0 = +1 and a connection weight w0 = θ. Hence the activation formula becomes:

a = w0*u0 + w1*u1 + ... + wN*uN

The output value of the neuron is a function of its activation, in an analogy to the firing frequency of biological neurons:

x = f(a)

Furthermore, the vector notation
a = wTu + θ
is useful for expressing the activation for a neuron. Here, the jth element of the input vector u is uj and the jth element of the weight vector of w is wj. Both of these vectors are of size N. Notice that, wTu is the inner product of the vectors w and u, resulting in a scalar value. The inner product is an operation defined on equal sized vectors. In the case these vectors have unit length, the inner product is a measure of similarity of these vectors.
Originally, the neuron output function f(a) in the McCulloch-Pitts model was proposed as a threshold function; however, linear, ramp and sigmoid functions (Figure 7) are also widely used output functions:





Figure 7. Some neuron output functions
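To make the activation and output-function definitions above concrete, here is a small Python sketch; the particular input, weight and bias values are arbitrary examples.

    # Sketch of the artificial neuron described above: activation a = w.u + theta,
    # passed through one of several possible output functions. All numbers are illustrative.
    import math

    def activation(u, w, theta):
        """Weighted sum of the inputs plus the threshold/bias term."""
        return sum(wj * uj for wj, uj in zip(w, u)) + theta

    def threshold(a):                  # McCulloch-Pitts style binary output
        return 1.0 if a >= 0.0 else 0.0

    def ramp(a):                       # piecewise-linear output clipped to [0, 1]
        return max(0.0, min(1.0, a))

    def sigmoid(a):                    # smooth, differentiable output
        return 1.0 / (1.0 + math.exp(-a))

    u, w, theta = [0.5, -1.0, 2.0], [0.8, 0.2, 0.4], -0.5
    a = activation(u, w, theta)
    print(a, threshold(a), ramp(a), sigmoid(a))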
Despite its simple structure, the McCulloch-Pitts neuron is a powerful computational device. McCulloch and Pitts proved that a synchronous assembly of such neurons is capable, in principle, of performing any computation that an ordinary digital computer can, though not necessarily as rapidly or conveniently.


Biological Neuron Model and Artificial Neuron Model

Biological Neuron Model
It is claimed that the human central nervous system comprises about 1.3x10^10 neurons and that about 1x10^10 of them are located in the brain. At any time, some of these neurons are firing, and the power dissipation due to this electrical activity is estimated to be on the order of 10 watts. Monitoring the activity in the brain has shown that, even when asleep, 5x10^7 nerve impulses per second are being relayed back and forth between the brain and other parts of the body. This rate increases significantly when awake. A neuron has a roughly spherical cell body called the soma (Figure 1). The signals generated in the soma are transmitted to other neurons through an extension on the cell body called the axon, or nerve fibre. Another kind of extension around the cell body, like a bushy tree, is the dendrites, which are responsible for receiving the incoming signals generated by other neurons.
Fig 1 Typical neuron
An axon (Figure 2), with a length varying from a fraction of a millimeter to a meter in the human body, extends from the cell body at a point called the axon hillock. At the other end, the axon separates into several branches, at the very end of which the axon enlarges and forms terminal buttons. Terminal buttons are placed in special structures called synapses, which are the junctions transmitting signals from one neuron to another (Figure 3). A neuron typically drives 10^3 to 10^4 synaptic junctions.

Fig2. Axon
The synaptic vesicles, holding several thousands of molecules of chemical transmitters, are located in the terminal buttons. When a nerve impulse arrives at the synapse, some of these chemical transmitters are discharged into the synaptic cleft, which is the narrow gap between the terminal button of the neuron transmitting the signal and the membrane of the neuron receiving it. In general, synapses occur between an axon branch of one neuron and the dendrite of another. Although it is not very common, synapses may also occur between two axons or two dendrites of different cells, or between an axon and a cell body.

Figure 3. The synapse
Neurons are covered with a semi-permeable membrane only about 5 nanometers thick. The membrane is able to selectively absorb and reject ions in the intracellular fluid. The membrane basically acts as an ion pump to maintain a different ion concentration between the intracellular and extracellular fluid. While sodium ions are continually removed from the intracellular fluid to the extracellular fluid, potassium ions are absorbed from the extracellular fluid in order to maintain an equilibrium condition. Due to the difference in ion concentrations inside and outside, the cell membrane becomes polarized. In equilibrium, the interior of the cell is observed to be about 70 millivolts negative with respect to the outside of the cell. This potential is called the resting potential.
A neuron receives inputs from a large number of neurons via its synaptic connections. Nerve signals arriving at the presynaptic cell membrane cause chemical transmitters to be released into the synaptic cleft. These chemical transmitters diffuse across the gap and bind to the postsynaptic membrane at the receptor site. The membrane of the postsynaptic cell gathers the chemical transmitters. This causes either a decrease or an increase in the soma potential, called the graded potential, depending on the type of chemicals released into the synaptic cleft. The kind of synapse encouraging depolarization is called excitatory, and the kind discouraging it is called inhibitory. If the change in polarization is adequate to exceed a threshold, then the postsynaptic neuron fires.
The arrival of impulses at excitatory synapses adds to the depolarization of the soma, while the inhibitory effect tends to cancel out the depolarizing effect of excitatory impulses. In general, although the depolarization due to a single synapse is not enough to fire the neuron, if some other areas of the membrane are depolarized at the same time by the arrival of nerve impulses through other synapses, it may be adequate to exceed the threshold and fire.
At the axon hillock, the excitatory effects result in the interruption of the regular ion transport through the cell membrane, so that the ionic concentrations immediately begin to equalize as ions diffuse through the membrane. If the depolarization is large enough, the membrane potential eventually collapses, and for a short period of time the internal potential becomes positive. This brief reversal of the potential is called the action potential, and it results in an electric current flowing from the region at the action potential to an adjacent region of the axon at the resting potential. This current causes the potential of the next resting region to change, so the effect propagates in this manner along the axon membrane.


Figure 4. The action potential on axon
Once an action potential has passed a given point, that point is incapable of being re-excited for a period of time called the refractory period. Because the depolarized parts of the neuron are in a state of recovery and cannot immediately become active again, the pulse of electrical activity always propagates in only the forward direction. The previously triggered region on the axon then rapidly recovers to the polarized resting state due to the action of the sodium-potassium pumps. The refractory period is about 1 millisecond, and this limits nerve pulse transmission so that a neuron can typically fire and generate nerve pulses at a rate of up to about 1000 pulses per second. The number of impulses and the speed at which they arrive at the synaptic junctions of a particular neuron determine whether the total excitatory depolarization is sufficient to cause the neuron to fire and so to send a nerve impulse down its axon. The depolarization effect can propagate along the soma membrane, but these effects can be dissipated before reaching the axon hillock.
However, once the nerve impulse reaches the axon hillock, it will propagate until it reaches the synapses, where the depolarization effect will cause the release of chemical transmitters into the synaptic cleft. The axons are generally enclosed by a myelin sheath that is made of many layers of Schwann cells promoting the growth of the axon. The speed of propagation down the axon depends on the thickness of the myelin sheath, which provides insulation of the axon from the extracellular fluid and prevents the transmission of ions across the membrane. The myelin sheath is interrupted at regular intervals by narrow gaps called nodes of Ranvier, where the extracellular fluid makes contact with the membrane and the transfer of ions occurs. Since the axons themselves are poor conductors, the action potential is transmitted as depolarizations occur at the nodes of Ranvier. This happens in a sequential manner, so that the depolarization of one node triggers the depolarization of the next. The nerve impulse effectively jumps from node to node along the axon, each node acting rather like a regeneration amplifier to compensate for losses. Once an action potential is created at the axon hillock, it is transmitted through the axon to other neurons.
It is tempting to conclude that signal transmission in the nervous system has a digital nature, in which a neuron is assumed to be either fully active or inactive. However, this conclusion is not quite correct, because the intensity of a neuron signal is coded in the frequency of pulses. A better conclusion would be to interpret biological neural systems as using a form of pulse frequency modulation to transmit information. The nerve pulses passing along the axon of a particular neuron are of approximately constant amplitude, but the number of pulses generated and their time spacing are controlled by the statistics associated with the arrival, at the neuron's many synaptic junctions, of sufficient excitatory inputs.
The representative output behavior of a biophysical neuron is shown schematically in Figure 5. At time t=0 a neuron is excited; at time T, which typically may be of the order of 50 milliseconds, the neuron fires a train of impulses along its axon. Each of these impulses is of practically identical amplitude. Some time later, say around t=T+τ, the neuron may fire another train of impulses as a result of the same excitation, though the second train of impulses will usually contain a smaller number. Even when the neuron is not excited, it may send out impulses at random, though much less frequently than when it is excited.


Figure 5. Representation of biophysical neuron output signal after excitation at time t=0

A considerable amount of research has been performed aiming to explain the electrochemical structure and operation of a neuron; however, several questions still remain, which need to be answered in the future.

Biological Neuron



The brain is a collection of about 10 billion interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process and transmit information.
A neuron's dendritic tree is connected to a thousand neighbouring neurons. When one of those neurons fires, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through the processes of spatial and temporal summation. Spatial summation occurs when several weak signals are converted into a single large one, while temporal summation converts a rapid series of weak pulses from one source into one large signal. The aggregate input is then passed to the soma (cell body). The soma and the enclosed nucleus don't play a significant role in the processing of incoming and outgoing data. Their primary function is to perform the continuous maintenance required to keep the neuron functional. The part of the soma that does concern itself with the signal is the axon hillock. If the aggregate input is greater than the axon hillock's threshold value, then the neuron fires, and an output signal is transmitted down the axon. The strength of the output is constant, regardless of whether the input was just above the threshold, or a hundred times as great. The output strength is unaffected by the many divisions in the axon; it reaches each terminal button with the same intensity it had at the axon hillock. This uniformity is critical in an analogue device such as a brain where small errors can snowball, and where error correction is more difficult than in a digital system.
Each terminal button is connected to other neurons across a small gap called a synapse. The physical and neurochemical characteristics of each synapse determine the strength and polarity of the new input signal. This is where the brain is the most flexible, and the most vulnerable. Changing the constitution of various neurotransmitter chemicals can increase or decrease the amount of stimulation that the firing axon imparts on the neighbouring dendrite. Altering the neurotransmitters can also change whether the stimulation is excitatory or inhibitory. Many drugs such as alcohol and LSD have dramatic effects on the production or destruction of these critical chemicals. The infamous nerve gas sarin can kill because it neutralizes a chemical (acetylcholinesterase) that is normally responsible for the destruction of a neurotransmitter (acetylcholine). This means that once a neuron fires, it keeps on triggering all the neurons in the vicinity. One no longer has control over muscles, and suffocation ensues.

Organisation of brain


The human brain controls the central nervous system (CNS), by way of the cranial nerves and spinal cord, and the peripheral nervous system (PNS), and regulates virtually all human activity. Involuntary, or "lower," actions, such as heart rate, respiration, and digestion, are unconsciously governed by the brain, specifically through the autonomic nervous system. Complex, or "higher," mental activity, such as thought, reason, and abstraction, is consciously controlled.
Anatomically, the brain can be divided into three parts: the forebrain, midbrain, and hindbrain; the forebrain includes the several lobes of the cerebral cortex that control higher functions, while the mid- and hindbrain are more involved with unconscious, autonomic functions. During encephalization, human brain mass increased beyond that of other species relative to body mass. This process was especially pronounced in the neocortex, a section of the brain involved with language and consciousness. The neocortex accounts for about 76% of the mass of the human brain; with a neocortex much larger than that of other animals, humans enjoy unique mental capacities despite having a neuroarchitecture similar to that of more primitive species. Basic systems that alert humans to stimuli, sense events in the environment, and maintain homeostasis are similar to those of basic vertebrates. Human consciousness is founded upon the extended capacity of the modern neocortex, as well as the greatly developed structures of the brain stem.

Humans and Computers

Man Vs Machine
Generally Speaking

Many of us think that computers are many times faster, more powerful and more capable than our brains simply because they can perform calculations thousands of times faster, work out logical computations without error and store memory at incredible speeds with flawless accuracy. But is the computer really superior to the human brain in terms of ability, processing power and adaptability? We now give you the real comparison.

Processing Power and Speed

The human brain - We can only estimate the processing power of the average human brain, as there is no way to measure it quantitatively as of yet. If the theory of taking nerve volume to be proportional to processing power is true, then we may have a reasonable estimate of the human brain's processing power.
It is fortunate that we understand the neural assemblies in the retina of the vertebrate eye quite well (structurally and functionally), because it helps to give us an idea of the human brain's capability.
The retina is a nerve tissue at the back of the eyeball which detects light and sends images to the brain. A human retina is about a square centimeter in size, is half a millimeter thick, and is made up of 100 million neurons. Scientists say that the retina sends to the brain particular patches of images indicating light intensity differences, which are transported via the optic nerve, a million-fiber cable which reaches deep into the brain.
Overall, the retina seems to process about ten one-million-point images per second.
Because the 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, by simple calculation we can estimate the processing power of an average brain to be about 100 million MIPS (Million Instructions Per Second). In case you're wondering how much speed that is, let us give you an idea.
1999's fastest PC processor chip on the market was a 700 MHz Pentium that did 4,200 MIPS. By simple calculation, we can see that we would need at least 24,000 of these processors in a system to match up to the total speed of the brain (which means the brain is like a 16,800,000 MHz Pentium computer). But even so, other factors like memory and the complexity of the system needed to handle so many processors would make this no simple task. Because of these factors, the figures we so childishly calculated will most probably be a very serious underestimate.
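The back-of-the-envelope comparison above can be restated as a few lines of arithmetic; all of the figures are the ones quoted in the text.

    # Reproducing the simple arithmetic used in the text for the brain/PC comparison.

    brain_mips = 100_000_000      # estimated processing power of the brain (from the text)
    pentium_mips = 4_200          # a 700 MHz Pentium of 1999 (from the text)
    pentium_mhz = 700

    processors_needed = brain_mips / pentium_mips   # about 23,800, i.e. roughly 24,000
    equivalent_mhz = 24_000 * pentium_mhz           # the text's rounded figure: 16,800,000 MHz

    print(round(processors_needed), equivalent_mhz)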

The computer - The most powerful experimental supercomputers in 1998, composed of thousands or tens of thousands of the fastest microprocessors and costing tens of millions of dollars, could do a few million MIPS. These systems were used mainly to simulate physical events for high-value scientific calculations.
Here, we have a chart of processor speeds for the past few years.
Year      Clock Speed (MHz)    Instruction Rate (MIPS)
1992            200                  200 (400)
1993.5          300                  300 (600)
1995            400                  800 (1600)
1996.5          500                 1000 (2000)
1998            600                 2400 (3600)
1999.5          700                 2800 (4200)
2000           1000                     ?
From the chart above, we can observe some breakthroughs in microprocessor speeds. The current techniques used by research labs should be able to continue such improvements for about a decade. By then, perhaps prototype multiprocessor chips finally reaching MIPS ratings matching that of the brain will be cheap enough to develop.
Improvements in computer speed, however, have some limitations. The more memory a computer has, the slower it is, because it takes longer to run through its memory once. Computers with less memory hence have more MIPS, but are confined to less space to run big programs. The latest, greatest supercomputers can do a trillion calculations per second and can have a trillion bytes of memory. As computer memory and processors improve, the Megabyte/MIPS ratio is a big factor to consider. So far, this ratio has remained constant throughout the history of computers.
So who has more processing power? By estimation, the brain has about 100 million MIPS worth of processing power, while recent supercomputers have only a few million MIPS worth of processor speed. That said, the brain is still the winner of the race. Because of the cost, enthusiasm and effort still required, computer technology still has some way to go before it will match the human brain's processing power.

Counting the Memory

The human brain - So far, we have never heard of anybody's brain being "overloaded" because it has run out of memory. (So it seems as if the human brain has no limit to how much memory it can hold. That may not be true.)
Our best possible guess of the average human brain's capacity comes from a calculation using the number of synapses connecting the neurons in the human brain. Because each of the synapses can be in different molecular states, we estimate each of them to be capable of holding about one byte worth of memory. Since the brain has 100 trillion synapses, we can safely say that the average brain can hold about 100 million megabytes of memory!
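As a quick check of the arithmetic, the 100-million-megabyte figure corresponds to roughly one byte of state per synapse; the snippet below simply restates that calculation.

    # Quick check of the memory estimate above (assumes about one byte of state per synapse).

    synapses = 100 * 10**12          # 100 trillion synapses
    bytes_per_synapse = 1            # assumption: on the order of one byte each
    megabytes = synapses * bytes_per_synapse / 10**6

    print(megabytes)                 # 100,000,000 megabytes, i.e. 100 million MB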
Remember what we said about the Megabyte/MIPS ratio of a computer? By calculation, scientists discovered that the brain's memory/MIPS ratio matches that of modern computers. The megabyte/MIPS ratio seems to hold for nervous systems too!
However, we all know that the memory of the brain is not absolute. It does not have set files or directories that can be deleted, copied or archived like those of a computer. For example, a particular person who thought he had memorized a telephone number for good suddenly realizes he can't recall the number, but some half a day later he may suddenly recall it again. It is a strange phenomenon that we still can't really explain. A simple theory is that the brain treats parts and pieces of these ignored memories like an inactive "archive" section until they are required. Memory spans of parts of the brain seem to depend on how often they are used. Even so, there is no such thing as deletion of data in a brain.

The computer - Computers have more than one form of memory, which we can generally classify into primary and secondary memory. Primary memory serves as temporary memory for calculation processes and for storing temporary values that need rapid access or updating; its contents disappear when the power is turned off. Primary memory is important when executing programs, and bigger programs require more of it. (RAM (random access memory), caches and buffers are a few examples of primary memory.)
Secondary memory often comes in the form of hard disks, removable disk drives and tape drives. It is used to store most of a system's data, programs and all other permanent data that should remain even when the power is turned off. As a computer is fed with bigger, smarter programs and more data, it naturally needs more secondary memory to hold them.
The latest, greatest supercomputers (as of 1998) have a million megabytes of memory. The latest hard disk drives on the personal computer market (in early 2000) can hold about 40,000 megabytes (40 gigabytes).

So who is superior?


The brain is still the overall winner in many fields when it comes to raw numbers. However, because of its many other commitments, the brain is less efficient when a person tries to use it for one specific function. Compared with the computer, the brain is a general-purpose processor, so it loses out in efficiency and performance on narrow tasks. We have estimated total human performance at 100 million MIPS, but the efficiency with which this can be applied to any one task may be only a small fraction of the total (a fraction that depends on how well the brain is adapted to that task).
Deep Blue, the chess machine that bested world chess champion Garry Kasparov in 1997, used specialized chips to process chess moves at a speed equivalent to a 3 million MIPS universal computer, about 1/30 of our estimate for total human performance. If Kasparov, probably the best human player ever, could apply his brain power to the strange problems of chess with an efficiency of about 1/30, then Deep Blue's near parity with Kasparov's chess skill supports this picture of task efficiency. (Deep Blue won the 1997 match by a very close 3.5 to 2.5.)
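The efficiency figure quoted here is simply the ratio of the two estimates, spelled out below (a sketch using the article's numbers, not measured data):

    # Deep Blue's effective speed versus the estimated total power of the brain.
    DEEP_BLUE_MIPS = 3e6      # specialized chess hardware, as universal-computer MIPS
    BRAIN_MIPS = 100e6        # estimated total human processing power

    fraction = DEEP_BLUE_MIPS / BRAIN_MIPS
    print(f"Deep Blue / brain: {fraction:.2f}")   # 0.03, i.e. roughly 1/30 as quoted above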

Comparison between conventional computers and neural networks

Parallel processing

One of the major advantages of the neural network is its ability to do many things at once. With traditional computers, processing is sequential--one task, then the next, then the next, and so on. The idea of threading makes it appear to the human user that many things are happening at one time. For instance, the Netscape throbber is shooting meteors at the same time that the page is loading. However, this is only an appearance; processes are not actually happening simultaneously.
The artificial neural network is an inherently multiprocessor-friendly architecture. Without much modification, it goes beyond one or even two processors of the von Neumann architecture. The artificial neural network is designed from the onset to be parallel. Humans can listen to music at the same time they do their homework--at least, that's what we try to convince our parents in high school. With a massively parallel architecture, the neural network can accomplish a lot in less time. The tradeoff is that processors have to be specifically designed for the neural network.
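To make the point concrete, here is a minimal sketch (in Python, with made-up weights) of why a layer of artificial neurons is naturally parallel: every neuron reads the same inputs but uses only its own weights, so nothing forces the computations to run one after another.

    # Illustrative only: each neuron's output depends just on the shared inputs and
    # its own weights, so every call below could run on a separate processor at once.
    import math

    def neuron_output(weights, bias, inputs):
        # Weighted sum followed by a simple sigmoid squashing function.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    inputs = [0.5, -1.2, 3.0]            # one input pattern, shared by every neuron
    layer = [                            # (weights, bias) for each neuron in the layer
        ([0.2, -0.4, 0.1], 0.0),
        ([1.0, 0.3, -0.7], -0.5),
        ([-0.6, 0.8, 0.05], 0.2),
    ]

    # A parallel neural-network architecture would spread this loop over many processors.
    outputs = [neuron_output(w, b, inputs) for w, b in layer]
    print(outputs)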
The ways in which they function

Another fundamental difference between traditional computers and artificial neural networks is the way in which they function. While computers function logically with a set of rules and calculations, artificial neural networks can function via images, pictures, and concepts.
Based upon the way they function, traditional computers have to learn by rules, while artificial neural networks learn by example, by doing something and then learning from it. Because of these fundamental differences, the applications to which we can tailor them are extremely different. We will explore some of the applications later in the presentation.

Self-programming

The "connections" or concepts learned by each type of architecture is different as well. The von Neumann computers are programmable by higher level languages like C or Java and then translating that down to the machine's assembly language. Because of their style of learning, artificial neural networks can, in essence, "program themselves." While the conventional computers must learn only by doing different sequences or steps in an algorithm, neural networks are continuously adaptable by truly altering their own programming. It could be said that conventional computers are limited by their parts, while neural networks can work to become more than the sum of their parts.
Speed

The speed of each kind of computer depends on different aspects of the processor. Von Neumann machines require either large processors or the tedious, error-prone coordination of parallel processors, while neural networks require multiple chips custom-built for the application.

Introduction

The power and usefulness of artificial neural networks have been demonstrated in several applications including speech synthesis, diagnostic problems, medicine, business and finance, robotic control, signal processing, computer vision and many other problems that fall under the category of pattern recognition. For some application areas, neural models show promise in achieving human-like performance over more traditional artificial intelligence techniques.
What, then, are neural networks? And what can they be used for? Although von-Neumann-architecture computers are much faster than humans in numerical computation, humans are still far better at carrying out low-level tasks such as speech and image recognition. This is due in part to the massive parallelism employed by the brain, which makes it easier to solve problems with simultaneous constraints. It is with this type of problem that traditional artificial intelligence techniques have had limited success. The field of neural networks, however, looks at a variety of models with a structure roughly analogous to that of the set of neurons in the human brain.
The branch of artificial intelligence called neural networks dates back to the 1940s, when McCulloch and Pitts [1943] developed the first neural model. This was followed in 1962 by the perceptron model, devised by Rosenblatt, which generated much interest because of its ability to solve some simple pattern classification problems. This interest started to fade in 1969 when Minsky and Papert [1969] provided mathematical proofs of the limitations of the perceptron and pointed out its weakness in computation. In particular, it is incapable of solving the classic exclusive-or (XOR) problem, which will be discussed later. Such drawbacks led to the temporary decline of the field of neural networks.
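To illustrate that limitation (a sketch, not taken from the original papers), a brute-force search over a grid of weights and thresholds for a single linear threshold unit finds settings that reproduce AND and OR, but never XOR:

    # Brute-force check: one linear threshold unit can compute AND and OR but not XOR.
    from itertools import product

    INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
    TARGETS = {"AND": [0, 0, 0, 1], "OR": [0, 1, 1, 1], "XOR": [0, 1, 1, 0]}

    grid = [v / 2 for v in range(-4, 5)]     # candidate weights/thresholds: -2.0 to 2.0

    def solvable(target):
        for w1, w2, t in product(grid, grid, grid):
            outputs = [1 if w1 * a + w2 * b >= t else 0 for a, b in INPUTS]
            if outputs == target:
                return True
        return False

    for name, target in TARGETS.items():
        print(name, "solvable by a single threshold unit:", solvable(target))
    # AND and OR print True; XOR prints False (no weights and threshold will do).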
The last decade, however, has seen renewed interest in neural networks, both among researchers and in areas of application. The development of more-powerful networks, better training algorithms, and improved hardware have all contributed to the revival of the field. Neural-network paradigms in recent years include the Boltzmann machine, Hopfield's network, Kohonen's network, Rumelhart's competitive learning model, Fukushima's model, and Carpenter and Grossberg's Adaptive Resonance Theory model [Wasserman 1989; Freeman and Skapura 1991]. The field has generated interest from researchers in such diverse areas as engineering, computer science, psychology, neuroscience, physics, and mathematics. We describe several of the more important neural models, followed by a discussion of some of the available hardware and software used to implement these models, and a sampling of applications.