Neural Networks in Neuroscience and Computer Science

From Psych 221 Image Systems Engineering
Neural Networks

This wiki explores applications and models of neural networks in biology and neuroscience as well as in artificial intelligence and computer science. Modeling how the brain sends signals through neural networks has led to many breakthroughs in the field of learning.

Introduction

A neural network is a network of neurons working together, sending a flow of signals, to accomplish some task. The original biological neural networks consist of neurons that interact with their neighbors through axon terminals connected via synapses to the dendrites of other neurons. A neural circuit is a functional entity of interconnected neurons that regulates its own activity using a feedback loop. The field of artificial intelligence in computer science adopted this information-processing paradigm to create artificial neural networks, which have been applied successfully to speech recognition, image analysis, and recognition tasks. Much of the research in Stanford Professor Andrew Ng's lab is geared toward applying neural networks to unsupervised learning tasks. [1]

Neural Networks in Neuroscience

History

Neuron in the Brain

Neural networks were first described and modeled in the late 1800s by biologists and psychologists including Herbert Spencer, Theodor Meynert, William James, and Sigmund Freud. The first rule of neuronal learning, Hebbian learning, was described by Hebb[2] in 1949: "the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability...when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." Hebbian learning attempts to explain "associative learning," in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells.

Neuron connections

Neuron connections consist of chemical synapses and electrical gap junctions. One principle by which neurons work is neural summation: potentials at the postsynaptic membrane sum in the cell body, and if they surpass a certain threshold, an action potential occurs that travels down the axon to the terminal endings to transmit a signal to other neurons. Neuroplasticity refers to changes in the brain caused by activity or experience, a property that underlies the idea of memory.
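The principle of neural summation can be sketched in a few lines of code; the potential values and the threshold below are illustrative, not physiological measurements.

```python
# Minimal sketch of neural summation (illustrative, not a biophysical model):
# postsynaptic potentials arriving at the cell body are summed, and the
# neuron fires an action potential only if the sum crosses a threshold.

def neuron_fires(postsynaptic_potentials, threshold=1.0):
    """Return True if the summed potentials reach the firing threshold."""
    return sum(postsynaptic_potentials) >= threshold

# Three excitatory inputs (+0.4 each) and one inhibitory input (-0.3):
print(neuron_fires([0.4, 0.4, 0.4, -0.3]))  # sum = 0.9, below threshold
# Four excitatory inputs:
print(neuron_fires([0.4, 0.4, 0.4, 0.4]))   # sum = 1.6, neuron fires
```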

Back-propagating action potentials along the axon are normally prevented because, after an action potential travels down a given segment of the axon, the sodium channels in that segment become inactivated, blocking the generation of an action potential back toward the cell body. However, in some cells back-propagation does occur through the dendritic arbor, and it has important effects on synaptic plasticity and computational applications.

Receptive fields

A receptive field is a small region within the entire visual field; the concept is also central to convolutional neural networks. Any given neuron responds only to a subset of stimuli within its receptive field, a property called tuning. A neuron in V1 may fire to any vertical stimulus in its receptive field because of its simple tuning, whereas in higher visual areas such as the fusiform gyrus, a neuron may fire only when a certain face appears in its receptive field. Memories are very likely represented by patterns of activation among these neural networks. The study and modeling of these networks has attracted interest in many different fields, since it has the potential to explain various aspects of behavior, learning, and memory. The most important property of neural networks is the ability to learn complex patterns, a capability heavily emphasized in other fields such as computer science.
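As a rough illustration of tuning within a receptive field, the sketch below models a "simple cell" whose 3x3 receptive field is tuned to a vertical edge; the image, weights, and threshold-free response measure are invented for the example.

```python
# Illustrative sketch: a "neuron" whose receptive field is a 3x3 patch of a
# larger image, tuned to a vertical edge (dark on the left, bright on the
# right). The weights and the toy image are made up for illustration.

VERTICAL_EDGE_WEIGHTS = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def response(image, row, col):
    """Weighted sum over the 3x3 receptive field anchored at (row, col)."""
    return sum(VERTICAL_EDGE_WEIGHTS[i][j] * image[row + i][col + j]
               for i in range(3) for j in range(3))

# A 3x4 image with a dark column 0 and bright columns 1-3, i.e. a vertical
# edge between columns 0 and 1:
image = [
    [0, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 1],
]
print(response(image, 0, 0))  # edge inside the field -> strong response (3)
print(response(image, 0, 1))  # uniform bright region -> no response (0)
```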

Neural Networks in Computer Science (Artificial Intelligence)

Artificial Neural Network

Neural Network Models

In an artificial neural network (ANN), there are multiple layers of neurons. The first layer contains input neurons, which send data via synapses to a second layer of neurons, which in turn send data to a third layer of output neurons. More complex systems have additional layers of neurons with different responsibilities. The synapses store parameters called "weights" that manipulate the data in the calculations.

An ANN typically has the following:

  1. An interconnection pattern between the different layers of neurons
  2. A learning process for updating the weights of the interconnections
  3. An activation function that converts a neuron's weighted input to its output activation

The neuron's network function is defined as a composition of other functions: a weighted sum of the inputs that is then passed through a non-linear activation function such as the hyperbolic tangent, tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z)), or the sigmoid function, σ(z) = 1 / (1 + e^(-z)).

Tanh function

Networks such as these are commonly called feedforward networks because their connection graph is a directed acyclic graph.
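The forward computation described above can be sketched as follows; the layer sizes, weights, and biases are arbitrary values chosen only for illustration.

```python
import math

# Sketch of a forward pass through a small feedforward network:
# 2 inputs -> 2 hidden units (tanh) -> 1 output unit (tanh).
# The weights and biases below are arbitrary illustrative values.

def forward(x, W1, b1, W2, b2):
    """Compute the network function: a weighted sum at each layer,
    passed through the tanh activation."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    output = [math.tanh(sum(w * hi for w, hi in zip(row, hidden)) + b)
              for row, b in zip(W2, b2)]
    return output

W1 = [[0.5, -0.2], [0.3, 0.8]]   # hidden-layer weights (one row per unit)
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [[1.0, -1.0]]               # output-layer weights
b2 = [0.05]                      # output-layer bias
print(forward([1.0, 2.0], W1, b1, W2, b2))
```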

Supervised Learning

Suppose we have a fixed training set {(x^(1), y^(1)), ..., (x^(m), y^(m))} of m training examples. We can train our neural network using batch gradient descent. In detail, for a single training example (x, y), we define the cost function with respect to that single example to be:

J(W,b; x,y) = (1/2) ||h_{W,b}(x) - y||^2

This is a (one-half) squared-error cost function. Given a training set of m examples, the overall cost function is defined as:

J(W,b) = [ (1/m) Σ_{i=1}^{m} J(W,b; x^(i), y^(i)) ] + (λ/2) Σ_l Σ_i Σ_j ( W_{ji}^{(l)} )^2

The first term in the definition of J(W,b) is an average sum-of-squares error term. The second term is a regularization term (also called a weight decay term) that tends to decrease the magnitude of the weights and helps prevent overfitting.

The weight decay parameter λ controls the relative importance of the two terms. J(W,b; x,y) is the squared-error cost with respect to a single example, whereas J(W,b) is the overall cost function, which includes the weight decay term.
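A minimal sketch of this overall cost (average one-half squared error plus a weight-decay term) follows; the predictions, targets, weights, and the weight decay parameter `lam` are illustrative values.

```python
# Sketch of the overall cost: the average one-half squared error over the
# training set plus a weight-decay (regularization) term over all weights.
# The data and the weight decay parameter lam are illustrative.

def cost(predictions, targets, weights, lam=0.01):
    m = len(predictions)
    error = sum(0.5 * (p - t) ** 2 for p, t in zip(predictions, targets)) / m
    decay = (lam / 2) * sum(w ** 2 for w in weights)
    return error + decay

preds   = [0.9, 0.2, 0.6]         # network outputs on 3 examples
targets = [1.0, 0.0, 1.0]         # true labels
weights = [0.5, -0.2, 0.3, 0.8]   # all network weights, flattened
print(cost(preds, targets, weights))
```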

Our goal is to minimize J(W,b) as a function of W and b. To train our neural network, we initialize all the network parameters to small random values near zero and then apply an optimization algorithm such as batch gradient descent. Since J(W,b) is a non-convex function, gradient descent could converge to a local optimum; in practice, however, it works fairly well.

One iteration of gradient descent updates the parameters W and b as follows:

W_{ij}^{(l)} := W_{ij}^{(l)} - α ∂J(W,b)/∂W_{ij}^{(l)}
b_{i}^{(l)} := b_{i}^{(l)} - α ∂J(W,b)/∂b_{i}^{(l)}

where α is the learning rate.

The key component is the gradients, which can be calculated using the back-propagation algorithm. The intuition behind back-propagation is as follows. Given a training example (x, y), we first run a "forward pass" to compute all the activations throughout the network, including the output value of the hypothesis h_{W,b}(x). Then, for each node i in layer l, we compute an "error term" δ_i^{(l)} that measures how much that node was "responsible" for any errors in our output. For an output node, we can directly measure the difference between the network's activation and the true target value, and use that to define δ_i^{(n_l)} (where layer n_l is the output layer). For the hidden units, we compute δ_i^{(l)} based on a weighted average of the error terms of the nodes that use a_i^{(l)} as an input.

To train our neural network, we can now repeatedly take steps of gradient descent to reduce our cost function J(W,b).
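The forward pass, error terms, and gradient-descent updates can be combined into a small training loop. The sketch below uses a 2-4-1 sigmoid network on the toy XOR problem with NumPy; the architecture, learning rate, seed, and iteration count are illustrative choices, and the weight decay term is omitted for brevity.

```python
import numpy as np

# Minimal backpropagation sketch: a 2-4-1 network with sigmoid units and a
# one-half squared-error cost, trained by batch gradient descent on XOR.
# All hyperparameters here are illustrative, not tuned values.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
alpha = 0.5  # learning rate

for step in range(5000):
    # Forward pass: compute activations layer by layer.
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    # Backward pass: error term for the output layer, then the hidden layer.
    d2 = (a2 - y) * a2 * (1 - a2)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    # Gradient-descent updates (batched over all four examples).
    W2 -= alpha * a1.T @ d2
    b2 -= alpha * d2.sum(axis=0)
    W1 -= alpha * X.T @ d1
    b1 -= alpha * d1.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

Depending on the random initialization, the network may need more iterations (or a different seed) to fit XOR exactly; the point of the sketch is the structure of the forward and backward passes.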

Unsupervised Learning

An autoencoder[3] neural network is an unsupervised learning algorithm that applies back-propagation, setting the target values to be equal to the inputs, i.e., it uses y^(i) = x^(i).

Autoencoder

The autoencoder tries to learn an approximation to the identity function. By limiting the number of hidden units, we can discover interesting structure in the data. Suppose there are only 50 hidden units in the hidden layer for a 100-pixel input; the network is then forced to learn a compressed representation of the input, and the algorithm can discover some of the correlations among the input features.
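A minimal linear autoencoder sketch along these lines, assuming NumPy and invented toy data with redundant features (4-dimensional inputs compressed through 2 hidden units):

```python
import numpy as np

# Sketch of a linear autoencoder: the network is trained so that its output
# reproduces its input (the targets equal the inputs), through a hidden
# layer smaller than the input. Sizes, data, and hyperparameters are
# illustrative.

rng = np.random.default_rng(1)

# Toy data with redundant structure: each row repeats 2 values twice, so
# 2 hidden units suffice to reconstruct all 4 features.
codes = rng.normal(size=(50, 2))
X = np.hstack([codes, codes])                        # shape (50, 4)

W1 = rng.normal(0, 0.1, (4, 2)); b1 = np.zeros(2)    # encoder
W2 = rng.normal(0, 0.1, (2, 4)); b2 = np.zeros(4)    # decoder
alpha = 0.05

for step in range(2000):
    h = X @ W1 + b1             # compressed (hidden) representation
    Xhat = h @ W2 + b2          # reconstruction of the input
    d2 = (Xhat - X) / len(X)    # output error term (targets = inputs)
    d1 = d2 @ W2.T              # hidden-layer error term
    W2 -= alpha * h.T @ d2
    b2 -= alpha * d2.sum(axis=0)
    W1 -= alpha * X.T @ d1
    b1 -= alpha * d1.sum(axis=0)

# Mean squared reconstruction error should be small after training:
print(np.mean((X - ((X @ W1 + b1) @ W2 + b2)) ** 2))
```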

Example Autoencoder features

When a sparse autoencoder with 100 hidden units is trained on 10x10 pixel inputs, many of the learned features look like edges at different positions and orientations. When a new image is passed through this neural network, edges similar to these features trigger the corresponding activations and propagate signals much as synapses do in the biological network. If enough activations fire, the network recognizes the image as positive for the object of interest (such as a face or a numerical digit).

Sample Application

One example of neural networks being applied in a real-world setting is training a pedestrian detector. In Andrew Ng's lab, a pedestrian-detector neural network was trained with help from Yann LeCun's lab and the EBLearn framework (http://eblearn.cs.nyu.edu:21991/doku.php). It was then connected to a haptic belt worn across the waist in order to notify a blind user of where people are around them. You can watch a video of Justin Chen wearing the apparatus here: Stanford AI Lab Pedestrian Detection Haptic Belt.

Conclusion

Artificial neural networks have now been applied successfully in many settings, but one common criticism is the large amount of training they require for real-world problems; it often takes days or even weeks to fully train a neural network. However, the results after training are quite impressive in computer vision[4], text recognition[5], autonomous aircraft control[6], and credit card fraud detection.

Many different neuroimaging techniques have been developed to further understand neural networks from a biological standpoint, in order to potentially simulate them better artificially. Such brain-scanning technologies include fMRI (functional magnetic resonance imaging), PET (positron emission tomography), and CAT (computed axial tomography). Lesioning studies can also yield important insights into the workings of cell assemblies (analogous to removing nodes in an artificial neural network to see what the effect is).

It is still unclear how well ANNs can model the brain's processes, but as more powerful and complex neural networks (convolutional neural networks, deep belief networks[7], recursive neural networks) are developed, we are getting closer to recreating the immense computational power of the brain.

References

  1. Ng, Andrew. Neural Networks Representation. 2012. Retrieved from http://cs.uky.edu/~jacobs/classes/2012_learning/lectures/neuralnets_ng.pdf.
  2. Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley and Sons.
  3. Autoencoders and Sparsity. Retrieved from http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity.
  4. Socher R, Lin C, Ng, A, Manning C. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. Retrieved from: http://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf.
  5. Wang T, Wu D, Coates A, Ng A. End-to-End Text Recognition with Convolutional Neural Networks. Retrieved from: http://ai.stanford.edu/~ang/papers/ICPR12-TextRecognitionConvNeuralNets.pdf.
  6. NASA Neural Network Project Passes Milestone. 2003. Retrieved from: http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html.
  7. Deep Networks: Overview. Retrieved from: http://ufldl.stanford.edu/wiki/index.php/Deep_Networks:_Overview.