Neural Networks in Neuroscience and Computer Science

From Psych 221 Image Systems Engineering
Revision as of 07:49, 6 June 2013 by Psych202 (Neural Networks in Neuroscience)

This wiki explores applications and models of neural networks in biology and neuroscience as well as in artificial intelligence and computer science. Modeling how the brain sends signals through neural networks has led to many breakthroughs in the field of learning.

Introduction

A neural network is a network of neurons working together, sending a flow of signals to accomplish some task. Biological neural networks consist of neurons that interact with their neighbors through axon terminals connected via synapses to dendrites of other neurons. A neural circuit is a functional entity of interconnected neurons that regulates its own activity through feedback loops. Artificial intelligence, within computer science, adopted this information-processing paradigm to create artificial neural networks, which have been applied successfully to speech recognition, image analysis, and recognition tasks. Much of the research in Professor Andrew Ng's lab is geared toward applying neural networks to unsupervised learning tasks. [1]
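The information-processing paradigm described above can be sketched as a single artificial neuron: inputs (analogous to dendritic signals) are weighted, summed, and passed through an activation function whose output plays the role of a firing rate. This is a minimal illustration, not any particular lab's implementation; the weights and bias here are arbitrary example values.

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: two inputs, illustrative weights and bias
out = neuron_output([0.5, 0.8], [0.4, -0.2], 0.1)
```

Stacking layers of such units, and adjusting the weights from data, yields the artificial neural networks used in speech and image recognition.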

Neural Networks in Neuroscience

[Figure: Neuron in the Brain]

Neural networks were first described and modeled in the late 1800s by biologists and psychologists including Herbert Spencer, Theodor Meynert, William James, and Sigmund Freud. The first rule of neuronal learning, Hebbian learning, was described by Hebb[2] in 1949, who states that "the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability...when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." The rule attempts to explain "associative learning," in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells.
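Hebb's principle is commonly formalized as a weight update proportional to the product of presynaptic and postsynaptic activity. The sketch below shows that formalization, not Hebb's own notation; the learning rate and activity values are illustrative assumptions.

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian rule: the synaptic weight grows when presynaptic and
    postsynaptic activity coincide (delta_w = lr * pre * post)."""
    return w + lr * pre * post

# Repeated coincident firing of cells A and B strengthens the synapse
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
```

Note that if either cell is silent (pre or post equals 0), the weight is unchanged, capturing the associative character of the rule: only jointly active cells strengthen their connection.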

Neural Networks in Computer Science (Artificial Intelligence)

Conclusion

References

  1. Ng, Andrew. Neural Networks Representation. 2012. Retrieved from http://cs.uky.edu/~jacobs/classes/2012_learning/lectures/neuralnets_ng.pdf.
  2. Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley and Sons.