An Introduction to Convolutional Neural Networks
Revision as of 10:51, 8 June 2013
Background
The Early Neural Network Models
Computational models of neural networks have been around for more than half a century, beginning with the simple model that McCulloch and Pitts developed in 1943 [1]. Hebb subsequently contributed a learning algorithm to train such models [2], summed up by the familiar refrain: 'Neurons that fire together, wire together'. Hebb's rule, and a popular variant known as the Delta rule, were crucial for early models of cognition, but they quickly ran into trouble with respect to their computational power. In their extremely influential book, Perceptrons, Minsky and Papert proved that these networks could not even learn the Boolean XOR function, because they could only learn a single layer of weights [3]. Fortunately, a more powerful learning algorithm, backpropagation, eventually emerged. It can train networks with arbitrarily many layers, and such multilayer networks were later proven capable of approximating any continuous function to arbitrary accuracy.
[Image: a perceptron]
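Minsky and Papert's limitation can be seen directly in a few lines of code. The sketch below (plain NumPy; the function names are illustrative, not from any library) trains a single-layer perceptron with the delta rule. It learns the linearly separable OR function perfectly, but since XOR is not linearly separable, no single layer of weights can classify all four cases correctly.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Train a single-layer perceptron on binary inputs using the delta rule."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])  # small random initial weights
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # hard threshold unit
            err = target - pred
            w += lr * err * xi                  # delta rule: scale update by error
            b += lr * err
    return w, b

def accuracy(w, b, X, y):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_or  = np.array([0, 1, 1, 1])   # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: unlearnable

w, b = train_perceptron(X, y_or)
print("OR accuracy: ", accuracy(w, b, X, y_or))   # converges to 1.0
w, b = train_perceptron(X, y_xor)
print("XOR accuracy:", accuracy(w, b, X, y_xor))  # can never exceed 0.75
```

The OR run converges because the perceptron convergence theorem guarantees a solution exists for separable data; the XOR run cannot reach 100% no matter how long it trains, which is precisely the gap that hidden layers trained by backpropagation close.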
Backpropagation Basics
Problems with Backpropagation
Convolutional Neural Networks
LeCun's formulation
Serre's H-max Pools
Open Questions
References
- [1] McCulloch, Warren; Pitts, Walter (1943). "A Logical Calculus of the Ideas Immanent in Nervous Activity". Bulletin of Mathematical Biophysics 5 (4): 115–133.
- [2] Hebb, Donald (1949). The Organization of Behavior. New York: Wiley.
- [3] Minsky, Marvin; Papert, Seymour (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.