Adaptability of the backpropagation procedure
Abstract
Possible paradigms for concept learning by feedforward neural networks include discrimination and recognition. An interesting aspect of this dichotomy is that the recognition-based implementation can learn certain domains much more efficiently than the discrimination-based one, despite the close structural relationship between the two systems. The purpose of this paper is to explain this difference in efficiency. We suggest that it is caused by a difference in the generalization strategy adopted by the backpropagation procedure in the two cases: the autoassociator uses a (fast) bottom-up strategy, whereas the MLP has recourse to a (slow) top-down one, even though both systems are optimized by the same backpropagation procedure. This result is important because it sheds some light on the nature of backpropagation's adaptive capability. From a practical viewpoint, it suggests a deterministic way to increase the efficiency of backpropagation-trained feedforward networks.
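The structural closeness of the two paradigms can be made concrete with a minimal sketch (an illustrative assumption, not the paper's exact experimental setup): a single-hidden-layer sigmoid network trained by plain backpropagation on squared error. The same training routine serves both systems; only the target differs — the discrimination-based MLP is trained toward class labels, while the recognition-based autoassociator is trained to reproduce its own input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, T, hidden=4, lr=0.5, epochs=2000, seed=0):
    """Backpropagation on a one-hidden-layer sigmoid network.

    X: inputs, T: targets. For the MLP, T holds class labels;
    for the autoassociator, T is simply X itself.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # hidden activations
        Y = sigmoid(H @ W2)            # output activations
        dY = (Y - T) * Y * (1 - Y)     # output delta (squared error)
        dH = (dY @ W2.T) * H * (1 - H) # error backpropagated to hidden layer
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH
    return W1, W2

def forward(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)

# A toy domain (XOR), chosen only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
labels = np.array([[0], [1], [1], [0]], float)

# Discrimination: map inputs to class labels.
mlp_weights = train(X, labels)
# Recognition: map inputs back onto themselves.
auto_weights = train(X, X)

print(np.round(forward(X, *mlp_weights), 2))
print(np.round(forward(X, *auto_weights), 2))
```

The point of the sketch is that the two systems share every line of the optimization code; the paper's efficiency gap must therefore stem from how backpropagation generalizes under the two kinds of target, not from any architectural difference.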