Speeding Up Back-Propagation Neural Networks
Top 10% of 2005 papers (by citations)
Abstract
There are many successful applications of backpropagation (BP) for training multilayer neural networks, but the algorithm has several shortcomings: learning often takes a long time to converge, and training may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, which further slows down the learning process. The algorithm proposed in this study trains a multilayer neural network with a very small learning rate, especially when using a large training set, and can be applied in a generic manner to any network size that uses the backpropagation algorithm, through an optical time (seen time). The paper describes the proposed algorithm and how it improves the performance of back-propagation (BP). The feasibility of the proposed algorithm is demonstrated through a number of experiments on different network architectures.
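To make the trade-off in the abstract concrete, the sketch below implements standard backpropagation (not the paper's proposed speed-up) for a tiny 2-2-1 sigmoid network trained on XOR, with the learning rate exposed as a parameter. The network shape, data set, and function names are illustrative assumptions, not taken from the paper; a very small learning rate makes each weight update tiny, which is exactly why convergence slows down.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(lr, epochs, seed=0):
    """Train a 2-2-1 sigmoid network on XOR with plain backpropagation.

    Illustrative sketch only (not the paper's algorithm).
    Returns the mean squared error over the four XOR patterns after training.
    """
    random.seed(seed)
    # Hidden layer: 2 units, each with 2 input weights + bias; output: 2 weights + bias.
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [random.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    for _ in range(epochs):
        for x, t in data:
            # Forward pass.
            h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
            y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
            # Backward pass: standard BP error terms (deltas).
            d_o = (y - t) * y * (1 - y)
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            # Weight updates are scaled by lr: a very small lr means tiny
            # steps, hence the slow convergence the abstract mentions.
            w_o = [w_o[0] - lr * d_o * h[0],
                   w_o[1] - lr * d_o * h[1],
                   w_o[2] - lr * d_o]
            for j in range(2):
                w_h[j][0] -= lr * d_h[j] * x[0]
                w_h[j][1] -= lr * d_h[j] * x[1]
                w_h[j][2] -= lr * d_h[j]
    # Report mean squared error over the training patterns.
    err = 0.0
    for x, t in data:
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        err += (y - t) ** 2
    return err / len(data)
```

Running `train_xor` with the same epoch budget but a much smaller learning rate typically leaves the error far from converged, which is the motivation the paper gives for seeking a faster training scheme.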
Related Papers
- Optimizing Weights of Artificial Neural Networks using Genetic Algorithms (2012)
  - Speeding Up Back-Propagation Neural Networks (2005), cited 44 times
  - Training of Multi-Branch Neural Networks using RasID-GA (2007), cited 2 times
- Marquardt Algorithm and Development for Training BP Neural Network in Chemistry Processing Engineering (2001)