Enhanced conjugate gradient methods for training MLP-networks
2010, Vol. 2, pp. 139–143
Abstract
The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. Seven conjugate gradient algorithms, proposed by different researchers between 1952 and 2005, are compared with the classical batch back-propagation algorithm and with the full-memory and memoryless BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms. The algorithms are tested on predicting fluid height in two control-tank benchmark problems. Simulation results show that full-memory BFGS achieves the best overall performance, i.e. the lowest prediction error, but at the cost of higher memory usage and longer computation time than the conjugate gradient methods.
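The abstract does not include the paper's implementation. Below is a minimal, self-contained sketch of the kind of conjugate gradient MLP training the paper compares: a one-hidden-layer network fitted with the Fletcher–Reeves update and a backtracking line search. The network size, the toy data, and the line-search constants are illustrative assumptions, not details taken from the paper.

```python
# Sketch only (not the paper's code): train a tiny MLP with the
# Fletcher-Reeves nonlinear conjugate gradient method, one of the classical
# CG variants in the family the paper benchmarks.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed, not the paper's tank benchmark): y = sin(x).
X = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(X)

N_IN, N_HID, N_OUT = 1, 8, 1  # illustrative network size

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def loss_and_grad(w):
    """Mean-squared error of the MLP and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)          # hidden layer
    out = h @ W2 + b2                 # linear output
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / len(X)              # backpropagate the error
    gW2 = h.T @ d_out
    gb2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

def line_search(w, d, loss, g, alpha=1.0):
    """Backtracking line search satisfying the Armijo sufficient-decrease rule."""
    while alpha > 1e-8:
        new_loss, _ = loss_and_grad(w + alpha * d)
        if new_loss <= loss + 1e-4 * alpha * (g @ d):
            break
        alpha *= 0.5
    return alpha

n_params = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT
w = rng.normal(scale=0.5, size=n_params)

loss, g = loss_and_grad(w)
d = -g                                # first direction: steepest descent
for epoch in range(200):
    alpha = line_search(w, d, loss, g)
    w = w + alpha * d
    new_loss, g_new = loss_and_grad(w)
    beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves: ||g_new||^2 / ||g||^2
    d = -g_new + beta * d             # conjugate direction update
    if g_new @ d >= 0:                # restart if d is no longer a descent direction
        d = -g_new
    loss, g = new_loss, g_new
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss {loss:.5f}")
```

Other CG variants in the paper's comparison differ essentially only in the formula for beta (e.g. Hestenes–Stiefel or Polak–Ribière), so they can be swapped into the same loop; the BFGS methods instead maintain an approximation of the inverse Hessian, which is why full-memory BFGS trades higher memory use and computation time for lower prediction error.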