Performance Comparison of Multi-layer Perceptron (Back Propagation, Delta Rule and Perceptron) Algorithms in Neural Networks
Abstract
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It is a modification of the standard linear perceptron in that it uses three or more layers of neurons (nodes) with nonlinear activation functions, and it is more powerful than the perceptron in that it can distinguish data that are not linearly separable, i.e. not separable by a hyperplane. MLP networks are general-purpose, flexible, nonlinear models consisting of a number of units organised into multiple layers. The complexity of an MLP network can be changed by varying the number of layers and the number of units in each layer. Given enough hidden units and enough data, MLPs have been shown to approximate virtually any function to any desired accuracy. This paper presents a performance comparison of three algorithms for training MLP networks: back propagation, the delta rule, and the perceptron rule. Back propagation is a steepest-descent-type algorithm that normally has a slow convergence rate, and its search for the global minimum often becomes trapped at poor local minima. The present study investigates the performance of these three algorithms for training MLP networks. It was found that the perceptron algorithm performed much better than the other algorithms.
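The paper does not include code, so the following is only a minimal Python sketch of two of the compared update rules: the classic perceptron rule, which updates weights only when a sample is misclassified by the hard-threshold output, and the delta (Widrow-Hoff/LMS) rule, which performs gradient descent on the squared error of the unthresholded linear output. The toy AND dataset, learning rates, and function names are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

# Toy linearly separable problem (logical AND) with labels in {-1, +1}.
# The dataset, learning rates, and function names are illustrative
# assumptions; the paper itself does not publish code.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Classic perceptron rule: update weights only on misclassified samples."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            pred = 1 if xi @ w + b > 0 else -1
            if pred != ti:             # error-driven, hard-threshold update
                w += lr * ti * xi
                b += lr * ti
    return w, b

def train_delta_rule(X, y, lr=0.1, epochs=200):
    """Delta (Widrow-Hoff/LMS) rule: gradient descent on the squared error
    of the linear output, so every sample contributes an update."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            err = ti - (xi @ w + b)    # error of the *unthresholded* output
            w += lr * err * xi
            b += lr * err
    return w, b

w_p, b_p = train_perceptron(X, y)
w_d, b_d = train_delta_rule(X, y)
print("perceptron:", np.where(X @ w_p + b_p > 0, 1, -1))  # [-1 -1 -1  1]
print("delta rule:", np.where(X @ w_d + b_d > 0, 1, -1))  # [-1 -1 -1  1]
```

The contrast mirrors the abstract's distinction: the perceptron rule stops changing once every point is correctly classified, while the delta rule keeps minimising squared error toward the least-squares solution. Back propagation, the third algorithm in the comparison, generalises the delta rule to hidden layers by propagating the error signal backwards through the chain rule; that is the steepest-descent procedure whose slow convergence and poor local minima the abstract refers to.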
Related Papers
- A Dynamic Rectified Linear Activation Units (2019), 27 citations
- A variant of second-order multilayer perceptron and its application to function approximations (2003), 16 citations
- Auto-Rotating Perceptrons (2019), 1 citation
- Two phases based training method for designing codewords for a set of perceptrons with each perceptron having multi-pulse type activation function (2023)