A Hardware Architecture for Radial Basis Function Neural Network Classifier
Top 10% of 2017 papers by citations
Abstract
In this paper we present the design and analysis of scalable hardware architectures for training the learning parameters of a radial basis function neural network (RBFNN) to classify large data sets. We design scalable hardware architectures for the K-means clustering algorithm, which trains the positions of the hidden nodes in the hidden layer of the RBFNN, and for the pseudoinverse algorithm, which adjusts the weights at the output layer. These scalable, parallel, pipelined architectures can handle data sets with no restriction on their dimensions. We also present a flexible and scalable hardware accelerator for classification using the RBFNN, likewise with no limitation on the dimension of the input data. We report FPGA synthesis results of our implementations and compare our hardware accelerator against CPU and GPU implementations of the same algorithms, as well as against other existing approaches. Analysis of these results shows that the scalability of our hardware architecture makes it a favorable solution for classifying very large data sets.
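The two-stage training procedure the abstract describes can be sketched in software: K-means fixes the hidden-node centers, then the output weights are solved in closed form with the Moore-Penrose pseudoinverse. This is a minimal NumPy sketch of those algorithms, not the paper's hardware implementation; the Gaussian width `gamma` and the deterministic first-k center initialization are simplifying assumptions made here for illustration.

```python
import numpy as np

def train_rbfnn(X, Y, n_centers, gamma=1.0, n_iters=20):
    """Two-stage RBFNN training: K-means for centers, pseudoinverse for weights.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_classes) one-hot targets.
    """
    # Stage 1: K-means clustering picks the hidden-layer centers.
    # Simplified init: seed with the first k samples (real K-means would
    # typically use random or k-means++ initialization).
    centers = X[:n_centers].astype(float).copy()
    for _ in range(n_iters):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)

    # Stage 2: Gaussian hidden-layer activations ...
    H = rbf_activations(X, centers, gamma)
    # ... and output weights via the pseudoinverse (least-squares fit).
    W = np.linalg.pinv(H) @ Y
    return centers, W

def rbf_activations(X, centers, gamma=1.0):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-gamma * d ** 2)

def predict(X, centers, W, gamma=1.0):
    return rbf_activations(X, centers, gamma) @ W
```

Both stages map naturally onto parallel pipelined hardware: the distance computations in K-means and the activation matrix `H` are embarrassingly parallel across samples, and the pseudoinverse reduces weight training to a single linear solve.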
Related Papers
- CuNoC: A Scalable Dynamic NoC for Dynamically Reconfigurable FPGAs (2007), 42 citations
- Software Defined Radio Implementation of a QPSK Modulator/Demodulator in an Extensive Hardware Platform Based on FPGAs Xilinx ZYNQ (2015), 8 citations
- Scalable Full Hardware Logic Architecture for Gradient Boosted Tree Training (2020), 2 citations
- Adaptively Increasing Performance and Scalability of Automatically Parallelized Programs (2005), 2 citations