Fixed-point feedforward deep neural network design using weights +1, 0, and −1
Abstract
Feedforward deep neural networks that employ multiple hidden layers show high performance in many applications, but they demand complex hardware for implementation. The hardware complexity can be greatly lowered by minimizing the word-length of weights and signals, but direct quantization for fixed-point network design does not yield good results. We optimize the fixed-point design by employing backpropagation-based retraining. The designed fixed-point networks with ternary weights (+1, 0, and -1) and 3-bit signals show only negligible performance loss when compared to their floating-point counterparts. The backpropagation for retraining uses quantized weights and fixed-point signals to compute the output, but utilizes high-precision values for adapting the networks. Character recognition and phoneme recognition examples are presented.
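The retraining scheme described in the abstract, quantized values in the forward pass, high-precision values for the weight update, can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the paper's exact procedure: the threshold rule for ternarization, the uniform symmetric 3-bit signal quantizer, the tanh nonlinearity, the toy one-layer network, and the learning rate are all illustrative choices.

```python
import numpy as np

def ternarize(w, delta):
    """Quantize weights to {-1, 0, +1}: zero below threshold delta, sign elsewhere.
    (This particular threshold-based rule is an illustrative assumption.)"""
    q = np.sign(w)
    q[np.abs(w) < delta] = 0.0
    return q

def quantize_signal(x, step, bits=3):
    """Uniform symmetric fixed-point quantization of signals to `bits` bits."""
    levels = 2 ** (bits - 1) - 1            # e.g. 3 positive levels for 3 bits
    return step * np.clip(np.round(x / step), -levels, levels)

# --- One retraining step on a hypothetical one-layer network, MSE loss ---
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (4, 8))            # high-precision master weights
x = quantize_signal(rng.normal(size=(1, 4)), step=0.25)
target = np.ones((1, 8))

Wq = ternarize(W, delta=0.7 * np.mean(np.abs(W)))   # assumed threshold rule
z = x @ Wq                                  # forward pass uses ternary weights
y = quantize_signal(np.tanh(z), step=0.25)  # and fixed-point signals

# Backward pass: gradients are computed against the quantized forward pass
# (the signal quantizer is treated as identity here, a common assumption),
# but the update is applied to the high-precision weights W.
grad_y = 2.0 * (y - target) / y.size
grad_W = x.T @ (grad_y * (1.0 - np.tanh(z) ** 2))
W -= 0.1 * grad_W                           # adapt full-precision copy only
```

Repeating this step, re-ternarizing W after each update, keeps the deployed network strictly ternary while letting the small gradient contributions accumulate in the high-precision copy until they flip a quantized weight.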