NICE: Noise Injection and Clamping Estimation for Neural Network Quantization
Abstract
Convolutional Neural Networks (CNNs) are very popular in many fields, including computer vision, speech recognition, and natural language processing. Although deep learning achieves groundbreaking performance in these domains, the networks used are computationally demanding and far from real-time even on a GPU, which is not power efficient and therefore does not suit low-power systems such as mobile devices. To overcome this challenge, several solutions have been proposed for quantizing the weights and activations of these networks, which accelerates inference significantly. Yet, this acceleration comes at the cost of a larger error unless special adjustments are made. The method proposed in this work trains quantized neural networks with noise injection and a learned clamping, which improves accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and activations. We implement the proposed solution on an FPGA to demonstrate its applicability to low-power real-time applications. The quantization code will become publicly available upon acceptance.
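As a rough illustration of the two ingredients named in the abstract, the sketch below combines a learned clamp (a trainable clipping bound, similar in spirit to PACT) with noise injection, where additive uniform noise of quantization-bin magnitude stands in for the rounding error on a random subset of elements during training. This is a minimal PyTorch sketch, not the authors' implementation; `num_bits`, `noise_ratio`, and `init_alpha` are assumed, illustrative hyperparameters.

```python
import torch
import torch.nn as nn

class NoisyClampedQuant(nn.Module):
    """Illustrative activation quantizer: learned clamp + noise injection.

    During training, activations are clipped to a learned range [0, alpha]
    and quantized with a straight-through estimator; on a random subset of
    elements, additive uniform noise of half-bin magnitude replaces the
    rounding, keeping gradients informative.
    """

    def __init__(self, num_bits=3, noise_ratio=0.05, init_alpha=6.0):
        super().__init__()
        self.num_bits = num_bits          # target bitwidth (e.g., 3 bits)
        self.noise_ratio = noise_ratio    # fraction of elements that receive noise
        self.alpha = nn.Parameter(torch.tensor(init_alpha))  # learned clamp value

    def forward(self, x):
        levels = 2 ** self.num_bits - 1
        alpha = self.alpha.abs()
        step = alpha / levels                     # quantization bin width
        x = torch.minimum(torch.relu(x), alpha)   # clamp to [0, alpha]; alpha stays trainable
        q = torch.round(x / step) * step          # uniform quantization
        x_q = x + (q - x).detach()                # straight-through estimator
        if self.training and self.noise_ratio > 0:
            noise = (torch.rand_like(x) - 0.5) * step          # U(-step/2, step/2)
            mask = (torch.rand_like(x) < self.noise_ratio).float()
            x_q = mask * (x + noise) + (1.0 - mask) * x_q      # noise instead of rounding
        return x_q
```

At inference time (`model.eval()`), the noise branch is skipped and the module reduces to plain clamped uniform quantization.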