Improving the Robustness of Deep Neural Networks via Stability Training
Abstract
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
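The stability training idea described in the abstract can be sketched as augmenting the task loss with a penalty on how far the embedding of a slightly perturbed input drifts from the embedding of the clean input. The sketch below is a minimal NumPy illustration of that assumed form (the function names, the Gaussian-noise perturbation, and the weight `alpha` are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Assumed objective: total = task_loss(x) + alpha * ||f(x) - f(x')||_2^2,
# where x' is x plus small Gaussian pixel noise standing in for
# distortions such as compression, rescaling, or cropping.

def stability_term(f, x, sigma=0.04, rng=None):
    """Squared L2 distance between embeddings of x and a noisy copy of x."""
    rng = rng or np.random.default_rng(0)
    x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
    d = f(x) - f(x_noisy)
    return float(np.dot(d, d))

def total_loss(task_loss, f, x, alpha=0.1, sigma=0.04, rng=None):
    """Task objective plus the stability penalty, weighted by alpha."""
    return task_loss(x) + alpha * stability_term(f, x, sigma=sigma, rng=rng)
```

With `f` as the identity map, the stability term reduces to the noise energy itself; for a trained network, minimizing it encourages the embedding to be locally insensitive to small input distortions.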
Related Papers
- Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach (2021), 10 citations
- Performance robustness of feature extraction for target detection & classification (2014), 7 citations
- Comprehensive Analysis of Hyperdimensional Computing Against Gradient Based Attacks (2023), 3 citations
- A Causal View on Robustness of Neural Networks (2020), 36 citations
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (2021), 31 citations