Search to Distill: Pearls Are Everywhere but Not the Eyes
Top 10% of 2020 papers
Abstract
Standard Knowledge Distillation (KD) approaches distill the knowledge of a cumbersome teacher model into the parameters of a student model with a pre-defined architecture. However, the knowledge of a neural network, which is represented by the network's output distribution conditioned on its input, depends not only on its parameters but also on its architecture. Hence, a more generalized approach for KD is to distill the teacher's knowledge into both the parameters and architecture of the student. To achieve this, we present a new *Architecture-aware Knowledge Distillation (AKD)* approach that finds student models (pearls for the teacher) that are best for distilling the given teacher model. In particular, we leverage Neural Architecture Search (NAS), equipped with our KD-guided reward, to search for the best student architectures for a given teacher. Experimental results show our proposed AKD consistently outperforms the conventional NAS-plus-KD approach, and achieves state-of-the-art results on the ImageNet classification task under various latency settings. Furthermore, the best AKD student architecture for the ImageNet classification task also transfers well to other tasks such as million-level face recognition and ensemble learning.
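To make the "KD-guided reward" idea concrete, here is a minimal sketch of a latency-aware NAS reward in which the accuracy term is measured after distilling from the teacher, rather than after training the candidate from scratch. The function name, the specific multiplicative latency penalty, and the exponent `w` are assumptions for illustration (a common multi-objective form in latency-constrained NAS), not the paper's exact formulation:

```python
def kd_guided_reward(distilled_accuracy: float,
                     latency_ms: float,
                     target_latency_ms: float,
                     w: float = -0.07) -> float:
    """Hypothetical KD-guided NAS reward for one candidate student.

    distilled_accuracy: validation accuracy of the candidate after it has
        been trained with knowledge distillation from the fixed teacher
        (this is what makes the reward "KD-guided").
    latency_ms / target_latency_ms: measured vs. target inference latency.
    w: negative exponent; candidates slower than the target are penalized,
        faster ones are mildly rewarded.
    """
    return distilled_accuracy * (latency_ms / target_latency_ms) ** w

# Exactly on the latency target: reward equals the distilled accuracy.
print(kd_guided_reward(0.75, 100.0, 100.0))
# Twice the target latency: same accuracy yields a strictly lower reward.
print(kd_guided_reward(0.75, 200.0, 100.0))
```

The key design point is that the controller's search signal comes from the distilled student, so architectures that are easy to train alone but absorb the teacher's knowledge poorly score lower than under a conventional NAS-then-KD pipeline.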