Distillation-Based Training for Multi-Exit Architectures
Abstract
Multi-exit architectures, in which a stack of processing layers is interleaved with early output layers, allow the processing of a test example to stop early and thus save computation time and/or energy. In this work, we propose a new training procedure for multi-exit architectures based on the principle of knowledge distillation. The method encourages early exits to mimic later, more accurate exits by matching their probability outputs. Experiments on CIFAR100 and ImageNet show that distillation-based training significantly improves the accuracy of early exits while maintaining state-of-the-art accuracy for late ones. The method is particularly beneficial when training data is limited and also allows a straightforward extension to semi-supervised learning, i.e., making use of unlabeled data at training time. Moreover, it takes only a few lines to implement and imposes almost no computational overhead at training time, and none at all at test time.
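To make the idea concrete, below is a minimal sketch of a per-batch loss in the spirit of the abstract: each exit is trained with cross-entropy on the labels plus a distillation term that matches its temperature-softened probabilities to those of the deepest exit. The function name, the temperature T, and the weight alpha are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def multi_exit_distillation_loss(exit_logits, targets, T=3.0, alpha=0.5):
    """Distillation-style training loss for a multi-exit network (sketch).

    exit_logits: list of [batch, num_classes] logit tensors, one per exit,
                 ordered from earliest to deepest exit.
    targets:     [batch] tensor of ground-truth class indices.
    T, alpha:    illustrative temperature and mixing weight.
    """
    # Soft targets come from the deepest (most accurate) exit; detach so
    # that gradients do not flow from early exits into the teacher.
    teacher_probs = F.softmax(exit_logits[-1].detach() / T, dim=1)

    loss = 0.0
    for logits in exit_logits:
        # Supervised term: standard cross-entropy against the labels.
        ce = F.cross_entropy(logits, targets)
        # Distillation term: KL between the exit's softened distribution
        # and the teacher's; T*T rescales gradients as in standard KD.
        kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                      teacher_probs, reduction="batchmean") * (T * T)
        loss = loss + (1 - alpha) * ce + alpha * kd
    return loss
```

Under this reading, the semi-supervised extension mentioned in the abstract falls out naturally: for unlabeled batches one would drop the cross-entropy term and keep only the distillation term, since the teacher signal requires no labels.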