Do Better ImageNet Models Transfer Better?
Citations over time: among the top 1% of 2019 papers.
Abstract
Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested.
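The two transfer settings compared in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the paper's exact protocol: it assumes a torchvision ResNet-50 as the ImageNet-pretrained network, scikit-learn logistic regression on frozen penultimate-layer features for the fixed-feature-extractor setting, and a plain Pearson correlation between per-network ImageNet and transfer accuracies.

```python
# Sketch of the two transfer settings and the accuracy correlation.
# Assumptions (not taken from the paper): torchvision ResNet-50 backbone,
# logistic regression on frozen features, Pearson r for the correlation.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression
from scipy.stats import pearsonr

# --- Setting 1: fixed feature extractor ---------------------------------
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # expose the 2048-d penultimate-layer features
backbone.eval()

def extract_features(loader):
    """Run the frozen backbone over a DataLoader; return features and labels."""
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x))
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# train_loader / test_loader would wrap one of the 12 transfer datasets:
# X_tr, y_tr = extract_features(train_loader)
# X_te, y_te = extract_features(test_loader)
# clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# transfer_acc = clf.score(X_te, y_te)

# --- Setting 2: fine-tuning ---------------------------------------------
num_classes = 100  # hypothetical number of classes in the target dataset
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head
# ...then all parameters are trained further on the target dataset.

# --- Correlation between ImageNet and transfer accuracy -----------------
# Given one accuracy pair per network, the reported r values correspond to
# an ordinary correlation coefficient across networks:
# r, _ = pearsonr(imagenet_accuracies, transfer_accuracies)
```

Repeating this per network and per dataset yields the accuracy pairs whose correlation (r = 0.99 for fixed features, 0.96 for fine-tuning) the abstract reports.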
Related Papers
- Automated Brain Image Classification Based on VGG-16 and Transfer Learning (2019), 199 citations
- DCNN-Based Vegetable Image Classification Using Transfer Learning: A Comparative Study (2021), 48 citations
- Multiple Classification of Flower Images Using Transfer Learning (2019), 36 citations
- Performance of True Transfer Learning using CNN DenseNet121 for COVID-19 Detection from Chest X-Ray Images (2021), 22 citations
- Transfer learning-based Plant Disease Detection (2021), 6 citations