The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Top 1% of 2021 papers by citations.
Abstract
We introduce four new real-world distribution shift datasets consisting of changes in image style, image blurriness, geographic location, camera operation, and more. With our new datasets, we take stock of previously proposed methods for improving out-of-distribution robustness and put them to the test. We find that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work. We also find that improvements on artificial robustness benchmarks can transfer to real-world distribution shifts, again contrary to claims in prior work. Motivated by our observation that data augmentations can help with real-world distribution shifts, we introduce a new data augmentation method which advances the state of the art and outperforms models pre-trained with 1000× more labeled data. Overall, we find that some methods consistently help with distribution shifts in texture and local image statistics, but these methods do not help with other distribution shifts such as geographic changes. Our results show that future research must study multiple distribution shifts simultaneously, as we demonstrate that no evaluated method consistently improves robustness.
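To make the data-augmentation idea in the abstract concrete, the sketch below mixes several randomly chained augmented copies of an image back into the original, in the spirit of chained-augmentation methods such as AugMix from the same authors. This is an illustrative toy example, not the paper's actual method (DeepAugment) or its operations; the augmentation functions, parameters, and mixing scheme here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy augmentations on float images with values in [0, 1].
def posterize(img, bits=3):
    # Quantize pixel intensities to 2**bits levels.
    levels = 2 ** bits
    return np.floor(img * levels) / levels

def invert(img):
    # Photometric inversion.
    return 1.0 - img

def identity(img):
    return img

AUGMENTATIONS = [posterize, invert, identity]

def chained_augmentation_mix(img, width=3, alpha=1.0):
    """Blend `width` randomly augmented copies of `img`, then mix the
    result back with the clean image (AugMix-style convex combination)."""
    weights = rng.dirichlet([alpha] * width)
    mixed = np.zeros_like(img)
    for w in weights:
        aug = img
        for _ in range(rng.integers(1, 4)):  # chain 1-3 random ops
            aug = rng.choice(AUGMENTATIONS)(aug)
        mixed += w * aug
    m = rng.beta(alpha, alpha)
    return m * img + (1 - m) * mixed

img = rng.random((8, 8, 3))
out = chained_augmentation_mix(img)
print(out.shape)  # same shape as the input; values stay in [0, 1]
```

Because every operation and the final blend are convex combinations of images in [0, 1], the augmented output stays a valid image, which is what lets such pipelines be applied on the fly during training.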