Rainbow Memory: Continual Learning with a Memory of Diverse Samples
Top 1% of 2021 papers by citations
Abstract
Continual learning is a realistic learning scenario for AI models. The prevalent continual learning setup, however, assumes disjoint sets of classes as tasks, which is less realistic and rather artificial. Instead, we focus on 'blurry' task boundaries, where tasks share classes; this setup is more realistic and practical. To address such tasks, we argue for the importance of the diversity of samples in an episodic memory. To enhance sample diversity in the memory, we propose a novel memory management strategy based on per-sample classification uncertainty and data augmentation, named Rainbow Memory (RM). With extensive empirical validation on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets, we show that the proposed method significantly improves accuracy in blurry continual learning setups, outperforming the state of the art by large margins despite its simplicity. Code and data splits are available at https://github.com/clovaai/rainbow-memory.
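The core idea the abstract describes, keeping an episodic memory whose samples are diverse in classification uncertainty rather than uniformly easy or hard, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, signature, and the use of evenly spaced ranks over uncertainty-sorted candidates are assumptions standing in for the paper's full uncertainty estimation and augmentation pipeline.

```python
import numpy as np

def diversity_aware_memory_update(samples, uncertainties, memory_size):
    """Fill an episodic memory of `memory_size` with samples that span
    the uncertainty spectrum, instead of keeping only the easiest or
    hardest examples.

    Hypothetical sketch of the Rainbow Memory idea: sort candidates by
    per-sample uncertainty, then pick them at evenly spaced ranks so the
    memory stays diverse in difficulty.
    """
    # Indices of candidates ordered from lowest to highest uncertainty.
    order = np.argsort(uncertainties)
    # Evenly spaced ranks across the sorted candidates (assumes
    # memory_size <= len(samples), so the picks are distinct).
    ranks = np.linspace(0, len(samples) - 1, num=memory_size).astype(int)
    chosen = order[ranks]
    return [samples[i] for i in chosen]
```

For example, with 100 candidates and a memory budget of 10, the sketch keeps one sample from each decile of the uncertainty ranking, including both the least and the most uncertain candidate.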