DASA: Domain Adaptation via Saliency Augmentation
Abstract
This paper addresses supervised domain adaptation of image classifiers via saliency augmentation. The idea is to use domain-independent saliency extraction to enrich both the source and target domains and bring them closer together; we then align their lower-order statistics to solve the adaptation problem. Because saliency augmentation suppresses background features that differ across the domains, only the foreground features are aligned, as one would desire when adapting an image classifier. Exploring saliency augmentation as a new direction for domain adaptation is what makes our work novel and promising. Although far fewer labeled examples are available in the target domain than in the source domain, our extensive experiments demonstrate the method's effectiveness and accuracy.
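The two steps the abstract describes, suppressing background with a saliency map and then aligning lower-order feature statistics across domains, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `saliency_augment`, `align_stats`, and the blending parameter `alpha` are hypothetical names, and the alignment shown is a generic CORAL-style mean/covariance matching that stands in for whatever statistic alignment the paper uses.

```python
import numpy as np

def saliency_augment(image, saliency, alpha=0.3):
    """Blend an image with its saliency map so that low-saliency
    (background) pixels are attenuated while salient foreground
    pixels are kept. `alpha` is a hypothetical floor that retains a
    fraction of the background signal; the paper's exact
    augmentation may differ."""
    return image * (alpha + (1.0 - alpha) * saliency)

def align_stats(source_feats, target_feats, eps=1e-5):
    """Align lower-order statistics (mean and covariance) of the
    source features to those of the target, in the spirit of
    correlation alignment. Returns the transformed source features."""

    def _sqrtm(m):
        # Symmetric matrix square root via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

    def _inv_sqrtm(m, floor):
        # Inverse symmetric square root, with an eigenvalue floor
        # for numerical stability.
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, floor, None))) @ vecs.T

    src = source_feats - source_feats.mean(axis=0)   # center source
    tgt = target_feats - target_feats.mean(axis=0)   # center target
    cs = np.cov(src, rowvar=False) + eps * np.eye(src.shape[1])
    ct = np.cov(tgt, rowvar=False) + eps * np.eye(tgt.shape[1])
    # Whiten the source covariance, re-color it with the target's,
    # then shift to the target mean.
    return src @ _inv_sqrtm(cs, eps) @ _sqrtm(ct) + target_feats.mean(axis=0)

# Toy demonstration on synthetic features standing in for classifier
# features of saliency-augmented images.
rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=(500, 4))
target_feats = rng.normal(3.0, 2.0, size=(500, 4))
aligned = align_stats(source_feats, target_feats)
```

After the transform, the source features share the target's first and second moments, so a classifier trained on them sees statistically matched inputs from both domains.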
Related Papers
- Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation (2020)
- Learning under Unknown Bias (2013)
- MADI: Inter-domain Matching and Intra-domain Discrimination for Cross-domain Speech Recognition (2023)
- Collaborative Multi-source Domain Adaptation Through Optimal Transport (2024)