Deep Learning of Transferable Representation for Scalable Domain Adaptation
Top 1% of 2016 papers (by citations)
Abstract
Domain adaptation generalizes a learning model across a source domain and a target domain that are sampled from different distributions. It is widely applied in cross-domain data mining to reuse labeled information and reduce labeling cost. Recent studies reveal that deep neural networks can learn abstract feature representations that reduce, but do not remove, the cross-domain discrepancy. To enhance the invariance of deep representations and make them more transferable across domains, we propose a unified deep adaptation framework that jointly learns transferable representations and classifiers to enable scalable domain adaptation, taking advantage of both deep learning and optimal two-sample matching. The framework comprises two inter-dependent paradigms: unsupervised pre-training for effective training of deep models using deep denoising autoencoders, and supervised fine-tuning for effective exploitation of discriminative information using deep neural networks. Both are learned by embedding the deep representations into reproducing kernel Hilbert spaces (RKHSs) and optimally matching the different domain distributions. To enable scalable learning, we develop a linear-time algorithm based on an unbiased estimate that scales linearly with sample size. Extensive empirical results show that the proposed framework significantly outperforms state-of-the-art methods on diverse adaptation tasks: sentiment polarity prediction, email spam filtering, newsgroup content categorization, and visual object recognition.
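The "optimal two-sample matching" with a linear-time unbiased estimate described above is commonly realized with the streaming (linear-time) estimator of the squared maximum mean discrepancy (MMD) in an RKHS. A minimal NumPy sketch of that estimator is given below; it is an illustration of the general technique, not the paper's exact implementation, and the Gaussian kernel bandwidth `sigma` is an assumed hyperparameter.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # RBF kernel evaluated on row-wise pairs: k(a_i, b_i) for each i.
    return np.exp(-np.sum((a - b) ** 2, axis=1) / (2.0 * sigma ** 2))

def linear_time_mmd(X, Y, sigma=1.0):
    """Unbiased linear-time estimate of squared MMD between samples X and Y.

    X, Y: arrays of shape (n, d). Consumes disjoint sample pairs
    (x_{2i}, x_{2i+1}) and (y_{2i}, y_{2i+1}), so the cost is O(n)
    instead of the O(n^2) of the full quadratic-time estimator.
    """
    n = min(len(X), len(Y))
    n -= n % 2  # the estimator needs an even number of samples
    x1, x2 = X[0:n:2], X[1:n:2]
    y1, y2 = Y[0:n:2], Y[1:n:2]
    # h-statistic averaged over n/2 independent pairs.
    h = (gaussian_kernel(x1, x2, sigma) + gaussian_kernel(y1, y2, sigma)
         - gaussian_kernel(x1, y2, sigma) - gaussian_kernel(x2, y1, sigma))
    return h.mean()
```

Because the estimate is a mean over independent pairs, it is near zero when the two samples come from the same distribution and strictly positive under a distribution shift, which is what makes it usable as a differentiable domain-discrepancy penalty during training.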
Related Papers
- Discriminative Mutual Learning for Multi-target Domain Adaptation (2022), 7 citations
- Discriminative vs. Generative Classifiers for Cost Sensitive Learning (2006), 7 citations
- TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation (2021), 2 citations
- Multiple-Source Adaptation with Domain Classifiers (2020)