Domain adaptation with structural correspondence learning
EMNLP 2006, pp. 120–128
Top 1% of 2006 papers by citations
Abstract
Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP tasks, however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cases, we seek to adapt existing models from a resource-rich source domain to a resource-poor target domain. We introduce structural correspondence learning to automatically induce correspondences among features from different domains. We test our technique on part-of-speech tagging and show performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger.
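The core idea of structural correspondence learning is to pick "pivot" features that behave similarly in both domains, train a linear predictor for each pivot from the other features on unlabeled data, and take an SVD of the resulting weight matrix to obtain a low-dimensional shared representation. A minimal sketch of that recipe follows; the function name `scl_projection`, the use of least squares in place of the paper's modified-Huber pivot classifiers, and the toy dimensions are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def scl_projection(X_src, X_tgt, pivot_idx, k=25):
    """Sketch of structural correspondence learning (SCL).

    X_src, X_tgt: (n_examples, n_features) binary feature matrices of
        unlabeled data from the source and target domains.
    pivot_idx: indices of pivot features (frequent in both domains).
    Returns theta, a (k, n_features) projection into a shared space.
    """
    # pivots are predicted from unlabeled data pooled across domains
    X = np.vstack([X_src, X_tgt])
    W = np.zeros((X.shape[1], len(pivot_idx)))
    for j, p in enumerate(pivot_idx):
        y = X[:, p].copy()        # target: does the pivot feature occur?
        Xm = X.copy()
        Xm[:, p] = 0.0            # mask the pivot itself (avoid trivial fit)
        # least-squares pivot predictor; the paper uses modified-Huber
        # linear classifiers, substituted here for brevity (assumption)
        w, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        W[:, j] = w
    # SVD of the stacked pivot-predictor weights; the top-k left singular
    # vectors encode feature correspondences shared across domains
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :k].T            # (k, n_features)
    return theta

# Usage: augment each example's original features with the induced
# shared features before training the discriminative model, e.g.
#   X_aug = np.hstack([X, X @ theta.T])
```

The augmented representation lets a tagger trained on source-domain labels exploit target-domain features that co-occur with the same pivots, which is what drives the reported gains.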
Related Papers
- Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation (2020), cited 27 times
- Progressive learning with style transfer for distant domain adaptation (2020), cited 6 times
- Sentiment Classification based on Domain Prediction (2016), cited 5 times
- Domain Adaptive Transfer Learning with Specialist Models (2018), cited 89 times
- Weak Adaptation Learning -- Addressing Cross-domain Data Insufficiency with Weak Annotator (2021)