SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling
Top 1% of 2020 papers by citations
Abstract
Virtual adversarial training (VAT) is a powerful technique for improving model robustness in both supervised and semi-supervised settings. It is effective and easy to adopt for many image classification and text classification tasks. However, its benefits for sequence labeling tasks such as named entity recognition (NER) have not proven as significant, mostly because previous approaches cannot combine VAT with the conditional random field (CRF). The CRF can significantly boost the accuracy of sequence models by imposing constraints on label transitions, which makes it an essential component of most state-of-the-art sequence labeling architectures. In this paper, we propose SeqVAT, a method that naturally applies VAT to sequence labeling models with a CRF. Empirical studies show that SeqVAT not only significantly improves sequence labeling performance over baselines in supervised settings, but also outperforms state-of-the-art approaches in semi-supervised settings.
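To make the core idea concrete, the following is a minimal sketch of the standard VAT objective (Miyato et al.'s one-step power-iteration approximation) for a toy softmax classifier, not the paper's CRF-based formulation. All names (`vat_loss`, `xi`, `eps`) and the finite-difference gradient are illustrative assumptions: the perturbation direction is the gradient of the KL divergence between clean and perturbed predictions, rescaled to a small norm budget.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL(p || q) with a small constant for numerical stability
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def vat_loss(W, x, eps=1.0, xi=1e-3, h=1e-5, seed=0):
    """One-step VAT for a toy linear softmax model (illustrative sketch only).

    Returns the virtual adversarial loss and the adversarial perturbation.
    """
    rng = np.random.default_rng(seed)
    p = softmax(W @ x)                       # clean prediction, treated as a fixed target
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)                   # random unit direction to start from
    # Numerical gradient of KL(p || p_perturbed) w.r.t. the perturbation at r = xi * d
    base = kl(p, softmax(W @ (x + xi * d)))
    g = np.zeros_like(x)
    for i in range(x.size):
        e_i = np.zeros_like(x)
        e_i[i] = h
        g[i] = (kl(p, softmax(W @ (x + xi * d + e_i))) - base) / h
    # Rescale the ascent direction to the perturbation budget eps
    r_adv = eps * g / (np.linalg.norm(g) + 1e-12)
    return kl(p, softmax(W @ (x + r_adv))), r_adv
```

In real semi-supervised training this loss would be computed on both labeled and unlabeled inputs (it needs no labels) and added to the supervised loss; the obstacle the paper addresses is that with a CRF the model outputs a distribution over whole label sequences, so the KL term above is not directly computable token by token.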