Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning
2020, pp. 3181–3186
Abstract
Pretrained Language Models (PLMs) have improved the performance of natural language understanding in recent years. Such models are pretrained on large corpora, which encode the general prior knowledge of natural language but are agnostic to information characteristic of downstream tasks. This often results in overfitting when fine-tuning on low-resource datasets where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can substantially improve the performance of pretrained models when fine-tuning with small datasets.
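To make the idea concrete, the sketch below shows one plausible way a label-derived prior could be injected into a BERT-style self-attention layer. This is only an illustrative reading of the abstract, not the paper's actual method: the module name, the use of trainable label embeddings, the max-pooled similarity, and the additive bias on attention scores are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAwareSelfAttention(nn.Module):
    """Illustrative sketch (hypothetical): self-attention with an additive
    label-prior bias. Trainable label embeddings score each key token, and the
    pooled score is added to the attention logits so that tokens related to the
    task labels receive more attention mass."""

    def __init__(self, hidden_size: int, num_heads: int, num_labels: int):
        super().__init__()
        assert hidden_size % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.q_proj = nn.Linear(hidden_size, hidden_size)
        self.k_proj = nn.Linear(hidden_size, hidden_size)
        self.v_proj = nn.Linear(hidden_size, hidden_size)
        # Trainable label embeddings acting as the task-specific prior (assumed).
        self.label_emb = nn.Embedding(num_labels, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, hidden = hidden_states.size()

        def split_heads(x):
            return x.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2)

        q = split_heads(self.q_proj(hidden_states))
        k = split_heads(self.k_proj(hidden_states))
        v = split_heads(self.v_proj(hidden_states))

        # Standard scaled dot-product attention logits: (bsz, heads, seq, seq).
        scores = torch.matmul(q, k.transpose(-1, -2)) / self.head_dim ** 0.5

        # Label prior: similarity of each token to the label embeddings,
        # max-pooled over labels and added as a bias over key positions.
        label_sim = torch.matmul(hidden_states, self.label_emb.weight.t())  # (bsz, seq, num_labels)
        label_bias = label_sim.max(dim=-1).values                           # (bsz, seq)
        scores = scores + label_bias[:, None, None, :]

        attn = F.softmax(scores, dim=-1)
        context = torch.matmul(attn, v)
        return context.transpose(1, 2).reshape(bsz, seq_len, hidden)
```

Because the prior enters only as a bias on the attention logits, the pretrained query/key/value projections can be initialized from BERT and fine-tuned as usual, which is one way such a prior could be combined with a pretrained model on a small dataset.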