Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning
2020, pp. 441–459
Abstract
Fine-tuning pre-trained generative language models on downstream language generation tasks has shown promising results. However, this comes at the cost of maintaining a separate, large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune multiple downstream generation tasks simultaneously using a single, large pre-trained model. Experiments on five diverse language generation tasks show that by using only an additional 2–3% of parameters per task, our model can match or even improve on the performance of fine-tuning the whole model.
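The abstract does not spell out how the extra 2–3% of per-task parameters are attached to the shared model. A common way to realize this kind of parameter-efficient transfer learning is to freeze the pre-trained backbone and insert small bottleneck "adapter" modules for each task. The sketch below is a minimal illustration under that assumption (the class names, bottleneck size, and the toy `nn.Linear` stand-in for a transformer layer are all illustrative, not the paper's actual implementation):

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck module: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen model's representation.
        return x + self.up(self.act(self.down(x)))


class AdapterBlock(nn.Module):
    """Wraps a frozen layer of the shared model and adds one adapter per task."""

    def __init__(self, frozen_layer: nn.Module, hidden_size: int, task_names):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # shared backbone stays fixed
        self.adapters = nn.ModuleDict({t: Adapter(hidden_size) for t in task_names})

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        return self.adapters[task](self.layer(x))


if __name__ == "__main__":
    tasks = ["summarization", "dialogue"]  # hypothetical task names
    hidden = 768
    block = AdapterBlock(nn.Linear(hidden, hidden), hidden, tasks)
    x = torch.randn(2, 10, hidden)
    print(block(x, task="dialogue").shape)  # torch.Size([2, 10, 768])
    # Only the adapter weights are trainable; the backbone is shared across tasks.
    trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
    total = sum(p.numel() for p in block.parameters())
    print(f"trainable fraction: {trainable / total:.2%}")
```

With a full-size transformer as the frozen backbone, the trainable fraction per task would be on the order of a few percent, which is consistent with the overhead reported in the abstract.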