Controllable Natural Language Generation with Contrastive Prefixes
Abstract
To guide the generation of large pretrained language models (LMs), previous work has focused on directly fine-tuning the language model or utilizing an attribute discriminator. In this work, we propose a novel lightweight framework for controllable GPT2 generation, which utilizes a set of small attribute-specific vectors, called prefixes (Li and Liang, 2021), to steer natural language generation. Different from Li and Liang (2021), where each prefix is trained independently, we take the relationship among prefixes into consideration and train multiple prefixes simultaneously, as illustrated in Figure 1. We propose a novel supervised method and also an unsupervised method to train the prefixes for single-aspect control, while the combination of these two methods can achieve multi-aspect control. Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation towards the desired attributes while keeping high linguistic quality.
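The prefix mechanism the abstract builds on, prefix-tuning (Li and Liang, 2021), steers a frozen LM by prepending trainable key/value vectors to every attention layer. The sketch below is a minimal illustration of that idea on HuggingFace GPT-2, not the authors' implementation: the prefix length, shapes, and loss are assumptions for illustration, and the paper's contrastive objective over multiple simultaneously trained prefixes is omitted.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Minimal sketch (not the authors' code): one attribute-specific prefix,
# realized as trainable key/value tensors fed to a frozen GPT-2 via the
# legacy `past_key_values` tuple format.
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.requires_grad_(False)  # the LM stays frozen; only the prefix trains
tok = GPT2TokenizerFast.from_pretrained("gpt2")

cfg = model.config
prefix_len = 10  # assumed prefix length, a tunable hyperparameter
head_dim = cfg.n_embd // cfg.n_head

# One trainable (key, value) pair per transformer layer, with shape
# (batch, n_head, prefix_len, head_dim) -- the layout GPT-2 expects.
prefix = tuple(
    (torch.randn(1, cfg.n_head, prefix_len, head_dim, requires_grad=True),
     torch.randn(1, cfg.n_head, prefix_len, head_dim, requires_grad=True))
    for _ in range(cfg.n_layer)
)

ids = tok("The movie was", return_tensors="pt").input_ids
# The attention mask must cover the prefix positions plus the input tokens.
mask = torch.ones(1, prefix_len + ids.size(1), dtype=torch.long)
out = model(ids, past_key_values=prefix, attention_mask=mask, labels=ids)
out.loss.backward()  # gradients flow only into the prefix tensors
```

In this setup the language-modeling loss conditions generation on the prefix; training such prefixes on attribute-labeled text (sentiment, topic, etc.) and selecting a prefix at inference time is what steers generation toward the desired attribute.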