Towards a Unified Multi-Dimensional Evaluator for Text Generation
Abstract
Multi-dimensional evaluation is the dominant paradigm for human evaluation in Natural Language Generation (NLG), i.e., evaluating the generated text from multiple explainable dimensions, such as coherence and fluency. However, automatic evaluation in NLG is still dominated by similarity-based metrics, and we lack a reliable framework for a more comprehensive evaluation of advanced models. In this paper, we propose UniEval, a unified multi-dimensional evaluator for NLG. We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions. Furthermore, thanks to the unified Boolean QA format, we are able to introduce an intermediate learning phase that enables UniEval to incorporate external knowledge from multiple related tasks and gain further improvement. Experiments on three typical NLG tasks show that UniEval correlates substantially better with human judgments than existing metrics. Specifically, compared to the top-performing unified evaluators, UniEval achieves a 23% higher correlation on text summarization, and over 43% on dialogue response generation. UniEval also demonstrates strong zero-shot ability on unseen evaluation dimensions and tasks. Source code, data, and all pre-trained evaluators are available at https://github.com/maszhongming/UniEval.
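The Boolean-QA framing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the question templates and the raw `yes_logit`/`no_logit` inputs are assumptions made for clarity, whereas the real evaluator obtains these logits from a pretrained seq2seq model (see the project repository).

```python
import math

# Hypothetical question templates: one yes/no question per evaluation
# dimension, so a single evaluator can score multiple dimensions.
DIMENSION_QUESTIONS = {
    "coherence": "question: Is this a coherent summary of the document?",
    "fluency": "question: Is this a fluent paragraph?",
}

def build_input(dimension: str, output_text: str, source_text: str) -> str:
    """Frame one (dimension, output, source) triple as a Boolean QA input.

    The separator format here is illustrative, not the exact one used
    by UniEval.
    """
    return (
        DIMENSION_QUESTIONS[dimension]
        + " </s> summary: " + output_text
        + " </s> document: " + source_text
    )

def boolean_score(yes_logit: float, no_logit: float) -> float:
    """Score = softmax over only the 'Yes'/'No' answer tokens,
    i.e., the model's probability of answering 'Yes'."""
    e_yes, e_no = math.exp(yes_logit), math.exp(no_logit)
    return e_yes / (e_yes + e_no)
```

For example, with equal logits the score is 0.5 (the evaluator is undecided), and a much larger "Yes" logit pushes the dimension score toward 1.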