A Survey of Evaluation Metrics Used for NLG Systems
ACM Computing Surveys, 2022, Vol. 55(2), pp. 1–39
Abstract
In recent years, a large number of automatic evaluation metrics have been proposed for evaluating Natural Language Generation (NLG) systems. The rapid development and adoption of these metrics in a relatively short time has created the need for a survey. In this survey, we (i) highlight the challenges in automatically evaluating NLG systems, (ii) propose a coherent taxonomy for organising existing evaluation metrics, (iii) briefly describe the existing metrics, and (iv) discuss studies criticising the use of automatic evaluation metrics. We conclude the article by highlighting promising directions for future research.
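To make the notion of an automatic evaluation metric concrete, the sketch below (not part of the original abstract) implements a simplified sentence-level BLEU score in plain Python: a geometric mean of modified n-gram precisions multiplied by a brevity penalty, for a single reference. This is an illustrative approximation, not the exact formulation surveyed in the paper; practical use relies on standard implementations.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU for a single reference:
    geometric mean of modified n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Small floor so a zero precision does not collapse the geometric mean.
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

candidate = "the cat is on the mat".split()
reference = "the cat sat on the mat".split()
print(round(bleu(candidate, reference), 3))  # → 0.707
```

Here the unigram precision is 5/6 and the bigram precision is 3/5, and the two sentences have equal length, so the score is the geometric mean sqrt(5/6 × 3/5) ≈ 0.707. The survey's central point is that such surface n-gram overlap can disagree sharply with human judgements of quality.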
Related Papers
- BLEU (2001)
- HISTORIAE, History of Socio-Cultural Transformation as Linguistic Data Science: A Humanities Use Case (2019)
- ROUGE: A Package for Automatic Evaluation of Summaries (2004)
- BERTScore: Evaluating Text Generation with BERT (2020)