Measuring Attribution in Natural Language Generation Models
Top 10% of 2023 papers
Abstract
Large neural models have brought a new challenge to natural language generation (NLG): it has become imperative to ensure the safety and reliability of the output of models that generate freely. To this end, we present an evaluation framework, Attributable to Identified Sources (AIS), stipulating that NLG output pertaining to the external world is to be verified against an independent, provided source. We define AIS and a two-stage annotation pipeline that allows annotators to evaluate model output according to annotation guidelines. We successfully validate this approach on generation datasets spanning three tasks (two conversational QA datasets, a summarization dataset, and a table-to-text dataset). We provide full annotation guidelines in the appendices and publicly release the annotated data at https://github.com/google-research-datasets/AIS.
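The abstract describes a two-stage annotation pipeline for AIS judgments. A minimal sketch of how such a staged decision might be structured is shown below; the stage names (interpretability, then attribution) and gating logic are illustrative assumptions, not the paper's actual guidelines:

```python
def ais_judgment(is_interpretable: bool, attributable_to_source: bool) -> str:
    """Hypothetical two-stage AIS-style decision.

    Stage 1: can the output be understood on its own (interpretable)?
    Stage 2: is its content fully supported by the provided source?
    Stage 2 is only reached if stage 1 passes.
    """
    if not is_interpretable:
        return "not rateable"  # attribution question is skipped
    return "AIS" if attributable_to_source else "not AIS"


# Example judgments under these assumed labels:
print(ais_judgment(True, True))    # attributable output
print(ais_judgment(True, False))   # interpretable but unsupported
print(ais_judgment(False, True))   # uninterpretable, never rated
```

The gating design mirrors the idea that attribution can only be assessed for output that annotators can first interpret in context.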
Related Papers
- Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order (2020), 28 citations
- Evaluation in the context of natural language generation (1998), 67 citations
- Context-aware Natural Language Generation for Spoken Dialogue Systems (2016)
- Evaluation in Natural Language Generation: Lessons from Referring Expression Generation (2007)
- Affective Natural Language Generation (1999)