TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
Abstract
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.
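The distant supervision the abstract describes pairs each question with evidence documents that merely contain the answer string; no human marks which span (if any) actually answers the question. The sketch below illustrates that idea in Python. It is a minimal, hypothetical example, not code from the TriviaQA release: the function name `distant_supervision_spans` and the toy triple are ours. It scans an evidence document for case-insensitive occurrences of any answer alias and records them as candidate answer spans.

```python
import re
from typing import List, Tuple

def distant_supervision_spans(document: str,
                              answer_aliases: List[str]) -> List[Tuple[int, int, str]]:
    """Return (start, end, matched_text) for every case-insensitive
    occurrence of an answer alias in the evidence document.

    This mirrors the distant-supervision assumption: any passage
    containing the answer string is treated as (noisy) evidence,
    even though no annotator verified that it answers the question.
    """
    spans = []
    for alias in answer_aliases:
        # Word boundaries avoid matching inside longer tokens.
        pattern = re.compile(r"\b" + re.escape(alias) + r"\b", re.IGNORECASE)
        for match in pattern.finditer(document):
            spans.append((match.start(), match.end(), match.group(0)))
    # Overlapping spans (e.g. a short alias inside a longer one) are
    # kept on purpose: distant supervision is inherently noisy.
    return sorted(spans)

# Hypothetical question-answer-evidence triple in the spirit of TriviaQA.
question = "Which US president appears on the one-dollar bill?"
aliases = ["George Washington", "Washington"]
evidence = ("George Washington was the first president of the United "
            "States. Washington appears on the one-dollar bill.")

for start, end, text in distant_supervision_spans(evidence, aliases):
    print(start, end, text)
```

Matching against a set of aliases rather than a single canonical string matters for trivia answers, which have many surface forms; it also shows why such supervision is only "distant": the first matched sentence here contains the answer string but does not answer the question.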
Related Papers
- → GloVe: Global Vectors for Word Representation (2014), 33,357 citations
- → (2019), 31,533 citations
- → Natural Questions: A Benchmark for Question Answering Research (2019), 1,923 citations
- → Analysis of Points of Interests Recommended for Leisure Walk Descriptions (2024), 1,286 citations
- → SQuAD: 100,000+ Questions for Machine Comprehension of Text (2016), 821 citations