A Neural Network for Factoid Question Answering over Paragraphs
Abstract
Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineffective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players.
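To make the compositional idea concrete, the sketch below shows one way a dependency-tree recursive composition of the kind the abstract describes could be implemented: each word gets an embedding, each dependency relation gets its own composition matrix, and a node's vector is a nonlinearity applied to its word vector plus relation-transformed child vectors. This is a minimal, hypothetical illustration; dimensions, initialization, and names like `compose` and `rel_matrix` are assumptions, not the authors' implementation.

```python
# Minimal sketch of dependency-tree recursive composition (illustrative only).
import numpy as np

D = 100  # embedding / hidden dimension (assumed)
rng = np.random.default_rng(0)

word_vectors = {}       # word -> embedding (random placeholders here)
relation_matrices = {}  # dependency relation -> composition matrix
bias = np.zeros(D)

def embed(word):
    # Look up (or lazily initialize) a word vector.
    if word not in word_vectors:
        word_vectors[word] = rng.normal(scale=0.1, size=D)
    return word_vectors[word]

def rel_matrix(relation):
    # One composition matrix per dependency relation (e.g., "nsubj", "dobj").
    if relation not in relation_matrices:
        relation_matrices[relation] = rng.normal(scale=0.1, size=(D, D))
    return relation_matrices[relation]

def compose(node):
    """Recursively build a node's representation from its own word vector
    and the relation-transformed representations of its children."""
    total = embed(node["word"]) + bias
    for relation, child in node.get("children", []):
        total = total + rel_matrix(relation) @ compose(child)
    return np.tanh(total)

# Toy dependency tree for "He wrote Walden":
sentence = {
    "word": "wrote",
    "children": [
        ("nsubj", {"word": "He"}),
        ("dobj", {"word": "Walden"}),
    ],
}
sentence_vector = compose(sentence)  # would be compared against answer-entity embeddings
```

In a full system, vectors like `sentence_vector` would be averaged across a question's sentences and scored against candidate answer embeddings, so that questions with few indicative surface words can still point to the right entity through composed phrase meaning.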