Improving Passage Retrieval with Zero-Shot Question Generation
Top 10% of 2022 papers
Abstract
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering. The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage. This approach can be applied on top of any retrieval method (e.g. neural or keyword-based), does not require any domain- or task-specific training (and therefore is expected to generalize better to data distribution shifts), and provides rich cross-attention between query and passage (i.e. it must explain every token in the question). When evaluated on a number of open-domain retrieval datasets, our re-ranker improves strong unsupervised retrieval models by 6%-18% absolute and strong supervised models by up to 12% in terms of top-20 passage retrieval accuracy. We also obtain new state-of-the-art results on full open-domain question answering by simply adding the new re-ranker to existing models with no further changes.
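The core of the method is to re-score each retrieved passage by the log-likelihood of the question conditioned on that passage, then sort descending. The paper computes this with a pre-trained language model (zero-shot question generation); the sketch below substitutes a toy word-overlap scorer for the language model, purely to illustrate the re-ranking mechanics. The function names (`rerank`, `toy_logprob`) are illustrative, not from the paper.

```python
import math

def rerank(question, passages, question_logprob):
    """Sort passages by log P(question | passage), highest first.

    `question_logprob` is any callable returning a log-likelihood;
    in the paper this would be a pre-trained LM scoring the question
    token by token given the passage as context.
    """
    scored = [(question_logprob(question, p), p) for p in passages]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [p for _, p in scored]

def toy_logprob(question, passage):
    """Toy stand-in for the LM: question tokens that appear in the
    passage get a higher conditional probability than those that don't."""
    passage_tokens = set(passage.lower().split())
    return sum(
        math.log(0.5 if tok in passage_tokens else 0.01)
        for tok in question.lower().split()
    )

passages = [
    "Bananas are rich in potassium.",
    "The Eiffel Tower is in Paris, France.",
]
ranked = rerank("where is the eiffel tower", passages, toy_logprob)
print(ranked[0])  # the Eiffel Tower passage is ranked first
```

Because every question token must be "explained" by the passage, this scoring gives dense cross-attention between query and passage, which the abstract highlights as an advantage over retrievers that encode each side independently.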