Rethinking search
Abstract
When experiencing an information need, users want to engage with a domain expert, but often turn to an information retrieval system, such as a search engine, instead. Classical information retrieval systems do not answer information needs directly, but instead provide references to (hopefully authoritative) answers. Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. Pre-trained language models, by contrast, are capable of directly generating prose that may be responsive to an information need, but at present they are dilettantes rather than domain experts: they do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over. This paper examines how ideas from classical information retrieval and pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of domain expert advice.
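The contrast the abstract draws can be made concrete: a classical retrieval system scores documents against a query and returns a ranked list of *references*, leaving the user to extract the answer from the top hits. A minimal sketch of such a system, using plain TF-IDF scoring (the toy corpus, query, and weighting scheme here are all invented for illustration, not taken from the paper):

```python
import math
from collections import Counter

# Toy corpus: a classical IR system answers an information need
# indirectly, by pointing at documents that may contain the answer.
CORPUS = {
    "doc1": "the eiffel tower is located in paris france",
    "doc2": "the louvre museum in paris houses the mona lisa",
    "doc3": "mount everest is the tallest mountain on earth",
}

def tokenize(text):
    return text.lower().split()

def rank_documents(query):
    """Rank documents by a simple TF-IDF dot product with the query."""
    n_docs = len(CORPUS)
    # Document frequency: how many documents contain each term.
    df = Counter()
    for text in CORPUS.values():
        for term in set(tokenize(text)):
            df[term] += 1
    scores = {}
    for doc_id, text in CORPUS.items():
        tf = Counter(tokenize(text))
        score = 0.0
        for term in tokenize(query):
            if term in tf:
                # Smoothed inverse document frequency.
                idf = math.log((n_docs + 1) / (df[term] + 1)) + 1
                score += tf[term] * idf
        scores[doc_id] = score
    # The system's output is a ranked list of document references,
    # not a direct answer to the query.
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = rank_documents("where is the eiffel tower")
```

A pre-trained language model, by contrast, would emit a prose answer directly, with no such ranked evidence attached; the paper's central question is how to combine the two behaviors.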