Beyond Goldfish Memory: Long-Term Open-Domain Conversation
Abstract
Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions. We show how models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find that retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state-of-the-art.

* We use this term colloquially, see