Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters
Top 10% of 2022 papers
Abstract
Natural Language Inference (NLI) has been extensively studied by the NLP community as a framework for estimating the semantic relation between sentence pairs. While early work identified certain biases in NLI models, recent advancements in modeling and datasets demonstrated promising performance. In this work, we further explore the direct zero-shot applicability of NLI models to real applications, beyond the sentence-pair setting they were trained on. First, we analyze the robustness of these models to longer and out-of-domain inputs. Then, we develop new aggregation methods to allow operating over full documents, reaching state-of-the-art performance on the ContractNLI dataset. Interestingly, we find NLI scores to provide strong retrieval signals, leading to more relevant evidence extractions compared to common similarity-based methods. Finally, we go further and investigate whole document clusters to identify both discrepancies and consensus among sources. In a test case, we find real inconsistencies between Wikipedia pages in different languages about the same topic.
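The aggregation idea can be illustrated with a minimal sketch: score a hypothesis against each sentence of a document with a sentence-pair NLI model, then aggregate the per-sentence label probabilities into a document-level decision, with the maximizing sentence serving as extracted evidence. The function name, the max-based aggregation strategy, and the 0.5 threshold below are illustrative assumptions, not the paper's exact method; the per-sentence scores stand in for an off-the-shelf NLI model's output.

```python
# Hedged sketch of document-level aggregation of sentence-pair NLI scores.
# Assumption: an NLI model has already scored the hypothesis against each
# premise sentence; `nli_scores` holds per-sentence
# (entailment, neutral, contradiction) probabilities. Names are illustrative.

def aggregate_document(nli_scores):
    """Aggregate per-sentence NLI probabilities into a document-level label.

    One simple strategy: take the max entailment and max contradiction
    probability over sentences; if neither clears a threshold, the document
    is neutral, otherwise the stronger of the two wins. The maximizing
    sentence doubles as the retrieved evidence.
    """
    best_ent = max(range(len(nli_scores)), key=lambda i: nli_scores[i][0])
    best_con = max(range(len(nli_scores)), key=lambda i: nli_scores[i][2])
    ent_p = nli_scores[best_ent][0]
    con_p = nli_scores[best_con][2]
    if max(ent_p, con_p) < 0.5:  # illustrative threshold
        return "neutral", None
    if ent_p >= con_p:
        return "entailment", best_ent
    return "contradiction", best_con

# Per-sentence (entailment, neutral, contradiction) scores for a 3-sentence doc
scores = [(0.05, 0.90, 0.05), (0.85, 0.10, 0.05), (0.10, 0.70, 0.20)]
label, evidence_idx = aggregate_document(scores)
# -> label == "entailment", evidence_idx == 1 (sentence 1 is the evidence)
```

The same per-sentence scores that drive the label also rank sentences for retrieval, which is why NLI scores can serve as an evidence-extraction signal.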