DisSent: Learning Sentence Representations from Explicit Discourse Relations
Abstract
Learning effective representations of sentences is one of the core missions of natural language understanding. Existing models either train on vast amounts of text or require costly, manually curated sentence relation datasets. We show that with dependency parsing and rule-based rubrics, we can curate a high-quality sentence relation task by leveraging explicit discourse relations. We show that our curated dataset provides an excellent signal for learning vector representations of sentence meaning, representing relations that can only be determined when the meanings of two sentences are combined. We demonstrate that the automatically curated corpus allows a bidirectional LSTM sentence encoder to yield high-quality sentence embeddings and can serve as a supervised fine-tuning dataset for larger models such as BERT. Our fixed sentence embeddings achieve high performance on a variety of transfer tasks, including SentEval, and we achieve state-of-the-art results on the Penn Discourse Treebank's implicit relation prediction task.
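The extraction step described above is straightforward to prototype: an explicit discourse marker such as "because" links two clauses, and the marker itself becomes the classification label for the resulting sentence pair. The sketch below illustrates one such extraction rule using spaCy's dependency parser; the marker list, the `mark`-dependency heuristic, and the `extract_pairs` helper are illustrative assumptions rather than the paper's actual rubrics, which cover many more markers and dependency patterns.

```python
# A minimal sketch (not the paper's exact rubric) of extracting sentence-pair
# training examples from explicit discourse markers. Only subordinating
# markers attached via the 'mark' dependency are handled here.
import spacy

# Small, illustrative subset of subordinating discourse markers.
MARKERS = {"because", "although", "if", "when", "while", "after", "before"}

nlp = spacy.load("en_core_web_sm")

def extract_pairs(text):
    """Yield (s1, marker, s2) triples where a subordinating discourse
    marker introduces the clause s2."""
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.lower_ in MARKERS and tok.dep_ == "mark":
                # The marker's head is the verb of the subordinate clause;
                # its subtree spans that clause (minus the marker itself).
                clause = {t.i for t in tok.head.subtree} - {tok.i}
                s2 = " ".join(t.text for t in sent
                              if t.i in clause and not t.is_punct)
                s1 = " ".join(t.text for t in sent
                              if t.i not in clause and t.i != tok.i
                              and not t.is_punct)
                if s1 and s2:
                    yield s1, tok.lower_, s2

for triple in extract_pairs("She stayed home because it was raining."):
    print(triple)  # ('She stayed home', 'because', 'it was raining')
```

Each extracted triple then becomes a supervised training example: the encoder reads the two clauses, and a classifier predicts which discourse marker connected them.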