Extracting Multiple Relations in One-Pass with Pre-Trained Transformers
Abstract
Many approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive, making it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple relation extraction by encoding the paragraph only once. We build our solution on pre-trained self-attentive (Transformer) models, where we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding with entity-aware attention to capture the multiple relational information associated with each entity. We show that our approach is not only scalable but also achieves state-of-the-art performance on the standard ACE 2005 benchmark.
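To make the one-pass idea concrete, here is a minimal PyTorch sketch. The tiny from-scratch encoder, mean-pooled entity spans, and the concatenation-based pair head are all illustrative assumptions, not the paper's architecture (which builds on pre-trained BERT with entity-aware attention). What the sketch does capture is the core design: the paragraph is encoded once, and every entity pair is classified from that shared encoding.

```python
import torch
import torch.nn as nn

class OnePassRelationExtractor(nn.Module):
    """Hypothetical sketch: encode a paragraph once, then classify
    every entity pair from the shared token representations."""

    def __init__(self, vocab_size=30522, hidden=256, num_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Prediction head scoring a relation label for each entity pair
        # (a stand-in for the paper's structured prediction layer).
        self.head = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids, entity_spans, pairs):
        # Single forward pass over the whole paragraph: (1, seq, hidden).
        h = self.encoder(self.embed(token_ids))
        # Mean-pool each entity's token representations.
        ents = [h[0, s:e].mean(dim=0) for s, e in entity_spans]
        # Score every entity pair from the same shared encoding.
        logits = torch.stack(
            [self.head(torch.cat([ents[i], ents[j]])) for i, j in pairs])
        return logits  # (num_pairs, num_relations)

model = OnePassRelationExtractor()
tokens = torch.randint(0, 30522, (1, 40))   # one tokenized paragraph
spans = [(2, 4), (10, 12), (25, 27)]        # (start, end) of each entity
pairs = [(0, 1), (0, 2), (1, 2)]            # all entity pairs to classify
print(model(tokens, spans, pairs).shape)    # torch.Size([3, 7])
```

The property worth noting is that the encoder cost is paid once per paragraph regardless of how many entity pairs are scored, which is what makes the one-pass approach scale to long paragraphs containing many entities.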