DeepStruct: Pretraining of Language Models for Structure Prediction
Abstract
We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models on a collection of task-agnostic corpora to generate structures from text. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate.