Deep Semantic Role Labeling: What Works and What’s Next
Abstract
We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) there is still room for syntactic parsers to improve these results.
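The highway connections mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hedged NumPy example of a single highway layer (Srivastava et al., 2015), the gating mechanism used between stacked layers in highway LSTMs to ease gradient flow in deep stacks. The names `W_h`, `W_t`, `b_h`, and `b_t` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway connection: a transform gate t in (0, 1) mixes the
    layer's nonlinear output with its unchanged input ("carry" path),
    so gradients can bypass the nonlinearity in very deep stacks.
    Illustrative sketch only; parameter names are assumptions."""
    h = np.tanh(x @ W_h + b_h)      # candidate transformed output
    t = sigmoid(x @ W_t + b_t)      # transform gate
    return t * h + (1.0 - t) * x    # gated mix of transform and carry

# Tiny demo: with the gate bias strongly negative, the gate stays near
# zero and the layer approximates the identity (carry path dominates).
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal((2, d))
W_h = rng.standard_normal((d, d)) * 0.1
W_t = rng.standard_normal((d, d)) * 0.1
y = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -10.0))
print(np.allclose(y, x, atol=1e-3))
```

Initializing the transform-gate bias negative in this way is a common trick for deep highway stacks: training starts near the identity function and the network learns how much transformation each layer should apply.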