A Fast and Accurate Dependency Parser using Neural Networks
EMNLP 2014, pp. 740–750
Top 1% of 2014 papers by citations
Abstract
Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an improvement of about 2% in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.
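To make the abstract's setting concrete, the sketch below shows the arc-standard transition system that underlies greedy, transition-based parsers of this kind: a stack, a buffer, and three transitions (SHIFT, LEFT-ARC, RIGHT-ARC) that incrementally build a dependency tree. In the paper's parser a neural network classifier chooses each transition; here a fixed action sequence stands in for the classifier, and all names (`parse`, the action strings) are illustrative rather than taken from the paper's code.

```python
def parse(words, actions):
    """Apply arc-standard transitions; return the head index of each word.

    Word indices are 1-based; index 0 is the artificial ROOT node.
    heads[i] gives the head of word i.
    """
    stack = [0]                            # stack starts with ROOT
    buffer = list(range(1, len(words) + 1))  # words not yet shifted
    heads = {}
    for act in actions:
        if act == "SHIFT":
            # Move the next buffer word onto the stack.
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":
            # Second-topmost stack word becomes a dependent of the top word.
            dep = stack.pop(-2)
            heads[dep] = stack[-1]
        elif act == "RIGHT-ARC":
            # Topmost stack word becomes a dependent of the word below it.
            dep = stack.pop()
            heads[dep] = stack[-1]
    return heads

# "He has good control": "has" is the root; "He" and "control" attach to
# "has", and "good" attaches to "control".
words = ["He", "has", "good", "control"]
actions = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "SHIFT",
           "LEFT-ARC", "RIGHT-ARC", "RIGHT-ARC"]
print(parse(words, actions))  # → {1: 2, 3: 4, 4: 2, 2: 0}
```

A greedy parser replaces the fixed `actions` list with a classifier call at each step; the paper's contribution is scoring those transitions with a neural network over a small set of dense features instead of millions of sparse indicator features.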
Related Papers
- Exploiting Synergies Between Open Resources for German Dependency Parsing, POS-tagging, and Morphological Analysis (2013), 44 citations
- Easy-First Chinese POS Tagging and Dependency Parsing (2012)
- From ranked words to dependency trees: two-stage unsupervised non-projective dependency parsing (2011), 4 citations
- Factors influencing dependency parsing of coordinating structure (2009), 1 citation
- A simulated shallow dependency parser based on weighted hierarchical structure learning (2008)