Three new probabilistic models for dependency parsing
Abstract
After presenting a novel O(n³) parsing algorithm for dependency grammar, we develop three contrasting ways to stochasticize it. We propose (a) a lexical affinity model where words struggle to modify each other, (b) a sense tagging model where words fluctuate randomly in their selectional preferences, and (c) a generative model where the speaker fleshes out each word's syntactic and conceptual structure without regard to the implications for the hearer. We also give preliminary empirical results from evaluating the three models' parsing performance on annotated Wall Street Journal training text (derived from the Penn Treebank). In these results, the generative model performs significantly better than the others, and does about equally well at assigning part-of-speech tags.
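The O(n³) algorithm referred to in the abstract is a span-based dynamic program over adjacent substrings. A minimal sketch of that idea, assuming a simple arc-scoring setup (the function name, score matrix, and artificial ROOT token are illustrative, not the paper's probabilistic models): each span is either "complete" (its head has collected all dependents on that side) or "incomplete" (an arc between the endpoints is still open), and spans combine only at a split point, giving O(n³) time.

```python
def eisner_best_score(scores):
    """Best projective dependency tree score, Eisner-style O(n^3) DP.

    scores[h][d] is the (hypothetical) score of an arc h -> d.
    Token 0 is an artificial ROOT that can only act as a head.
    """
    n = len(scores)
    NEG = float("-inf")
    # C[s][t][d]: best "complete" span s..t, headed at s (d=1) or t (d=0),
    #             with every other word in the span fully attached.
    # I[s][t][d]: best "incomplete" span, where the arc between the
    #             endpoints (s->t if d=1, t->s if d=0) was just added.
    C = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    I = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i][0] = C[i][i][1] = 0.0
    for k in range(1, n):                  # span length
        for s in range(n - k):
            t = s + k
            # Join two back-to-back complete half-spans and add one arc.
            best = max(C[s][r][1] + C[r + 1][t][0] for r in range(s, t))
            I[s][t][0] = best + scores[t][s]   # arc t -> s
            I[s][t][1] = best + scores[s][t]   # arc s -> t
            # Absorb an incomplete span into a larger complete one.
            C[s][t][0] = max(C[s][r][0] + I[r][t][0] for r in range(s, t))
            C[s][t][1] = max(I[s][r][1] + C[r][t][1]
                             for r in range(s + 1, t + 1))
    return C[0][n - 1][1]                  # whole sentence headed by ROOT

# Illustrative example: ROOT(0), w1, w2, with arcs favoring 0 -> 1 -> 2.
NEG = float("-inf")
S = [[NEG, 10.0, 5.0],
     [NEG, NEG, 20.0],
     [NEG, 8.0, NEG]]
print(eisner_best_score(S))  # 30.0 (arcs 0->1 and 1->2)
```

The same chart, with backpointers at each `max` and probabilities in place of arbitrary scores, supports the three stochastic models the abstract contrasts; this sketch only recovers the best tree score.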