Learning accurate, compact, and interpretable tree annotation
Abstract
We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple X-bar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various nonterminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. At the same time, our grammars are much more compact and substantially more accurate than those produced by previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.
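The split half of the split-merge cycle can be illustrated with a small sketch. This is not the paper's implementation: it assumes a toy PCFG stored as a dict mapping (lhs, rhs) rules to probabilities, splits every symbol in two, and adds a small random perturbation so that a subsequent EM pass could break the symmetry between the new subsymbols (the merge step, which would undo splits that do not raise likelihood, is omitted).

```python
import random

def split_nonterminals(rules, perturb=0.01, seed=0):
    """Split each symbol A into subsymbols A_0 and A_1.

    Every rule is copied to all combinations of subsymbols, with the
    original probability mass shared among the expanded right-hand sides
    and a small random perturbation added so EM can differentiate the
    subsymbols. Toy simplification: all symbols are treated as splittable
    nonterminals.
    """
    rng = random.Random(seed)
    new_rules = {}
    for (lhs, rhs), prob in rules.items():
        # enumerate every combination of subsymbols on the right-hand side
        combos = [[]]
        for sym in rhs:
            combos = [c + [f"{sym}_{j}"] for c in combos for j in (0, 1)]
        for i in (0, 1):  # both subsymbols of the left-hand side
            for combo in combos:
                p = prob / len(combos)          # share the mass evenly
                p *= 1 + perturb * (rng.random() - 0.5)  # break symmetry
                new_rules[(f"{lhs}_{i}", tuple(combo))] = p
    return new_rules

rules = {("S", ("NP", "VP")): 1.0}
split = split_nonterminals(rules)
print(len(split))  # 2 lhs subsymbols x 4 rhs combinations -> 8 rules
```

In the full procedure, repeated rounds of splitting, EM re-estimation, and merging let different symbols end up with different numbers of subsymbols, matching the "split to different degrees" behavior described in the abstract.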