Shallow parsing with conditional random fields
Abstract
Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models.
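The model behind these results is a linear-chain CRF, which scores a label sequence y for an input x as p(y|x) proportional to exp(sum over positions t and features k of lambda_k f_k(y_{t-1}, y_t, x, t)), trained by maximizing conditional log-likelihood. As a rough illustration only (not the authors' implementation), the sketch below trains such a chunker on the CoNLL-2000 base-NP data with L-BFGS, one of the modern optimization methods the paper evaluates, using the third-party nltk and sklearn-crfsuite packages and a deliberately minimal feature set compared with the paper's.

```python
# Minimal sketch of linear-chain CRF training for base NP chunking.
# Uses sklearn-crfsuite (L-BFGS training) and the CoNLL-2000 corpus
# via nltk; this is an illustrative stand-in, not the paper's code,
# and the feature set is far smaller than the one reported there.
import nltk
import sklearn_crfsuite

nltk.download("conll2000", quiet=True)
from nltk.corpus import conll2000
from nltk.chunk import tree2conlltags

def word_features(sent, i):
    """Per-token features from the word and POS tag plus a one-token
    window of context; the paper uses a much richer feature set."""
    word, pos = sent[i][0], sent[i][1]
    feats = {
        "word.lower": word.lower(),
        "pos": pos,
        "pos[:2]": pos[:2],
    }
    if i > 0:
        feats["-1:pos"] = sent[i - 1][1]
    else:
        feats["BOS"] = True  # beginning-of-sentence marker
    if i < len(sent) - 1:
        feats["+1:pos"] = sent[i + 1][1]
    else:
        feats["EOS"] = True  # end-of-sentence marker
    return feats

# CoNLL-2000 chunk trees -> (word, POS, chunk-tag) triples, NP chunks only.
train_sents = [tree2conlltags(t)
               for t in conll2000.chunked_sents("train.txt",
                                                chunk_types=["NP"])]
X_train = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
y_train = [[tag for _, _, tag in s] for s in train_sents]

# L-BFGS with L1/L2 regularization (c1, c2 chosen arbitrarily here).
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
```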