Incorporating non-local information into information extraction systems by Gibbs sampling
Top 1% of 2005 papers by citations
Abstract
Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.
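As a minimal sketch of the idea in the abstract, the following toy example (all scores, pairings, and bonuses are hypothetical, not taken from the paper) runs Gibbs sampling with a simulated-annealing temperature schedule over a small sequence-labeling problem. Local per-token scores play the role of a sequence model's local features, while a non-local "label consistency" bonus rewards assigning the same label to tokens assumed to mention the same entity, which is why exact Viterbi decoding would not apply directly:

```python
import math
import random

random.seed(0)

# Toy sequence labeling: 2 labels, length-6 sequence.
# Local (emission-like) scores are made-up numbers for illustration.
LABELS = [0, 1]
local_score = [
    [2.0, 0.1], [1.5, 0.2], [0.4, 0.5],   # tokens 0-2: 0 and 1 lean to label 0
    [0.3, 1.8], [0.2, 1.9], [0.6, 0.5],   # tokens 3-5: 3 and 4 lean to label 1
]
# Hypothetical non-local constraint: these token pairs "mention the same
# entity" and should receive the same label (a stand-in for the paper's
# label-consistency model).
CONSISTENT_PAIRS = [(2, 0), (5, 3)]
CONSISTENCY_BONUS = 2.0

def score(labels):
    """Unnormalized log-score: local terms plus non-local consistency terms."""
    s = sum(local_score[i][y] for i, y in enumerate(labels))
    s += sum(CONSISTENCY_BONUS for a, b in CONSISTENT_PAIRS if labels[a] == labels[b])
    return s

def gibbs_anneal(n_sweeps=200):
    """Gibbs sampling with a cooling schedule; low temperature approaches MAP."""
    labels = [random.choice(LABELS) for _ in local_score]
    for sweep in range(n_sweeps):
        temp = max(0.05, 1.0 - sweep / n_sweeps)  # simple linear cooling
        for i in range(len(labels)):
            # Sample labels[i] from its conditional given all other labels.
            weights = []
            for y in LABELS:
                labels[i] = y
                weights.append(math.exp(score(labels) / temp))
            r = random.random() * sum(weights)
            acc = 0.0
            for y, w in zip(LABELS, weights):
                acc += w
                if r <= acc:
                    labels[i] = y
                    break
    return labels

best = gibbs_anneal()
```

Note that tokens 2 and 5 have local scores that slightly prefer the "wrong" label; the non-local consistency bonus pulls them into agreement with tokens 0 and 3, which a purely local Viterbi decode could not do.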
Related Papers
- Gesture Classification Using Hidden Markov Models and Viterbi Path Counting (2003)
- A Viterbi algorithm for a trajectory model derived from HMM with explicit relationship between static and dynamic features (2004)
- A Layered Hidden Markov Model for Predicting Human Trajectories in a Multi-floor Building (2015)
- HMM with global path constraint in Viterbi decoding for isolated word recognition (2002)
- Information Extraction from Chinese Papers Based on Hidden Markov Model (2013)