Dynamic conditional random fields
Top 1% of 2004 papers by citations
Abstract
In sequence modeling, we often wish to represent complex interactions among labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges---a distributed state representation as in dynamic Bayesian networks (DBNs)---and parameters are tied across slices. Since exact inference can be intractable in such models, we perform approximate inference using several schedules for belief propagation, including tree-based reparameterization (TRP). On a natural-language chunking task, we show that a DCRF performs better than a series of linear-chain CRFs, achieving comparable performance using only half the training data.
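To make the model structure concrete, here is a minimal toy sketch of a two-chain (factorial) DCRF scoring function, not the paper's implementation. The label sets (`CHAIN_A`, `CHAIN_B`) and the uniform potential values are hypothetical; the point is the structure the abstract describes: each time slice holds one variable per chain connected by a within-slice edge, each chain has its own transition edges, and the same potential tables are reused (tied) at every slice. The partition function is computed by exact brute-force enumeration here, which is exponential in the sequence length; this intractability is exactly why the paper resorts to approximate belief-propagation schedules such as TRP.

```python
# Toy factorial-DCRF sketch (illustrative; label sets and potentials are made up).
import itertools
import math

# Assumed toy label sets for the two chains (e.g. POS-like and chunk-like tags).
CHAIN_A = ["N", "V"]
CHAIN_B = ["B", "I", "O"]

# Tied log-potentials: the same tables are reused at every time slice.
trans_a = {(a1, a2): 0.1 for a1 in CHAIN_A for a2 in CHAIN_A}
trans_b = {(b1, b2): 0.1 for b1 in CHAIN_B for b2 in CHAIN_B}
cotemporal = {(a, b): 0.2 for a in CHAIN_A for b in CHAIN_B}

def score(ys_a, ys_b):
    """Unnormalized log-score of a joint labeling of both chains."""
    s = 0.0
    for t in range(len(ys_a)):
        s += cotemporal[(ys_a[t], ys_b[t])]        # within-slice edge
        if t > 0:
            s += trans_a[(ys_a[t - 1], ys_a[t])]   # chain-A transition edge
            s += trans_b[(ys_b[t - 1], ys_b[t])]   # chain-B transition edge
    return s

def log_partition(T):
    """Exact log Z by enumerating all joint labelings of length T.

    Exponential in T -- this is the intractability that motivates
    approximate inference (BP schedules such as TRP) in real DCRFs.
    """
    total = 0.0
    for ys_a in itertools.product(CHAIN_A, repeat=T):
        for ys_b in itertools.product(CHAIN_B, repeat=T):
            total += math.exp(score(ys_a, ys_b))
    return math.log(total)

logZ = log_partition(3)
# P(labeling) = exp(score(ys_a, ys_b) - logZ); summing over all labelings gives 1.
```

In a trained model the potentials would also condition on the observed input and be learned by maximizing conditional likelihood; the sketch fixes them to constants purely to show the graph structure and parameter tying.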
Related Papers
- Hybrid semi-Markov CRF for Neural Sequence Labeling (2018), 56 citations
- Upgrading CRFS to JRFS and its Benefits to Sequence Modeling and Labeling (2020), 5 citations
- Linear Co-occurrence Rate Networks (L-CRNs) for Sequence Labeling (2014), 2 citations