Modeling for text compression
Citations over time: top 10% of 1989 papers
Abstract
The best schemes for text compression use large models to help them predict which characters will come next. The actual next characters are coded with respect to the prediction, resulting in compression of information. Models are best formed adaptively, based on the text seen so far. This paper surveys successful strategies for adaptive modeling that are suitable for use in practical text compression systems. The strategies fall into three main classes: finite-context modeling, in which the last few characters are used to condition the probability distribution for the next one; finite-state modeling, in which the distribution is conditioned by the current state (and which subsumes finite-context modeling as an important special case); and dictionary modeling, in which strings of characters are replaced by pointers into an evolving dictionary. A comparison of different methods on the same sample texts is included, along with an analysis of future research directions.
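To make the first strategy concrete, below is a minimal sketch (not from the paper) of adaptive finite-context modeling: an order-1 model in which the previous character conditions the probability distribution for the next one, with counts updated adaptively as text is seen. The class name, the byte-sized alphabet, and the Laplace smoothing are illustrative assumptions; a real compressor would feed these probabilities to an arithmetic coder.

```python
from collections import defaultdict

class Order1Model:
    """Adaptive order-1 finite-context model (illustrative sketch):
    the previous character conditions the next character's distribution."""

    def __init__(self):
        # counts[context][symbol] = times `symbol` followed `context` so far
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, context, symbol):
        """Laplace-smoothed estimate of P(symbol | context).
        Smoothing guarantees every symbol gets nonzero probability,
        which a coder needs for symbols not yet seen in this context."""
        ctx = self.counts[context]
        total = sum(ctx.values())
        alphabet_size = 256  # assumption: byte alphabet
        return (ctx[symbol] + 1) / (total + alphabet_size)

    def update(self, context, symbol):
        """Adapt the model after each character, as in adaptive coding."""
        self.counts[context][symbol] += 1

def train(model, text):
    # Scan the text once, updating the model on each (context, next) pair.
    for prev, cur in zip(text, text[1:]):
        model.update(prev, cur)

model = Order1Model()
train(model, "abracadabra")
# After adaptation, 'b' should be judged more likely than 'z' after 'a',
# so an entropy coder would spend fewer bits encoding it.
p_b = model.prob('a', 'b')
p_z = model.prob('a', 'z')
```

Higher-order models condition on the last few characters instead of just one; the paper's point is that better predictions translate directly into shorter codes for the characters that actually occur.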
Related Papers
- Automatic Labeling of Semantic Roles with a Dependency Parser in Hungarian Economic Texts (2015), cited 1 time
- A Framework for Language Resource Construction and Syntactic Analysis: Case of Arabic (2018), cited 1 time
- Morphological and Syntactic Processing for Text Retrieval (2004), cited 8 times
- Syntactic Parsing based on Phrase Structure in Natural Language Processing (2009)
- Exploiting the Translation Context for Multilingual WSD (2008)