Masked Language Modeling and the Distributional Hypothesis: Word Order Matters Pre-training for Little
Abstract
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in classical NLP pipelines. In this paper, we propose a different explanation: MLMs succeed on downstream tasks mostly due to their ability to model higher-order word co-occurrence statistics. To demonstrate this, we pre-train MLMs on sentences with randomly shuffled word order, and we show that these models still achieve high accuracy after fine-tuning on many downstream tasks, including tasks specifically designed to be challenging for models that ignore word order. Our models also perform surprisingly well according to some parametric syntactic probes, indicating possible deficiencies in how we test representations for syntactic information. Overall, our results show that purely distributional information largely explains the success of pre-training, and they underscore the importance of curating challenging evaluation datasets that require deeper linguistic knowledge.
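The word-order perturbation described above can be illustrated with a minimal sketch of sentence-level word shuffling. This is an assumption-laden toy version (whitespace tokenization, a seeded `random.Random` for reproducibility); the function name `shuffle_words` is illustrative and not from the paper, which also studies n-gram-level shuffling variants.

```python
import random

def shuffle_words(sentence: str, seed: int = 0) -> str:
    # Toy illustration of the shuffling perturbation: permute the words of a
    # sentence uniformly at random, destroying word order while preserving the
    # bag-of-words (and hence the distributional co-occurrence statistics
    # available within the sentence).
    words = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

# A shuffled corpus like this would replace the natural-order corpus during
# MLM pre-training; fine-tuning and evaluation then proceed as usual.
corpus = [
    "the cat sat on the mat",
    "masked language models capture cooccurrence statistics",
]
shuffled_corpus = [shuffle_words(s, seed=i) for i, s in enumerate(corpus)]
```

Because shuffling preserves each sentence's multiset of words, any remaining downstream accuracy can be attributed to distributional rather than order-dependent (syntactic) information.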