Contextual Embeddings: When Are They Worth It?
ACL 2020, pp. 2650–2663
Abstract
We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and to an even simpler baseline (random word embeddings), focusing on the impact of training set size and the linguistic properties of the task. Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) of them on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.
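To make the two simpler baselines concrete: both GloVe and the random baseline reduce to a fixed lookup table of word vectors, differing only in how the table is initialized. Below is a minimal, hypothetical PyTorch sketch of such a frozen embedding layer; it is not the authors' implementation, and the names `make_embedding`, `vocab`, and `loaded_glove` are illustrative.

```python
import torch
import torch.nn as nn

def make_embedding(itos, dim=300, glove=None, seed=0):
    """Build a frozen embedding table: random if `glove` is None,
    otherwise initialized from pretrained GloVe vectors where available."""
    torch.manual_seed(seed)
    weight = torch.randn(len(itos), dim) * 0.1   # random-baseline initialization
    if glove is not None:
        for i, tok in enumerate(itos):
            vec = glove.get(tok)                 # glove: dict of token -> vector
            if vec is not None:
                weight[i] = torch.as_tensor(vec) # copy the pretrained vector
    # freeze=True keeps the embeddings fixed, matching a non-finetuned baseline
    return nn.Embedding.from_pretrained(weight, freeze=True)

# Usage: the downstream model is identical; only the embedding layer differs.
vocab = ["<unk>", "the", "bank", "river"]
rand_emb = make_embedding(vocab)                          # random baseline
# glove_emb = make_embedding(vocab, glove=loaded_glove)   # GloVe baseline
```

With this setup, the comparison in the paper amounts to swapping the embedding layer while holding the task model and training data fixed.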