Improving Distributional Similarity with Lessons Learned from Word Embeddings
Transactions of the Association for Computational Linguistics, 2015, Vol. 3, pp. 211–225
Top 1% of 2015 papers (by citations)
Abstract
Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.
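To make the abstract's claim concrete, here is a minimal sketch of two of the hyperparameters the paper transfers from word2vec to a count-based model: context distribution smoothing (raising context counts to a power, typically 0.75) and the PMI shift by log k (k playing the role of word2vec's number of negative samples). The function name and the toy matrix below are illustrative choices, not code from the paper.

```python
import numpy as np

def shifted_ppmi(counts, alpha=0.75, k=1):
    """Shifted positive PMI over a word-by-context co-occurrence matrix.

    alpha: context distribution smoothing exponent, borrowed from
           word2vec's smoothed negative-sampling distribution.
    k:     shift subtracted as log(k), analogous to word2vec's number
           of negative samples (k=1 gives plain PPMI).
    """
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_wc = counts / total                         # joint probabilities
    p_w = counts.sum(axis=1, keepdims=True) / total
    # Smooth the context distribution: raise counts to alpha, renormalize.
    ctx = counts.sum(axis=0) ** alpha
    p_c = ctx / ctx.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c)) - np.log(k)
    pmi[~np.isfinite(pmi)] = 0.0                  # zero out unseen pairs
    return np.maximum(pmi, 0.0)                   # positive part

# Toy co-occurrence matrix: 2 words x 3 contexts.
M = shifted_ppmi([[2, 1, 0], [0, 1, 3]], alpha=0.75, k=5)
```

With alpha below 1, rare contexts get relatively more probability mass, which damps PMI's known bias toward rare events; increasing k prunes weak associations, mirroring word2vec's behavior with more negative samples.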