Visual Exploration of Semantic Relationships in Neural Word Embeddings
Top 1% of 2017 papers by citations
Abstract
Constructing distributed representations of words with neural language models, and analyzing the resulting vector spaces, has become a crucial component of natural language processing (NLP). Despite their widespread use, however, little is known about the structure and properties of these spaces. To gain insight into the relationships between words, the NLP community has begun to adopt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embedding (t-SNE) to assess overall structure and principal component analysis (PCA) to explore linear relationships (e.g., word analogies) in two-dimensional projections. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, together with tests to determine whether the resulting views capture salient structure. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings with uncertainty information to support reliable interpretation. Combined, these views address a number of domain-specific tasks that are difficult to solve with existing tools.
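The abstract contrasts t-SNE and PCA as the standard two-dimensional projections for word embeddings. As a minimal sketch of the PCA-based view commonly used for analogy exploration, the following NumPy-only snippet centers a set of word vectors and projects them onto their two leading principal components via SVD. The 5-dimensional "embeddings" and word list here are invented toy data for illustration, not the paper's actual vectors.

```python
import numpy as np

def pca_2d(vectors):
    """Project word vectors onto their two leading principal components."""
    X = vectors - vectors.mean(axis=0)          # center the data
    # SVD of the centered matrix; rows of Vt are the principal directions,
    # ordered by explained variance
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T

# Toy 5-D "embeddings" for four words (random, for illustration only)
rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman"]
emb = rng.normal(size=(len(words), 5))

proj = pca_2d(emb)
print(proj.shape)  # one 2-D point per word: (4, 2)
```

In a real analogy view one would plot `proj` and inspect whether offsets such as king − man and queen − woman appear roughly parallel; the paper argues this kind of linear projection can be misleading, which motivates its proposed alternatives.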