Interpretable Semantic Vectors from a Joint Model of Brain- and Text-Based Meaning
Top 11% of 2014 papers by citations
Abstract
Vector space models (VSMs) represent word meanings as points in a high-dimensional space. VSMs are typically created from large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model 1) matches a behavioral measure of semantics more closely, 2) can be used to predict corpus data for unseen words, and 3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies.
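The core idea of the abstract — a shared latent representation fit jointly to a corpus view and a brain-activation view of the same words — can be illustrated with a toy factorization. This is a minimal sketch, not the authors' JNNSE algorithm: it assumes two synthetic feature matrices (`X1` for corpus features, `X2` for brain features) and fits a shared non-negative embedding `A` with per-view dictionaries `D1`, `D2` by alternating least squares with clipping, whereas the actual method uses sparse non-negative matrix factorization with additional constraints.

```python
import numpy as np

# Two views of word meaning over the same vocabulary:
#   X1 ~= A @ D1   (corpus co-occurrence features)
#   X2 ~= A @ D2   (brain activation features)
# A is the shared word-by-latent-dimension embedding.
rng = np.random.default_rng(0)

n_words, d1, d2, k = 20, 30, 15, 5
A_true = np.abs(rng.normal(size=(n_words, k)))
X1 = A_true @ np.abs(rng.normal(size=(k, d1)))   # corpus view
X2 = A_true @ np.abs(rng.normal(size=(k, d2)))   # brain view

# Projected alternating least squares: solve each factor in closed
# form, then clip to enforce non-negativity.
A = np.abs(rng.normal(size=(n_words, k)))
for _ in range(200):
    # Per-view dictionaries given the shared embedding A.
    D1 = np.clip(np.linalg.lstsq(A, X1, rcond=None)[0], 0, None)
    D2 = np.clip(np.linalg.lstsq(A, X2, rcond=None)[0], 0, None)
    # Shared embedding A given both views (stack the views so one
    # solve uses corpus and brain evidence simultaneously).
    D = np.hstack([D1, D2])
    X = np.hstack([X1, X2])
    A = np.clip(np.linalg.lstsq(D.T, X.T, rcond=None)[0].T, 0, None)

# Relative reconstruction error on the corpus view.
err = np.linalg.norm(X1 - A @ D1) / np.linalg.norm(X1)
print(A.shape, round(err, 3))
```

Because `A` must reconstruct both views at once, information from the brain view shapes the embedding even when the corpus view is ambiguous — this is the complementarity the abstract refers to.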