Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification
Top 1% of 2014 papers by citations
Abstract
In this paper, we present a method that learns word embeddings for Twitter sentiment classification. Most existing algorithms for learning continuous word representations model only the syntactic context of words and ignore the sentiment of the text. This is problematic for sentiment analysis because such methods map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentiment-specific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks that effectively incorporate the supervision from the sentiment polarity of text (e.g., sentences or tweets) into their loss functions. To obtain large-scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected using positive and negative emoticons. Experiments applying SSWE to a benchmark Twitter sentiment classification dataset from SemEval 2013 show that (1) the SSWE feature performs comparably to the hand-crafted features in the top-performing system, and (2) performance is further improved by concatenating SSWE with an existing feature set.
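The abstract describes loss functions that mix a syntactic (context) objective with a sentiment objective supervised by tweet polarity. The sketch below is an illustrative reconstruction of such a combined hinge-style ranking loss, not the authors' released code; the function name, argument names, and the interpolation weight `alpha` are assumptions chosen for clarity, with scalar scores standing in for the neural network outputs on an original n-gram and a corrupted n-gram.

```python
def sswe_unified_loss(f_syn_orig, f_syn_corrupt,
                      f_sent_orig, f_sent_corrupt,
                      gold_polarity, alpha=0.5):
    """Illustrative combined ranking loss (assumed form, not the paper's code).

    f_syn_*  : syntactic scores for the original / corrupted n-gram
    f_sent_* : sentiment scores for the original / corrupted n-gram
    gold_polarity : +1 for a positive tweet, -1 for a negative one
    alpha    : weight trading off the two objectives
    """
    # Syntactic ranking term: the true n-gram should outscore the corrupted one
    # by a margin of 1.
    loss_syn = max(0.0, 1.0 - f_syn_orig + f_syn_corrupt)
    # Sentiment ranking term: the score difference should agree with the
    # tweet's distant-supervised polarity label.
    loss_sent = max(0.0, 1.0 - gold_polarity * (f_sent_orig - f_sent_corrupt))
    # Linear interpolation between the two objectives.
    return alpha * loss_syn + (1.0 - alpha) * loss_sent
```

When both margins are satisfied the loss is zero, so gradients flow only through examples the model still ranks incorrectly, which is the usual behavior of hinge-style objectives.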
Related Papers
- → How to Generate a Good Word Embedding (2016), 316 citations
- → A Hybrid CNN-LSTM: A Deep Learning Approach for Consumer Sentiment Analysis Using Qualitative User-Generated Contents (2021), 135 citations
- → How to Generate a Good Word Embedding? (2017), 28 citations
- → Sentiment Analysis of Reviews Based on Deep Learning Model (2019), 20 citations
- → Deep Learning and Word Embeddings for Tweet Classification for Crisis Response (2019), 21 citations