Dual embedding with input embedding and output embedding for better word representation
Abstract
Recent studies on distributed vector representations have proposed a variety of ways to represent words. We propose several methods that combine the input embedding and the output embedding to represent words better than either single embedding alone. We compared performance on word analogy and word similarity tasks for the input embedding, the output embedding, and several dual embeddings formed by combining the two. The evaluation results show that the proposed dual embeddings outperform each single embedding, with the best results obtained by simply adding the input and output embeddings. This paper establishes two points: i) not only the input embedding but also the output embedding carries meaningful information for representing words, and ii) combining the input and output embeddings into a dual embedding outperforms using either embedding individually.
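The best-performing combination reported above is element-wise addition of the two embedding matrices. A minimal sketch of that idea, using randomly initialized NumPy matrices in place of a trained model (in practice the two matrices would be a word2vec model's input matrix and its output matrix used for negative sampling; the toy vocabulary and dimensions here are illustrative assumptions):

```python
import numpy as np

# Hypothetical toy vocabulary and randomly initialized embeddings;
# a real experiment would load these from a trained skip-gram model.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman"]
dim = 8
input_emb = rng.normal(size=(len(vocab), dim))   # "input" embedding matrix
output_emb = rng.normal(size=(len(vocab), dim))  # "output" embedding matrix

# Dual embedding via simple element-wise addition of the two matrices.
dual_emb = input_emb + output_emb

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word-similarity lookup now uses the combined (dual) space.
i, j = vocab.index("king"), vocab.index("queen")
sim = cosine(dual_emb[i], dual_emb[j])
```

With trained embeddings, the same `cosine` lookup over `dual_emb` is what the word-similarity and analogy evaluations would score.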