Network Embedding as Matrix Factorization
Abstract
Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
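To make the idea concrete, the sketch below illustrates the small-window flavor of NetMF as described in the abstract: build the closed-form matrix that DeepWalk implicitly factorizes, apply a truncated element-wise logarithm, and obtain the embedding from a rank-d SVD. This is a minimal, illustrative implementation, not the authors' code; parameter names (`window`, `neg`, `dim`) are assumptions, dense matrices are used for clarity, and the graph is assumed to have no isolated vertices.

```python
import numpy as np

def netmf_embedding(A, dim=128, window=10, neg=1.0):
    """Sketch of small-window NetMF: factorize the closed-form
    DeepWalk matrix with a truncated SVD.

    A      : dense symmetric adjacency matrix (n x n numpy array)
    dim    : embedding dimension (must be <= n)
    window : context window size T
    neg    : number of negative samples b
    """
    n = A.shape[0]
    vol = A.sum()                    # volume of the graph, vol(G)
    deg = A.sum(axis=1)              # degree vector (assumed strictly positive)
    D_inv = np.diag(1.0 / deg)
    P = D_inv @ A                    # random-walk transition matrix D^{-1} A

    # Sum of the first T powers of P (one term per window position).
    S = np.zeros_like(A, dtype=float)
    P_r = np.eye(n)
    for _ in range(window):
        P_r = P_r @ P
        S += P_r

    # Closed-form matrix that DeepWalk implicitly factorizes,
    # followed by the truncated element-wise logarithm.
    M = (vol / (neg * window)) * S @ D_inv
    M = np.log(np.maximum(M, 1.0))

    # Rank-d factorization via SVD; embedding = U_d * sqrt(Sigma_d).
    U, sigma, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(sigma[:dim])
```

For large windows or large graphs, the paper's approximation algorithm replaces the explicit sum of transition-matrix powers with a spectral approximation; the dense version above is only meant to show the structure of the factorization.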