GraphMAE: Self-Supervised Masked Graph Autoencoders
Top 1% of 2022 papers
Abstract
Self-supervised learning (SSL) has been extensively explored in recent years. In particular, generative SSL has seen emerging success in natural language processing and other fields, such as the wide adoption of BERT and GPT. Despite this, contrastive learning, which heavily relies on structural data augmentation and complicated training strategies, has been the dominant approach in graph SSL, while generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached its promised potential. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present a masked graph autoencoder, GraphMAE (code is publicly available at https://github.com/THUDM/GraphMAE), that mitigates these issues for generative self-supervised graph learning. Instead of reconstructing graph structure, we propose to focus on feature reconstruction with both a masking strategy and a scaled cosine error that benefit the robust training of GraphMAE. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms state-of-the-art contrastive and generative baselines. This study provides an understanding of graph autoencoders and demonstrates the potential of generative self-supervised learning on graphs.
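The scaled cosine error mentioned in the abstract can be sketched as follows: the reconstruction loss is computed only on masked nodes, as the cosine distance between original and reconstructed features raised to a scaling power. This is a minimal NumPy sketch based on the abstract's description; the function name, the default `gamma` value, and the epsilon for numerical stability are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def scaled_cosine_error(x, z, mask, gamma=2.0, eps=1e-8):
    """Scaled cosine error over masked nodes.

    x:     (N, d) original node features
    z:     (N, d) reconstructed node features from the decoder
    mask:  (N,) boolean array marking the nodes that were masked
    gamma: scaling factor (>= 1) that down-weights easy, well-
           reconstructed nodes relative to hard ones
    """
    x_m, z_m = x[mask], z[mask]
    # Cosine similarity per masked node; eps guards against zero norms.
    cos = np.sum(x_m * z_m, axis=1) / (
        np.linalg.norm(x_m, axis=1) * np.linalg.norm(z_m, axis=1) + eps
    )
    # Average scaled cosine distance over the masked nodes only.
    return float(np.mean((1.0 - cos) ** gamma))
```

With `gamma > 1`, nodes whose features are already reconstructed almost perfectly contribute a sharply diminished gradient, focusing training on the harder masked nodes.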