Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method
Top 10% of 2021 papers
Abstract
Graph Convolutional Networks (GCNs) are an emerging technique for information retrieval (IR) applications. While GCNs assume the homophily property of a graph, real-world graphs are never perfect: the local structure of a node may exhibit discrepancies, e.g., the labels of a node's neighbors can vary. This motivates us to account for local structure discrepancy in GCN modeling. Existing work approaches this issue by introducing an additional module such as graph attention, which is expected to learn the contribution of each neighbor. However, such a module may not work reliably as expected, especially when the supervision signal is weak, e.g., when labeled data is scarce. Moreover, existing methods focus on modeling the nodes in the training data and never consider the local structure discrepancy of test nodes.
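To make the setup concrete, the following is a minimal sketch of the kind of neighbor-attention module the abstract refers to: per-neighbor weights computed by softmax over learned attention logits, in the spirit of graph attention (GAT). All weights, dimensions, and the `attention_aggregate` helper are illustrative assumptions, not the paper's method.

```python
import numpy as np

def attention_aggregate(h, neighbors, a, W, node=0):
    """Aggregate one node's neighborhood with attention weights
    (GAT-style sketch; weights here are random, not trained)."""
    z = h @ W  # project node features: (num_nodes, d_out)
    # attention logit per neighbor from concatenated projected features
    scores = np.array(
        [np.concatenate([z[node], z[j]]) @ a for j in neighbors[node]]
    )
    # softmax over the neighborhood -> contribution of each neighbor
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()
    # attention-weighted sum of neighbor representations
    out = sum(w * z[j] for w, j in zip(alpha, neighbors[node]))
    return out, alpha

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))   # 4 nodes, 3 input features (toy graph)
W = rng.normal(size=(3, 2))   # projection to 2 output dims
a = rng.normal(size=(4,))     # attention vector over concatenated dims
neighbors = {0: [1, 2, 3]}    # node 0's neighborhood
out, alpha = attention_aggregate(h, neighbors, a, W)
print(out.shape, round(alpha.sum(), 6))
```

The point the abstract makes is that `alpha` must be learned from the node labels; with few labeled nodes there is little signal to push these weights away from uninformative values, which is why the attention module may not down-weight discrepant neighbors as intended.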