Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling
Citations over time: Top 10% of 2020 papers
Abstract
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties; they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a preprocessing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MAXCUT solution. The selected nodes are then connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is typically very dense, a sparsification procedure prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN; notably, many edges can be removed without significantly altering the graph structure. Experimental results show that NDP is more efficient than state-of-the-art graph pooling operators while achieving competitive performance on a wide variety of graph classification tasks.
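The three steps described in the abstract can be sketched in NumPy. This is an illustrative, simplified sketch for a single coarsening level, not the paper's implementation: it takes one side of a spectral MAXCUT-style partition (sign of the Laplacian's dominant eigenvector), applies Kron reduction to connect the kept nodes, and prunes weak edges. The function name `ndp_coarsen` and the pruning threshold `eps` are assumptions for illustration.

```python
import numpy as np

def ndp_coarsen(A, eps=1e-2):
    """One level of node-decimation-style coarsening (illustrative sketch).

    A:   dense symmetric adjacency matrix (NumPy array).
    eps: edge-pruning threshold for the sparsification step (assumed parameter).
    """
    d = A.sum(axis=1)
    L = np.diag(d) - A                       # combinatorial graph Laplacian

    # 1) Node decimation: split nodes by the sign of the eigenvector
    #    associated with the largest Laplacian eigenvalue, a spectral
    #    approximation of MAXCUT; keep the nodes on one side of the cut.
    _, V = np.linalg.eigh(L)                 # eigenvalues in ascending order
    v_max = V[:, -1]
    keep = np.flatnonzero(v_max >= 0)
    drop = np.flatnonzero(v_max < 0)

    # 2) Kron reduction: eliminate the dropped nodes while preserving the
    #    effective connectivity among the kept ones:
    #    L_new = L[keep,keep] - L[keep,drop] @ inv(L[drop,drop]) @ L[drop,keep]
    L_kk = L[np.ix_(keep, keep)]
    if len(drop):
        L_kd = L[np.ix_(keep, drop)]
        L_dd = L[np.ix_(drop, drop)]
        L_new = L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)
    else:
        L_new = L_kk

    # Recover an adjacency matrix from the reduced Laplacian
    # (off-diagonal entries of L are the negated edge weights).
    A_new = np.diag(np.diag(L_new)) - L_new
    A_new[A_new < 0] = 0.0                   # clip numerical noise

    # 3) Sparsification: the Kron-reduced graph is typically dense,
    #    so prune weak edges below the threshold.
    A_new[A_new < eps] = 0.0
    return A_new, keep
```

For example, coarsening a 4-node path graph keeps two alternating nodes (the dominant eigenvector of a path Laplacian alternates in sign) and connects them through the eliminated ones with a reduced weight.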