Code for Understanding Pooling in Graph Neural Networks

Overview

Select, Reduce, Connect

This repository contains the code used for the experiments of:

"Understanding Pooling in Graph Neural Networks"

Setup

Install TensorFlow and other dependencies:

pip install -r requirements.txt

Running experiments

Experiments are found in the following folders:

  • autoencoder/
  • spectral_similarity/
  • graph_classification/

Each folder has a bash script called run_all.sh that will reproduce the results reported in the paper.

To generate the plots and tables included in the paper, you can use the plots.py, plots_datasets.py, and tables.py scripts found in each folder.

To run experiments for an individual pooling operator, you can use the run_[OPERATOR NAME].py scripts in each folder.

The pooling operators that we used for the experiments are found in layers/ (trainable) and modules/ (non-trainable). The GNN architectures used in the experiments are found in models/.

The SRCPool class

The core of this repository is the SRCPool class that implements a general interface to create SRC pooling layers with the Keras API.

Our implementation of MinCutPool, DiffPool, LaPool, Top-K, and SAGPool using the SRCPool class can be found in src/layers.

In general, SRC layers compute:

S = SEL(G) = {S_k}_{k=1:K}
X' = {RED(G, S_k)}_{k=1:K}
E' = {CON(G, S_k, S_l)}_{k,l=1:K}

where SEL is a node-equivariant selection function that computes the supernode assignments S_k, RED is a permutation-invariant function that reduces the supernodes into the new node attributes, and CON is a permutation-invariant connection function that computes the links between the pooled nodes.
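
For intuition, the dense formulation used by layers like DiffPool and MinCutPool can be written with a cluster-assignment matrix S of shape (N, K). The snippet below is only an illustrative sketch (the random S and the plain NumPy matrix products are assumptions for the example, not the operators implemented in this repository):

import numpy as np

N, F, K = 6, 4, 2                       # input nodes, feature size, supernodes
X = np.random.rand(N, F)                # node features
A = np.random.rand(N, N)                # dense adjacency matrix
S = np.random.rand(N, K)                # select: soft assignment of each node to each supernode
S = S / S.sum(axis=-1, keepdims=True)   # normalize so every node's assignments sum to 1

X_pool = S.T @ X                        # reduce: aggregate node features into supernode features
A_pool = S.T @ A @ S                    # connect: induced adjacency between supernodes

print(X_pool.shape, A_pool.shape)       # (2, 4) and (2, 2), i.e. (K, F) and (K, K)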

By extending this class, it is possible to create any pooling layer in the SRC framework.

Input

  • X: Tensor of shape ([batch], N, F) representing node features;
  • A: Tensor or SparseTensor of shape ([batch], N, N) representing the adjacency matrix;
  • I: (optional) Tensor of integers with shape (N, ) representing the batch index;

Output

  • X_pool: Tensor of shape ([batch], K, F), representing the node features of the output. K is the number of output nodes and depends on the specific pooling strategy;
  • A_pool: Tensor or SparseTensor of shape ([batch], K, K) representing the adjacency matrix of the output;
  • I_pool: (only if I was given as input) Tensor of integers with shape (K, ) representing the batch index of the output;
  • S_pool: (if return_sel=True) Tensor or SparseTensor representing the supernode assignments;

API

  • pool(X, A, I, **kwargs): pools the graph and returns the reduced node features and adjacency matrix. If the batch index I is not None, a reduced version of I will be returned as well. Any given kwargs will be passed as keyword arguments to select(), reduce() and connect() if any matching key is found. The mandatory arguments of pool() (X, A, and I) must be computed in call() by calling self.get_inputs(inputs).
  • select(X, A, I, **kwargs): computes supernode assignments mapping the nodes of the input graph to the nodes of the output.
  • reduce(X, S, **kwargs): reduces the supernodes to form the nodes of the pooled graph.
  • connect(A, S, **kwargs): connects the reduced supernodes.
  • reduce_index(I, S, **kwargs): helper function to reduce the batch index (only called if I is given as input).

When overriding any function of the API, it is possible to access the true number of nodes of the input (N) as a Tensor in the instance variable self.N (this is populated by self.get_inputs() at the beginning of call()).
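
As a concrete illustration, here is a minimal hypothetical subclass. The layer name (AveragePool), the import path, and the dense-adjacency, single-graph assumption are made up for the example; it is only meant to show where select(), reduce(), and connect() fit, following the pool()/get_inputs() contract described above (whether call() needs to be overridden depends on the base implementation).

import tensorflow as tf
from layers import SRCPool  # assumed import path; adjust to where SRCPool lives in this repository


class AveragePool(SRCPool):
    # Hypothetical layer: pools the whole graph into a single supernode by averaging.
    # Assumes a single graph with a dense adjacency matrix (no batch index I).

    def call(self, inputs, **kwargs):
        # As described above, the mandatory arguments of pool() come from get_inputs()
        X, A, I = self.get_inputs(inputs)
        return self.pool(X, A, I)

    def select(self, X, A, I, **kwargs):
        # Assign every node to one supernode with weight 1/N (self.N is set by get_inputs())
        return tf.ones(tf.stack([self.N, 1]), dtype=X.dtype) / tf.cast(self.N, X.dtype)

    def reduce(self, X, S, **kwargs):
        # Average the node features into the single supernode feature
        return tf.transpose(S) @ X

    def connect(self, A, S, **kwargs):
        # Induced 1 x 1 adjacency between the pooled supernodes
        return tf.transpose(S) @ A @ S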

Arguments:

  • return_sel: if True, the Tensor used to represent supernode assignments will be returned with X_pool, A_pool, and I_pool;
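
For example, using the hypothetical AveragePool layer sketched above (assuming return_sel is accepted at construction, as the argument list suggests, and that a single graph is passed as the list [X, A]):

import numpy as np
import tensorflow as tf

N, F = 5, 3
X = tf.constant(np.random.rand(N, F), dtype=tf.float32)  # node features, shape (N, F)
A = tf.constant(np.random.rand(N, N), dtype=tf.float32)  # dense adjacency, shape (N, N)

pool = AveragePool(return_sel=True)   # also return the supernode assignments
X_pool, A_pool, S_pool = pool([X, A])

print(X_pool.shape)  # (1, 3): K = 1 supernode for this toy layer
print(A_pool.shape)  # (1, 1)
print(S_pool.shape)  # (5, 1)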
Owner

Daniele Grattarola, PhD student @ Università della Svizzera italiana