Expressive Power of Invariant and Equivariant Graph Neural Networks (ICLR 2021)

Overview

Expressive Power of Invariant and Equivariant Graph Neural Networks

In this repository, we show how to use a powerful GNN (2-FGNN) to solve a graph alignment problem. This code was used to derive the practical results in the following paper:

Waiss Azizian, Marc Lelarge. Expressive Power of Invariant and Equivariant Graph Neural Networks, ICLR 2021.

arXiv OpenReview

Problem: alignment of graphs

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. Here we consider a noisy version of this problem: the two graphs below are noisy versions of a parent graph. There is no strict isomorphism between them. Can we still match the vertices of graph 1 with the corresponding vertices of graph 2?
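To make the setup concrete, here is a minimal sketch (not the repository's data_generator.py, and with a simplified noise model) of how such a pair can be produced: sample a parent graph, then independently add spurious edges to two copies of it, so that the true vertex correspondence is known by construction. The parameter names echo the release config below; the repository's exact generative and noise models are configurable.

import networkx as nx
import numpy as np

def noisy_pair(n_vertices=50, edge_density=0.2, noise=0.15, seed=0):
    # sample the parent graph
    parent = nx.erdos_renyi_graph(n_vertices, edge_density, seed=seed)
    rng = np.random.default_rng(seed)
    graphs = []
    for _ in range(2):
        g = parent.copy()
        # independently add a few spurious edges to this copy
        for u in range(n_vertices):
            for v in range(u + 1, n_vertices):
                if not g.has_edge(u, v) and rng.random() < noise * edge_density:
                    g.add_edge(u, v)
        graphs.append(g)
    # vertex i of graphs[0] corresponds to vertex i of graphs[1]
    return graphs[0], graphs[1]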

graph 1 graph 2

With our GNN, we obtain the following results: green vertices are correctly paired vertices and red vertices are errors. Both graphs are now drawn with the layout of the right-hand graph above, and the vertex colors are the same on both sides. At inference, our GNN builds node embeddings for the vertices of graphs 1 and 2. Finally, each node of graph 1 is matched to its most similar node of graph 2 in this embedding space.
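Concretely, this matching step amounts to a nearest-neighbour search in embedding space. A minimal sketch, where emb1 and emb2 are illustrative names for the (n_vertices, features) embedding matrices returned by the GNN for graphs 1 and 2:

import numpy as np

def match_nodes(emb1: np.ndarray, emb2: np.ndarray) -> np.ndarray:
    similarity = emb1 @ emb2.T          # similarity[i, j]: node i of graph 1 vs node j of graph 2
    return similarity.argmax(axis=1)    # node i of graph 1 -> most similar node of graph 2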

graph 1 graph 2

Below, on the left, we plot the errors made by our GNN: errors made on red vertices are shown as links corresponding to a wrong matching or cycle. On the right, we superpose the two graphs: green edges are in both graphs (they correspond to the parent graph), orange edges are in graph 1 only and blue edges are in graph 2 only. We clearly see the impact of the noisy edges (orange and blue): each red vertex (corresponding to an error) is connected to such an edge, except for the isolated red vertex.

Wrong matchings/cycles Superposing the 2 graphs

To measure the performance of our GNN, instead of looking at vertices, we can look at edges. On the left below, we see that our GNN recovers most of the green edges present in graphs 1 and 2 (edges from the parent graph). On the right, mismatched edges correspond mostly to noisy (orange and blue) edges (present in only one of the graphs 1 or 2).
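As an illustration, such an edge-level score can be computed as follows from dense 0/1 adjacency matrices adj1 and adj2 and a predicted matching perm (all names are illustrative, not the repository's metrics.py):

import numpy as np

def matched_edge_ratio(adj1: np.ndarray, adj2: np.ndarray, perm: np.ndarray) -> float:
    # edge (i, j) of graph 1 is recovered if (perm[i], perm[j]) is an edge of graph 2
    mapped = adj2[np.ix_(perm, perm)]   # mapped[i, j] = adj2[perm[i], perm[j]]
    recovered = np.logical_and(adj1 == 1, mapped == 1).sum()
    return recovered / adj1.sum()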

Matched edges Mismatched edges

Training GNN for the graph alignment problem

For the training of our GNN, we generate synthetic datasets as follows: first sample the parent graph, then add edges to construct graphs 1 and 2. We obtain a dataset made of pairs of graphs for which we know the true matching of vertices. We then use a siamese encoder as shown below, where the same GNN (i.e. shared weights) is used for both graphs. The node embeddings constructed for each graph are then used to predict the corresponding permutation by taking the outer product of the two embedding matrices and a softmax along each row. The GNN is trained with a standard cross-entropy loss. At inference, we can add a LAP solver to get a permutation from the resulting similarity matrix.
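A minimal sketch of this matching head (assumed shapes and names, not the exact code from siamese_net.py and losses.py): the similarity matrix is the batched outer product of the two embedding tensors, the loss is a row-wise cross-entropy against the identity permutation (the true matching by construction), and at inference scipy's LAP solver turns the similarity matrix into a permutation.

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def matching_loss(emb1: torch.Tensor, emb2: torch.Tensor) -> torch.Tensor:
    # emb1, emb2: (bs, n, d) node embeddings produced by the shared GNN
    scores = torch.bmm(emb1, emb2.transpose(1, 2))            # (bs, n, n) similarity matrix
    log_probs = F.log_softmax(scores, dim=-1)                 # softmax along each row
    target = torch.arange(emb1.size(1), device=emb1.device)   # ground-truth matching is the identity
    target = target.unsqueeze(0).expand(emb1.size(0), -1)
    return F.nll_loss(log_probs.reshape(-1, emb1.size(1)), target.reshape(-1))

def infer_permutation(emb1: torch.Tensor, emb2: torch.Tensor) -> torch.Tensor:
    # single pair at inference: run a LAP solver on the similarity matrix
    scores = (emb1 @ emb2.T).detach().cpu().numpy()           # (n, n)
    _, perm = linear_sum_assignment(-scores)                  # maximize the total similarity
    return torch.as_tensor(perm)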

Various architectures can be used for the GNN, and we find that FGNN (first introduced by Maron et al. in Provably Powerful Graph Networks, NeurIPS 2019) performs best for our task. In our paper Expressive Power of Invariant and Equivariant Graph Neural Networks, we substantiate these empirical findings by proving that FGNN has a better approximation power than all other equivariant architectures working with tensors of order 2 presented so far (this includes message passing GNNs and linear GNNs).
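The key operation that separates FGNN from message-passing GNNs is a matrix multiplication between two pointwise-transformed copies of the n x n feature tensor. Below is a minimal sketch of such a block in the spirit of Maron et al.; the actual layer in models/layers.py may differ in its details (normalization, skip connections, MLP depth).

import torch
import torch.nn as nn

class FGNNBlock(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # pointwise MLPs acting on the feature dimension of a (bs, n, n, f) tensor
        self.mlp1 = nn.Sequential(nn.Linear(in_features, out_features), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(in_features, out_features), nn.ReLU())
        self.skip = nn.Linear(in_features + out_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (bs, n, n, in_features)
        m1 = self.mlp1(x)
        m2 = self.mlp2(x)
        # feature-wise matrix product over the two node dimensions
        mult = torch.einsum('bijf,bjkf->bikf', m1, m2)
        return self.skip(torch.cat([x, mult], dim=-1))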

Results

Each curve corresponds to a model trained at a given noise level and shows its accuracy across all noise levels. We see that pretrained models generalize very well to noise levels unseen during training.

We provide a simple notebook to reproduce this result for the pretrained model released with this repository (to run the notebook, create an ipykernel named gnn with the required dependencies, as described below).
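For reference, with the dependencies installed in the active environment, such a kernel can typically be registered with

python -m ipykernel install --user --name gnn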

We refer to our paper for comparisons with other algorithms (message passing GNN, spectral or SDP algorithms).

To cite our paper:

@inproceedings{azizian2020characterizing,
  title={Expressive power of invariant and equivariant graph neural networks},
  author={Azizian, Wa{\"\i}ss and Lelarge, Marc},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=lxHgXYN4bwl}
}

Overview of the code

Project structure

.
├── loaders
|   └── dataset selector
|   └── data_generator.py # generating random graphs
|   └── test_data_generator.py
|   └── siamese_loader.py # loading pairs 
├── models
|   └── architecture selector
|   └── layers.py # equivariant block
|   └── base_model.py # powerful GNN Graph -> Graph
|   └── siamese_net.py # GNN to match graphs
├── toolbox
|   └── optimizer and losses selectors
|   └── logger.py  # keeping track of most results during training
|   └── metrics.py # computing scores
|   └── losses.py  # computing losses
|   └── optimizer.py # optimizers
|   └── utility.py
|   └── maskedtensor.py # Tensor-like class to handle batches of graphs of different sizes
├── commander.py # main entry point of the project, calling all functions needed for training and testing
├── trainer.py # pipelines for training and validation
├── eval.py # testing models

Dependencies

Dependencies are listed in requirements.txt. To install, run

pip install -r requirements.txt

Training

Run the main file commander.py with the command train

python commander.py train

To change options, use the Sacred command-line interface and see default.yaml for the configuration structure. For instance,

python commander.py train with cpu=No data.generative_model=Regular train.epoch=10 

You can also copy default.yaml and modify the configuration parameters there. Loading the configuration in other.yaml (or other.json) can be done with

python commander.py train with other.yaml

See Sacred documentation for an exhaustive reference.

To save logs to Neptune, you need to provide your own API key via the dedicated environment variable.

The model is regularly saved in the folder runs.

Evaluating

There are two ways of evaluating the models. If you just ran the training with a configuration conf.yaml, you can simply run

python commander.py eval with conf.yaml

You can omit with conf.yaml if you are using the default configuration.

If you downloaded a model and its config file from here, you can edit the test_data section of this config if you wish and then run,

python commander.py eval with /path/to/config model_path=/path/to/model.pth.tar
Comments
  • need 2 different model_path variables

    When not starting anew, model_path_load should contain the path to the model. model_path should then be the path to save the new learned model. Right now, the model_path_load is used for the evaluation!

    bug 
    opened by mlelarge 1
  • Unify the different problems

    The previous commands for training and evaluation should still work the same, even though a different configuration file is now used at the beginning. commander.py still uses Sacred in the same way. The problem can be switched from the config file. The main change to the previous code is the use of the Helper class (in toolbox/helper.py), which is the class that coordinates each problem (see the file for further info).

    Also added article_commander.py, which generates the data for comparing planted problems with the corresponding NP-problem; it works in a similar way to commander.py.

    opened by MauTrib 1
  • Reorganize a commander.py

    This PR tries to unify the train and eval CLI with the use of a single config file. See default.yaml for the result and the README for more information. Please don't hesitate to comment on changes you don't approve of or dislike. Bests,

    opened by wazizian 1
  • Fix convention for tensor shapes

    Hi! I am modifying the code so that the same convention for the shape of tensors is used everywhere (see PR). But I'm not sure we have chosen the right one. You and we have chosen (bs, n_vertices, n_vertices, features) but Maron preferred (bs, features, n_vertices, n_vertices). Indeed, the latter matches PyTorch's convention for images and so makes convolutions natural. In one case, MLPs and blocks have to be adapted; in the other, it is the data generation process, so it is a similar amount of work. What do you think? Bests, Waïss

    opened by wazizian 1
  • added normalization + embeddings

    I slightly modified the architecture and training seems much better and more stable. I added a first embedding layer and BN after the multiplication and at the output.

    opened by mlelarge 0
Releases (QAP)
  • QAP (Jan 21, 2021)

    Config:
    {
      "data": {
        "num_examples_train": 20000,
        "num_examples_val": 1000,
        "generative_model": "Regular",
        "noise_model": "ErdosRenyi",
        "edge_density": 0.2,
        "n_vertices": 50,
        "vertex_proba": 1.0,
        "noise": 0.15,
        "path_dataset": "dataset"
      },
      "train": {
        "epoch": 50,
        "batch_size": 32,
        "lr": 0.0001,
        "scheduler_step": 5,
        "scheduler_decay": 0.9,
        "print_freq": 100,
        "loss_reduction": "mean"
      },
      "arch": {
        "arch": "Siamese_Model",
        "model_name": "Simple_Node_Embedding",
        "num_blocks": 2,
        "original_features_num": 2,
        "in_features": 64,
        "out_features": 64,
        "depth_of_mlp": 3
      }
    }

    Source code(tar.gz)
    Source code(zip)
    config.json(601 bytes)
    model_best.pth.tar(333.26 KB)