PyTorch code of "SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks"

Overview

SLAPS-GNN

This repo contains the implementation of the model proposed in SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks.

Datasets

The ogbn-arxiv dataset is loaded automatically, while Cora, Citeseer, and Pubmed are included in the GCN package, available here. Place the relevant files in the folder data_tf.
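If you want to fetch ogbn-arxiv manually (for example, to inspect the splits), a minimal sketch using the ogb package looks like the following; the root directory "data" is an assumption, point it wherever you keep datasets:

# Minimal sketch: download ogbn-arxiv with the ogb package.
# The root directory "data" is an assumption, not a repo convention.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv", root="data")
graph, labels = dataset[0]            # graph is a dict with edge_index, node_feat, num_nodes
split_idx = dataset.get_idx_split()   # train/valid/test node indices
print(graph["node_feat"].shape, labels.shape)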

Dependencies

To train the models, you need a machine with a GPU.

To install the dependencies, we recommend a virtual environment. You can create one and install everything with the following command:

conda env create -f environment.yml

This environment file was written for CUDA 9.2 and Linux, so you may need to adapt it to your infrastructure.
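Before launching a long run, a quick sanity check that PyTorch was built with CUDA and can see the GPU can save time:

# Quick check that PyTorch sees a GPU before training.
import torch

print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())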

Usage

To run the model, set the following parameters:

  • dataset: The dataset you want to run the model on
  • ntrials: number of runs
  • epochs_adj: number of epochs
  • epochs: number of epochs for GNN_C (used for knn_gcn and 2step learning of the model)
  • lr_adj: learning rate of GNN_DAE
  • lr: learning rate of GNN_C
  • w_decay_adj: l2 regularization parameter for GNN_DAE
  • w_decay: l2 regularization parameter for GNN_C
  • nlayers_adj: number of layers for GNN_DAE
  • nlayers: number of layers for GNN_C
  • hidden_adj: hidden size of GNN_DAE
  • hidden: hidden size of GNN_C
  • dropout1: dropout rate for GNN_DAE
  • dropout2: dropout rate for GNN_C
  • dropout_adj1: dropout rate on adjacency matrix for GNN_DAE
  • dropout_adj2: dropout rate on adjacency matrix for GNN_C
  • k: k for the kNN graph used to initialize the adjacency (see the sketch after this list)
  • lambda_: weight of loss of GNN_DAE
  • nr: ratio of zeros to ones to mask out for binary features
  • ratio: ratio of ones to mask out for binary features, and ratio of features to mask out for real-valued features
  • model: model to run (choices are end2end, knn_gcn, or 2step)
  • sparse: whether to make the adjacency sparse and run operations in sparse mode (illustrated after the reproduction commands)
  • gen_mode: identifies the graph generator (0: FP, 1: MLP, 2: MLP-D)
  • non_linearity: non-linearity to apply on the adjacency matrix
  • mlp_act: activation function to use for the mlp graph generator
  • mlp_h: hidden size of the mlp graph generator
  • noise: type of noise to add to the features (mask or normal; see the sketch after this list)
  • loss: type of GNN_DAE loss (mse or bce)
  • epoch_d: GNN_DAE is trained alone for the first epochs_adj / epoch_d epochs
  • half_val_as_train: use half of the validation set for training to obtain the Cora390 and Citeseer370 setups
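The commands in the next section set all of these flags explicitly. As a rough illustration of how -k, -noise, -nr, and -ratio fit together, a kNN graph over the node features initializes the adjacency, and GNN_DAE is trained to reconstruct features corrupted by masking or Gaussian noise. The sketch below is not the repo's code, and the exact masking arithmetic in main.py may differ:

# Illustrative sketch of kNN initialization and feature corruption; not the repo's code.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_init(features, k):
    # Symmetrized kNN adjacency used to initialize the learned graph structure.
    adj = kneighbors_graph(features, n_neighbors=k, metric="cosine").toarray()
    return np.maximum(adj, adj.T)

def corrupt(features, noise="mask", ratio=10, nr=5, seed=0):
    # Assumed semantics: with noise="mask", roughly ratio percent of the ones are
    # zeroed (nr controls how many zeros are tracked as negatives for a BCE loss);
    # with noise="normal", Gaussian noise is added to real-valued features.
    rng = np.random.default_rng(seed)
    x = features.copy()
    if noise == "mask":
        ones = np.argwhere(features == 1)
        picked = rng.choice(len(ones), size=max(1, len(ones) * ratio // 100), replace=False)
        x[tuple(ones[picked].T)] = 0
    else:
        x = x + rng.normal(0.0, 0.1, size=x.shape)
    return x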

Reproducing the Results in the Paper

To reproduce the results presented in the paper, run the following commands:

Cora

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5

Citeseer

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.4 -dropout_adj2 0.4 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act relu -mlp_h 3703 -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5

Cora390

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 100.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5 -half_val_as_train 1

Citeseer370

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.25 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act tanh -mlp_h 3703 -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -half_val_as_train 1

Pubmed

MLP

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 20 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 500 -mlp_act relu -epoch_d 5 -sparse 1

MLP-D

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.25 -k 15 -lambda_ 100.0 -nr 5 -ratio 20 -model end2end -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -sparse 1

ogbn-arxiv

MLP

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 128 -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise mask

MLP-D

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise normal
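For Pubmed and ogbn-arxiv the commands pass -sparse 1. Conceptually, this swaps the dense n x n adjacency for a sparse edge representation so that aggregation scales with the number of edges rather than n squared. A minimal illustration of the difference (not the repo's code):

# Illustrative: dense vs. sparse aggregation, which is what -sparse 1 toggles conceptually.
import torch

n, d = 4, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.5).float()   # toy dense adjacency

out_dense = adj @ x                      # dense mode: full n x n matmul

idx = adj.nonzero().t()                  # sparse mode: keep only the edges
val = adj[idx[0], idx[1]]
adj_sp = torch.sparse_coo_tensor(idx, val, (n, n))
out_sparse = torch.sparse.mm(adj_sp, x)

assert torch.allclose(out_dense, out_sparse, atol=1e-5)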

Cite SLAPS

If you use this package for published work, please cite the following:

@inproceedings{fatemi2021slaps,
  title={SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks},
  author={Fatemi, Bahare and Asri, Layla El and Kazemi, Seyed Mehran},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}