Finding Bipartite Components in Hypergraphs

Overview

This repository contains code to accompany the paper "Finding Bipartite Components in Hypergraphs", published in NeurIPS 2021. It provides an implementation of the proposed algorithm based on the new hypergraph diffusion process, as well as the baseline algorithm based on the clique reduction.

Below, you can find instructions for running the code to reproduce the results reported in the paper.

Feel free to contact me with any questions or comments at [email protected].

Set-up

The code was written for Python 3.6, although other versions of Python 3 should also work. We recommend running the code inside a virtual environment.
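For example, a virtual environment can be created and activated with the standard venv module (the directory name venv below is just an example):

python3 -m venv venv
source venv/bin/activate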

To install the dependencies of this project, run

pip install -r requirements.txt

Viewing the visualisation

To see a demonstration of our algorithm, you can view the visualisation of the 2-graph constructed at each step by running

python show_visualisation.py

This example was used to create Figure 1 in the paper.
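If you would like to experiment with drawing small weighted 2-graphs yourself, the sketch below shows one simple way to do it with networkx and matplotlib. It is not the repository's visualisation code, and the edges and weights are made up purely for illustration.

import networkx as nx
import matplotlib.pyplot as plt

# A small weighted 2-graph, standing in for the graph constructed by the diffusion.
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 0.5), (2, 3, 1.0), (1, 3, 0.25), (3, 4, 0.75)])

pos = nx.spring_layout(G, seed=0)  # deterministic layout for reproducibility
nx.draw_networkx(G, pos, node_color="lightblue")
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "weight"))
plt.axis("off")
plt.show()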

Experiments

In this section, we give instructions for running the experiments reported in the paper.

Penn Treebank Preprocessing

We are unfortunately not able to share the data used for the Penn Treebank experiment, so we give instructions here on how to preprocess this data for use with our code. You will need your own access to the Penn Treebank corpus.

Follow the instructions in this repository, passing the --task pos command-line option to generate the files train.tsv, test.tsv, and dev.tsv. Copy these three files to the data/nlp/penn-treebank directory.
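As a quick sanity check before running the Penn Treebank experiment, you could verify that the three files are in place. This snippet is not part of the repository; it only checks the paths described above.

from pathlib import Path

ptb_dir = Path("data/nlp/penn-treebank")
for name in ("train.tsv", "test.tsv", "dev.tsv"):
    path = ptb_dir / name
    # Report whether each expected file is present.
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")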

Running the real-world experiments

To run the experiments on real-world data, you should run

python run_experiment.py {experiment_name}

where {experiment_name} is one of 'ptb', 'dblp', 'imdb', or 'wikipedia', to run the Penn Treebank, DBLP, IMDB, and Wikipedia experiments respectively.
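If you want to run all four real-world experiments in one go, a small wrapper along these lines should work, assuming run_experiment.py takes the experiment name as its only required argument (as above):

import subprocess
import sys

for name in ("ptb", "dblp", "imdb", "wikipedia"):
    print(f"=== Running the {name} experiment ===")
    # check=True stops the loop if any experiment exits with an error.
    subprocess.run([sys.executable, "run_experiment.py", name], check=True)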

Running the synthetic experiments

To run an experiment on a single synthetic hypergraph, run

python run_experiment_synthetic.py {n} {r} {p} {q}

where {n} is the number of vertices in the hypergraph, {r} is the rank of the hypergraph, {p} is the probability of an edge inside a cluster, and {q} is the probability of an edge between clusters. Be careful not to set p or q too large, since the resulting hypergraph can become very dense. See the main paper for more information about the random hypergraph model. The script constructs the hypergraph if needed, and reports the performance of the diffusion algorithm and the clique algorithm on the constructed hypergraph.
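For intuition about the random hypergraph model, here is a rough sketch of one natural reading of the parameters above: the n vertices are split into two clusters, and each set of r vertices becomes an edge with probability p if it lies entirely inside one cluster and with probability q otherwise. The paper gives the precise definition; this is an illustration only, not the code used by run_experiment_synthetic.py.

import itertools
import random

def sample_hypergraph(n, r, p, q, seed=0):
    """Sample a rank-r hypergraph on n vertices with two planted clusters.

    Each r-subset of vertices is added as an edge with probability p if it is
    contained in a single cluster, and with probability q otherwise.
    Note: this enumerates all r-subsets, so it is only feasible for small n.
    """
    rng = random.Random(seed)
    cluster = {v: (0 if v < n // 2 else 1) for v in range(n)}
    edges = []
    for subset in itertools.combinations(range(n), r):
        same_cluster = len({cluster[v] for v in subset}) == 1
        if rng.random() < (p if same_cluster else q):
            edges.append(subset)
    return edges

edges = sample_hypergraph(n=20, r=3, p=0.1, q=0.01)
print(f"Sampled {len(edges)} edges")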

Results

The full results from our experiments on synthetic hypergraphs are provided in the data/sbm/results directory, along with a Mathematica notebook for viewing them and plotting the figures shown in the paper.

Owner
Peter Macgregor
Computer Science PhD Student, University of Edinburgh.