Code for "Causal autoregressive flows" - AISTATS, 2021

Overview

Code for "Causal Autoregressive Flow"

This repository contains code to run and reproduce the experiments presented in Causal Autoregressive Flows, which appeared at the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021).

The repository originally contained the code to reproduce results presented in Autoregressive flow-based causal discovery and inference, presented at the 2nd ICML workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models (2020). Switch to the workshop branch to access that version of the code.

Dependencies

This project was tested with the following versions:

  • python 3.7
  • numpy 1.18.2
  • pytorch 1.4
  • scikit-learn 0.22.2
  • scipy 1.4.1
  • matplotlib 3.2.1
  • seaborn 0.10

This project uses the normalizing flows implementation from this repository.
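
For reference, a minimal environment setup might look as follows. This is only a sketch: the environment name is illustrative, and the version pins follow the list above.

# create and activate a fresh environment, then install pinned dependencies
conda create -n carefl python=3.7 -y
conda activate carefl
pip install numpy==1.18.2 torch==1.4.0 scikit-learn==0.22.2 scipy==1.4.1 matplotlib==3.2.1 seaborn==0.10.0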

Usage

The main.py script is the main entry point for reproducing the experiments detailed in the manuscript, and is straightforward to use. Type python main.py -h to learn about the options.

Hyperparameters can be changed through the configuration files under configs/. The main.py script is set up to read the corresponding config file for each experiment, but this can be overridden using the -y or --config flag.

Results are saved under the run/ folder by default; this can be changed using the --run flag.
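
For example, the following invocation (the file and folder names are illustrative) selects a specific config file and output folder:

# run with an explicit config file and a custom results folder
python main.py -y configs/simulations.yaml --run results/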

Running the main.py script produces data for a single set of parameters, specified in the config file. These parameters include the dataset type, the number of simulations, the algorithm, the number of observations, and the architectural parameters of the neural networks (number of layers, dimension of the hidden layer, etc.).

To reproduce the figures in the manuscript, the script should be run multiple times, once for each combination of parameters, to generate the data used for the plots. Convenience scripts are provided to do this in parallel using SLURM (see below). These make use of certain debugging flags that override the corresponding fields in the config file.
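
As a sketch, the flags used by the convenience scripts below suggest that a single run can also be launched directly; the flag semantics here are inferred from the Examples section, so verify them with python main.py -h:

# -s: simulations experiment, -m: dataset, -a: algorithm, -n: number of observations
python main.py -s -m linear -a carefl -n 100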

Finally, setting the flow.scale field in the config files to false switches from CAREFL to CAREFL-NS.
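
Assuming the config files are YAML and contain a line of the form scale: true under the flow: key (the file name below is illustrative), the switch can be scripted:

# flip flow.scale from true to false, switching CAREFL to CAREFL-NS
sed -i 's/scale: true/scale: false/' configs/simulations.yaml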

Examples

Experiments were run using the SLURM system. The slurm_main_cpu.sbatch script is used to run jobs on CPU, and slurm_main.sbatch to run jobs on GPU.
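
These scripts presumably forward their command-line arguments to main.py; a minimal sketch of such a wrapper (the SLURM directives are illustrative, not taken from the repository) could look like:

#!/bin/bash
#SBATCH --job-name=carefl
#SBATCH --cpus-per-task=4
#SBATCH --time=12:00:00
# forward all command-line arguments to main.py
python main.py "$@"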

To run simulations in parallel:

for SIZE in 25 50 75 100 150 250 500; do
    for ALGO in lrhyv reci anm; do
        for DSET in linear hoyer2009 nueralnet_l1 mnm veryhighdim; do
            sbatch slurm_main_cpu.sbatch -s -m $DSET -a $ALGO -n $SIZE
        done
    done
done
ALGO=carefl
for SIZE in 25 50 75 100 150 250 500; do
    for DSET in linear hoyer2009 nueralnet_l1 mnm veryhighdim; do
        sbatch slurm_main_cpu.sbatch -s -m $DSET -a $ALGO -n $SIZE
    done
done

To run the intervention experiments:

for SIZE in 250 500 750 1000 1250 1500 2000 2500; do
    for ALGO in gp linear; do
        sbatch slurm_main_cpu.sbatch -i -a $ALGO -n $SIZE
    done
done
ALGO=carefl
for SIZE in 250 500 750 1000 1250 1500 2000 2500; do
    sbatch slurm_main_cpu.sbatch -i -a $ALGO -n $SIZE
done

To run the arrow-of-time experiment on EEG data:

for ALGO in LRHyv RECI ANM; do
    for IDX in {0..117}; do
        sbatch slurm_main_cpu.sbatch -e -n $IDX -a $ALGO --n-sims 11
    done
done
ALGO=carefl
for IDX in {0..117}; do
    sbatch slurm_main.sbatch -e -n $IDX -a $ALGO --n-sims 11
done

To run the intervention experiment on fMRI data (results are printed to standard output):

python main.py -f
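
To keep a copy of the results, redirect standard output to a file (the file name is arbitrary):

python main.py -f > fmri_results.txt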

To run the cause-effect pairs experiment:

for IDX in {1..108}; do
    sbatch slurm_main_cpu.sbatch -p -n $IDX --n-sims 10
done

Reference

If you find this code helpful or inspiring for your research, we would be grateful if you would cite the following:

@inproceedings{khemakhem2021causal,
  title = {Causal Autoregressive Flows},
  author = {Khemakhem, Ilyes and Monti, Ricardo and Leech, Robert and Hyvarinen, Aapo},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages = {3520--3528},
  year = {2021},
  editor = {Banerjee, Arindam and Fukumizu, Kenji},
  volume = {130},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Apr},
  publisher = {PMLR}
}

License

A full copy of the license is included below.

MIT License

Copyright (c) 2020 Ilyes Khemakhem and Ricardo Pio Monti

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.