Source code for 'Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge' by A. Shah, K. Shanmugam, K. Ahuja

Overview

Source code for "Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge"

Reference: Abhin Shah, Karthikeyan Shanmugam, Kartik Ahuja, "Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge," The 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022

Contact: [email protected]

arXiv: https://arxiv.org/pdf/2106.11560.pdf

Dependencies:

To run the code successfully, the following packages must be installed:

  1. Python --- causallib, sklearn, scipy, pandas, numpy, matplotlib, pyreadr, rpy2, torch (the remaining imports --- multiprocessing, contextlib, functools, itertools, random, argparse, time, pickle --- ship with the Python standard library)

  2. R --- RCIT
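
As a quick sanity check (not part of the original repository), the installable Python packages can be verified with a short import test; the package names below are taken from the list above, and assume sklearn is provided by the scikit-learn distribution:

# Hypothetical helper: verify that the installable Python dependencies
# listed above can be imported in the current environment.
import importlib

packages = ["causallib", "sklearn", "scipy", "pandas", "numpy",
            "matplotlib", "pyreadr", "rpy2", "torch"]

missing = []
for name in packages:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required Python packages are importable.")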

Command inputs:

  • nr: number of repetitions (default = 100)
  • no: number of observations (default = 50000)
  • use_t_in_e: indicator for whether t should be used to generate e (default = 1)
  • ne: number of environments (default = 3)
  • number_IRM_iterations: number of iterations of IRM (default = 15000)
  • nrd: number of features for sparse subset search (default = 5)
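
For reference, here is a minimal sketch of how these flags could be parsed with argparse; the flag names and defaults mirror the list above, but the parser actually defined in the released scripts may differ:

# Hypothetical sketch of the command-line interface described above;
# the actual scripts may define their parsers differently.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--nr", type=int, default=100,
                    help="number of repetitions")
parser.add_argument("--no", type=int, default=50000,
                    help="number of observations")
parser.add_argument("--use_t_in_e", type=int, default=1,
                    help="whether t should be used to generate e (0 or 1)")
parser.add_argument("--ne", type=int, default=3,
                    help="number of environments")
parser.add_argument("--number_IRM_iterations", type=int, default=15000,
                    help="number of iterations of IRM")
parser.add_argument("--nrd", type=int, default=5,
                    help="number of features for sparse subset search")
args = parser.parse_args()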

Reproducing the figures and tables:

  1. To reproduce Figure 3a and Figure 10a, run the following three commands:
$ mkdir synthetic_theory
$ python3 -W ignore synthetic_theory.py --nr 100
$ python3 plot_synthetic_theory.py --nr 100
  2. To reproduce Figure 3b and Figure 10b, run the following three commands:
$ mkdir synthetic_algorithms
$ python3 -W ignore synthetic_algorithms.py --nr 100
$ python3 plot_synthetic_algorithms.py --nr 100
  3. To reproduce Figure 3c, run the following three commands:
$ mkdir synthetic_high_dimension
$ python3 -W ignore synthetic_high_dimension.py --nr 100
$ python3 plot_synthetic_high_dimension.py --nr 100
  4. To reproduce Table 1, run the following two commands:
$ mkdir syn-entner
$ python3 -W ignore syn-entner.py --nr 100
  5. To reproduce Table 2, run the following two commands:
$ mkdir syn-cheng
$ python3 -W ignore syn-cheng.py --nr 100
  6. To reproduce Figure 4, Figure 12a, and Figure 12b, run the following three commands:
$ mkdir ihdp
$ python3 -W ignore ihdp.py --nr 100
$ python3 plot_ihdp.py --nr 100
  7. To reproduce Figure 5, run the following three commands:
$ mkdir cattaneo
$ python3 -W ignore cattaneo.py --nr 100
$ python3 plot_cattaneo.py --nr 100
  8. To reproduce Figure 11a and Figure 11c, run the following three commands:
$ mkdir synthetic_theory
$ python3 -W ignore synthetic_theory.py --nr 100 --use_t_in_e 0
$ python3 plot_synthetic_theory.py --nr 100 --use_t_in_e 0
  9. To reproduce Figure 11b and Figure 11d, run the following three commands:
$ mkdir synthetic_algorithms
$ python3 -W ignore synthetic_algorithms.py --nr 100 --use_t_in_e 0
$ python3 plot_synthetic_algorithms.py --nr 100 --use_t_in_e 0
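
Each of the reproduction steps above follows the same pattern: create the output directory, run the experiment script, then run the matching plotting script. As an illustration only (assuming the file names listed above), the Figure 3a pipeline could be driven from Python like this:

# Hypothetical driver for one reproduction pipeline (Figure 3a / Figure 10a);
# it simply automates the three shell commands listed above.
import os
import subprocess

os.makedirs("synthetic_theory", exist_ok=True)
subprocess.run(["python3", "-W", "ignore", "synthetic_theory.py", "--nr", "100"],
               check=True)
subprocess.run(["python3", "plot_synthetic_theory.py", "--nr", "100"],
               check=True)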