Code for NeurIPS 2021 submission "A Surrogate Objective Framework for Prediction+Programming with Soft Constraints"

Overview

This repository is the code for NeurIPS 2021 submission "A Surrogate Objective Framework for Prediction+Programming with Soft Constraints".

Edit 2021/8/30: the KKT-based (decision-focused) baseline has been added to the first experiment.

Requirements

pytorch>=1.7.0

scipy

gurobipy (requires a Gurobi>=9.1 license; an academic license is available for free at https://www.gurobi.com/downloads/end-user-license-agreement-academic/. Download and install Gurobi first.)

Quandl

h5py

bs4

tqdm

sklearn

pandas

lxml

qpth

cvxpy

cvxpylayers
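Assuming the standard PyPI package names (bs4 ships as beautifulsoup4, sklearn as scikit-learn, and pytorch as torch), the Python dependencies can be installed with:

pip install torch scipy gurobipy Quandl h5py beautifulsoup4 tqdm scikit-learn pandas lxml qpth cvxpy cvxpylayers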

Running Experiments

After installing the requirements and cloning this repo to your local machine, you should be able to run all experiments.

Synthetic Linear Programming

The dataset for this problem is generated at runtime. To run a single problem instance, type the following command:

python run_main_synth.py --method=2 --dim_context=40 --dim_hard=40 --dim_soft=20 --seed=2006 --dim_features=80 --loss=l1 --K=0.2

The four methods used in the experiment (L1, L2, SPO+, ours), plus the KKT-based decision-focused baseline added later, are selected respectively by:

--method=0 --loss=l1 # L1
--method=0 --loss=l2 # L2
--method=1 --loss=l1 # SPO+
--method=2 --loss=l1 # ours
--method=3 --loss=l1 # decision-focused (KKT-based)

The other parameters can be seen in run_script.py and run_main_synth.py. To collect results for a single method over multiple runs, modify the parameters listed above and run run_script.py. The output, containing prediction error and regret, is written to the result folder. See dataprocess.py for a reference on how to interpret the data; the files with suffix "...test.txt" are used for evaluation. To change the batch size and training set size, alter the default parameters in run_main_synth.py.
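If you prefer not to edit run_script.py, the sweep it performs can be reproduced with a minimal sketch along these lines (the seed range below is illustrative, not the one used in the paper):

    import subprocess

    # Illustrative sweep: repeat one method (here ours, method=2) over several seeds.
    for seed in range(2000, 2010):
        subprocess.run([
            "python", "run_main_synth.py",
            "--method=2", "--loss=l1",
            "--dim_context=40", "--dim_hard=40", "--dim_soft=20",
            "--dim_features=80", "--K=0.2", f"--seed={seed}",
        ], check=True)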

Portfolio Optimization

The dataset for this problem is downloaded automatically the first time you run the code, as in Wilder et al.'s code [1]. It consists of daily S&P 500 price data from 2004 to 2017, fetched via the Quandl API. To run a single problem instance, type the following command:

python main.py --method=3 --n=50 --seed=471298479

The four methods (L1, DF, L2, ours) are labeled as methods 0, 1, 2, and 3 respectively. To collect results for a single method over multiple runs, run run_script.py.

The results are written to the res/K100 folder.
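No manual download step is needed. For reference only, fetching daily price data through the Quandl Python API looks roughly like the following (the dataset code, ticker, and date range are illustrative; the repository's own loader handles the actual download and caching):

    import quandl

    quandl.ApiConfig.api_key = "YOUR_API_KEY"  # free key from quandl.com
    # Illustrative request: daily prices for one ticker over the paper's period.
    df = quandl.get("WIKI/AAPL", start_date="2004-01-01", end_date="2017-12-31")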

Resource Provisioning

The dataset for this problem is included in the GitHub repository as eight CSV files, one for each region. It is the ERCOT dataset taken from (...to be filled...), and is processed by resource_provisioning/data_energy/data_loader.py at runtime. The first run generates several large .npy files as cached features, which accelerates preprocessing in subsequent runs. This experiment requires a large amount of memory, so running it on a server is recommended. To run a single problem instance, type the following command:

python run_main_newnet.py --method=1 --seed=16900000 --loss=l1

The four methods (L1, L2, weighted L1, ours) are selected respectively by:

--method=0 --loss=l1 # L1
--method=0 --loss=l2 # L2
--method=0 --loss=l3 # weighted L1
--method=1 --loss=l1 # ours
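As noted above, the first run caches the extracted features as large .npy files so that later runs skip most preprocessing. The caching pattern is roughly the following sketch (file and function names here are hypothetical; see resource_provisioning/data_energy/data_loader.py for the actual logic):

    import os
    import numpy as np

    CACHE_PATH = "features.npy"  # hypothetical cache file name

    def build_features():
        # Stand-in for the expensive CSV preprocessing done by data_loader.py.
        return np.random.rand(1000, 8)

    def load_features():
        # Reuse the cached features if a previous run already built them.
        if os.path.exists(CACHE_PATH):
            return np.load(CACHE_PATH)
        feats = build_features()
        np.save(CACHE_PATH, feats)
        return feats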

To run with a different alpha1/alpha2 ratio, modify lines 157-158 in synthesize.py

 alpha1 = torch.ones(dim_context, 1) * 50
 alpha2 = torch.ones(dim_context, 1) * 0.5

to a desired ratio. Furthermore, modify line 174 in main_newnet.py

netname = "50to0.5"

to "5to0.5"/"1to1"/"0.5to5"/"0.5to50", and line 199 in main_newnet.py

self.alpha1, self.alpha2 = 0.5, 50

to (0.5, 5)/(1, 1)/(5, 0.5)/(50, 0.5) respectively.
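The three edits must stay consistent with one another. As a sketch, the 1:1 ratio setting would combine:

    # synthesize.py, lines 157-158
    alpha1 = torch.ones(dim_context, 1) * 1
    alpha2 = torch.ones(dim_context, 1) * 1

    # main_newnet.py, line 174
    netname = "1to1"

    # main_newnet.py, line 199
    self.alpha1, self.alpha2 = 1, 1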

Run run_script.py to collect multiple runs. The results are written to the result/2013to18_<netname>newnet folder (i.e., "2013to18_" + str(netname) + "newnet"). The output is interpreted in the same way as in the synthetic linear programming experiment.

[1] Automatically Learning Compact Quality-aware Surrogates for Optimization Problems, Wilder et al., 2020 (https://arxiv.org/abs/2006.10815)

Empirical Evaluation of Lambda_max in Theorem 6

Run test.py directly to get the results (note that the full run takes a long time, especially for the beta distribution option). The results for the uniform, Gaussian, and beta distributions are in test1.txt, test2.txt, and test3.txt respectively.
