Code for the paper "Implicit Representations of Meaning in Neural Language Models"

Overview

This repository contains the code for the paper Implicit Representations of Meaning in Neural Language Models: LM training, probe training and evaluation, intervention experiments, and metric reporting.

Preliminaries

Create and set up a conda environment as follows:

conda create -n state-probes python=3.7
conda activate state-probes
pip install -r requirements.txt

Install the torch 1.7.0 build appropriate for your CUDA version:

conda install pytorch==1.7.0 cudatoolkit=<cuda_version> -c pytorch
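
For example, on a machine with CUDA 10.1 (substitute your own CUDA version):

conda install pytorch==1.7.0 cudatoolkit=10.1 -c pytorch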

Before running any command below, run

export PYTHONPATH=.
export TOKENIZERS_PARALLELISM=true

Data

The Alchemy data is downloaded from the SCONE project website:

wget https://nlp.stanford.edu/projects/scone/scone.zip
unzip scone.zip

The synthetic version of Alchemy was generated by running:

echo 0 > id  # the generator script requires a file called id containing a number
python alchemy_artificial_generator.py --num_scenarios 3600 --output synth_alchemy_train
python alchemy_artificial_generator.py --num_scenarios 500 --output synth_alchemy_dev
python alchemy_artificial_generator.py --num_scenarios 900 --output synth_alchemy_test

Alternatively, you can download our pre-generated synthetic data:

wget http://web.mit.edu/bzl/www/synth_alchemy.tar.gz
tar -xzvf synth_alchemy.tar.gz

The Textworld data can be downloaded and extracted with:

wget http://web.mit.edu/bzl/www/tw_data.tar.gz
tar -xzvf tw_data.tar.gz

LM Training

To train a BART or T5 model on the Alchemy data:

python scripts/train_alchemy.py \
    --arch [t5|bart] [--no_pretrain] \
    [--synthetic] --encode_init_state NL

Saves model checkpoints under sconeModels/*.
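
For example, to fine-tune a pretrained BART on the synthetic Alchemy data (one concrete choice of the options above):

python scripts/train_alchemy.py \
    --arch bart --synthetic --encode_init_state NL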

To train a BART or T5 model on the Textworld data:

python scripts/train_textworld.py \
    --arch [t5|bart] [--no_pretrain] \
    --data tw_data/simple_traces --gamefile tw_games/simple_games

Saves model checkpoints under twModels/*.
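
For example, to fine-tune a pretrained T5 on the Textworld traces (using the paths from the command above):

python scripts/train_textworld.py \
    --arch t5 \
    --data tw_data/simple_traces --gamefile tw_games/simple_games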

Probe Training & Evaluation

Alchemy

The main probe command is as follows:

python scripts/probe_alchemy.py \
    --arch [bart|t5] --lm_save_path <path_to_lm_checkpoint> [--no_pretrain] \
    --encode_init_state NL --nonsynthetic \
    --probe_target single_beaker_final.NL --localizer_type single_beaker_init_full \
    --probe_type linear --probe_agg_method avg \
    --encode_tgt_state NL.[bart|t5] --tgt_agg_method avg \
    --batchsize 128 --eval_batchsize 1024 --lr 1e-4
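
As a concrete instance, probing a BART LM for final beaker states (one instantiation of the template above; the checkpoint path is a placeholder):

python scripts/probe_alchemy.py \
    --arch bart --lm_save_path <path_to_lm_checkpoint> \
    --encode_init_state NL --nonsynthetic \
    --probe_target single_beaker_final.NL --localizer_type single_beaker_init_full \
    --probe_type linear --probe_agg_method avg \
    --encode_tgt_state NL.bart --tgt_agg_method avg \
    --batchsize 128 --eval_batchsize 1024 --lr 1e-4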

For evaluation, add --eval_only --probe_save_path <path_to_probe_checkpoint>. This will save model predictions to a .jsonl file under the same directory as the probe checkpoint.

Add --control_input for the No LM experiment.

Change --probe_target to single_beaker_init.NL to decode the initial state.

For localization experiments, set --localizer_type single_beaker_init_{$i}.offset{$off} for some token i in {article, pos.[R0|R1|R2], beaker.[R0|R1], verb, amount, color, end_punct} and some integer offset off between 0 and 6.
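
For instance, to localize the probe at offset 3 from the verb token (just one illustrative combination of token and offset), use:

--localizer_type single_beaker_init_verb.offset3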

Saves probe checkpoints under probe_models_alchemy/*.

Run the intervention experiments with:

python scripts/intervention.py \
    --arch [bart|t5] \
    --encode_init_state NL \
    --create_type drain_1 \
    --lm_save_path <path_to_lm_checkpoint>

This script creates two contexts and replaces a select few encoded tokens to modify the underlying belief state.
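
For example, with a BART LM (the checkpoint path is a placeholder):

python scripts/intervention.py \
    --arch bart \
    --encode_init_state NL \
    --create_type drain_1 \
    --lm_save_path <path_to_lm_checkpoint>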

Textworld

Begin by creating the full set of encoded proposition representations:

python scripts/get_all_tw_facts.py \
    --data tw_data/simple_traces --gamefile tw_data/simple_games \
    --state_model_arch [bart|t5] \
    --probe_target belief_facts_pair \
    --state_model_path [None|pretrain|<path_to_lm_checkpoint>] \
    --out_file <path_to_prop_encodings>
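
For example, to encode all proposition pairs with an off-the-shelf pretrained T5 encoder (the output path is left as a placeholder):

python scripts/get_all_tw_facts.py \
    --data tw_data/simple_traces --gamefile tw_data/simple_games \
    --state_model_arch t5 \
    --probe_target belief_facts_pair \
    --state_model_path pretrain \
    --out_file <path_to_prop_encodings>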

Then run the probe with:

python scripts/probe_textworld.py \
    --arch [bart|t5] --data tw_data/simple_traces --gamefile tw_data/simple_games \
    --probe_target final.full_belief_facts_pair --encode_tgt_state NL.[bart|t5] \
    --localizer_type belief_facts_pair_[first|last|all] --probe_type 3linear_classify \
    --probe_agg_method avg --tgt_agg_method avg \
    --lm_save_path <path_to_lm_checkpoint> [--no_pretrain] \
    --ents_to_states_file <path_to_prop_encodings> \
    --eval_batchsize 256 --batchsize 32
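
For instance, probing a fine-tuned T5 LM for final-state propositions, using the proposition encodings produced above (checkpoint and encoding paths are placeholders):

python scripts/probe_textworld.py \
    --arch t5 --data tw_data/simple_traces --gamefile tw_data/simple_games \
    --probe_target final.full_belief_facts_pair --encode_tgt_state NL.t5 \
    --localizer_type belief_facts_pair_all --probe_type 3linear_classify \
    --probe_agg_method avg --tgt_agg_method avg \
    --lm_save_path <path_to_lm_checkpoint> \
    --ents_to_states_file <path_to_prop_encodings> \
    --eval_batchsize 256 --batchsize 32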

For evaluation, add --eval_only --probe_save_path <path_to_probe_checkpoint>. This will save model predictions to a .jsonl file under the same directory as the probe checkpoint.

Add --control_input for the No LM experiment.

Change --probe_target to init.full_belief_facts_pair to decode the initial state.

For remap experiments, change --probe_target to final.full_belief_facts_pair.control_with_rooms.

For decoding from just one side of propositions, replace any instance of belief_facts_pair (in --probe_target and --localizer_type) with belief_facts_single and rerun both commands (first get the full proposition encodings, then run the probe).
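
Concretely, only these flags change (everything else stays as in the two commands above; here choosing the "all" localizer variant):

--probe_target belief_facts_single                # in get_all_tw_facts.py
--probe_target final.full_belief_facts_single     # in probe_textworld.py
--localizer_type belief_facts_single_all          # in probe_textworld.py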

Saves probe checkpoints under probe_models_textworld/*.

Print Metrics

Print full metrics (state EM, entity EM, subdivided by relations vs. propositions, etc.) using scripts/print_metrics.py.

python scripts/print_metrics.py \
    --arch [bart|t5] --domain [alchemy|textworld] \
    --pred_files <path_to_model_predictions_1>,<path_to_model_predictions_2>,<path_to_model_predictions_3>,... \
    [--use_remap_domain --remap_fn <path_to_remap_model_predictions>] \
    [--single_side_probe]
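
For example, to include a remap run when printing Textworld metrics (all prediction paths are placeholders for the .jsonl files written during --eval_only runs):

python scripts/print_metrics.py \
    --arch t5 --domain textworld \
    --pred_files <path_to_model_predictions_1>,<path_to_model_predictions_2> \
    --use_remap_domain --remap_fn <path_to_remap_model_predictions>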