Official repository for "PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation"

Overview

This repository contains the official code for the EMNLP 2020 paper:

Xinyu Hua and Lu Wang. PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation.

If you find our work useful, please cite:

@inproceedings{hua-wang-2020-pair,
    title = "PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation",
    author = "Hua, Xinyu  and
      Wang, Lu",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}

Requirements

  • Python 3.7
  • PyTorch 1.4.0
  • PyTorchLightning 0.9.0
  • transformers 3.3.0
  • numpy
  • tqdm
  • pycorenlp (for preprocessing nytimes data)
  • nltk (for preprocessing nytimes data)
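
A minimal environment setup sketch, assuming a Python 3.7 environment and the pip package names for the libraries listed above (exact pins may need adjusting to your platform):

pip install torch==1.4.0 pytorch-lightning==0.9.0 transformers==3.3.0 numpy tqdm
pip install pycorenlp nltk  # only needed for preprocessing the nytimes data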

Data

We release the datasets at the following link (1.2 GB uncompressed). Please download and uncompress the file, and place it under the ./data directory. For the opinion and news domains, The New York Times Annotated Corpus is licensed by LDC, so we only provide the train/dev/test ids. Please follow the instructions to generate the dataset.
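
A minimal sketch of the expected layout, assuming the download is a gzipped tarball (the filename pair_data.tar.gz is a placeholder; use the name of the file you actually download):

mkdir -p data
tar -xzvf pair_data.tar.gz -C data/

Depending on how the archive is structured, you may need to move the extracted folders so that they sit directly under ./data/.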

Text Planning

To train a BERT planner:

cd planning
python train.py \
    --data-path=../data/ \
    --domain=[arggen,opinion,news] \
    --exp-name=demo \
    --save-interval=1 \
    --max-epoch=30 \
    --lr=5e-4 \
    --warmup-updates=5000 \
    --train-set=train \
    --valid-set=dev \
    --tensorboard-logdir=tboard/ \
    --predict-keyphrase-offset \
    --max-samples=32 \
    [--quiet]

Here, --save-interval controls how frequently checkpoints are saved, --max-samples sets the maximum number of samples per batch, and the optional --quiet flag suppresses printing of intermediate information.

Checkpoints will be saved to checkpoints/planning/[domain]/[exp-name]. TensorBoard logs will be available under planning/tboard/.
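
To monitor training, the logs can be viewed with the standard TensorBoard command line (assuming tensorboard is installed in the environment):

cd planning
tensorboard --logdir=tboard/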

To run inference with a trained model using greedy decoding:

cd planning
python decode.py \
    --data-path=../data/ \
    --domain=arggen \
    --test-set=test \
    --max-samples=32 \
    --predict-keyphrase-offset \
    --exp-name=demo \
    [--quiet]

The results will be saved to planning/output/.

Iterative Refinement

We provide implementations for four different setups:

  • Seq2seq: prompt -> tgt
  • KPSeq2seq: prompt + kp-set -> tgt
  • PAIR-light: prompt + kp-plan + masks -> tgt
  • PAIR-full: prompt + kp-plan + template -> tgt

To train a model:

cd refinement
python train.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo \
    --tensorboard-dir=demo \
    [--quiet]
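
For example, to train the PAIR-full setup on the argument generation domain, substitute the placeholders accordingly:

cd refinement
python train.py \
    --domain=arggen \
    --setup=pair-full \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/arggen/pair-full/demo \
    --tensorboard-dir=demo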

To run iterative refinement:

cd refinement
python generate.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --test-set=test \
    --output-name=test_demo \
    --enforce-template-strategy=flexible \
    --do-sampling \
    --sampling-topk=100 \
    --sampling-topp=0.9 \
    --sample-times=3 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo

Contact

Xinyu Hua (hua.x [at] northeastern.edu)

License

See the LICENSE file for details.
