public repo for ESTER dataset and modeling (EMNLP'21)


Project / Paper Introduction

This is the project repo for our EMNLP'21 paper: https://arxiv.org/abs/2104.08350

Here, we provide brief descriptions of the final data and detailed instructions to reproduce results in our paper. For more details, please refer to the paper.

Data

The final data used for the experiments is saved in the ./data/ folder with train/dev/test splits. Most data fields are straightforward; just a few notes (a quick inspection sketch follows this list):

  • question_event: this field is neither provided by annotators nor used in our experiments. We simply use heuristic rules based on POS tags to extract possible events in the questions. Users are encouraged to try alternative tools such as semantic role labeling.
  • original_events and indices are the annotator-provided event triggers plus their indices in the context.
  • answer_texts and answer_indices (in train and dev) are the annotator-provided answers plus their indices in the context.
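
For a quick sanity check of these fields, something like the following works from the shell. This is a sketch: jq must be installed, the filename ./data/train.json is a guess at the split naming, and it assumes each split is a single JSON array (for JSON Lines, adapt the filter accordingly).

```bash
# Peek at the annotation fields of the first training example.
# The path below is hypothetical; use the actual filename in ./data/.
jq '.[0] | {question_event, original_events, indices, answer_texts, answer_indices}' \
  ./data/train.json
```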

Please note: the evaluation script below (Section II) only works for the dev set. Please refer to Section III for submission to our leaderboard: https://eventqa.github.io

Models

I. Install packages.

We list the packages in our environment in the env.yml file for your reference; a one-step command to recreate the environment follows the list. Below are a few key packages.

  • python=3.8.5
  • pytorch=1.6.0
  • transformers=3.1.0
  • cudatoolkit=10.1.243
  • apex=0.1
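
If you prefer to recreate the whole environment in one step, conda can build it directly from the file. A minimal sketch, assuming env.yml carries a name: field (substitute the real name for the placeholder):

```bash
# Build the environment from the provided spec, then activate it.
conda env create -f env.yml
conda activate ester   # hypothetical name; use the one under "name:" in env.yml
```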

To install apex, you can either follow the official instructions (https://github.com/NVIDIA/apex) or use conda (https://anaconda.org/conda-forge/nvidia-apex); both routes are sketched below.
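
The two routes might look like the following. This is a sketch: the pip flags below match the apex build instructions from the PyTorch 1.6 era and may have changed in newer apex revisions.

```bash
# Option 1: build apex from source with the CUDA and C++ extensions.
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

# Option 2: install the pre-built conda-forge package.
conda install -c conda-forge nvidia-apex
```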

II. Replicate results in our paper.

1. Download trained models.

For reproduction purposes, we release all trained models.

  • Download link: https://drive.google.com/drive/folders/1bTCb4gBUCaNrw2chleD4RD9JP1_DOWjj?usp=sharing.
  • We only provide models with the best hyper-parameters, and each comes with three random seeds: 5, 7, and 23.
  • Make several directories to save models: ./output/, ./output/facebook/, and ./output/allenai/ (see the command after this list).
  • For BART models, download them into ./output/facebook/.
  • For UnifiedQA models, download them into ./output/allenai/.
  • All other models can be saved directly in ./output/. This ensures the evaluation scripts below run properly.
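
The layout above can be created in one go:

```bash
# -p creates ./output/ itself plus both sub-directories (no error if they exist).
mkdir -p ./output/facebook ./output/allenai
```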

2. Zero-shot performances in Table 3.

Run bash ./code/eval_zero_shot.sh. Model options are provided in the script.

3. Generative QA Fine-tuning performances in Table 3.

Run bash ./code/eval_ans_gen.sh. Make sure the following arguments are set correctly in the script; a hypothetical settings block is sketched after this list.

  • Model options are provided in the script.
  • Set suffix="".
  • Set lrs and batch according to the model option; you can find these numbers in Appendix G of the paper.
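
For illustration, the relevant block inside the script might look like this. The values are hypothetical; the variable names follow the arguments above, but check eval_ans_gen.sh for the exact spelling and Appendix G for the numbers matching your model.

```bash
# Example settings for one model option -- illustrative values only.
model="facebook/bart-large"   # one of the model options listed in the script
suffix=""                     # empty for the full-data fine-tuning runs
lrs=5e-5                      # learning rate from Appendix G for this model
batch=8                       # batch size from Appendix G for this model
```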

4. Figure 6: UnifiedQA-large model trained with sub-samples.

Run bash ./code/eval_ans_gen.sh. Make sure the following arguments are set correctly in the script; a sweep sketch follows this list.

  • model="allenai/unifiedqa-t5-large"
  • suffix={"_500" | "_1000" | "_2000" | "_3000" | "_4000"}
  • Set lrs and batch accordingly. You can find this information in the folder names containing the trained model objects.
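
To evaluate all five sub-sample sizes in sequence, a loop like the one below could work. This assumes eval_ans_gen.sh reads suffix from the environment; if the script hard-codes the variable instead, edit it in the script for each run.

```bash
# Sweep the sub-sample suffixes; lrs and batch must still match the
# trained-model folder names for each size.
for suffix in _500 _1000 _2000 _3000 _4000; do
  suffix="$suffix" bash ./code/eval_ans_gen.sh
done
```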

5. Table 4: 500 original annotations vs. completed

  • bash ./code/eval_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500original"
  • bash ./code/eval_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500completed"
  • Set lrs and batch accordingly again.

6. Extractive QA Fine-tuning performances in Table 3.

Simply run bash ./code/eval_span_pred.sh as it is.

7. Figure 8: Extractive QA Fine-tuning performances by changing positive weights.

  • Run bash ./code/eval_span_pred.sh.
  • Set pw, lrs, and batch according to the model folder names again; a sweep sketch follows this list.
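
A sweep over positive weights can follow the same pattern. The pw values below are placeholders (read the real ones off the downloaded model folder names), and the same caveat applies: if eval_span_pred.sh hard-codes pw, edit it in the script instead.

```bash
# Hypothetical positive-weight sweep; substitute the values found in the
# trained-model folder names.
for pw in 1 2 5 10; do
  pw="$pw" bash ./code/eval_span_pred.sh
done
```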

III. Submission to ESTER Leaderboard

  • Set model_dir to your target model.
  • Run leaderboard.sh, which outputs pred_dev.json and pred_test.json under ./output.
  • If you write your own code to output predictions, make sure they follow our original sample order.
  • Email pred_test.json to us in the format specified here: https://eventqa.github.io. Sample outputs (using one of our UnifiedQA-large models) are provided under ./output. An end-to-end sketch follows this list.
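
End to end, a submission run might look like this. A sketch: the model path is hypothetical, and leaderboard.sh is assumed to live in ./code/ like the other scripts.

```bash
# 1. Edit model_dir inside leaderboard.sh to point at your trained model,
#    e.g. model_dir="./output/allenai/your-unifiedqa-large-model"  (hypothetical).
# 2. Generate the prediction files.
bash ./code/leaderboard.sh
# 3. Confirm both outputs exist before emailing pred_test.json.
ls ./output/pred_dev.json ./output/pred_test.json
```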

IV. Model Training

We also provide the model training scripts below.

1. Generative QA: Fine-tuning in Table 3.

  • Run bash ./code/run_ans_generation.sh.
  • Model options and hyper-parameter search range are provided in the script.
  • We use the --fp16 argument to activate apex for memory-efficient training, except for UnifiedQA-t5-large (trained on an A100 GPU).

2. Figure 6: UnifiedQA-large model trained with sub-samples.

  • Run bash ./code/run_ans_gen_subsample.sh.
  • Set the sample_size variable accordingly in the script.

3. Table 4: 500 original annotations vs. completed

  • Run bash ./code/run_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500original"
  • Run bash ./code/run_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500completed"

4. Extractive QA Fine-tuning in Table 3 + Figure 8

Simply run bash ./code/run_span_pred.sh as it is.

Owner: PlusLab (Peng's Language Understanding & Synthesis Lab at UCLA and USC)