Benchmark for evaluating open-ended generation

Overview

OpenMEVA

Contributed by Jian Guan and Zhexin Zhang. Thanks to Jiaxin Wen for debugging.

OpenMEVA is a benchmark for evaluating open-ended story generation metrics (please refer to the Paper List for more information about Open-eNded Language Generation tasks), described in the paper: OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics (ACL 2021 Long Paper). OpenMEVA also provides an open-source and extensible toolkit for metric implementation, evaluation, comparison, and analysis, as well as data perturbation techniques to help generate large numbers of customized test cases. We expect the toolkit to empower fast development of automatic metrics.

Contents

Introduction for Language Generation Evaluation

Since human evaluation is time-consuming, expensive, and difficult to reproduce, the community commonly uses automatic metrics for evaluation. We roughly divide existing metrics into the following categories:

  • Previous studies in conditional language generation tasks (e.g., machine translation) have developed several successful referenced metrics, which roughly quantify the lexical overlap (e.g., BLEU) or semantic similarity (e.g., BERTScore) between a generated sample and the reference.
  • Referenced metrics correlate poorly with human judgments when evaluating open-ended language generation. Specifically, a generated sample can be reasonable as long as it is coherent with the given input and self-consistent within its own context, without necessarily being lexically or semantically similar to the reference. To address this one-to-many issue, unreferenced metrics (e.g., UNION) have been proposed to measure the quality of a generated sample without any reference.
  • Besides, some researchers propose to combine referenced and unreferenced metrics, i.e., hybrid metrics, which usually average two individual metric scores (e.g., RUBER) or learn from human preference (e.g., ADEM); a toy sketch of this averaging follows this list. However, ADEM has been reported to lack generalization and robustness with limited human annotation.
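
For intuition only, the averaging idea behind such hybrid metrics can be sketched as follows (a toy illustration, not RUBER's actual formulation):

def hybrid_score(referenced_score, unreferenced_score):
    # toy hybrid metric: average a referenced score (similarity to the
    # reference) and an unreferenced score (e.g., coherence with the input)
    return (referenced_score + unreferenced_score) / 2

print(hybrid_score(0.4, 0.9))  # 0.65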

The existing generation models are still far from human ability to generate reasonable texts, particularly for open-ended language generation tasks such as story generation. One important factor that hinders this research is the lack of powerful metrics for measuring generation quality. Therefore, we propose OpenMEVA as a standard paradigm for measuring the progress of metrics.

Install

Clone the repository from our GitHub page (don't forget to star us!):

git clone https://github.com/thu-coai/OpenMEVA.git

Then install all the requirements:

pip install -r requirements.txt

Then install the package with

python setup.py install

If you also want to modify the code, run this:

python setup.py develop

Toolkit

I. Metrics Interface

1. Metric List

We publish standard implementations of a suite of metrics, including BLEU, Forward Perplexity, RUBER (RNN and BERT versions), and UNION; please see the repository for the full list.

2. Usage

It is handy to construct a metric object and use it to evaluate given examples:

from eva.bleu import BLEU
metric = BLEU()

# for more information about the metric
print(metric.info)

# data is a list of dictionary [{"context": ..., "candidate":..., "reference": ...}]
print(metric.compute(data))

We provide the Python file test.py as a guide to accessing the API.

These metrics are not exhaustive; they are a starting point for further metric research. We welcome pull requests for other metrics (only three methods need to be implemented: __init__, info, and compute); a rough template is sketched below.
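
As a rough template (a sketch only; the actual base class, attribute names, and return format used in the repository may differ), a contributed metric could look like this hypothetical example:

# a minimal sketch of a custom metric following the three-method convention;
# everything except the __init__/info/compute names is an assumption
class AverageLength:
    def __init__(self):
        # hypothetical configuration; real metrics may take model paths, etc.
        self.metric_name = "average_length"

    @property
    def info(self):
        # human-readable description, analogous to metric.info above
        return {"name": self.metric_name,
                "description": "average candidate length in tokens"}

    def compute(self, data):
        # data: [{"context": ..., "candidate": ..., "reference": ...}, ...]
        lengths = [len(d["candidate"].split()) for d in data]
        return {self.metric_name: sum(lengths) / max(len(lengths), 1)}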

3. Training Learnable Metrics

Execute the following commands to train the learnable metrics:

cd ./eva/model

# training language model for computing forward perplexity
bash ./run_language_modeling.sh

# training the unreferenced model for computing RUBER (RNN version)
bash ./run_ruber_unrefer.sh

# training the unreferenced model for computing RUBER (BERT version)
bash ./run_ruber_unrefer_bert.sh

# training the model for computing UNION
bash ./run_union.sh
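
After training, the learned metrics can presumably be loaded and queried through the same interface as the other metrics. The import path and constructor argument below are assumptions (check the actual module layout under eva/), so this is only a sketch:

# hypothetical usage of a trained UNION model; the import path and the
# checkpoint argument are assumptions, not a documented API
from eva.union import UNION

metric = UNION(model_path="./eva/model/union_output")  # hypothetical path
print(metric.info)
print(metric.compute(data))  # data in the same dictionary format as above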

II. Evaluating Human Scores

The Python file test.py also includes detailed instructions for accessing the API for evaluating human scores.

1. Constructing

from eva.heva import Heva

# list of all possible human scores (int/float/str).
all_possible_score_list = [1,2,3,4,5]

# construct an object for following evaluation
heva = Heva(all_possible_score_list)

2. Consistency of human scores

# list of human score lists; each row includes all the human scores for one example
human_score_list = [[1,3,2], [1,3,3], [2,3,1], ...]

print(heva.consistency(human_score_list))
# {"Fleiss's kappa": ..., "ICC correlation": ..., "Kendall-w":..., "krippendorff's alpha":...}
# the results include the correlation values and p-values for significance tests.

3. Mean test for scores of examples from different sources

# list of metric scores (float)
metric_score_1, metric_score_2 = [3.2, 2.4, 3.1,...], [3.5, 1.2, 2.3, ...]

# T-test for the means of two independent samples of scores.
print(heva.mean_test(metric_score_1, metric_score_2))
# {"t-statistic": ..., "p-value": ...}

4. Distribution of human scores

# list of human scores (float)
human_score = [2.0, 4.2, 1.2, 4.9, 2.6, 3.1, 4.0, 1.5,...]

# path for saving the figure of distribution
figure_path = "./figure"

# indicating the source of the annotated examples. default: ""
model_name = "gpt"

# plot the figure of distribution of human scores
heva.save_distribution_figure(score=human_score, save_path=figure_path, model_name=model_name, ymin=0, ymax=50)

5. Correlation between human and metric scores

# list of human scores (float)
human_score = [2.0, 4.2, 1.2, 4.9, 2.6, 3.1, 4.0, 1.5,...]

# list of metric scores (float)
metric_score = [3.2, 2.4, 3.1, 3.5, 1.2, 2.3, 3.5, 1.1,...]

# computing correlation
print(heva.correlation(metric_score, human_score))

# path for saving the figure of distribution
figure_path = "./figure"

# indicating the source of the metric scores. default: ""
metric_name = "bleu"

# plot the figure of metric score vs. human scores
heva.save_correlation_figure(human_score, metric_score, save_path=figure_path, metric_name=metric_name)
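
Putting the pieces together, a typical workflow scores the same examples with a metric and with human annotators and then inspects their correlation. The sketch below assumes compute returns one score per example, which may differ from the actual return format in the repository:

from eva.bleu import BLEU
from eva.heva import Heva

# toy aligned example lists (replace with real annotated data)
data = [
    {"context": "jane was hungry .", "candidate": "she made a sandwich .",
     "reference": "she cooked dinner ."},
    {"context": "tom lost his keys .", "candidate": "the ocean is blue .",
     "reference": "he searched the house ."},
]
human_score = [4.0, 1.0]  # one aggregated human score per example

heva = Heva([1, 2, 3, 4, 5])
metric_output = BLEU().compute(data)

# assumption: a per-example score list can be derived from the metric output;
# adapt this line to compute's actual return format
metric_score = metric_output

print(heva.correlation(metric_score, human_score))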

III. Perturbation Techniques

1. Perturbation List

We provide perturbation techniques in the following aspects to create large-scale test cases for evaluating the comprehensive capabilities of metrics:

  • Lexical repetition:

    • Repeating n-grams or sentences:

      He stepped on the stage and stepped on the stage.
  • Semantic repetition:

    • Repeating sentences with paraphrases by back translation:

      He has been from Chicago to Florida. He moved to Florida from Chicago.

  • Character behavior:

    • Reordering the subject and object of a sentence:

      Lars looked at the girl with desire. → The girl looked at Lars with desire.
    • Substituting the personal pronouns referring to other characters:

      her mother took them to ... → their mother took her to ...
  • Common sense:

    • Substituting the head or tail entities in a commonsense triple of ConceptNet:

      Martha puts her dinner into the oven. She lays down for a quick nap. She oversleeps and runs into the kitchen (→ garden) to take out her burnt dinner.
  • Consistency:

    • Inserting or Deleting negation words or prefixes:

      She had (→ did not have) money to get vaccinated. She had a flu shot ...
      She agreed (→ disagreed) to get vaccinated.
    • Substituting words with antonyms:

      She is happy (→ upset) that she had a great time ...
  • Coherence:

    • Substituting words, phrases or sentences:

      Christmas was very soon. Kelly wanted to put up the Christmas tree. (→ Eventually it went into remission.)
  • Causal Relationship:

    • Reordering the cause and effect:

      the sky was clear so he could see clearly the boat. → he could see clearly the boat so the sky was clear.
    • Substituting the causality-related words randomly:

      the sky was clear so (→ because) he could see clearly the boat.
  • Temporal Relationship:

    • Reordering two sequential events:

      I eat one bite. Then I was no longer hungry. → I was no longer hungry. Then I eat one bite.
    • Substituting the time-related words:

      After (→ Before) eating one bite I was no longer hungry.
  • Synonym:

    • Substituting a word with its synonym:

      I just purchased (→ bought) my uniforms.
  • Paraphrase:

    • Substituting a sentence with its paraphrase by back translation:

      Her dog doesn't shiver anymore. → Her dog stops shaking.
  • Punctuation:

    • Inserting or Deleting an inessential punctuation mark:

      Eventually, (→ Eventually) he became very hungry.
  • Contraction:

    • Contracting or Expanding contraction:

      I’ll (→ I will) have to keep waiting.
  • Typo:

    • Swapping two adjacent characters:

      that orange (→ ornage) broke her nose.
    • Repeating or Deleting a character:

      that orange (→ orannge) broke her nose.

2. Usage

It is handy to construct a perturbation object and use it to perturb given examples:

from eva.perturb.perturb import *
custom_name = "story"
method = add_typos(custom_name)

# data is a list of dictionary [{"id":0, "ipt": ..., "truth":...}, ...]
print(method.construct(data))
# the perturbed examples are saved under the directory named by custom_name

We provide the Python file test_perturb.py as a guide to accessing the API.

You can download the dependent files for some perturbation techniques by executing the following commands:

cd ./eva/perturb
bash ./download.sh

You can also download them by THUCloud or Google Drive.

These perturbation techniques are not exhaustive; they are a starting point for further evaluation research. We welcome pull requests for other perturbation techniques (only two methods need to be implemented: __init__ and construct); a rough template is sketched below.
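
As with the metrics, a rough template for a new perturbation technique (a sketch only; the output-writing behavior and any base class in the repository are not reproduced here) could look like:

import random

# a minimal sketch of a custom perturbation following the two-method convention;
# everything except the __init__/construct names is an assumption
class shuffle_sentences:
    def __init__(self, name):
        # name plays the same role as custom_name in the example above
        self.name = name

    def construct(self, data):
        # data: [{"id": 0, "ipt": ..., "truth": ...}, ...]
        perturbed = []
        for example in data:
            sentences = example["truth"].split(" . ")
            random.shuffle(sentences)
            perturbed.append({"id": example["id"],
                              "ipt": example["ipt"],
                              "truth": " . ".join(sentences)})
        return perturbed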

Note 📑 We adopt uda for back translation. We provide an example, eva/perturb/back_trans_data/story_bt.json, to indicate the format of the back-translation result. You can download the results for ROCStories and WritingPrompts by THUCloud or Google Drive.

Benchmark

I. Datasets

1. Machine-Generated Stories (MAGS) with manual annotation

We provide annotated stories from ROCStories (ROC) and WritingPrompts (WP). Some statistics are as follows:

Boxplot of annotation scores for each story source (Left: ROC, Right: WP):

2. Auto-Constructed Stories (ACTS)

We create large-scale test examples based on ROC and WP with the aforementioned perturbation techniques. ACTS includes examples for two different test types, i.e., the discrimination test and the invariance test.

  • The discrimination test requires metrics to distinguish human-written positive examples from negative ones. We create each negative example by applying a perturbation within an individual aspect. Besides, we also select different positive examples targeted at the corresponding aspects. The table below shows the numbers of positive and negative examples in different aspects.

  • The invariance test expects metric judgments to remain the same when we apply rationality-preserving perturbations, which have almost no influence on the quality of the examples. The original examples can be either human-written stories or the negative examples created for the discrimination test. The table below shows the numbers of original (and perturbed) positive and negative examples in different aspects. A simple sketch of both checks follows this list.
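
For intuition, the two test types can be checked roughly as follows (a simple sketch, not the exact protocol implemented in the benchmark scripts):

def discrimination_accuracy(pos_scores, neg_scores):
    # fraction of aligned (positive, negative) pairs where the metric
    # assigns the positive example a higher score
    pairs = list(zip(pos_scores, neg_scores))
    return sum(p > n for p, n in pairs) / max(len(pairs), 1)

def invariance_gap(original_scores, perturbed_scores):
    # mean absolute score change under rationality-preserving perturbations;
    # a smaller gap means the metric is more robust
    gaps = [abs(o - p) for o, p in zip(original_scores, perturbed_scores)]
    return sum(gaps) / max(len(gaps), 1)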

3. Download & Data Instruction

You can download the whole dataset by THUCloud or Google Drive.

├── data
   └── `mags_data`
       ├── `mags_roc.json`	# sampled stories and corresponding human annotation.   
       ├── `mags_wp.json`		# sampled stories and corresponding human annotation.       
   └── `acts_data`
       ├── `roc`
              └── `roc_train_ipt.txt`	# input for training set
              └── `roc_train_opt.txt`	# output for training set
              └── `roc_valid_ipt.txt`	# input for validation set
              └── `roc_valid_opt.txt`	# output for validation set
              └── `roc_test_ipt.txt`	# input for test set
              └── `roc_test_opt.txt`	# output for test set
              └── `discrimination_test`                        
                 ├── `roc_lexical_rept.txt`
                 ├── `roc_lexical_rept_perturb.txt`										
                 ├── `roc_semantic_rept.txt`
                 ├── `roc_semantic_rept_perturb.txt`
                 ├── `roc_character.txt`
                 ├── `roc_character_perturb.txt`
                 ├── `roc_commonsense.txt`
                 ├── `roc_commonsense_perturb.txt`												
                 ├── `roc_coherence.txt`
                 ├── `roc_coherence_perturb.txt`
                 ├── `roc_consistency.txt`
                 ├── `roc_consistency_perturb.txt`								
                 ├── `roc_cause.txt`
                 ├── `roc_cause_perturb.txt`       										
                 ├── `roc_time.txt`
                 ├── `roc_time_perturb.txt`                    
              └── `invariance_test`
                 ├── `roc_synonym_substitute_perturb.txt`
                 ├── `roc_semantic_substitute_perturb.txt`
                 ├── `roc_contraction_perturb.txt`
                 ├── `roc_delete_punct_perturb.txt`
                 ├── `roc_typos_perturb.txt`
                 ├── `roc_negative_sample.txt`	# sampled negative samples from the discrimination test.	
                 ├── `roc_negative_sample_synonym_substitute_perturb.txt`
                 ├── `roc_negative_sample_semantic_substitute_perturb.txt`
                 ├── `roc_negative_sample_contraction_perturb.txt`
                 ├── `roc_negative_sample_delete_punct_perturb.txt`
                 ├── `roc_negative_sample_typos_perturb.txt`
       ├── `wp`
              └── ...

II. Tasks

OpenMEVA includes a suite of tasks to test comprehensive capabilities of metrics:

1. Correlation with human scores (based on MAGS)

2. Generalization across generation models and datasets (for learnable metrics, based on MAGS)

3. Judgment in general linguistic features (based on the discrimination test set of ACTS)

4. Robustness to rationality-preserving perturbations (based on the invariance test set of ACTS)

Note: The smaller the absolute value of the correlation, the better.

5. Fast Test

You can test these capabilities of new metrics with the following commands:

cd ./benchmark

# test correlation with human scores and generalization
python ./corr_gen.py

# test judgment
python ./judge.py

# test robustness
python ./robust.py

We take BLEU and Forward Perplexity as examples in these Python files. You can test your own metrics with minor modifications.

How to Cite

@misc{guan2021openmeva,
      title={OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics}, 
      author={Jian Guan and Zhexin Zhang and Zhuoer Feng and Zitao Liu and Wenbiao Ding and Xiaoxi Mao and Changjie Fan and Minlie Huang},
      year={2021},
      eprint={2105.08920},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

It's our honor to help you better explore language generation evaluation with our toolkit and benchmark.
