Beyond Accuracy: Behavioral Testing of NLP models with CheckList

CheckList

This repository contains code for testing NLP Models as described in the following paper:

Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh
Association for Computational Linguistics (ACL), 2020

Bibtex for citations:

 @inproceedings{checklist:acl20,
  author = {Marco Tulio Ribeiro and Tongshuang Wu and Carlos Guestrin and Sameer Singh},
  title = {Beyond Accuracy: Behavioral Testing of NLP models with CheckList},
  booktitle = {Association for Computational Linguistics (ACL)},
  year = {2020}
 }

Table of Contents

  • Installation
  • Tutorials
  • Paper tests
  • Code snippets
  • API reference
  • Code of Conduct

Installation

From pypi:

pip install checklist
jupyter nbextension install --py --sys-prefix checklist.viewer
jupyter nbextension enable --py --sys-prefix checklist.viewer

Note: --sys-prefix installs into Python's sys.prefix, which is useful in virtual environments such as conda or virtualenv. If you are not in such an environment, switch to --user to install into the user's home jupyter directories.

From source:

git clone git@github.com:marcotcr/checklist.git
cd checklist
pip install -e .

Either way, you need to install PyTorch or TensorFlow if you want to use masked language model suggestions:

pip install torch

For most tutorials, you also need to download a spacy model:

python -m spacy download en_core_web_sm

Tutorials

Please note that the visualizations are implemented as ipywidgets and don't work on Colab or JupyterLab (use Jupyter Notebook). Everything else should work on these, though.

  1. Generating data
  2. Perturbing data
  3. Test types, expectation functions, running tests
  4. The CheckList process

Paper tests

Notebooks: how we created the tests in the paper

  1. Sentiment analysis
  2. QQP
  3. SQuAD

Replicating paper tests, or running them with new models

For all of these, you need to unpack the release data (in the main repo folder after cloning):

tar xvzf release_data.tar.gz

Sentiment Analysis

Loading the suite:

import checklist
from checklist.test_suite import TestSuite
suite_path = 'release_data/sentiment/sentiment_suite.pkl'
suite = TestSuite.from_file(suite_path)

Running tests with precomputed bert predictions (replace bert in pred_path with amazon, google, microsoft, or roberta for the others):

pred_path = 'release_data/sentiment/predictions/bert'
suite.run_from_file(pred_path, overwrite=True)
suite.summary() # or suite.visual_summary_table()

To test your own model, get predictions for the texts in release_data/sentiment/tests_n500 and save them in a file where each line has 4 numbers: the prediction (0 for negative, 1 for neutral, 2 for positive) and the prediction probabilities for (negative, neutral, positive).
Then, update pred_path with this file and run the lines above.
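For reference, here is a minimal sketch of producing such a prediction file. The stand-in predict_proba, the output path, and the whitespace-separated column layout are assumptions for illustration only; replace predict_proba with your actual model:

import numpy as np

def predict_proba(texts):
    # Stand-in for your model: return (negative, neutral, positive) probabilities per text.
    return np.tile([0.1, 0.2, 0.7], (len(texts), 1))

texts = [line.rstrip('\n') for line in open('release_data/sentiment/tests_n500')]
probs = predict_proba(texts)
preds = probs.argmax(axis=1)
with open('my_sentiment_preds.txt', 'w') as f:
    for label, p in zip(preds, probs):
        f.write('%d %f %f %f\n' % (label, p[0], p[1], p[2]))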

QQP

import checklist
from checklist.test_suite import TestSuite
suite_path = 'release_data/qqp/qqp_suite.pkl'
suite = TestSuite.from_file(suite_path)

Running tests with precomputed bert predictions (replace bert in pred_path with roberta if you want):

pred_path = 'release_data/qqp/predictions/bert'
suite.run_from_file(pred_path, overwrite=True, file_format='binary_conf')
suite.visual_summary_table()

To test your own model, get predictions for pairs in release_data/qqp/tests_n500 (format: tsv) and output them in a file where each line has a single number: the probability that the pair is a duplicate.
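A minimal sketch of producing that file, assuming the first two TSV columns are the question pair and using a placeholder duplicate_probability function (swap in your model's score):

import csv

def duplicate_probability(q1, q2):
    # Stand-in for your model: return the probability that q1 and q2 are duplicates.
    return 0.5

with open('release_data/qqp/tests_n500') as f_in, open('my_qqp_preds.txt', 'w') as f_out:
    for row in csv.reader(f_in, delimiter='\t'):
        q1, q2 = row[0], row[1]
        f_out.write('%f\n' % duplicate_probability(q1, q2))

Then point run_from_file at this file with file_format='binary_conf', as shown above.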

SQuAD

import checklist
from checklist.test_suite import TestSuite
suite_path = 'release_data/squad/squad_suite.pkl'
suite = TestSuite.from_file(suite_path)

Running tests with precomputed bert predictions:

pred_path = 'release_data/squad/predictions/bert'
suite.run_from_file(pred_path, overwrite=True, file_format='pred_only')
suite.visual_summary_table()

To test your own model, get predictions for pairs in release_data/squad/squad.jsonl (format: jsonl) or release_data/squad/squad.json (format: json, like SQuAD dev) and output them in a file where each line has a single string: the prediction span.
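For illustration, a sketch for the jsonl input. The 'passage' and 'question' key names and the answer_span function are assumptions; check the file for the actual keys and substitute your QA model:

import json

def answer_span(passage, question):
    # Stand-in for your QA model: return the predicted answer span as a string.
    return ''

with open('release_data/squad/squad.jsonl') as f_in, open('my_squad_preds.txt', 'w') as f_out:
    for line in f_in:
        example = json.loads(line)
        # Key names below are assumptions, not the confirmed schema.
        span = answer_span(example.get('passage', ''), example.get('question', ''))
        f_out.write('%s\n' % span)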

Testing huggingface transformer pipelines

See this notebook.

Code snippets

Templates

See 1. Generating data for more details.

import checklist
from checklist.editor import Editor
import numpy as np
editor = Editor()
ret = editor.template('{first_name} is {a:profession} from {country}.',
                       profession=['lawyer', 'doctor', 'accountant'])
np.random.choice(ret.data, 3)

['Mary is a doctor from Afghanistan.',
'Jordan is an accountant from Indonesia.',
'Kayla is a lawyer from Sierra Leone.']

RoBERTa suggestions

See 1. Generating data for more details.
In template:

ret = editor.template('This is {a:adj} {mask}.',  
                      adj=['good', 'bad', 'great', 'terrible'])
ret.data[:3]

['This is a good idea.',
'This is a good sign.',
'This is a good thing.']

Multiple masks:

ret = editor.template('This is {a:adj} {mask} {mask}.',
                      adj=['good', 'bad', 'great', 'terrible'])
ret.data[:3]

['This is a good history lesson.',
'This is a good chess move.',
'This is a good news story.']

Getting suggestions rather than filling out templates:

editor.suggest('This is {a:adj} {mask}.',
               adj=['good', 'bad', 'great', 'terrible'])[:5]

['idea', 'sign', 'thing', 'example', 'start']

Getting suggestions for replacements (only a single text allowed, no templates):

editor.suggest_replace('This is a good movie.', 'good')[:5]

['great', 'horror', 'bad', 'terrible', 'cult']

Getting suggestions through jupyter visualization:

editor.visual_suggest('This is {a:mask} movie.')

(screenshot: visual suggestions widget)

Multilingual suggestions

Just initialize the editor with the language argument (should work with language names and ISO 639-1 codes):

import checklist
from checklist.editor import Editor
import numpy as np
# in Portuguese
editor = Editor(language='portuguese')
ret = editor.template('O João é um {mask}.',)
ret.data[:3]

['O João é um português.',
'O João é um poeta.',
'O João é um brasileiro.']

# in Chinese
editor = Editor(language='chinese')
ret = editor.template('西游记的故事很{mask}。',)
ret.data[:3]

['西游记的故事很精彩。',
'西游记的故事很真实。',
'西游记的故事很经典。']

We're using FlauBERT for French, German BERT for German, and XLM-RoBERTa for everything else (see the XLM-RoBERTa documentation for a list of supported languages). We can't vouch for the quality of the suggestions in other languages, but it seems to work reasonably well for the languages we speak (although not as well as for English).

Lexicons (somewhat multilingual)

editor.lexicons is a dictionary, which can be used in templates. For example:

import checklist
from checklist.editor import Editor
import numpy as np
# Default: English
editor = Editor()
ret = editor.template('{male1} went to see {male2} in {city}.', remove_duplicates=True)
list(np.random.choice(ret.data, 3))

['Dan went to see Hugh in Riverside.',
'Stephen went to see Eric in Omaha.',
'Patrick went to see Nick in Kansas City.']

Person names and location (country, city) names are multilingual, depending on the editor language. We got the data from Wikidata, so there is a bias towards names that appear on Wikipedia.

editor = Editor(language='german')
ret = editor.template('{male1} went to see {male2} in {city}.', remove_duplicates=True)
list(np.random.choice(ret.data, 3))

['Rolf went to see Klaus in Leipzig.',
'Richard went to see Jörg in Marl.',
'Gerd went to see Fritz in Schwerin.']

List of available lexicons:

editor.lexicons.keys()

dict_keys(['male', 'female', 'first_name', 'first_pronoun', 'last_name', 'country', 'nationality', 'city', 'religion', 'religion_adj', 'sexual_adj', 'country_city', 'male_from', 'female_from', 'last_from'])

Some of these cannot be used directly in templates because they are themselves dictionaries. For example, male_from, female_from, last_from and country_city are dictionaries from country to male names, female names, last names and most populous cities.
You can call editor.lexicons.male_from.keys() for a list of country names. Example usage:

import numpy as np
countries = ['France', 'Germany', 'Brazil']
for country in countries:
    ts = editor.template('{male} {last} is from {city}',
                male=editor.lexicons.male_from[country],
                last=editor.lexicons.last_from[country],
                city=editor.lexicons.country_city[country],
               )
    print('Country: %s' % country)
    print('\n'.join(np.random.choice(ts.data, 3)))
    print()

Country: France
Jean-Jacques Brun is from Avignon
Bruno Deschamps is from Vitry-sur-Seine
Ernest Picard is from Chambéry

Country: Germany
Rainer Braun is from Schwerin
Markus Brandt is from Gera
Reinhard Busch is from Erlangen

Country: Brazil
Gilberto Martins is from Anápolis
Alfredo Guimarães is from Indaiatuba
Jorge Barreto is from Fortaleza

Perturbing data for INVs and DIRs

See 2. Perturbing data for more details.
Custom perturbation function:

import re
import checklist
from checklist.perturb import Perturb
def replace_john_with_others(x, *args, **kwargs):
    # Returns None (example is skipped) if John is not present; otherwise a list of strings with John replaced by Luke and Mark
    if not re.search(r'\bJohn\b', x):
        return None
    return [re.sub(r'\bJohn\b', n, x) for n in ['Luke', 'Mark']]

dataset = ['John is a man', 'Mary is a woman', 'John is an apostle']
ret = Perturb.perturb(dataset, replace_john_with_others)
ret.data

[['John is a man', 'Luke is a man', 'Mark is a man'],
['John is an apostle', 'Luke is an apostle', 'Mark is an apostle']]

General purpose perturbations (see tutorial for more):

import spacy
nlp = spacy.load('en_core_web_sm')
pdataset = list(nlp.pipe(dataset))
ret = Perturb.perturb(pdataset, Perturb.change_names, n=2)
ret.data

[['John is a man', 'Ian is a man', 'Robert is a man'],
['Mary is a woman', 'Katherine is a woman', 'Alexandra is a woman'],
['John is an apostle', 'Paul is an apostle', 'Gabriel is an apostle']]

ret = Perturb.perturb(pdataset, Perturb.add_negation)
ret.data

[['John is a man', 'John is not a man'],
['Mary is a woman', 'Mary is not a woman'],
['John is an apostle', 'John is not an apostle']]

Creating and running tests

See 3. Test types, expectation functions, running tests for more details.

MFT:

import checklist
from checklist.editor import Editor
from checklist.perturb import Perturb
from checklist.test_types import MFT, INV, DIR
editor = Editor()

t = editor.template('This is {a:adj} {mask}.',  
                      adj=['good', 'great', 'excellent', 'awesome'])
test1 = MFT(t.data, labels=1, name='Simple positives',
           capability='Vocabulary', description='')

INV:

dataset = ['This was a very nice movie directed by John Smith.',
           'Mary Keen was brilliant.',
           'I hated everything about this.',
           'This movie was very bad.',
           'I really liked this movie.',
           'just bad.',
           'amazing.',
           ]
t = Perturb.perturb(dataset, Perturb.add_typos)
test2 = INV(**t)

DIR:

from checklist.expect import Expect
def add_negative(x):
    phrases = ['Anyway, I thought it was bad.', 'Having said this, I hated it', 'The director should be fired.']
    return ['%s %s' % (x, p) for p in phrases]

t = Perturb.perturb(dataset, add_negative)
monotonic_decreasing = Expect.monotonic(label=1, increasing=False, tolerance=0.1)
test3 = DIR(**t, expect=monotonic_decreasing)

Running tests directly:

from checklist.pred_wrapper import PredictorWrapper
# wrapped_pp returns a tuple with (predictions, softmax confidences)
wrapped_pp = PredictorWrapper.wrap_softmax(model.predict_proba)
test.run(wrapped_pp)
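Here, model.predict_proba is assumed to take a list of texts and return one row of softmax probabilities per text (sklearn-style). A toy stand-in, purely for illustration, might look like this:

import numpy as np

class DummyModel:
    def predict_proba(self, texts):
        # Uniform probabilities over 3 classes (negative, neutral, positive), one row per text.
        return np.ones((len(texts), 3)) / 3

model = DummyModel()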

Running from a file:

# One line per example
test.to_raw_file('/tmp/raw_file.txt')
# each line has prediction probabilities (softmax)
test.run_from_file('/tmp/softmax_preds.txt', file_format='softmax', overwrite=True)
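For illustration, a sketch of producing softmax_preds.txt from the raw file, reusing the toy model above; one space-separated row of probabilities per line is an assumption about the layout:

texts = [line.rstrip('\n') for line in open('/tmp/raw_file.txt')]
probs = model.predict_proba(texts)
with open('/tmp/softmax_preds.txt', 'w') as f:
    for row in probs:
        f.write(' '.join('%f' % p for p in row) + '\n')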

Summary of results:

test.summary(n=1)

Test cases: 400
Fails (rate): 200 (50.0%)

Example fails:
0.2 This is a good idea

Visual summary:

test.visual_summary()

(screenshot: test visual summary)

Saving and loading individual tests:

# save
test.save(path)
# load
test = MFT.from_file(path)

Custom expectation functions

See 3. Test types, expectation functions, running tests for more details.

If you are writing a custom expectation function, it must return a float or bool for each example, such that:

  • > 0 (or True) means passed,
  • <= 0 or False means fail, and (optionally) the magnitude of the failure, indicated by distance from 0, e.g. -10 is worse than -1
  • None means the test does not apply, and this should not be counted

Expectation on a single example:

def high_confidence(x, pred, conf, label=None, meta=None):
    return conf.max() > 0.95
expect_fn = Expect.single(high_confidence)

Expectation on pairs of (orig, new) examples (for INV and DIR):

def changed_pred(orig_pred, pred, orig_conf, conf, labels=None, meta=None):
    return pred != orig_pred
expect_fn = Expect.pairwise(changed_pred)
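A custom expectation function is then attached to a test through the expect argument, as in the DIR example above:

test = DIR(**t, expect=expect_fn)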

There's also Expect.testcase and Expect.test, amongst many others.
Check out expect.py for more details.

Test Suites

See 4. The CheckList process for more details.

Adding tests:

from checklist.test_suite import TestSuite
# assuming test exists:
suite.add(test)

Running a suite is the same as running an individual test, either directly or through a file:

from checklist.pred_wrapper import PredictorWrapper
# wrapped_pp returns a tuple with (predictions, softmax confidences)
wrapped_pp = PredictorWrapper.wrap_softmax(model.predict_proba)
suite.run(wrapped_pp)
# or suite.run_from_file, see examples above

To visualize results, you can call suite.summary() (same as test.summary), or suite.visual_summary_table(). This is what the latter looks like for BERT on sentiment analysis:

suite.visual_summary_table()

(screenshot: suite visual summary table)

Finally, it's easy to save, load, and share a suite:

# save
suite.save(path)
# load
suite = TestSuite.from_file(path)

API reference

On readthedocs

Code of Conduct

Microsoft Open Source Code of Conduct
