Ecco is a python library for exploring and explaining Natural Language Processing models using interactive visualizations.

Overview

Ecco provides multiple interfaces that help explain and build intuition about Transformer-based language models. Read: Interfaces for Explaining Transformer Language Models.

Ecco runs inside Jupyter notebooks. It is built on top of PyTorch and Hugging Face transformers.

Ecco is not concerned with training or fine-tuning models; it focuses solely on exploring and understanding existing pre-trained models. The library is currently an alpha release of a research project. You're welcome to contribute to make it better!

Documentation: ecco.readthedocs.io
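A minimal quick-start, assuming a standard `pip install ecco` and a small GPT-2 checkpoint (the model ID here is illustrative; the prompt is reused from an example later in this page):

    import ecco

    # Wrap a Hugging Face model with Ecco's tracing hooks.
    # activations=True also captures FFNN neuron activations during generation.
    lm = ecco.from_pretrained('distilgpt2', activations=True)

    # Generate 20 tokens; the result is an OutputSeq object whose methods
    # (saliency, rankings, layer predictions, NMF) drive the views shown below.
    output = lm.generate("The countries of the European Union are:\n1. Austria\n2. Belgium\n3.",
                         generate=20, do_sample=True)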

Features

  • Support for a wide variety of language models (GPT2, BERT, RoBERTa, T5, T0, and others).
  • Ability to add your own local models (if they're based on Hugging Face pytorch models).
  • Feature attribution (IntegratedGradients, Saliency, InputXGradient, DeepLift, DeepLiftShap, GuidedBackprop, GuidedGradCam, Deconvolution, and LRP via Captum)
  • Capture neuron activations in the FFNN layer in the Transformer block
  • Identify and visualize neuron activation patterns (via Non-negative Matrix Factorization)
  • Examine neuron activations via comparisons of activation spaces using SVCCA, PWCCA, and CKA (a minimal CKA sketch follows this list)
  • Visualizations for:
    • Evolution of processing a token through the layers of the model (Logit lens)
    • Candidate output tokens and their probabilities (at each layer in the model)
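Linear CKA, one of the comparison metrics listed above, is compact enough to sketch. This is a minimal NumPy version for intuition, not Ecco's own implementation:

    import numpy as np

    def linear_cka(X, Y):
        # Linear Centered Kernel Alignment between two activation matrices
        # of shape (n_examples, n_neurons); 1.0 means identical spaces.
        X = X - X.mean(axis=0)  # center each neuron's activations
        Y = Y - Y.mean(axis=0)
        # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
        num = np.linalg.norm(X.T @ Y, 'fro') ** 2
        den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
        return num / den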

Examples:

What is the sentiment of this film review?

Use a large language model (T5 in this case) to detect text sentiment. In addition to the sentiment, see the tokens the model broke the text into (which can help debug some edge cases).
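A sketch of how this can look in Ecco; the model ID and the SST-2-style prompt prefix are illustrative assumptions, not taken from the original:

    import ecco

    # T5 frames classification as text-to-text generation.
    lm = ecco.from_pretrained('t5-base')
    output = lm.generate("sst2 sentence: This film was a breath of fresh air.", generate=2)
    print(output.tokens)  # inspect how the model broke the text into tokens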

Which words in this review lead the model to classify its sentiment as "negative"?

Feature attribution using Integrated Gradients helps you explore model decisions. In this case, switching "weakness" to "inclination" allows the model to correctly switch the prediction to positive.
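A hedged sketch of requesting Integrated Gradients attributions; the keyword names follow Ecco's Captum-backed options but may vary by version, and the prompt is illustrative:

    # Continuing from a loaded `lm` (see the quick-start above).
    output = lm.generate("The review has one weakness: it", generate=1,
                         attribution=['ig'])       # compute Integrated Gradients
    output.primary_attributions(attr_method='ig')  # interactive attribution view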

Explore the world knowledge of GPT models by posing fill-in-the-blank questions.

[Figure: asking GPT-2 where Heathrow Airport is]

Does GPT2 know where Heathrow Airport is? Yes. It does.

What other cities/words did the model consider in addition to London?

[Figure: the model also considered Birmingham and Manchester]

Visualize the candidate output tokens and their probability scores.
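One way to produce this view, assuming an `output` from lm.generate as in the quick-start above and the `layer_predictions` method shown in the Gallery below (the position and topk values are illustrative):

    # Candidate tokens and their probabilities at a given output position,
    # scored at each layer of the model.
    output.layer_predictions(position=6, topk=10)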

Which input words lead it to think of London?

[Figure: input saliency for the Heathrow Airport prompt]

At which layers did the model gather confidence that London is the right answer?

[Figure: the rank of the token at each layer; layer 11 makes it #1]

The model chose London by making it the highest-probability token (ranked #1) after the final layer. How much did each layer contribute to increasing London's ranking? This is a logit lens visualization that helps explore the activity of the model's different layers.
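The corresponding call, assuming the `rankings` method referenced in the Gallery below:

    # For each generated token, plot the rank each layer assigned it;
    # for "London", the rank should reach #1 by the final layer.
    output.rankings()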

What are the patterns in BERT neuron activation when it processes a piece of text?

[Figure: colored line graphs on the left, a piece of text on the right; the line graphs indicate the activation of BERT neuron groups in response to the text]

A group of neurons in BERT tend to fire in response to commas and other punctuation. Other groups of neurons tend to fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or in the entire model.
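A sketch of this factorization, assuming the model was loaded with activations=True; run_nmf and explore follow the pattern in Ecco's example notebooks:

    # Factor the captured FFNN activations into 10 components
    # and open the interactive view that highlights the text.
    nmf = output.run_nmf(n_components=10)
    nmf.explore()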

Read the paper:

Ecco: An Open Source Library for the Explainability of Transformer Language Models. Association for Computational Linguistics (ACL) System Demonstrations, 2021.

Tutorials

How-to Guides

API Reference

The API reference and the architecture page explain Ecco's components and how they work together.

Gallery & Examples

Predicted Tokens: View the model's prediction for the next token (with probability scores). See how the predictions evolved through the model's layers. [Notebook] [Colab]


Rankings across layers: After the model picks an output token, look back at how each layer ranked that token. [Notebook] [Colab]


Layer Predictions: Compare the rankings of multiple tokens as candidates for a certain position in the sequence. [Notebook] [Colab]


Primary Attributions: How much did each input token contribute to producing the output token? [Notebook] [Colab]


Detailed Primary Attributions: See more precise input attribution values using the detailed view. [Notebook] [Colab]


Neuron Activation Analysis: Examine underlying patterns in neuron activations using non-negative matrix factorization. [Notebook] [Colab]

Getting Help

Having trouble?

  • The Discussion board might have some relevant information. If not, you can post your questions there.
  • Report bugs at Ecco's issue tracker

BibTeX for citations:

@inproceedings{alammar-2021-ecco,
    title = "Ecco: An Open Source Library for the Explainability of Transformer Language Models",
    author = "Alammar, J",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
Comments
  • Support for T5-like Seq2SeqLM

    Hello, I was wondering if there are any plans for explicit encoder-decoder models like T5. Although T5 was not pre-trained with an auto-regressive LM objective, it is a pretty good candidate for ecco's generate method. I tried running T5 as it was listed in model-config.yaml but soon ran into issues because the current implementation is very much suited to GPT-like models.

    I made some changes on a fork to get attribution working, but not sure if I did it correctly https://colab.research.google.com/drive/1zahIWgOCySoQXQkAaEAORZ5DID11qpkH?usp=sharing https://github.com/chiragjn/ecco/tree/t5_exp

    I would love to contribute to add support with some help, especially on the overall implementation design

    opened by chiragjn 8
  • Adds a model config field use_causal_lm and config entries for gpt-neo

    Adding gpt-neo models to model-config.yaml failed because the model needs to be loaded using AutoModelForCausalLM, but init identified such models by looking for gpt2 in the name. A TODO comment in init mentioned using config instead. I refactored config loading slightly to enable this - not sure if that is the direction you intended or not.

    opened by stprior 8
  • Add a `conda` install option for `ecco`

    A conda install option for ecco could be helpful for two reasons:

    1. Easy installation and version management with conda.
    2. If other libraries that depend on ecco are to be published on the conda-forge channel, ecco must be available on conda-forge as well.

    I have already started work on this. PR: https://github.com/conda-forge/staged-recipes/pull/17388

    Once the PR gets merged, you will be able to install ecco with:

    conda install -c conda-forge ecco
    

    I will send a PR to update your documentation, once the PR gets merged.

    opened by sugatoray 7
  • Add support for PEGASUS model

    I would like to add support for PEGASUS in model-config.yaml.

    The PEGASUS model is an encoder-decoder type whose implementation is completely inherited from BartForConditionalGeneration, so its config is similar to the BART model's.

    Notes: This is my first time making a pull request on an open-source project, but I hope this helps!

    opened by thomas-chong 6
  • Add support for Integrated Gradients explainability method

    In this PR, @SSamDav and I add support for the IG algorithm and reuse the same visualization plots used for input saliency. We also fix a saliency visualization bug for enc-dec models that was not addressed in the previous PR.

    Notes:

    • The generate method became even slower with the IG method. We added an option to choose which attribution method to calculate, but it can be further improved. Maybe the visualization could be coupled with the generation itself.
    • The IG score has a convergence delta error that could be shown in the plot or, for example, be used to change the IG default parameters when a minimum error is not met.
    opened by JoaoLages 5
  • attention head

    Hi @jalammar, I tested some examples with Ecco, and I wanted to know: is it possible to change the attention head, to view the activations for each head and each layer?

    opened by afcarvallo 5
  • Add support for more attribution methods

    Hi, currently the project seems to rely on grad-norm and grad-x-input to obtain attributions. However, there are other, arguably better (as discussed in recent work) methods to obtain saliency maps. Integrating them into this project would also provide a good way to compare them on the same input examples.

    Some of these methods, off the top of my head, are integrated gradients, gradient Shapley, and LIME. Perhaps support for visualizing the attention map of the model being interpreted could also be added. Methods based on feature ablation are also possible, but they might need more work to integrate.

    There is support for these aforementioned methods in Captum, but it takes effort to get them working for NLP tasks, especially those based on language modeling. Thus, I feel this would be a useful addition here.

    enhancement help wanted 
    opened by RachitBansal 5
  • token prefix in roberta model?

    Trying to use a custom-trained RoBERTa model by loading the config file, but getting an error that the token prefix is not present in the config. Any idea how to fix it? [screenshot]

    opened by sarthusarth 4
  • output.saliency() displays nothing

    I am trying to visualize saliency maps from a custom GPT model. Since I am concerned only with saliency maps, I just do the following:

    out = OutputSeq(token_ids=input_ids, n_input_tokens=n_input_tokens, tokens=tokens, attribution=attr)
    out.saliency()
    

    I get no errors and nothing is displayed in the Jupyter notebook, but when I open Chrome's JavaScript console, I see the following.

    
    (unknown) Ecco initialize.

    l                @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    autoTextColor    @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    each             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    style            @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    enter            @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    join             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    setupTokenBoxes  @ storage.googleapis.c…ust=1610606118793:1
    init             @ storage.googleapis.c…ust=1610606118793:1
    eval
    execCb           @ require.js:1693
    check            @ require.js:881
    enable           @ require.js:1173
    init             @ require.js:786
    (anonymous)      @ require.js:1457

    DevTools failed to load SourceMap: Could not load content for http://localhost:8888/static/notebook/js/main.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    DevTools failed to load SourceMap: Could not load content for https://storage.googleapis.com/wandb-cdn/production/d4e2434e6/raven.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE

    How do I resolve this issue? By the way, I am running this notebook by SSHing into my institute's remote machine.

    opened by VirajBagal 4
  • Tell pip to install from setup.py

    Forces pip install -r requirements.txt to install the same package versions specified in setup.py.

    For details, see this comment.

    Confirmed that tests pass locally after merging this and #13. (Since #13 fixes tests, they won't pass until it is merged.)

    opened by nostalgebraist 4
  • Memory management and tweaks

    Hello Jay, thanks for all your work on GPT interpretation!

    This PR contains changes I made in a personal fork while attempting to use ecco with a 1.5B-size GPT-2 model. There are 3 kinds of changes:

    1. Attempts to plug memory leaks / otherwise reduce memory footprint
    2. Bug fixes
    3. Usability tweaks and new features

    In retrospect, I wish I had made distinct branches for these 3 types of change, as together they now make up a pretty large PR. I can still go back and do that, if (say) you want to merge the bug fixes without the other ones.


    Context: I am using ecco on a 1.5B-size GPT-2 model, using a Tesla T4 GPU (~15GB memory) on Colab.

    I am using version 3.4.0 of transformers, which is the max version consistent with ecco's setup.py and hence the one I got on installation.

    1. Memory management

    Running lm.generate with this large model, I ran out of GPU memory. This surprised me, because memory has not been an issue for me when using the same model in TensorFlow.

    After looking into it, I found a few places where use of GPU memory could be lowered:

    • past, which we don't use here, was still being computed on each step.
      • More importantly, Python garbage collection was not (as far as I could tell) freeing the values of past produced on previous steps, so generating N tokens required enough memory to store the N pasts emitted from steps 1, 2, ..., N.
      • Mitigation: pass use_cache=False to the model's forward pass, so it doesn't return pasts
    • Saliency calculations all used retain_graph=True, so the backward graphs were never cleared.
      • Mitigation: when we do several gradient calculations per step, pass retain_graph=False to the last one
    • hidden_states were stored on the GPU during generation.
      • They don't need to be on the GPU at that time (because they aren't used in generation).
      • And, since we have a low CPU memory footprint otherwise, we have plenty of CPU memory to store them in.
      • Mitigation: call .cpu() on hidden states emitted from each step. If we want to calculate with them later on, move them back to self.device.
    • (Minor) Memory allocated for logit matrices from each step was not freed after sampling
      • Mitigation: output['logits']=None after rolling a sample

    With these changes, I can run lm.generate for many hundreds of steps, where previously I could only manage a small number, maybe ~10. (The first and third mitigations are sketched below.)
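    A minimal sketch of the first and third mitigations in plain transformers code (not this PR's actual diff; `model` and `input_ids` are assumed to be a standard GPT-2 setup):

    import torch

    # Gradient-free illustration; the saliency path manages its graphs separately.
    with torch.no_grad():
        out = model(input_ids,
                    use_cache=False,            # don't compute or return `past`
                    output_hidden_states=True,
                    return_dict=True)

    # Offload hidden states to CPU; move them back to self.device only when
    # a later calculation actually needs them.
    hidden_states = tuple(h.cpu() for h in out.hidden_states)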


    2. Bug fixes

    • activations_dict_to_array would fail in the edge case where we only have a single token in the prompt.
      • Issue: np.squeeze would wrongly eliminate the position axis (because its size was 1).
      • Mitigation: use np.concatenate, which doesn't add an unwanted singleton dimension, so we don't have to squeeze
    • top-p sampling did not work
      • Issue: top_k_top_p_filtering apparently expects a position axis in its input, even if that axis only has length 1
      • Mitigation: replace [-1, :] with [-1:, :] and then squeeze after rolling a sample (see the sketch below)
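    Illustrating the slicing fix (a sketch assuming an older transformers version that still exports top_k_top_p_filtering; the tensor shape is illustrative):

    import torch
    from transformers import top_k_top_p_filtering

    logits = torch.randn(5, 50257)  # e.g. (seq_len, vocab) from a GPT-2 step

    # Buggy: logits[-1, :] has shape (vocab,) -- the position axis is gone.
    # Fixed: logits[-1:, :] keeps a length-1 position axis, shape (1, vocab),
    # which top_k_top_p_filtering expects; squeeze after rolling the sample.
    filtered = top_k_top_p_filtering(logits[-1:, :], top_p=0.9)
    probs = torch.softmax(filtered, dim=-1)
    next_token = torch.multinomial(probs, num_samples=1).squeeze()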

    3. Usability tweaks and new features

    • Added an option to not track hidden_states. This feels consistent with the way you can choose whether or not to track other things (activations, attn).
      • To help this work properly, switched from position-based indexing into the CausalLMOutputWithPast objects to key lookup, so we're robust to changes in the length/order of these objects.
    • Added the option to only track hidden states for a user-defined subset of layers, through the new kwarg collect_activations_layer_nums.
      • This is valuable with a large model where you may be only interested in a specific layer, and storing activations from all layers has high memory cost.
      • NMF now takes this kwarg and (if not None) uses it to map between row indices in activations and actual layer numbers. For example, if we are tracking layers 7 and 23, we will have an activation matrix with 2 rows. If passed from_layer=7, to_layer=8, we should retrieve the row slice [:1, :], not [7:8, :]. (A sketch of this mapping follows.)
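    A sketch of that mapping (hypothetical helper, not the PR's code):

    def layer_rows(tracked_layers, from_layer, to_layer):
        # Map actual layer numbers to row indices in the activations matrix.
        # E.g. tracked_layers=[7, 23], from_layer=7, to_layer=8 -> slice(0, 1),
        # i.e. the row slice [:1, :], not [7:8, :].
        rows = [i for i, layer in enumerate(tracked_layers)
                if from_layer <= layer < to_layer]
        return slice(rows[0], rows[-1] + 1) if rows else slice(0, 0)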

    I realize this PR is unwieldy -- I just wanted to get my changes up in some form, since at least some of them seemed unambiguously helpful (bug fixes).

    Let me know if you want me to break it down into smaller pieces, or if it needs other work, or if it is generally unhelpful for your goals, or whatever.

    Did not run tox tests because I could not get them to run properly on my machine, even after downloading the tox.ini from one of the CI-related branches.

    opened by nostalgebraist 4
  • AttributeError: 'OutputSeq' object has no attribute 'saliency'

    captum 0.5.0, torch 1.13.0+cu117

    Language_Models_and_Ecco_PyData_Khobar.ipynb

    text= "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
    output_3 = lm.generate(text, generate=20, do_sample=True)
    output_3.saliency()
    

    AttributeError                            Traceback (most recent call last)
    Cell In [13], line 1
    ----> 1 output_3.saliency()

    AttributeError: 'OutputSeq' object has no attribute 'saliency'

    opened by Claus1 1
  • Rankings_watch displaying wrong sequence

    Hello, I have a problem with the rankings_watch() function. I used a predefined GPT2 model and gave it the input "Today, the weather is". However, in the visualization, only the first token is shown, although the model creates the output correctly: [screenshot]

    Thank you for your help :D

    bug 
    opened by MiriUll 1
  • Running Eccomap for Pre Trained BertForMaskedLM

    Hi, I was trying to run my pretrained model, for which I had used the BertForMaskedLM model class from Hugging Face, but it's giving me this error. Please help me resolve it. Thanks in advance. [screenshot]

    opened by iamakshay1 1
  • Remove `tokenizer_config` usage from the library

    This config parameter was made to easily package config to send to the JavaScript components. Ecco now handles all tokenization on the Python side to separate the concerns between the Python and JS components. Consequently, it needs to be removed.

    opened by jalammar 0
  • Tokenizer has partial token suffix instead of prefix

    Following your guide for identifying model configuration:

    MODEL_ID = "vinai/bertweet-base"
    
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, normalization=True, use_fast=False)
    
    ids = tokenizer('tokenization')
    ids
    

    returns:

    {'input_ids': [0, 969, 6186, 6680, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
    

    Then

    tokenizer.convert_ids_to_tokens(ids['input_ids'])
    

    returns:

    ['<s>', 'to@@', 'ken@@', 'ization', '</s>']
    

    Here I noticed that the tokenizer adds a partial-token suffix instead of a partial-token prefix. Having a suffix instead of a prefix is not configurable in the config.
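    A hypothetical workaround sketch (names are illustrative; this is not part of Ecco) that converts suffix-style continuation markers into the prefix convention a partial-token-prefix config can represent:

    def suffix_to_prefix(tokens, marker='@@'):
        # 'to@@' means the NEXT token continues this one, so the prefix-style
        # continuation mark ('##') belongs on the token that FOLLOWS it.
        out, continues = [], False
        for tok in tokens:
            core = tok[:-len(marker)] if tok.endswith(marker) else tok
            out.append('##' + core if continues else core)
            continues = tok.endswith(marker)
        return out

    # ['<s>', 'to@@', 'ken@@', 'ization', '</s>']
    # -> ['<s>', 'to', '##ken', '##ization', '</s>']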

    opened by guustfranssensEY 1