Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX.

Overview

ONNX T5

Summarization, translation, Q&A, text generation and more at blazing speed using a T5 version implemented in ONNX.

This package is still in alpha, so some functionality, such as beam search, is still in development.

Installation

ONNX-T5 is available on PyPI.

pip install onnxt5

For the development version, you can run the following.

git clone https://github.com/abelriboulot/onnxt5
cd onnxt5
pip install -e .

Usage

The simplest way to get started with generation is to use the default pre-trained version of T5 on ONNX included in the package.

NOTE: The first time you call get_encoder_decoder_tokenizer, the models are downloaded, which might take a minute or two.

from onnxt5 import GenerativeT5
from onnxt5.api import get_encoder_decoder_tokenizer
decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
prompt = 'translate English to French: I was a victim of a series of accidents.'

output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)
# output_text: "J'ai été victime d'une série d'accidents."

Other tasks only require changing the prefix in your prompt. For instance, for summarization:

prompt = 'summarize: <PARAGRAPH>'
output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)

If you want to get the embeddings of a text, you can run the following:

from onnxt5.api import get_encoder_decoder_tokenizer, run_embeddings_text

decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
prompt = 'Listen, Billy Pilgrim has come unstuck in time.'
encoder_embeddings, decoder_embeddings = run_embeddings_text(encoder_sess, decoder_sess, tokenizer, prompt)

ONNXT5 also lets you export and use your own models. See the examples/ folder for more detailed examples.
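
For instance, loading a model you exported yourself looks roughly like this (a sketch built on the get_sess helper; the model prefix is a placeholder, and the tokenizer should be whichever one matches your model):

from onnxt5 import GenerativeT5
from onnxt5.api import get_sess
from transformers import T5Tokenizer

# Placeholder: path prefix of your exported encoder/decoder .onnx files
model_prefix = '/path/to/your/exported/t5'
decoder_sess, encoder_sess = get_sess(model_prefix)
tokenizer = T5Tokenizer.from_pretrained('t5-base')

model = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
output_text, output_logits = model('summarize: <PARAGRAPH>', max_length=100, temperature=0.)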

T5 works with task prefixes such as summarize:, translate English to German:, or question: ... context:. You can see a list of the pretrained tasks and prefixes in Appendix D of the original paper.
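
For example, a question-answering prompt follows the question: ... context: pattern (the prompt below is our own illustration, not taken from the paper):

prompt = 'question: Where does Billy live? context: Billy Pilgrim lives in Ilium, New York.'
output_text, output_logits = generative_t5(prompt, max_length=16, temperature=0.)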

Functionalities

  • Run any of the T5 trained tasks in one line (translation, summarization, sentiment analysis, completion, generation)
  • Export your own T5 models to ONNX easily
  • Utility functions to generate what you need quickly
  • Up to 4X speedup compared to PyTorch execution for smaller contexts

Benchmarks

The speedup varies heavily with the length of the context. For contexts shorter than ~500 words, ONNX greatly outperforms PyTorch, with up to a 4X speedup. The longer the context, the smaller ONNX's advantage, with PyTorch becoming faster above 500 words.
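
To get a rough comparison on your own machine, a minimal timing sketch along these lines should do (assuming onnxt5 and the standard t5-base from transformers are both installed; the two decoding setups are only approximately comparable, and exact numbers vary with hardware):

import time

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

from onnxt5 import GenerativeT5
from onnxt5.api import get_encoder_decoder_tokenizer

prompt = 'translate English to French: I was a victim of a series of accidents.'

# Time the ONNX path
decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
onnx_model = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
start = time.perf_counter()
onnx_model(prompt, max_length=32, temperature=0.)
print(f'ONNX: {time.perf_counter() - start:.3f}s')

# Time the PyTorch path (transformers' own greedy generation)
pt_tokenizer = T5Tokenizer.from_pretrained('t5-base')
pt_model = T5ForConditionalGeneration.from_pretrained('t5-base').eval()
input_ids = pt_tokenizer(prompt, return_tensors='pt').input_ids
start = time.perf_counter()
with torch.no_grad():
    pt_model.generate(input_ids, max_length=32)
print(f'PyTorch: {time.perf_counter() - start:.3f}s')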

GPU Benchmark, Embedding Task

[figure: embedding latency benchmark, ONNX vs PyTorch]

GPU Benchmark, Generation Task

[figure: generation latency benchmark, ONNX vs PyTorch]

Contributing

The project is still in its infancy, so I would love your feedback: let me know what problems you are trying to solve, what issues you're encountering, and which features would help you. Feel free to shoot me an e-mail (see my profile for the address!) or join our Slack community.

Acknowledgements

This repo is based on the work of Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu from Google, as well as the implementation of T5 from the Hugging Face team, the work of the Microsoft ONNX and onnxruntime teams (in particular Tianlei Wu), and the work of Thomas Wolf on text generation.

Original T5 Paper

@article{2019t5,
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {arXiv e-prints},
  year = {2019},
  archivePrefix = {arXiv},
  eprint = {1910.10683},
}

Microsoft onnxruntime repo

HuggingFace implementation of T5

Comments
  •  Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed.

    Hi there, I've run the guide code and it doesn't work. I'm getting an error on the following line: decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()

    text is a passage from Wikipedia about cars.

    onnxt5==0.1.4 protobuf==3.6.0 python==3.7

    opened by vladislavkoz 6
  • Default T5 summary contains <extra_id_2>.<extra_id_3>.<extra_id_4>

    <extra_id_0> the company<extra_id_1> the company<extra_id_2>.<extra_id_3>.<extra_id_4>.<extra_id_5>.<extra_id_6>. <extra_id_7>.

    Do I need some postprocessing? Or is it an issue?

    opened by vladislavkoz 5
  • int() argument must be a string, when running the example

    Hello, I can't run the first example:

    from onnxt5 import GenerativeT5
    from onnxt5.api import get_encoder_decoder_tokenizer
    
    decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
    generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
    prompt = 'translate English to French: I was a victim of a series of accidents.'
    
    output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)
     # output_text: "J'ai été victime d'une série d'accidents." 
    

    The model begins computing, but before the end I get this error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-1-257f12b63043> in <module>
          5 prompt = 'translate English to French: I was a victim of a series of accidents.'
          6 
    ----> 7 output_text, output_logits = generative_t5(prompt, max_length=16, temperature=0.)
          8 # output_text: "J'ai été victime d'une série d'accidents."
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        720             result = self._slow_forward(*input, **kwargs)
        721         else:
    --> 722             result = self.forward(*input, **kwargs)
        723         for hook in itertools.chain(
        724                 _global_forward_hooks.values(),
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\onnxt5\models.py in forward(self, prompt, max_length, temperature, repetition_penalty, top_k, top_p, max_context_length)
        145                 new_tokens.append(next_token)
        146 
    --> 147             return self.tokenizer.decode(new_tokens), new_logits
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils_base.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
       3000             skip_special_tokens=skip_special_tokens,
       3001             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    -> 3002             **kwargs,
       3003         )
       3004 
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils.py in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, spaces_between_special_tokens)
        730         spaces_between_special_tokens: bool = True,
        731     ) -> str:
    --> 732         filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
        733 
        734         # To avoid mixing byte-level and unicode for byte-level BPT
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils.py in convert_ids_to_tokens(self, ids, skip_special_tokens)
        708         tokens = []
        709         for index in ids:
    --> 710             index = int(index)
        711             if skip_special_tokens and index in self.all_special_ids:
        712                 continue
    
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
    

    I have no idea how to find a solution. If you have any ideas, thanks!

    opened by AZE38 3
  • Inference time on gpu vs onnxt5-gpu

    @abelriboulot, @Ki6an, @brymck:
    I have fine-tuned a t5 model for a paraphrasing task like this: Paraphrase with t5

    I want to reduce inference time, so I exported the fine-tuned t5 model using onnxt5. However, the ONNX model on GPU takes more time than the PyTorch model on GPU.

    gpu: time taken = 0.2357314471155405, 0.24958523781970143, 0.20342689706012607, 0.5490081580355763, 0.10756197292357683

    onnxt5-gpu: time taken = 0.5277913622558117, 0.6335883080027997, 0.6975196991115808, 1.9159171842038631, 0.7938353712670505

    Did I make a mistake in exporting/loading the model? (gpu code, onnxt5-gpu code)
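
    One general sanity check worth trying here (an onnxruntime check, not a confirmed diagnosis of the slowdown) is to confirm the sessions actually run on the GPU:

    import onnxruntime as ort

    print(ort.get_device())              # should print 'GPU' with onnxruntime-gpu installed
    print(encoder_sess.get_providers())  # 'CUDAExecutionProvider' should be listed first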

    opened by priyanksonis 1
  • Add download progress bar

    This adds a progress bar using tqdm.

    The files this library downloads are about 500 MB in size, so I'd like there to be some feedback on what's happening. Originally it wasn't clear to me what was causing the delay when running get_encoder_decoder_tokenizer.

    opened by brymck 0
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
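
    The general shape of such a patch (a sketch of the usual fix for CVE-2007-4559, not the exact code in the pull request) is to validate member paths before extraction:

    import os
    import tarfile

    def safe_extract(tar: tarfile.TarFile, path: str = '.') -> None:
        # Reject any member whose resolved path would land outside `path`.
        base = os.path.realpath(path)
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(path, member.name))
            if os.path.commonpath([base, target]) != base:
                raise RuntimeError(f'Blocked path traversal in tar member: {member.name}')
        tar.extractall(path)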

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Add dtype to new_tokens tensor to avoid an error when decoding

    Thanks for the repo!

    I was getting an error when running the code after my initial install.

    Small code example:

    import os
    
    import torch
    from onnxt5 import GenerativeT5
    from onnxt5.api import get_sess
    from transformers import AutoTokenizer
    
    model_dir = "<path-to-tokenizer-and-onnx-files>"  # placeholder
    model_name = "<name-of-model>"  # placeholder
    
    tokenizer = AutoTokenizer.from_pretrained(
        model_dir,
    )
    
    decoder_sess, encoder_sess = get_sess(
        os.path.join(model_dir, model_name)
    )
    
    model = GenerativeT5(
        encoder_sess,
        decoder_sess,
        tokenizer,
        onnx=True,
        cuda=torch.cuda.is_available(),
    )
    
    sentences = [
        "I has good grammar.",
        "I have bettr grammur."
    ]
    
    corrected_sentences = [
        model(f"grammar: {sentence}",
              max_length=512,
              temperature=1,
              )[0]
        for sentence in sentences
    ]
    
    
    

    The error:

    Traceback (most recent call last):
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 133, in <module>
        main()
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 125, in main
        prediction_output = predict_fn(input_data=input_tokens,
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 95, in predict_fn
        corrected_sentences = [model(f"grammar: {sentence}",
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 95, in <listcomp>
        corrected_sentences = [model(f"grammar: {sentence}",
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/onnxt5/onnxt5/models.py", line 154, in forward
        return self.tokenizer.decode(new_tokens), new_logits
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3367, in decode
        return self._decode(
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 548, in _decode
        text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
    TypeError: 'float' object cannot be interpreted as an integer
    

    It seems the tensor for new tokens is of type float instead of long. Adding dtype=torch.long to the instantiation of the tensor resolved my issue, so I thought I'd share.
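
    For reference, the change described is along these lines (a hypothetical sketch of the relevant line in onnxt5/models.py; the surrounding code is inferred from the traceback rather than copied from the patch):

    import torch

    max_length = 100  # example value; in onnxt5 this is an argument of forward()

    # Give the generated-token buffer an integer dtype so tokenizer.decode()
    # receives ints rather than floats.
    new_tokens = torch.zeros((max_length,), dtype=torch.long)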

    opened by jambran 0
  • Running example "export_pretrained_model.py" as-is fails

    Running example "export_pretrained_model.py" as-is fails (See details)

    86%|████████▌ | 18/21 [00:00<00:00, 44.29it/s]
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-4-f543e3365977> in <module>()
         27 # Generating text
         28 generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
    ---> 29 generative_t5('translate English to French: I was a victim of a series of accidents.', 21, temperature=0.)[0]
    
    3 frames
    /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
        505         if isinstance(token_ids, int):
        506             token_ids = [token_ids]
    --> 507         text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
        508 
        509         if clean_up_tokenization_spaces:
    
    TypeError: 'float' object cannot be interpreted as an integer
    

    Any possible version conflicts that you know of?

    opened by PrithivirajDamodaran 2
  • How to suppress output

    How do I suppress the output? Setting the verbosity logging level does nothing: 5%|█████████▊ | 16/300 [00:01<00:18, 15.65it/s]
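
    A blunt workaround (a general tqdm trick rather than an onnxt5 option) is to disable tqdm globally before calling the model:

    from functools import partialmethod
    from tqdm import tqdm

    # Disable every tqdm progress bar created after this point.
    tqdm.__init__ = partialmethod(tqdm.__init__, disable=True)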

    opened by 127 0
  • Can this model suitable for multilingual-t5 accelerate?

    Recently, I have been using the Chinese capabilities of the multilingual-t5 model for Chinese NLG tasks. However, inference is slow. Could this package be used to accelerate multilingual-t5, and how would I do that?

    opened by williamwong91 2
Releases (0.1.9)
Owner
Abel
Repentant portfolio manager, turned data scientist. I'm one Vonnegut quote away from figuring out this whole life thing.