Model parallel transformers in JAX and Haiku

Table of contents

  1. Mesh Transformer JAX
    1. Updates
  2. Pretrained Models
    1. GPT-J-6B
      1. Links
      2. Acknowledgments
      3. License
      4. Model Details
      5. Zero-Shot Evaluations
  3. Architecture and Usage
    1. Fine-tuning
    2. JAX Dependency
  4. Citation
  5. TODO

Mesh Transformer JAX

A Haiku library using the xmap/pjit operators in JAX for model parallelism of transformers.

The parallelism scheme is similar to the original Megatron-LM, which is efficient on TPUs due to their high-speed 2D mesh network. There is also an experimental model version that implements ZeRO-style sharding.
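
To make the scheme concrete, below is a minimal sketch of Megatron-style tensor parallelism for a single feedforward block. It is not this library's code (which uses xmap/pjit): it expresses the same sharding idea with the newer jax.sharding API, and the shapes and names are deliberately small and illustrative.

```python
# Illustrative sketch only; not the library's implementation.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

d_model, d_ff = 512, 2048                      # small, illustrative sizes
mesh = Mesh(np.array(jax.devices()), ("mp",))  # 1D mesh over a model-parallel axis

# Column-parallel input projection: shard the d_ff dimension across "mp".
w_in = jax.device_put(jnp.ones((d_model, d_ff)), NamedSharding(mesh, P(None, "mp")))
# Row-parallel output projection: shard its d_ff (input) dimension the same way.
w_out = jax.device_put(jnp.ones((d_ff, d_model)), NamedSharding(mesh, P("mp", None)))

@jax.jit
def ffn(x, w_in, w_out):
    # Each device computes its slice of the hidden activations; the second matmul
    # produces partial sums that XLA all-reduces across the "mp" axis.
    return jax.nn.gelu(x @ w_in) @ w_out

x = jnp.ones((8, d_model))   # activations replicated across devices
y = ffn(x, w_in, w_out)      # y has shape [8, d_model]
```

The attention projections are sharded analogously, with heads split across the same mesh axis.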

This library is designed for scalability up to approximately 40B parameters on TPUv3s, beyond which different parallelism strategies should be used. See other implementations such as GPT-NeoX or DeepSpeed for that.

One future direction for research is integrating this codebase with swarm-jax, to achieve further scalability with pipeline parallelism.

Updates

12-07-21: Added guide to fine-tuning

Pretrained Models

GPT-J-6B

A 6-billion-parameter autoregressive text generation model trained on The Pile.

Links

  • Slim weights (bf16 weights only, for inference, 9GB)
  • Full weights (including optimizer params, 61GB)
  • Colab demo
  • Web demo
  • Aran's blog post

Acknowledgments

This project would not have been possible without compute generously provided by the TPU Research Cloud with assistance from EleutherAI.

Thanks to the Cloud TPU team at Google for providing early access to the Cloud TPU VM alpha (now publicly available!).

Thanks to everyone who has helped out in one way or another (listed alphabetically):

  • Aran Komatsuzaki for advice with experiment design and writing the blog posts.
  • James Bradbury for valuable assistance with debugging JAX issues.
  • Janko Prester for creating the web demo frontend.
  • Laurence Golding for adding some features to the web demo.
  • Leo Gao for running zero-shot evaluations for the baseline models in the table.

License

The weights of GPT-J-6B are licensed under version 2.0 of the Apache License.

Model Details

| Hyperparameter    | Value                              |
|-------------------|------------------------------------|
| n_parameters      | 6,053,381,344                      |
| n_layers          | 28*                                |
| d_model           | 4,096                              |
| d_ff              | 16,384                             |
| n_heads           | 16                                 |
| d_head            | 256                                |
| n_ctx             | 2,048                              |
| n_vocab           | 50,257 (same tokenizer as GPT-2/3) |
| position encoding | Rotary position encodings (RoPE)   |
| RoPE dimensions   | 64                                 |

* Each layer consists of one feedforward block and one self-attention block.

The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) are applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50,257, using the same set of BPEs as GPT-2/GPT-3.
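
As an illustration of the rotary position encodings mentioned above, here is a minimal sketch of applying RoPE to the first 64 dimensions of each head. It is a simplified stand-alone version, not the repository's implementation; the function name and shapes are for illustration only.

```python
import jax.numpy as jnp

def apply_rope(x, rotary_dim=64, base=10000):
    # x: [seq_len, n_heads, d_head]; only the first `rotary_dim` dims are rotated.
    seq_len = x.shape[0]
    rot, keep = x[..., :rotary_dim], x[..., rotary_dim:]
    # One frequency per pair of dimensions, as in the RoPE paper.
    inv_freq = 1.0 / (base ** (jnp.arange(0, rotary_dim, 2) / rotary_dim))
    angles = jnp.outer(jnp.arange(seq_len), inv_freq)          # [seq, rotary_dim/2]
    sin = jnp.repeat(jnp.sin(angles), 2, axis=-1)[:, None, :]  # [seq, 1, rotary_dim]
    cos = jnp.repeat(jnp.cos(angles), 2, axis=-1)[:, None, :]
    # Rotate each (even, odd) pair of dimensions by its position-dependent angle.
    x1, x2 = rot[..., ::2], rot[..., 1::2]
    rotated = jnp.stack([-x2, x1], axis=-1).reshape(rot.shape)
    return jnp.concatenate([rot * cos + rotated * sin, keep], axis=-1)

q = jnp.ones((2048, 16, 256))  # [n_ctx, n_heads, d_head] as in the table above
q_rot = apply_rope(q)          # same shape; positions encoded in the first 64 dims
```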

Zero-Shot Evaluations

Models are roughly sorted by performance, or by FLOPs where performance numbers are not available.

| Model          | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|----------------|----------------|---------------|---------------|--------------|-------------|--------|-------------------|
| Chance         | 0              | ~a lot        | ~0%           | 50%          | 25%         | 25%    | 0                 |
| GPT-3-Ada‡     | -----          | 9.95          | 51.6%         | 52.9%        | 43.4%       | 70.5%  | -----             |
| GPT-2-1.5B     | -----          | 10.63         | 51.21%        | 59.4%        | 50.9%       | 70.8%  | 40                |
| GPTNeo-1.3B‡   | 3.0e21         | 7.50          | 57.2%         | 55.0%        | 48.9%       | 71.1%  | 825               |
| Megatron-2.5B* | 2.4e21         | -----         | 61.7%         | -----        | -----       | -----  | 174               |
| GPTNeo-2.7B‡   | 6.8e21         | 5.63          | 62.2%         | 56.5%        | 55.8%       | 73.0%  | 825               |
| GPT-3-1.3B*‡   | 2.4e21         | 5.44          | 63.6%         | 58.7%        | 54.7%       | 75.1%  | ~800              |
| GPT-3-Babbage‡ | -----          | 5.58          | 62.4%         | 59.0%        | 54.5%       | 75.5%  | -----             |
| Megatron-8.3B* | 7.8e21         | -----         | 66.5%         | -----        | -----       | -----  | 174               |
| GPT-3-2.7B*‡   | 4.8e21         | 4.60          | 67.1%         | 62.3%        | 62.8%       | 75.6%  | ~800              |
| Megatron-11B†  | 1.0e22         | -----         | -----         | -----        | -----       | -----  | 161               |
| GPT-J-6B       | 1.5e22         | 3.99          | 69.7%         | 65.3%        | 66.1%       | 76.5%  | 825               |
| GPT-3-6.7B*‡   | 1.2e22         | 4.00          | 70.3%         | 64.5%        | 67.4%       | 78.0%  | ~800              |
| GPT-3-Curie‡   | -----          | 4.00          | 69.3%         | 65.6%        | 68.5%       | 77.9%  | -----             |
| GPT-3-13B*‡    | 2.3e22         | 3.56          | 72.5%         | 67.9%        | 70.9%       | 78.5%  | ~800              |
| GPT-3-175B*‡   | 3.1e23         | 3.00          | 76.2%         | 70.2%        | 78.9%       | 81.0%  | ~800              |
| GPT-3-Davinci‡ | -----          | 3.0           | 75%           | 72%          | 78%         | 80%    | -----             |
| Gopher 230B*   | 6.31e23        | -----         | 74.50%        | 70.10%       | 79.20%      | 81.80% | 1344              |
| MT-NLG 530B*‡  | -----          | -----         | 76.6%         | 73.0%        | 80.2%       | 82.0%  | -----             |

* Evaluation numbers reported by their respective authors; all other numbers are obtained by running the lm-evaluation-harness either with the released weights or with API access. Due to subtle implementation differences as well as different zero-shot task framing, these might not be directly comparable. See this blog post for more details.

† The Megatron-11B model provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations (see 1, 2, 3); thus, evaluation was not attempted.

‡ These models have been trained with data which contains possible test-set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are trained on The Pile, which has not been deduplicated against any test sets.

Architecture and Usage

Most scripts in this repository are designed to be run on TPUs, which under the TPU-VM architecture are virtual machines that can run arbitrary code. Most scripts are designed to spin up a TPU, SSH into it to set up the dependencies and copy over code from the local directory, and then start a Ray worker that can accept RPC calls.

The TPU VMs handle running model training steps and evaluation, as well as checkpoint saving and loading, while the driver Python program handles data loading and general orchestration (such as when to save checkpoints).

This means that most scripts (train.py, eval_harness.py, etc.) expect to be run on a GCE virtual machine in the same region as the TPUs, to minimize RPC latency and data transfer costs. Other scripts (usually ones which don't take a --tpu argument, such as device_sample.py, device_serve.py, or device_train.py) expect to be run directly on a TPU VM. The device_* scripts only work on a v3-8 and not on larger pods.
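
As a rough picture of this driver/worker split, here is a minimal Ray sketch. The class and method names are hypothetical stand-ins rather than the repository's actual interfaces; it only illustrates the pattern of a driver feeding data to a TPU-side actor over RPC.

```python
import ray

@ray.remote
class TPUWorker:
    """Hypothetical stand-in for the Ray actor running on a TPU VM."""
    def __init__(self):
        self.step = 0

    def train_step(self, batch):
        # In the real code this would run one model-parallel optimizer step on the TPU.
        self.step += 1
        return {"step": self.step, "loss": 0.0}

    def save_checkpoint(self, path):
        # In the real code this would write a sharded checkpoint (e.g. to GCS).
        return path

ray.init()  # a real setup would connect the driver to the cluster containing the TPU worker
worker = TPUWorker.remote()
for step in range(3):
    batch = [0] * 8                                    # data loading stays on the driver
    metrics = ray.get(worker.train_step.remote(batch))
    if step % 2 == 0:                                  # the driver decides when to checkpoint
        ray.get(worker.save_checkpoint.remote(f"gs://your-bucket/step_{step}"))
```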

Furthermore, there is an example (resharding_example.py) of how to convert the provided checkpoints (which have 8 shards in the case of GPT-J-6B) down to a smaller number of shards, for example when running on GPU(s).

Fine-tuning

To fine-tune the model, run device_train.py on a TPU VM. Using a TPU v3-8, you can fine-tune at a rate of ~5000 tokens/second, which should be sufficient for small-to-medium-size datasets.

Please read the step-by-step guide for thorough fine-tuning instructions.

JAX Dependency

Note that this library has specific requirements for the JAX version. To use the v1 models (including GPT-J-6B), jax==0.2.12 is required, which in turn depends on jaxlib==0.1.68. If these versions are not used, you will get cryptic xmap errors.

However, to use the v2 model code (no publicly released weights), the newest JAX version can be used.

Citation

To cite this repository:

@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}

To cite the weights of GPT-J-6B:

@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}

If you use this repository or any of the pretrained weights to do something cool, we would love to hear about it. Feel free to open a GitHub issue or reach out over email (in profile).

TODO

  • disentangle heads and shards
  • test/benchmark on TPU
  • implement gradient checkpointing
  • fix initialization
  • mixed precision
  • deal with preemptible TPUs
  • test and validate generation
  • shard activations instead of replicating for memory efficiency (in v2)
  • support ZeRO style sharding (in v2)