Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets


What is LASSL · How to Use · License · Issues

What is LASSL

LASSL stands for LAnguage Semi-Supervised Learning. It aims to let anyone with data easily pretrain their own language model, building on Hugging Face's Transformers and Datasets libraries.

Environment setting

Install the required packages with the command below,

pip3 install -r requirements.txt

or set up the environment with poetry.

# install poetry
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
# install dependencies with poetry
poetry install

How to Use

1. Train Tokenizer

python3 train_tokenizer.py \
    --corpora_dir $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --sampling_ratio $SAMPLING_RATIO \
    --model_type $MODEL_TYPE \
    --vocab_size $VOCAB_SIZE \
    --min_frequency $MIN_FREQUENCY
# with poetry
poetry run python3 train_tokenizer.py \
    --corpora_dir $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --sampling_ratio $SAMPLING_RATIO \
    --model_type $MODEL_TYPE \
    --vocab_size $VOCAB_SIZE \
    --min_frequency $MIN_FREQUENCY
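
For orientation, here is a minimal sketch of what the tokenizer-training step might look like with Hugging Face's tokenizers library, assuming a RoBERTa-style byte-level BPE; the paths and special tokens are illustrative, and this is not necessarily how train_tokenizer.py is implemented.

# Hypothetical sketch, not lassl's actual implementation.
import os

from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpora/corpus_0.txt"],  # files sampled from --corpora_dir
    vocab_size=51200,                # --vocab_size
    min_frequency=2,                 # --min_frequency
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
os.makedirs("tokenizers/roberta", exist_ok=True)
tokenizer.save_model("tokenizers/roberta")  # later passed as --tokenizer_dir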

2. Serialize Corpora

python3 serialize_corpora.py \
    --model_type $MODEL_TYPE \
    --tokenizer_dir $TOKENIZER_DIR \
    --corpora_dir $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --max_length $MAX_LENGTH \
    --num_proc $NUM_PROC
# with poetry
poetry run python3 serialize_corpora.py \
    --model_type $MODEL_TYPE \
    --tokenizer_dir $TOKENIZER_DIR \
    --corpora_dir $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --max_length $MAX_LENGTH \
    --num_proc $NUM_PROC
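
The serialization step can be pictured with Hugging Face Datasets as tokenizing the corpus and packing token ids into max_length blocks before saving to disk. The sketch below is a hedged approximation under that assumption; lassl's per-model processors differ in detail, and the function name and file paths are illustrative.

# Rough sketch of serialization with Hugging Face Datasets; not lassl's exact code.
from datasets import load_dataset
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("tokenizers/roberta")
max_length = 512

def tokenize_and_pack(batch):
    # Concatenate all token ids in the batch, then cut into fixed-length blocks.
    ids = sum(tokenizer(batch["text"])["input_ids"], [])
    n = (len(ids) // max_length) * max_length
    return {"input_ids": [ids[i:i + max_length] for i in range(0, n, max_length)]}

dataset = load_dataset("text", data_files={"train": "corpora/corpus_0.txt"})["train"]
dataset = dataset.map(tokenize_and_pack, batched=True, remove_columns=["text"], num_proc=8)
dataset.save_to_disk("datasets/roberta")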

3. Pretrain Language Model

python3 pretrain_language_model.py --config_path $CONFIG_PATH
# with poetry
poetry run python3 pretrain_language_model.py --config_path $CONFIG_PATH
# Use the command below when training on TPU. (The poetry environment does not provide PyTorch XLA by default.)
python3 xla_spawn.py --num_cores $NUM_CORES pretrain_language_model.py --config_path $CONFIG_PATH
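
Putting the pieces together, pretraining roughly amounts to loading the serialized dataset and a config, then handing both to the Hugging Face Trainer. The sketch below assumes a RoBERTa MLM objective and a hypothetical config path and schema; lassl's actual entry point wires this up per model_type.

# Hedged sketch of the pretraining loop; config path and keys are assumptions.
from datasets import load_from_disk
from omegaconf import OmegaConf
from transformers import (DataCollatorForLanguageModeling, RobertaConfig,
                          RobertaForMaskedLM, RobertaTokenizerFast, Trainer,
                          TrainingArguments)

cfg = OmegaConf.load("conf/roberta-small.yaml")  # hypothetical config file
tokenizer = RobertaTokenizerFast.from_pretrained("tokenizers/roberta")
model = RobertaForMaskedLM(RobertaConfig(vocab_size=tokenizer.vocab_size))  # fresh weights
dataset = load_from_disk("datasets/roberta")

trainer = Trainer(
    model=model,
    args=TrainingArguments(**cfg.training),  # config supplies output_dir etc.
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()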

Contributors

김보섭 류민호 류인제 박장원 김형석

Acknowledgements

LASSL was built with Cloud TPU support from the TensorFlow Research Cloud (TFRC) program.

Comments
  • Ready to release v0.1.0

    Summary

The overall structure is in place. Before releasing v0.1.0, the following points need discussion:

    • There is a discrepancy between the model_type values supported by serialize_corpora.py and train_tokenizer.py:
      • serialize_corpora.py: roberta, gpt2, albert
      • train_tokenizer.py: bert-uncased, bert-cased, gpt2, roberta, albert, electra
    • README.md
    help wanted 
    opened by seopbo 10
  • Refactor codes relevant to pretrain

    • Added a DataCollatorFor{MODEL} for each PLM to be trained.
    • Defined model_type_to_collator in pretrain_language_model.py to look up the collator per model_type (see the sketch below).
      • Collator args (e.g. mlm_probability) are read from the collator section of the config file.
    • Added code to pretrain_language_model.py for using an eval_dataset.
      • The test_size arg for building the eval_dataset is read from the data section of the config file.
    • Additionally, ran isort and black.
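
    A dispatch table like the following illustrates the idea; the class names and config fields here are placeholders, not the exact ones in lassl.

    # Illustrative sketch of model_type -> collator dispatch; names are hypothetical.
    from transformers import AutoTokenizer, DataCollatorForLanguageModeling

    model_type_to_collator = {
        "bert": lambda tok, args: DataCollatorForLanguageModeling(tok, **args),
        "roberta": lambda tok, args: DataCollatorForLanguageModeling(tok, **args),
        "gpt2": lambda tok, args: DataCollatorForLanguageModeling(tok, mlm=False),
    }

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    collator = model_type_to_collator["roberta"](tokenizer, {"mlm_probability": 0.15})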

    Refs: #30

    opened by seopbo 6
  • Add UL2 Language Modeling

    I mentioned this on Slack too: the objective built on the Mixture-of-Denoisers introduced in the UL2 paper (Unifying Language Learning Paradigms) is reported to beat plain span corruption, MLM, and CLM across the board. I also happen to be planning to use it at work, so I'd like to implement a collator and processor for it in lassl. What do you think? (Rough sketch below.)
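
    As a rough sketch of the idea (not the paper's exact recipe), a Mixture-of-Denoisers collator could simply sample one denoising collator per batch; the denoisers and weights below are placeholders.

    import random

    class MixtureOfDenoisersCollator:
        # Sample one denoising collator per batch (rough UL2-style mixture).
        def __init__(self, denoisers, weights=None):
            self.denoisers = denoisers  # callables mapping examples -> batch
            self.weights = weights      # mixing probabilities
        def __call__(self, examples):
            denoiser = random.choices(self.denoisers, weights=self.weights, k=1)[0]
            return denoiser(examples)

    # Placeholder denoisers standing in for R-/X-denoiser collators:
    mod = MixtureOfDenoisersCollator([lambda ex: ("R", ex), lambda ex: ("X", ex)], [0.75, 0.25])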

    opened by DaehanKim 4
  • Support training BART

    Is your feature request related to a problem? Please describe.
    Add a BART processor and collator.

    Describe the solution you'd like
    Add the text_infilling method as a collator (rough sketch below).
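
    A heavily simplified sketch of text infilling (sampled spans collapse to a single mask token, span lengths drawn from Poisson(3), as in BART); this is illustrative, not the collator that would land in lassl.

    import numpy as np

    def text_infilling(token_ids, mask_token_id, mask_ratio=0.3, poisson_lambda=3.0, seed=None):
        rng = np.random.default_rng(seed)
        out, i = [], 0
        budget = int(len(token_ids) * mask_ratio)  # tokens left to corrupt
        while i < len(token_ids):
            span = int(rng.poisson(poisson_lambda))
            if budget > 0 and span > 0 and rng.random() < mask_ratio:
                out.append(mask_token_id)  # the whole span becomes one mask token
                i += span
                budget -= span
            else:
                out.append(token_ids[i])
                i += 1
        return out

    print(text_infilling(list(range(20)), mask_token_id=999, seed=0))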

    enhancement 
    opened by bzantium 4
  • Add keep_in_memory option in load_dataset

    Is your feature request related to a problem? Please describe.

    • When training on a TPU VM, caching fills up the disk even though memory is plentiful.

    Describe the solution you'd like

    • Fix this by adding the keep_in_memory option at the load_dataset step (see the sketch below).
    • Serialized data is saved to disk anyway, so the option is unnecessary at the train step; add it only in the tokenizer and serialization steps.
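
    keep_in_memory is an existing load_dataset option, so the change is a one-liner; the paths here are illustrative.

    from datasets import load_dataset

    # Keep the dataset in RAM instead of writing Arrow cache files to disk.
    corpora = load_dataset("text", data_files={"train": "corpora/corpus_0.txt"},
                           keep_in_memory=True)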
    opened by iron-ij 2
  • KoRobertaSmall training

    TODO

    Training tokenizer

    poetry run python3 train_tokenizer.py --corpora_dir corpora \
    --corpus_type sent_text \
    --model_type roberta \
    --vocab_size 51200 \
    --min_frequency 2
    

    Serializing corpora

    poetry run python3 serialize_corpora.py --model_type roberta \
    --tokenizer_dir tokenizers/roberta \
    --corpora_dir corpora \
    --corpus_type sent_text \
    --max_length 512 \
    --num_proc 96 \
    --batch_size 1000 \
    --writer_batch_size 1000
    

    ref:

    • https://github.com/huggingface/blog/blob/master/notebooks/13_pytorch_xla.ipynb
    help wanted 
    opened by seopbo 2
  • Support corpus_type

    • "docu_text", "docu_json", "sent_text", "sent_json"으로 corpus_type을 정의함.
      • 위에 대응하여 load_corpora 함수를 수정함.
      • "sent_text"에 대응되는 loading scripts의 이름과 class 명을 수정함
      • serialize_corpora.py에서 corpus_type에 대응되게 argument parser를 수정함.
      • train_tokenizer.py에서 corpus_type에 대응되게 refactoring을 수행함.
      • model_name -> model_type으로 수정함.

    Refs: #23

    enhancement 
    opened by seopbo 2
  • Support setting arguments of pretraining by a config file

    • All arguments needed to run pretrain_language_model.py are passed through a single config file.
    • Added the OmegaConf library to handle nested dicts (see the sketch below).
    • Instantiated model config classes via CONFIG_MAPPING.
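
    The appeal of OmegaConf here is that nested config sections map directly onto constructor kwargs; the keys below are illustrative, not lassl's exact schema.

    from omegaconf import OmegaConf
    from transformers import CONFIG_MAPPING

    cfg = OmegaConf.create({"model": {"model_type": "roberta", "vocab_size": 51200}})
    kwargs = OmegaConf.to_container(cfg.model)  # nested DictConfig -> plain dict
    config = CONFIG_MAPPING[kwargs.pop("model_type")](**kwargs)
    print(type(config).__name__)  # RobertaConfig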

    Refs: #16

    opened by seopbo 2
  • argument setting

    To Do

    • https://github.com/lassl/lassl/blob/c507a547e5e22a3bc89bf65e448712783e688211/pretrain_language_model.py#L47
    • set ModelArguments from config.json file
    • set TrainingArguments from config.json file
    enhancement 
    opened by alxiom 2
  • Single-stage Electra collator refactored

    src/lassl/collators.py

    1. Simplified the main operation (all-in-tensor)
    2. Renamed pad_for_token_type_ids -> _token_type_ids_with_pad for clarity
    documentation 
    opened by Doohae 1
  • Add config files #82

    Add config files for following:

    • bert-small.yaml
    • albert-small.yaml
    • gpt2-small.yaml
    • roberta-small.yaml

    Also add a README briefly explaining the config files in general. For issue #82 @seopbo
    opened by Doohae 1
  • Can you give examples or benchmarks showing that this pretraining framework improves downstream tasks?

    Evidence that pretraining on a self-built corpus with this framework improves performance on some downstream task would make this project more attractive.

    opened by svjack 1
  • Change default save format to parquet

    TODO

    • Currently, serialize_corpora.py saves the encoded dataset with save_to_disk.
    • In this issue, we replace the save_to_disk call with to_parquet (sketch below).
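
    Sketch of the change with a toy dataset (paths illustrative):

    from datasets import Dataset

    dataset = Dataset.from_dict({"input_ids": [[0, 1, 2], [3, 4, 5]]})
    dataset.save_to_disk("datasets/example")        # current: Arrow directory
    dataset.to_parquet("datasets/example.parquet")  # proposed: single Parquet file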

    cc: @Doohae @DaehanKim

    enhancement 
    opened by seopbo 0
Releases (v1.0.0)
  • v1.0.0(Nov 2, 2022)

    What's Changed

    • [mixed] refactor: Refactor for v1.0.0 by @seopbo in https://github.com/lassl/lassl/pull/102
    • Currently, lassl supports training bert, albert, roberta, gpt2, bart, t5, and ul2.
    • Next, lassl will support training electra. Moreover, train_universal_tokenizer.py will be added to lassl.
      • train_universal_tokenizer.py will train a tokenizer usable for all model types supported by lassl.

    Full Changelog: https://github.com/lassl/lassl/compare/v0.2.0...v1.0.0

  • v0.2.0(Sep 22, 2022)

    What's Changed

    • Support training BART by @seopbo in https://github.com/lassl/lassl/pull/81
    • Support training T5 model by @DaehanKim in https://github.com/lassl/lassl/pull/87
    • Add config files #82 by @Doohae in https://github.com/lassl/lassl/pull/88
    • Support Electra pretrain by @Doohae in https://github.com/lassl/lassl/pull/91
    • Add UL2 Language Modeling by @DaehanKim in https://github.com/lassl/lassl/pull/98

    New Contributors

    • @DaehanKim made their first contribution in https://github.com/lassl/lassl/pull/87

    Full Changelog: https://github.com/lassl/lassl/compare/v0.1.4...v0.2.0

  • v0.1.3(Mar 18, 2022)

    Summary

    • Refactor lassl to package its modules as a library
    • Add a dataset blending function

    What's Changed

    • Add dataset blender by @hyunwoongko in https://github.com/lassl/lassl/pull/73
    • Remove poetry dependencies by @seopbo in https://github.com/lassl/lassl/pull/76

    New Contributors

    • @hyunwoongko made their first contribution in https://github.com/lassl/lassl/pull/73

    Full Changelog: https://github.com/lassl/lassl/compare/v0.1.2...v0.1.3

  • v0.1.2(Dec 30, 2021)

    Summary

    • Fix bugs in src/collators.py

    What's Changed

    • [python] fix: Fix importing a invalid module by @seopbo in https://github.com/lassl/lassl/pull/72

    Full Changelog: https://github.com/lassl/lassl/compare/v0.1.1...v0.1.2

  • v0.1.1(Dec 20, 2021)

    Summary

    • Update README.md
      • Support README.md in English.
      • Support README_ko.md in Korean.
    • Fix bugs in GPT2 training
    • Add example configs for GPU and TPU environments

    What's Changed

    • [docs] fix: Change a license by @seopbo in https://github.com/lassl/lassl/pull/64
    • [etc] docs: Add English version of README by @bzantium in https://github.com/lassl/lassl/pull/66
    • Add example configs for gpu, tpu by @seopbo in https://github.com/lassl/lassl/pull/65
    • [python] fix: debug GPT2 processor and collator by @bzantium in https://github.com/lassl/lassl/pull/69
    • Update README.md by @bzantium in https://github.com/lassl/lassl/pull/70

    Full Changelog: https://github.com/lassl/lassl/compare/v0.1.0...v0.1.1

  • v0.1.0(Dec 15, 2021)

    Summary

    • First release

    What's Changed

    • Feature/#2 by @seopbo in https://github.com/lassl/lassl/pull/4
    • feat: TPU compatibility by @monologg in https://github.com/lassl/lassl/pull/8
    • Feature/#3 Add GPT2Preprocessor by @iron-ij in https://github.com/lassl/lassl/pull/10
    • [docs] chore: Add authors by @seopbo in https://github.com/lassl/lassl/pull/13
    • Feature/#9 Add Processor and Collator for ALBERT by @bzantium in https://github.com/lassl/lassl/pull/14
    • [python] feat: Save tokenizer by @seopbo in https://github.com/lassl/lassl/pull/19
    • [python] mixed: Support sentence per line type doc by @seopbo in https://github.com/lassl/lassl/pull/20
    • Support setting arguments of pretraining by a config file by @seopbo in https://github.com/lassl/lassl/pull/22
    • Support corpus_type by @seopbo in https://github.com/lassl/lassl/pull/25
    • Support adding additional special tokens by @seopbo in https://github.com/lassl/lassl/pull/26
    • [python] feat: Add bert processor by @bzantium in https://github.com/lassl/lassl/pull/29
    • Refactor codes relevant to pretrain by @seopbo in https://github.com/lassl/lassl/pull/31
    • Update issue templates by @seopbo in https://github.com/lassl/lassl/pull/34
    • [python] fix: Add a sampling_ratio condition by @bzantium in https://github.com/lassl/lassl/pull/36
    • [python] chore: Update dependencies by @seopbo in https://github.com/lassl/lassl/pull/38
    • [python] fix: Fix a buffer in processing.py by @seopbo in https://github.com/lassl/lassl/pull/41
    • [mixed] fix: Change xla_spawn, add configs and comments by @bzantium in https://github.com/lassl/lassl/pull/44
    • [python] feat: add keep_in_memory option in serialize_corpora by @iron-ij in https://github.com/lassl/lassl/pull/43
    • [chore] fix: Fix a requirements.txt by @seopbo in https://github.com/lassl/lassl/pull/46
    • [python] fix: Remove the duplicate-sampling option when sampling by @bzantium in https://github.com/lassl/lassl/pull/48
    • [etc] docs: Add README by @bzantium in https://github.com/lassl/lassl/pull/39
    • [etc] docs: Introduce the LASSL acronym in the README by @bzantium in https://github.com/lassl/lassl/pull/52
    • [python] chore: Update dependencies by @seopbo in https://github.com/lassl/lassl/pull/54
    • [python] fix: Make GPT2 Collator inherit CollatorForLM by @bzantium in https://github.com/lassl/lassl/pull/57
    • [etc] docs: Add additional information to doc by @seopbo in https://github.com/lassl/lassl/pull/59

    New Contributors

    • @seopbo made their first contribution in https://github.com/lassl/lassl/pull/4
    • @monologg made their first contribution in https://github.com/lassl/lassl/pull/8
    • @iron-ij made their first contribution in https://github.com/lassl/lassl/pull/10
    • @bzantium made their first contribution in https://github.com/lassl/lassl/pull/14

    Full Changelog: https://github.com/lassl/lassl/commits/v0.1.0

Owner
LASSL: LAnguage Self-Supervised Learning