Universal End2End Training Platform, including pre-training, classification tasks, machine translation, and more.

Overview

Background

TenTrans is a unified end-to-end multilingual, multi-task pre-training platform. It supports multiple pre-training methods as well as sequence generation and natural language understanding tasks.

Installation

git clone [email protected]:baijunji/Teg-Tentrans.git
pip install -r requirements.txt 

TenTrans is a lightweight toolkit built on PyTorch and is easy to install.

Quick Start

(1) Pre-trained Models

TenTrans supports several pre-training models, including encoder-based pre-training (e.g. MLM) and seq2seq-based generative pre-training (e.g. MASS). In addition, TenTrans also supports large-scale multilingual machine translation pre-training.

We start with the simplest case, MLM pre-training, to quickly familiarize you with how TenTrans works.

  1. Data preprocessing

To pre-train an MLM model, the monolingual training file first has to be binarized. You can use the following command (the vocabulary file contains one token per line); running it produces train.bpe.en.pth.

python process.py vocab file lang [shard_id]   (shard_id is optional)

When the dataset is small, you can also use a plain-text CSV file as the training file. The CSV format is:

seq1 lang1
This is a positive sentence. en
This is a negative sentence. en
This is a sentence. en
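
If you prefer to generate this training file programmatically, the sketch below writes a file in the layout shown above. It is only an illustration: the column separator (a tab is assumed here) and whether a header row is required are assumptions, so adjust them to what your TenTrans setup expects.

import csv

# Hypothetical monolingual sentences to be written in the TenTrans-style CSV layout.
sentences = [
    "This is a positive sentence.",
    "This is a negative sentence.",
    "This is a sentence.",
]

# Assumption: one sentence column ("seq1") plus a language column ("lang1"),
# tab-separated; switch the delimiter to "," if your setup uses commas.
with open("train.bpe.en.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["seq1", "lang1"])
    for sent in sentences:
        writer.writerow([sent, "en"])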
  2. Parameter configuration

TenTrans reads its training parameters from YAML files. We provide a set of configuration templates for each task (see the run/ folder), so only a small number of parameters need to be changed.

# base config
langs: [en]
epoch: 15
update_every_epoch:  1   # number of update steps per epoch
dumpdir: ./dumpdir       # where models and logs are saved
share_all_task_model: True # whether to share model parameters across all tasks
save_intereval: 1      # checkpoint saving interval
log_interval: 10       # logging interval



# start of global settings; if a parameter is not set inside a task, the global value is used
optimizer: adam 
learning_rate: 0.0001
learning_rate_warmup: 4000
scheduling: warmupexponentialdecay
max_tokens: 2000
group_by_size: False   # whether to sort the corpus by length
max_seq_length: 260    # maximum sequence length accepted by the model
weight_decay: 0.01
eps: 0.000001
adam_betas: [0.9, 0.999]

sentenceRep:           # encoder settings
  type: transformer    # alternatives: cbow, rnn
  hidden_size: 768
  ff_size: 3072
  dropout: 0.1
  attention_dropout: 0.1
  encoder_layers: 12
  num_lang: 1
  num_heads: 12
  use_langembed: False
  embedd_size: 768
  learned_pos: True
  pretrain_embedd: 
  activation: gelu
# end of global settings


tasks:                #task definitions; TenTrans supports joint training of multiple tasks, e.g. classification, MLM, and seq2seq
  en_mlm:             #task ID; choose any meaningful identifier
    task_name: mlm    #task name; TenTrans trains according to this name
    data:
        data_folder: your_data_folder
        src_vocab: vocab.txt
        # train_valid_test: [train.bpe.en.csv, valid.bpe.en.csv, test.bpe.en.csv]
        train_valid_test: [train.bpe.en.pth, valid.bpe.en.pth, test.bpe.en.pth]
        stream_text: False  # whether to enable streaming text training
        p_pred_mask_kepp_rand: [0.15, 0.8, 0.1, 0.1]  # MLM masking probabilities (illustrated after this config)

    target:           # output layer settings
        sentence_rep_dim: 768
        dropout: 0.1
        share_out_embedd: True
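
For intuition, the sketch below shows one common reading of p_pred_mask_kepp_rand in BERT-style MLM training: select about 15% of tokens for prediction, then replace 80% of those with a mask token, keep 10% unchanged, and replace 10% with a random token. This is an assumption about what the four numbers control, not TenTrans's actual implementation.

import random

def apply_mlm_masking(tokens, vocab, p_pred=0.15, p_mask=0.8, p_keep=0.1, p_rand=0.1,
                      mask_token="[MASK]"):
    """Illustrative BERT-style masking; TenTrans's own masking logic may differ."""
    assert abs(p_mask + p_keep + p_rand - 1.0) < 1e-6
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < p_pred:           # token selected for prediction
            targets.append(tok)
            r = random.random()
            if r < p_mask:                      # 80%: replace with the mask token
                inputs.append(mask_token)
            elif r < p_mask + p_keep:           # 10%: keep the original token
                inputs.append(tok)
            else:                               # 10%: replace with a random vocabulary token
                inputs.append(random.choice(vocab))
        else:
            inputs.append(tok)
            targets.append(None)                # not predicted
    return inputs, targets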
  3. Launch training

Single node, multiple GPUs

export NPROC_PER_NODE=8;
python -m torch.distributed.launch \
                --nproc_per_node=$NPROC_PER_NODE main.py \
                --config run/xlm.yaml --multi_gpu True

(2) Machine Translation

In this section you will quickly learn how to train a Transformer-based neural machine translation model, using WMT14 English-German as an example (download the data).

  1. Data preprocessing

As with the monolingual training files, the parallel corpus for translation also needs to be binarized.

python process.py vocab.bpe.32000 train.bpe.de de
python process.py vocab.bpe.32000 train.bpe.en en
  2. Parameter configuration
# base config
langs: [en, de]
epoch: 50
update_every_epoch: 5000
dumpdir: ./exp/tentrans/wmt14ende_template

share_all_task_model: True
optimizer: adam 
learning_rate: 0.0007
learning_rate_warmup: 4000
scheduling: warmupexponentialdecay
max_tokens: 8000
max_seq_length: 512
save_intereval: 1
weight_decay: 0
adam_betas: [0.9, 0.98]

clip_grad_norm: 0
label_smoothing: 0.1
accumulate_gradients: 2
share_all_embedd: True
patience: 10
#share_out_embedd: False

tasks:
  wmtende_mt:
    task_name: seq2seq
    reload_checkpoint:
    data:
        data_folder:  /train_data/wmt16_ende/
        src_vocab: vocab.bpe.32000
        tgt_vocab: vocab.bpe.32000
        train_valid_test: [train.bpe.en.pth:train.bpe.de.pth, valid.bpe.en.pth:valid.bpe.de.pth, test.bpe.en.pth:test.bpe.de.pth]
        group_by_size: True
        max_len: 200

    sentenceRep:
      type: transformer 
      hidden_size: 512
      ff_size: 2048
      attention_dropout: 0.1
      encoder_layers: 6
      num_heads: 8
      embedd_size: 512
      dropout: 0.1
      learned_pos: True
      activation: relu

    target:
      type: transformer 
      hidden_size: 512
      ff_size: 2048
      attention_dropout: 0.1
      decoder_layers: 6
      num_heads: 8
      embedd_size: 512
      dropout: 0.1
      learned_pos: True
      activation: relu
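
The scheduling: warmupexponentialdecay entry above combines a warmup phase with a decaying learning rate. As a rough illustration only (the exact formula used by TenTrans may differ, and the decay_rate value here is an assumed placeholder), a schedule of this family can be sketched as linear warmup to the peak learning rate over learning_rate_warmup steps, followed by exponential decay:

def warmup_exponential_decay(step, peak_lr=0.0007, warmup_steps=4000, decay_rate=0.9995):
    """Illustrative warmup-then-exponential-decay schedule; not TenTrans's exact formula."""
    if step < warmup_steps:
        # Linear warmup from 0 to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Exponential decay after warmup.
    return peak_lr * decay_rate ** (step - warmup_steps)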
  3. Model decoding

After roughly 200k update steps (about 40 hours on 8 M40 GPUs), you can use the script provided by TenTrans to average the last few checkpoints for better results.

path=model_save_path
python  scripts/average_checkpoint.py --inputs  $path/checkpoint_seq2seq_ldc_mt_40 \
    $path/checkpoint_seq2seq_ldc_mt_39 $path/checkpoint_seq2seq_ldc_mt_38 \
    $path/checkpoint_seq2seq_ldc_mt_37 $path/checkpoint_seq2seq_ldc_mt_36 \
    $path/checkpoint_seq2seq_ldc_mt_35 $path/checkpoint_seq2seq_ldc_mt_34 \
    --output $path/average.pt
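
Checkpoint averaging simply takes the element-wise mean of the parameters stored in several checkpoints. Use the provided scripts/average_checkpoint.py in practice; the sketch below only illustrates the idea, under the assumption that each checkpoint file loads as a plain PyTorch state dict (real checkpoints may wrap the state dict in extra fields).

import torch

def average_checkpoints(paths):
    """Element-wise average of parameter tensors across checkpoint files."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")  # assumed to be a state dict of tensors
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    for k in avg:
        avg[k] /= len(paths)
    return avg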

You can then use the averaged model for translation decoding:

python -u infer/translation_infer.py \
        --src train_data/wmt16_ende/test.bpe.en \
        --src_vocab train_data/wmt16_ende/vocab.bpe.32000 \
        --tgt_vocab train_data/wmt16_ende/vocab.bpe.32000 \
        --src_lang en \
        --tgt_lang de --batch_size 50 --beam 4 --length_penalty 0.6 \
        --model_path model_save_path/average.pt | \
        grep "Target_" | cut -f2- -d " " | sed -r 's/(@@ )|(@@ ?$)//g' > predict.ende

cat  train_data/wmt16_ende/test.tok.de |  perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > generate.ref
cat  predict.ende | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > generate.sys
perl ../scripts/multi-bleu.perl generate.ref < generate.sys
  4. Translation results
WMT14 En-De                                                BLEU
Attention Is All You Need (beam=4)                         27.30
TenTrans (beam=4, 8 GPUs, updates=200k, gradient_accu=1)   27.54
TenTrans (beam=4, 8 GPUs, updates=125k, gradient_accu=2)   27.64
TenTrans (beam=4, 24 GPUs, updates=90k, gradient_accu=1)   27.67

(3) Text Classification

You can also use the pre-trained models we provide for downstream tasks. This section takes the SST2 task as an example to get you started with fine-tuning a pre-trained model on a downstream task.

  1. Data preprocessing

For text classification we recommend the plain-text format, since it is lighter and faster. Preprocess the SST2 data into the following format (see the sample_data folder):

seq1 label1 lang1
This is a positive sentence. positive en
This is a negative sentence. negative en
This is a sentence. unknown en
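
If your raw SST2 data is not yet in this layout, a small conversion script can produce it. The sketch below assumes tab-separated input lines of the form "sentence<TAB>label" and writes the seq1/label1/lang1 columns shown above; the input file name, its format, and the output delimiter are assumptions, so adapt them to your data.

import csv

def convert_to_tentrans_csv(in_path, out_path, lang="en"):
    """Convert 'sentence<TAB>label' lines into the seq1/label1/lang1 layout."""
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        writer = csv.writer(fout, delimiter="\t")
        writer.writerow(["seq1", "label1", "lang1"])
        for line in fin:
            line = line.rstrip("\n")
            if not line:
                continue
            sentence, label = line.split("\t")
            writer.writerow([sentence, label, lang])

convert_to_tentrans_csv("sst2.train.tsv", "train.csv")  # hypothetical input file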
  2. Parameter configuration
# base config
langs: [en]
epoch: 200
update_every_epoch: 1000
share_all_task_model: False
batch_size: 8 
save_interval: 20
dumpdir: ./dumpdir/sst2

sentenceRep:
  type: transformer
  pretrain_rep: ../tentrans_pretrain/model_mlm2048.tt

tasks:
  sst2_en:
    task_name: classification
    data:
        data_folder:  sample_data/sst2
        src_vocab: vocab_en
        train_valid_test: [train.csv, dev.csv, test.csv]
        label1: [0, 1]
        feature: [seq1, label1, lang1]
    lr_e: 0.000005  # encoder learning rate
    lr_p: 0.000125  # target (output layer) learning rate
    target:
      sentence_rep_dim: 2048
      dropout: 0.1
    weight_training: False # whether to use data balancing
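
The two learning rates above (lr_e for the pre-trained encoder, lr_p for the newly added classification head) correspond to the common fine-tuning practice of giving different parameter groups different learning rates. The sketch below shows how such a split is typically expressed with a PyTorch optimizer; it only illustrates the idea, not TenTrans's internal code, and the two modules are hypothetical stand-ins.

import torch

# Hypothetical stand-ins for the pre-trained encoder and the task-specific head.
encoder = torch.nn.Linear(768, 768)
classifier = torch.nn.Linear(768, 2)

# Separate parameter groups: a small LR for the encoder, a larger one for the head.
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 5e-6},       # corresponds to lr_e
    {"params": classifier.parameters(), "lr": 1.25e-4}, # corresponds to lr_p
])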
  3. Classification inference
python -u classification_infer.py \
         --model model_path \
         --vocab  sample_data/sst2/vocab_en \
         --src test.txt \
         --lang en --threhold 0.5  > predict.out.label
python scripts/eval_recall.py  test.en.label predict.out.label
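
scripts/eval_recall.py compares the gold label file with the predicted one. Evaluation of this kind boils down to a line-by-line comparison of the two files; the sketch below computes simple accuracy under the assumption of one label per line in matching order, which may not be exactly the metric the script reports.

def label_accuracy(gold_path, pred_path):
    """Fraction of lines where the predicted label matches the gold label."""
    with open(gold_path, encoding="utf-8") as g, open(pred_path, encoding="utf-8") as p:
        gold = [line.strip() for line in g]
        pred = [line.strip() for line in p]
    assert len(gold) == len(pred), "files must have the same number of lines"
    correct = sum(1 for a, b in zip(gold, pred) if a == b)
    return correct / len(gold)

print(label_accuracy("test.en.label", "predict.out.label"))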

TenTrans Advanced Usage

1. Multilingual machine translation

2. Cross-lingual pre-training

Owner
Tencent Minority-Mandarin Translation Team