BPEmb

BPEmb is a collection of pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) and trained on Wikipedia. Its intended use is as input for neural models in natural language processing.

Website · Usage · Download · MultiBPEmb · Paper (pdf) · Citing BPEmb

Usage

Install BPEmb with pip:

pip install bpemb

Embeddings and SentencePiece models will be downloaded automatically the first time you use them.

>>> from bpemb import BPEmb
# load English BPEmb model with default vocabulary size (10k) and 50-dimensional embeddings
>>> bpemb_en = BPEmb(lang="en", dim=50)
downloading https://nlp.h-its.org/bpemb/en/en.wiki.bpe.vs10000.model
downloading https://nlp.h-its.org/bpemb/en/en.wiki.bpe.vs10000.d50.w2v.bin.tar.gz
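Downloaded files are cached, so subsequent calls load them locally. If you want to keep them somewhere other than the default cache directory, the constructor accepts a cache_dir argument (a minimal sketch based on the bpemb package's options; verify the parameter against your installed version):

>>> from pathlib import Path
# store the SentencePiece model and embeddings in a custom directory
>>> bpemb_en = BPEmb(lang="en", dim=50, cache_dir=Path("./bpemb_cache"))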

You can do two main things with BPEmb. The first is subword segmentation:

# apply English BPE subword segmentation model
>>> bpemb_en.encode("Stratford")
['▁strat', 'ford']
# load Chinese BPEmb model with vocabulary size 100k and default (100-dim) embeddings
>>> bpemb_zh = BPEmb(lang="zh", vs=100000)
# apply Chinese BPE subword segmentation model
>>> bpemb_zh.encode("这是一个中文句子")  # "This is a Chinese sentence."
['▁这是一个', '中文', '句子']  # ["This is a", "Chinese", "sentence"]

Whether and how a word gets split depends on the vocabulary size. Generally, a smaller vocabulary size yields a segmentation into many subwords, while a larger vocabulary size results in frequent words not being split, as the table and the sketch below show:

vocabulary size   segmentation
1000              ['▁str', 'at', 'f', 'ord']
3000              ['▁str', 'at', 'ford']
5000              ['▁str', 'at', 'ford']
10000             ['▁strat', 'ford']
25000             ['▁stratford']
50000             ['▁stratford']
100000            ['▁stratford']
200000            ['▁stratford']
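To reproduce this comparison, you can load several English models with different vocabulary sizes and segment the same word (a minimal sketch; each model is downloaded on first use, and dim=50 is just one of the available embedding dimensions):

# compare segmentations of the same word across vocabulary sizes
>>> for vs in [1000, 10000, 25000]:
...     print(vs, BPEmb(lang="en", vs=vs, dim=50).encode("Stratford"))
1000 ['▁str', 'at', 'f', 'ord']
10000 ['▁strat', 'ford']
25000 ['▁stratford']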

The second purpose of BPEmb is to provide pretrained subword embeddings:

# Embeddings are wrapped in a gensim KeyedVectors object
>>> type(bpemb_zh.emb)
gensim.models.keyedvectors.Word2VecKeyedVectors
# You can use BPEmb objects like gensim KeyedVectors
>>> bpemb_en.most_similar("ford")
[('bury', 0.8745079040527344),
 ('ton', 0.8725000619888306),
 ('well', 0.871537446975708),
 ('ston', 0.8701574206352234),
 ('worth', 0.8672043085098267),
 ('field', 0.859795331954956),
 ('ley', 0.8591548204421997),
 ('ington', 0.8126075267791748),
 ('bridge', 0.8099068999290466),
 ('brook', 0.7979353070259094)]
>>> type(bpemb_en.vectors)
numpy.ndarray
>>> bpemb_en.vectors.shape
(10000, 50)
>>> bpemb_zh.vectors.shape
(100000, 100)

To use subword embeddings in your neural network, either encode your input into subword IDs:

>>> ids = bpemb_zh.encode_ids("这是一个中文句子")
>>> ids
[25950, 695, 20199]
>>> bpemb_zh.vectors[ids].shape
(3, 100)
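The mapping can also be reversed: decode_ids turns a list of subword IDs back into a string (shown here as a sketch; decode_ids follows the bpemb API, but check it against your installed version):

# map the subword IDs back to the original string
>>> bpemb_zh.decode_ids(ids)
'这是一个中文句子'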

Or use the embed method:

# apply Chinese subword segmentation and perform embedding lookup
>>> bpemb_zh.embed("这是一个中文句子").shape
(3, 100)
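A common next step is to initialize an embedding layer in your network with these pretrained vectors. Below is a minimal PyTorch sketch (PyTorch is not part of bpemb; converting via torch.tensor and nn.Embedding.from_pretrained is one idiomatic way to do it):

>>> import torch
>>> import torch.nn as nn
# build an embedding layer from the pretrained subword vectors
>>> emb_layer = nn.Embedding.from_pretrained(torch.tensor(bpemb_zh.vectors))
>>> token_ids = torch.tensor(bpemb_zh.encode_ids("这是一个中文句子"))
>>> emb_layer(token_ids).shape
torch.Size([3, 100])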

Downloads for each language

ab (Abkhazian), ace (Achinese), ady (Adyghe), af (Afrikaans), ak (Akan), als (Alemannic), am (Amharic), an (Aragonese), ang (Old English), ar (Arabic), arc (Official Aramaic), arz (Egyptian Arabic), as (Assamese), ast (Asturian), atj (Atikamekw), av (Avaric), ay (Aymara), az (Azerbaijani), azb (South Azerbaijani)

ba (Bashkir), bar (Bavarian), bcl (Central Bikol), be (Belarusian), bg (Bulgarian), bi (Bislama), bjn (Banjar), bm (Bambara), bn (Bengali), bo (Tibetan), bpy (Bishnupriya), br (Breton), bs (Bosnian), bug (Buginese), bxr (Russia Buriat)

ca (Catalan), cdo (Min Dong Chinese), ce (Chechen), ceb (Cebuano), ch (Chamorro), chr (Cherokee), chy (Cheyenne), ckb (Central Kurdish), co (Corsican), cr (Cree), crh (Crimean Tatar), cs (Czech), csb (Kashubian), cu (Church Slavic), cv (Chuvash), cy (Welsh)

da (Danish), de (German), din (Dinka), diq (Dimli), dsb (Lower Sorbian), dty (Dotyali), dv (Dhivehi), dz (Dzongkha)

ee (Ewe), el (Modern Greek), en (English), eo (Esperanto), es (Spanish), et (Estonian), eu (Basque), ext (Extremaduran)

fa (Persian), ff (Fulah), fi (Finnish), fj (Fijian), fo (Faroese), fr (French), frp (Arpitan), frr (Northern Frisian), fur (Friulian), fy (Western Frisian)

ga (Irish), gag (Gagauz), gan (Gan Chinese), gd (Scottish Gaelic), gl (Galician), glk (Gilaki), gn (Guarani), gom (Goan Konkani), got (Gothic), gu (Gujarati), gv (Manx)

ha (Hausa), hak (Hakka Chinese), haw (Hawaiian), he (Hebrew), hi (Hindi), hif (Fiji Hindi), hr (Croatian), hsb (Upper Sorbian), ht (Haitian), hu (Hungarian), hy (Armenian)

ia (Interlingua), id (Indonesian), ie (Interlingue), ig (Igbo), ik (Inupiaq), ilo (Iloko), io (Ido), is (Icelandic), it (Italian), iu (Inuktitut)

ja (Japanese), jam (Jamaican Creole English), jbo (Lojban), jv (Javanese)

ka (Georgian), kaa (Kara-Kalpak), kab (Kabyle), kbd (Kabardian), kbp (Kabiyè), kg (Kongo), ki (Kikuyu), kk (Kazakh), kl (Kalaallisut), km (Central Khmer), kn (Kannada), ko (Korean), koi (Komi-Permyak), krc (Karachay-Balkar), ks (Kashmiri), ksh (Kölsch), ku (Kurdish), kv (Komi), kw (Cornish), ky (Kirghiz)

la (Latin), lad (Ladino), lb (Luxembourgish), lbe (Lak), lez (Lezghian), lg (Ganda), li (Limburgan), lij (Ligurian), lmo (Lombard), ln (Lingala), lo (Lao), lrc (Northern Luri), lt (Lithuanian), ltg (Latgalian), lv (Latvian)

mai (Maithili), mdf (Moksha), mg (Malagasy), mh (Marshallese), mhr (Eastern Mari), mi (Maori), min (Minangkabau), mk (Macedonian), ml (Malayalam), mn (Mongolian), mr (Marathi), mrj (Western Mari), ms (Malay), mt (Maltese), mwl (Mirandese), my (Burmese), myv (Erzya), mzn (Mazanderani)

na (Nauru), nap (Neapolitan), nds (Low German), ne (Nepali), new (Newari), ng (Ndonga), nl (Dutch), nn (Norwegian Nynorsk), no (Norwegian), nov (Novial), nrm (Narom), nso (Pedi), nv (Navajo), ny (Nyanja)

oc (Occitan), olo (Livvi), om (Oromo), or (Oriya), os (Ossetian)

pa (Panjabi), pag (Pangasinan), pam (Pampanga), pap (Papiamento), pcd (Picard), pdc (Pennsylvania German), pfl (Pfaelzisch), pi (Pali), pih (Pitcairn-Norfolk), pl (Polish), pms (Piemontese), pnb (Western Panjabi), pnt (Pontic), ps (Pushto), pt (Portuguese)

qu (Quechua)

rm (Romansh), rmy (Vlax Romani), rn (Rundi), ro (Romanian), ru (Russian), rue (Rusyn), rw (Kinyarwanda)

sa (Sanskrit), sah (Yakut), sc (Sardinian), scn (Sicilian), sco (Scots), sd (Sindhi), se (Northern Sami), sg (Sango), sh (Serbo-Croatian), si (Sinhala), sk (Slovak), sl (Slovenian), sm (Samoan), sn (Shona), so (Somali), sq (Albanian), sr (Serbian), srn (Sranan Tongo), ss (Swati), st (Southern Sotho), stq (Saterfriesisch), su (Sundanese), sv (Swedish), sw (Swahili), szl (Silesian)

ta (Tamil), tcy (Tulu), te (Telugu), tet (Tetum), tg (Tajik), th (Thai), ti (Tigrinya), tk (Turkmen), tl (Tagalog), tn (Tswana), to (Tonga), tpi (Tok Pisin), tr (Turkish), ts (Tsonga), tt (Tatar), tum (Tumbuka), tw (Twi), ty (Tahitian), tyv (Tuvinian)

udm (Udmurt), ug (Uighur), uk (Ukrainian), ur (Urdu), uz (Uzbek)

ve (Venda), vec (Venetian), vep (Veps), vi (Vietnamese), vls (Vlaams), vo (Volapük)

wa (Walloon), war (Waray), wo (Wolof), wuu (Wu Chinese)

xal (Kalmyk), xh (Xhosa), xmf (Mingrelian)

yi (Yiddish), yo (Yoruba)

za (Zhuang), zea (Zeeuws), zh (Chinese), zu (Zulu)

MultiBPEmb

multi (multilingual)
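The multilingual model, trained jointly on the Wikipedias of all 275 languages, is loaded through the same interface (a sketch; the vocabulary size and dimensionality here are assumptions based on the MultiBPEmb release, so check the download page for the available configurations):

# load the multilingual model shared across all 275 languages
>>> bpemb_multi = BPEmb(lang="multi", vs=1000000, dim=300)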

Citing BPEmb

If you use BPEmb in academic work, please cite:

@InProceedings{heinzerling2018bpemb,
  author = {Benjamin Heinzerling and Michael Strube},
  title = "{BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages}",
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {May 7-12, 2018},
  address = {Miyazaki, Japan},
  editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {979-10-95546-00-9},
  language = {english}
  }