FireFlyer Record file format, writer and reader for DL training samples.

Overview

FFRecord

FFRecord is a simple file format, developed by HFAiLab, for storing a sequence of binary records. It supports random access and reading via Linux Asynchronous I/O (AIO).

File Format

Storage Layout:

+-----------------------------------+---------------------------------------+
|         checksum                  |             N                         |
+-----------------------------------+---------------------------------------+
|         checksums                 |           offsets                     |
+---------------------+---------------------+--------+----------------------+
|      sample 1       |      sample 2       | ....   |      sample N        |
+---------------------+---------------------+--------+----------------------+

Fields:

field      size (bytes)                 description
checksum   4                            CRC32 checksum of metadata
N          8                            number of samples
checksums  4 * N                        CRC32 checksum of each sample
offsets    8 * N                        byte offset of each sample
sample i   offsets[i + 1] - offsets[i]  data of the i-th sample
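
For illustration, the metadata can be parsed with nothing but the standard library. The following is a hedged sketch based on the table above, not the library's own parser; it assumes little-endian integers (the usual layout on x86-64 Linux):

import struct

def read_ffr_metadata(fname):
    """Parse the FFRecord metadata section described above (sketch only)."""
    with open(fname, 'rb') as f:
        checksum, n = struct.unpack('<IQ', f.read(12))      # metadata CRC32 + number of samples
        checksums = struct.unpack(f'<{n}I', f.read(4 * n))  # per-sample CRC32 checksums
        offsets = struct.unpack(f'<{n}Q', f.read(8 * n))    # per-sample byte offsets
    return checksum, n, checksums, offsets

Per the table, sample i then spans bytes offsets[i] to offsets[i + 1] (the last sample extends to the end of the file).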

Get Started

Requirements

FFRecord uses Linux AIO, so a Linux system is required; the examples below use Python 3.

Install

pip3 install ffrecord

Usage

We provide ffrecord.FileWriter and ffrecord.FileReader for writing and reading, respectively.

Write

To create a FileWriter object, specify a file name and the total number of samples. You can then call FileWriter.write_one() to write a sample to the FFRecord file. It accepts bytes or bytearray as input and appends the data to the end of the opened file.

import pickle

from ffrecord import FileWriter


def serialize(sample):
    """ Serialize a sample to bytes or bytearray

    You could use anything you like to serialize the sample.
    Here we simply use pickle.dumps().
    """
    return pickle.dumps(sample)


samples = list(range(100))  # anything you would like to store
fname = 'test.ffr'
n = len(samples)  # number of samples to be written
writer = FileWriter(fname, n)

for i in range(n):
    data = serialize(samples[i])  # data should be bytes or bytearray
    writer.write_one(data)

writer.close()

Read

To create a FileReader object, you only need to specify the file name. You can then call FileReader.read() to read multiple samples from the FFRecord file. It accepts a list of indices as input and returns the corresponding samples' data.

If check_data = True, the reader validates each sample's checksum before returning the data.
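
For intuition, validating a sample amounts to recomputing its CRC32 and comparing it with the value stored in the metadata section. A minimal sketch with a hypothetical helper (not the library's code):

import zlib

def crc_ok(sample_bytes, stored_checksum):
    # Recompute CRC32 over the raw sample bytes and compare it with the
    # per-sample checksum stored in the file's metadata section.
    return (zlib.crc32(sample_bytes) & 0xFFFFFFFF) == stored_checksum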

import pickle

from ffrecord import FileReader


def deserialize(data):
    """ deserialize bytes data

    The deserialize method should be paired with the serialize method above.
    """
    return pickle.loads(data)


fname = 'test.ffr'
reader = FileReader(fname, check_data=True)
print(f'Number of samples: {reader.n}')

indices = [3, 6, 0, 10]      # indices of each sample
data = reader.read(indices)  # return a list of bytes data

for i in range(len(indices)):
    sample = deserialize(data[i])
    # do what you want

reader.close()

Dataset and DataLoader for PyTorch

We also provide ffrecord.torch.Dataset and ffrecord.torch.DataLoader for PyTorch users to train models using FFRecord.

Different from torch.utils.data.Dataset, which accepts a single index and returns one sample, ffrecord.torch.Dataset accepts a batch of indices and returns a batch of samples. One advantage of this design is that it can read an entire batch at a time using Linux AIO.

We first read a batch of bytes from the FFRecord file and then pass the bytes to the process() function. Users need to inherit from ffrecord.torch.Dataset and define their own process() function.

Pipeline:  indices ----------------------------> bytes -------------> samples
                      reader.read(indices)               process()

For example:

import pickle

import ffrecord.torch


class CustomDataset(ffrecord.torch.Dataset):

    def __init__(self, fname, check_data=True, transform=None):
        super().__init__(fname, check_data)
        self.transform = transform

    def process(self, indices, data):
        # deserialize data
        samples = [pickle.loads(b) for b in data]

        # transform data
        if self.transform:
            samples = [self.transform(s) for s in samples]
        return samples

dataset = CustomDataset('train.ffr')
indices = [3, 4, 1, 0]
samples = dataset[indices]
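
Indexing with a batch of indices maps directly onto the pipeline above. A plausible sketch of the internals, based on the description here rather than the library's exact source:

class Dataset:  # simplified stand-in for ffrecord.torch.Dataset
    def __getitem__(self, indices):
        data = self.reader.read(indices)    # one batched AIO read for all indices
        return self.process(indices, data)  # user-defined deserialize/transform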

ffrecord.torch.Dataset can be combined with ffrecord.torch.DataLoader just like in PyTorch.

dataset = CustomDataset('train.ffr')
loader = ffrecord.torch.DataLoader(dataset,
                                   batch_size=16,
                                   shuffle=True,
                                   num_workers=8)

for i, batch in enumerate(loader):
    ...  # train the model on this batch
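
To flesh out the loop body, here is a hedged sketch of one training step; model, loss_fn and optimizer are placeholder names for objects created elsewhere (they are not part of ffrecord), and it assumes process() returns (input, target) pairs that PyTorch can collate into tensors:

# model, loss_fn and optimizer are assumed to exist (see note above)
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()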
Comments
  • install error

    When I install ffrecord with python setup.py install, it fails with the following errors:

    running install
    running bdist_egg
    running egg_info
    creating ffrecord.egg-info
    writing ffrecord.egg-info/PKG-INFO
    writing dependency_links to ffrecord.egg-info/dependency_links.txt
    writing requirements to ffrecord.egg-info/requires.txt
    writing top-level names to ffrecord.egg-info/top_level.txt
    writing manifest file 'ffrecord.egg-info/SOURCES.txt'
    reading manifest file 'ffrecord.egg-info/SOURCES.txt'
    writing manifest file 'ffrecord.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.7
    creating build/lib.linux-x86_64-3.7/ffrecord
    copying ffrecord/fileio.py -> build/lib.linux-x86_64-3.7/ffrecord
    copying ffrecord/__init__.py -> build/lib.linux-x86_64-3.7/ffrecord
    copying ffrecord/utils.py -> build/lib.linux-x86_64-3.7/ffrecord
    creating build/lib.linux-x86_64-3.7/ffrecord/torch
    copying ffrecord/torch/__init__.py -> build/lib.linux-x86_64-3.7/ffrecord/torch
    copying ffrecord/torch/dataset.py -> build/lib.linux-x86_64-3.7/ffrecord/torch
    copying ffrecord/torch/dataloader.py -> build/lib.linux-x86_64-3.7/ffrecord/torch
    running build_ext
    -- The C compiler identification is GNU 7.5.0
    -- The CXX compiler identification is GNU 7.5.0
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working C compiler: /usr/bin/cc - skipped
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Check for working CXX compiler: /usr/bin/c++ - skipped
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Found PythonInterp: /opt/conda/bin/python (found version "3.7.10") 
    -- Found PythonLibs: /opt/conda/lib/libpython3.7m.so
    -- Performing Test HAS_CPP14_FLAG
    -- Performing Test HAS_CPP14_FLAG - Success
    -- Performing Test HAS_CPP11_FLAG
    -- Performing Test HAS_CPP11_FLAG - Success
    -- Performing Test HAS_LTO_FLAG
    -- Performing Test HAS_LTO_FLAG - Success
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /root/ffrecord/build/temp.linux-x86_64-3.7
    [ 20%] Building CXX object CMakeFiles/_ffrecord_cpp.dir/reader.cpp.o
    [ 40%] Building CXX object CMakeFiles/_ffrecord_cpp.dir/writer.cpp.o
    [ 60%] Building CXX object CMakeFiles/_ffrecord_cpp.dir/utils.cpp.o
    [ 80%] Building CXX object CMakeFiles/_ffrecord_cpp.dir/bindings.cpp.o
    /root/ffrecord/ffrecord/src/bindings.cpp: In member function ‘void ffrecord::WriterWrapper::write_one_wrapper(const pybind11::buffer&)’:
    /root/ffrecord/ffrecord/src/bindings.cpp:22:44: error: passing ‘const pybind11::buffer’ as ‘this’ argument discards qualifiers [-fpermissive]
             py::buffer_info info = buf.request();
                                                ^
    In file included from /usr/include/pybind11/cast.h:13:0,
                     from /usr/include/pybind11/attr.h:13,
                     from /usr/include/pybind11/pybind11.h:36,
                     from /root/ffrecord/ffrecord/src/bindings.cpp:1:
    /usr/include/pybind11/pytypes.h:832:17: note:   in call to ‘pybind11::buffer_info pybind11::buffer::request(bool)’
         buffer_info request(bool writable = false) {
                     ^~~~~~~
    /root/ffrecord/ffrecord/src/bindings.cpp: In member function ‘std::vector<pybind11::array> ffrecord::ReaderWrapper::read_batch_wrapper(const std::vector<long int>&)’:
    /root/ffrecord/ffrecord/src/bindings.cpp:41:59: error: invalid conversion from ‘void (*)(void*)’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’ [-fpermissive]
                 auto capsule = py::capsule(b.data, free_buffer);
                                                               ^
    In file included from /usr/include/pybind11/cast.h:13:0,
                     from /usr/include/pybind11/attr.h:13,
                     from /usr/include/pybind11/pybind11.h:36,
                     from /root/ffrecord/ffrecord/src/bindings.cpp:1:
    /usr/include/pybind11/pytypes.h:734:14: note:   initializing argument 2 of ‘pybind11::capsule::capsule(const void*, void (*)(PyObject*))’
         explicit capsule(const void *value, void (*destruct)(PyObject *) = nullptr)
                  ^~~~~~~
    /root/ffrecord/ffrecord/src/bindings.cpp: In member function ‘pybind11::array ffrecord::ReaderWrapper::read_one_wrapper(int64_t)’:
    /root/ffrecord/ffrecord/src/bindings.cpp:49:55: error: invalid conversion from ‘void (*)(void*)’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’ [-fpermissive]
             auto capsule = py::capsule(b.data, free_buffer);
                                                           ^
    In file included from /usr/include/pybind11/cast.h:13:0,
                     from /usr/include/pybind11/attr.h:13,
                     from /usr/include/pybind11/pybind11.h:36,
                     from /root/ffrecord/ffrecord/src/bindings.cpp:1:
    /usr/include/pybind11/pytypes.h:734:14: note:   initializing argument 2 of ‘pybind11::capsule::capsule(const void*, void (*)(PyObject*))’
         explicit capsule(const void *value, void (*destruct)(PyObject *) = nullptr)
                  ^~~~~~~
    /root/ffrecord/ffrecord/src/bindings.cpp: In member function ‘pybind11::array_t<long int> ffrecord::ReaderWrapper::get_offsets(int)’:
    /root/ffrecord/ffrecord/src/bindings.cpp:55:58: error: invalid user-defined conversion from ‘ffrecord::ReaderWrapper::get_offsets(int)::<lambda(void*)>’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’ [-fpermissive]
             auto capsule = py::capsule(v.data(), [](void*) {});
                                                              ^
    /root/ffrecord/ffrecord/src/bindings.cpp:55:54: note: candidate is: ffrecord::ReaderWrapper::get_offsets(int)::<lambda(void*)>::operator void (*)(void*)() const <near match>
             auto capsule = py::capsule(v.data(), [](void*) {});
                                                          ^
    /root/ffrecord/ffrecord/src/bindings.cpp:55:54: note:   no known conversion from ‘void (*)(void*)’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’
    In file included from /usr/include/pybind11/cast.h:13:0,
                     from /usr/include/pybind11/attr.h:13,
                     from /usr/include/pybind11/pybind11.h:36,
                     from /root/ffrecord/ffrecord/src/bindings.cpp:1:
    /usr/include/pybind11/pytypes.h:734:14: note:   initializing argument 2 of ‘pybind11::capsule::capsule(const void*, void (*)(PyObject*))’
         explicit capsule(const void *value, void (*destruct)(PyObject *) = nullptr)
                  ^~~~~~~
    /root/ffrecord/ffrecord/src/bindings.cpp: In member function ‘pybind11::array_t<unsigned int> ffrecord::ReaderWrapper::get_checksums(int)’:
    /root/ffrecord/ffrecord/src/bindings.cpp:61:58: error: invalid user-defined conversion from ‘ffrecord::ReaderWrapper::get_checksums(int)::<lambda(void*)>’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’ [-fpermissive]
             auto capsule = py::capsule(v.data(), [](void*) {});
                                                              ^
    /root/ffrecord/ffrecord/src/bindings.cpp:61:54: note: candidate is: ffrecord::ReaderWrapper::get_checksums(int)::<lambda(void*)>::operator void (*)(void*)() const <near match>
             auto capsule = py::capsule(v.data(), [](void*) {});
                                                          ^
    /root/ffrecord/ffrecord/src/bindings.cpp:61:54: note:   no known conversion from ‘void (*)(void*)’ to ‘void (*)(PyObject*) {aka void (*)(_object*)}’
    In file included from /usr/include/pybind11/cast.h:13:0,
                     from /usr/include/pybind11/attr.h:13,
                     from /usr/include/pybind11/pybind11.h:36,
                     from /root/ffrecord/ffrecord/src/bindings.cpp:1:
    /usr/include/pybind11/pytypes.h:734:14: note:   initializing argument 2 of ‘pybind11::capsule::capsule(const void*, void (*)(PyObject*))’
         explicit capsule(const void *value, void (*destruct)(PyObject *) = nullptr)
                  ^~~~~~~
    /root/ffrecord/ffrecord/src/bindings.cpp: At global scope:
    /root/ffrecord/ffrecord/src/bindings.cpp:67:16: error: expected constructor, destructor, or type conversion before ‘(’ token
     PYBIND11_MODULE(_ffrecord_cpp, m) {
                    ^
    CMakeFiles/_ffrecord_cpp.dir/build.make:117: recipe for target 'CMakeFiles/_ffrecord_cpp.dir/bindings.cpp.o' failed
    make[2]: *** [CMakeFiles/_ffrecord_cpp.dir/bindings.cpp.o] Error 1
    CMakeFiles/Makefile2:82: recipe for target 'CMakeFiles/_ffrecord_cpp.dir/all' failed
    make[1]: *** [CMakeFiles/_ffrecord_cpp.dir/all] Error 2
    Makefile:90: recipe for target 'all' failed
    make: *** [all] Error 2
    Traceback (most recent call last):
      File "setup.py", line 24, in <module>
        ext_modules=[cpp_module]
      File "/opt/conda/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/opt/conda/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/opt/conda/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/opt/conda/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run
        self.do_egg_install()
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install
        self.run_command('bdist_egg')
      File "/opt/conda/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/opt/conda/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run
        cmd = self.call_command('install_lib', warn_dir=0)
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command
        self.run_command(cmdname)
      File "/opt/conda/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/opt/conda/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
        self.build()
      File "/opt/conda/lib/python3.7/distutils/command/install_lib.py", line 107, in build
        self.run_command('build_ext')
      File "/opt/conda/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/opt/conda/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/opt/conda/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/opt/conda/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/opt/conda/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/opt/conda/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/root/ffrecord/cmake_build.py", line 118, in build_extension
        ["cmake", "--build", "."] + build_args, cwd=self.build_temp
      File "/opt/conda/lib/python3.7/subprocess.py", line 363, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['cmake', '--build', '.']' returned non-zero exit status 2.
    
    Labels: bug, install. Opened by jimchenhub, 3 comments.
  • Error of "RuntimeError: 'ns > 0' failed. Number of submitted requests: -22"

    I applied the sample code from the README, but an error occurred at data = self.reader.read(indices) in the __getitem__ method of the ffrecord.torch.dataset module. More detailed error messages follow:


    -- Process 1 terminated with the following error:
    Traceback (most recent call last):
      File "xxxx/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
        fn(i, *args)
      File "xxxx.py", line 172, in worker
        trainer.train(args, gpu_id, rank, train_loader, model, optimizer, scheduler, train_sampler)
      File "xxxx.py", line 39, in train
        for step, batch in enumerate(loader):
      File "xxxx/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
        data = self._next_data()
      File "xxxx/python3.8/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
        return self._process_data(data)
      File "xxxx/python3.8/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
        data.reraise()
      File "xxxx/python3.8/site-packages/site-packages/torch/_utils.py", line 457, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "xxxx/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
        data = fetcher.fetch(index)
      File "xxxx/python3.8/site-packages/ffrecord-1.3.2+35c6863-py3.8-linux-x86_64.egg/ffrecord/torch/dataloader.py", line 151, in fetch
        data = self.dataset[indexes]
      File "xxx.py", line 34, in __getitem__
        data = self.reader.read(indices)
    RuntimeError: 'ns > 0' failed. Number of submitted requests: -22
    Error in std::vector<ffrecord::MemBlock> ffrecord::FileReader::read_batch(const std::vector<long int>&) at xxx/ffrecord/ffrecord/src/reader.cpp line 225
    

    What might be the cause of this error?

    Opened by xlxwalex, 7 comments.