A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual, active/passive, and many more. Created by Prithiviraj Damodaran. Open to pull requests and other forms of collaboration.

Overview


Styleformer

A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual, active/passive, and many more. For instance, understand What makes text formal or casual/informal.


Use cases for Styleformer

Area 1: Data Augmentation

  • Augment training datasets with various fine-grained language styles.
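
A minimal sketch of this idea, assuming the Casual to Formal model shown in Quick Start below. The examples list and its label field are hypothetical placeholders for your own dataset; only Styleformer(style=0) and sf.transfer() come from this repo.

from styleformer import Styleformer

# Data-augmentation sketch: add a formal rewrite of each labelled example
# as an extra training sample. `examples` and its fields are hypothetical.
sf = Styleformer(style=0)  # 0 = Casual to Formal

examples = [
    {"text": "btw - ur avatar looks familiar", "label": "small_talk"},
    {"text": "who gives a crap?", "label": "complaint"},
]

augmented = list(examples)
for ex in examples:
    formal = sf.transfer(ex["text"])
    if formal is not None:  # transfer() returns None when no good-quality transfer is found
        augmented.append({"text": formal, "label": ex["label"]})

print(len(augmented), "examples after augmentation")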

Area 2: Post-processing

  • Apply style transfers to machine-generated text.
  • e.g.
    • Refine a summarised text to active voice + formal tone.
    • Refine a translated text to a more casual tone to reach a younger audience.
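
A hedged sketch of the summarisation example above: chain two Styleformer instances (passive-to-active, then casual-to-formal) over a summariser's output. Note the passive-to-active model is still listed as WIP in the Models table, so this is illustrative only, and the summary string is made up.

from styleformer import Styleformer

# Post-processing sketch: push machine-generated text through two style
# transfers in sequence. Style codes follow the Quick Start comment:
# 3 = Passive to Active, 0 = Casual to Formal.
sf_active = Styleformer(style=3)   # WIP model; illustrative only
sf_formal = Styleformer(style=0)

summary = "the report was finished by the team and it's pretty good"  # hypothetical summariser output

active = sf_active.transfer(summary) or summary   # fall back to the input if no good transfer
formal = sf_formal.transfer(active) or active
print(formal)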

Area 3: Controlled paraphrasing

  • Formal <=> Casual and Active <=> Passive style transfers add a notion of control over how we paraphrase, compared to free-form paraphrasing, where there is no control or guarantee over the paraphrases.

Area 4: Assisted writing

  • Integrate this into any human writing interface, such as email clients, messaging tools, or social media post authoring tools. Your creativity is the only limit to the uses.
  • e.g.
    • Polish an email with a business tone for professional use.
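
A hedged sketch of the email example: a hypothetical polish() helper that rewrites each sentence of a draft with the Casual to Formal model. The '.'-based sentence split is a crude placeholder for a real sentence tokenizer.

from styleformer import Styleformer

sf = Styleformer(style=0)  # 0 = Casual to Formal

def polish(draft):
    # Crude sentence split; a real integration would use a proper tokenizer.
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    polished = []
    for sentence in sentences:
        formal = sf.transfer(sentence)
        # Keep the original sentence when no good-quality transfer is found.
        polished.append(formal if formal is not None else sentence + ".")
    return " ".join(polished)

print(polish("btw - the report is kinda late. my bad, will send it asap."))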

Installation

pip install git+https://github.com/PrithivirajDamodaran/Styleformer.git

Quick Start

Casual to Formal (Available now!)

from styleformer import Styleformer
import torch
import warnings
warnings.filterwarnings("ignore")

'''
#uncomment for reproducibility
def set_seed(seed):
  torch.manual_seed(seed)
  if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

set_seed(1234)
'''

# style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
sf = Styleformer(style = 0) 

source_sentences = [
"I am quitting my job",
"Jimmy is on crack and can't trust him",
"What do guys do to show that they like a gal?",
"i loooooooooooooooooooooooove going to the movies.",
"That movie was fucking awesome",
"My mom is doing fine",
"That was funny LOL" , 
"It's piece of cake, we can do it",
"btw - ur avatar looks familiar",
"who gives a crap?",
"Howdy Lucy! been ages since we last met.",
"Dude, this car's dope!",
"She's my bestie from college",
"I kinda have a feeling that he has a crush on you.",
"OMG! It's finger-lickin' good.",
]   

for source_sentence in source_sentences:
    target_sentence = sf.transfer(source_sentence)
    print("-" *100)
    print("[Informal] ", source_sentence)
    print("-" *100)
    if target_sentence is not None:
        print("[Formal] ",target_sentence)
        print()
    else:
        print("No good quality transfers available !")
[Informal]  I am quitting my job
[Formal]  I will be stepping down from my job.
----------------------------------------------------------------------------------------------------
[Informal]  Jimmy is on crack and can't trust him
[Formal]  Jimmy is a crack addict I cannot trust him
----------------------------------------------------------------------------------------------------
[Informal]  What do guys do to show that they like a gal?
[Formal]  What do guys do to demonstrate their affinity for women?
----------------------------------------------------------------------------------------------------
[Informal]  i loooooooooooooooooooooooove going to the movies.
[Formal]  I really like to go to the movies.
----------------------------------------------------------------------------------------------------
[Informal]  That movie was fucking awesome
[Formal]  That movie was wonderful.
----------------------------------------------------------------------------------------------------
[Informal]  My mom is doing fine
[Formal]  My mother is doing well.
----------------------------------------------------------------------------------------------------
[Informal]  That was funny LOL
[Formal]  That was hilarious
----------------------------------------------------------------------------------------------------
[Informal]  It's piece of cake, we can do it
[Formal]  The whole process is simple and is possible.
----------------------------------------------------------------------------------------------------
[Informal]  btw - ur avatar looks familiar
[Formal]  Also, your avatar looks familiar.
----------------------------------------------------------------------------------------------------
[Informal]  who gives a crap?
[Formal]  Who cares?
----------------------------------------------------------------------------------------------------
[Informal]  Howdy Lucy! been ages since we last met.
[Formal]  Hello, Lucy It has been a long time since we last met.
----------------------------------------------------------------------------------------------------
[Informal]  Dude, this car's dope!
[Formal]  I find this car very appealing.
----------------------------------------------------------------------------------------------------
[Informal]  She's my bestie from college
[Formal]  She is my best friend from college.
----------------------------------------------------------------------------------------------------
[Informal]  I kinda have a feeling that he has a crush on you.
[Formal]  I have a feeling that he is attracted to you.
----------------------------------------------------------------------------------------------------
[Informal]  OMG! It's finger-lickin' good.
[Formal]  It is so good, it is delicious.
----------------------------------------------------------------------------------------------------

Knobs

# inference_on = [0=Regular model On CPU, 1= Regular model On GPU, 2=Quantized model On CPU]
target_sentence = sf.transfer(source_sentence, inference_on=0, quality_filter=0.95, max_candidates=5)
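
The knobs are the same arguments used in Quick Start: inference_on selects the model/device as per the comment above, while quality_filter and max_candidates presumably set the minimum acceptable quality score and the number of candidate transfers generated before filtering. A hedged example running the quantized model on CPU with a slightly relaxed filter (values are illustrative):

# Quantized model on CPU with a relaxed filter (illustrative values).
target_sentence = sf.transfer("btw - ur avatar looks familiar",
                              inference_on=2, quality_filter=0.90, max_candidates=10)
print(target_sentence)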

Models

Model                                            Type     Status
prithivida/informal_to_formal_styletransfer      Seq2Seq  Beta
prithivida/formal_to_informal_styletransfer      Seq2Seq  WIP
prithivida/active_to_passive_styletransfer       Seq2Seq  WIP
prithivida/passive_to_active_styletransfer       Seq2Seq  WIP
prithivida/positive_to_negative_styletransfer    Seq2Seq  WIP
prithivida/negative_to_positive_styletransfer    Seq2Seq  WIP

Dataset

  • TBD
  • Fine-tuned T5 on a Tesla T4 GPU; each of the above models took ~2 hours to train with batch_size = 16 and epochs = 5. (Will share training args shortly; a rough sketch follows below.)
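
Since the training args have not been shared yet, the following is only a rough reconstruction of the fine-tuning setup with the Hugging Face Seq2SeqTrainer, keeping the stated batch_size = 16 and epochs = 5. The base checkpoint, task prefix, learning rate, and toy training pairs are all assumptions, not the released recipe.

from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

# Rough sketch, NOT the released training script. "t5-base" and the
# "transfer Casual to Formal: " prefix are assumptions.
checkpoint = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

pairs = [  # hypothetical (casual, formal) training pairs
    ("btw - ur avatar looks familiar", "Also, your avatar looks familiar."),
    ("who gives a crap?", "Who cares?"),
]

def encode(casual, formal):
    # Encode the casual sentence as input and the formal one as labels.
    enc = tokenizer("transfer Casual to Formal: " + casual,
                    truncation=True, max_length=64)
    enc["labels"] = tokenizer(formal, truncation=True, max_length=64)["input_ids"]
    return enc

train_dataset = [encode(casual, formal) for casual, formal in pairs]

args = Seq2SeqTrainingArguments(
    output_dir="informal_to_formal_styletransfer",
    per_device_train_batch_size=16,  # batch_size = 16, as stated above
    num_train_epochs=5,              # epochs = 5, as stated above
    learning_rate=3e-4,              # assumption
    logging_steps=100,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()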

Benchmark

  • TBD

References

Citation

  • TBD
Comments
  • added streamlit app

    Following points are covered in this PR:

    • Added Streamlit app. (CTF,FTC,ATP,PTA)
    • Fixed bug in PTA style transfer

    @PrithivirajDamodaran Attaching screenshot of streamlit app for reference. Let me know your suggestions

    app_screenshot

    opened by shashankdeshpande 6
  • Trimming long sentences

    Following is a code snippet for a better understanding of the problem I am facing.

    from styleformer import Styleformer
    import torch
    import warnings
    warnings.filterwarnings("ignore")
    
    # style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
    sf = Styleformer(style = 0) 
    
    source_sentences = [
                       "Corruption in african countries hinders economic, political and social development. It is a major obstacle to economic growth, good governance and fundamental freedoms, such as freedom of speech or the right of citizens to hold governments accountable. In addition, corruption affects the lives of individuals, families and communities. The 10th global corruption barometer (gcb) program - in africa, shows that while many people in africa feel that corruption is on the rise in their country, many also feel confident that, as citizens, they can make a difference in the fight against corruption."
    ]
    
    for paragraph in source_sentences:
        # sentences = sent_tokenize(paragraph)
        sentences = paragraph.split('. ')
        for source_sentence in sentences:
            target_sentence = sf.transfer(source_sentence)
            print("-" *100)
            print("[Casual] ", source_sentence)
            print("-" *100)
            if target_sentence is not None:
                print("[Formal] ",target_sentence)
                print()
            else:
                print("No good quality transfers available !")
    

    Program Output


    [Casual] Corruption in african countries hinders economic, political and social development

    [Formal] In African countries, corruption affects economic, political, and social development.


    [Casual] It is a major obstacle to economic growth, good governance and fundamental freedoms, such as freedom of speech or the right of citizens to hold governments accountable

    [Formal] It's a major obstacle to economic growth, good governance, and fundamental freedoms, such as the freedom of speech or the right of citizens to


    [Casual] In addition, corruption affects the lives of individuals, families and communities

    [Formal] Additionally, corruption has a negative impact on individuals, families and communities.


    [Casual] The 10th global corruption barometer (gcb) program - in africa, shows that while many people in africa feel that corruption is on the rise in their country, many also feel confident that, as citizens, they can make a difference in the fight against corruption.

    [Formal] The tenth Global Corruptibility Barometer (GCB) program - in Africa - shows that while many people in Africa feel that corruption

    Please help to fix this for longer sentences. Thanks in advance!

    wontfix 
    opened by Nomiluks 4
  • OSError: Can't load config for 'prithivida/parrot_adequacy_on_BART'. Make sure that....

    Hi Prithviraj,

    Fantastic work you are doing.

    While testing your models, I intended to deploy the model wrapped in a flask app on EC2.

    Although the results work on Google Colab, I receive the following error on EC2 -

    OSError: Can't load config for 'prithivida/parrot_adequacy_on_BART'. Make sure that:
    
    - 'prithivida/parrot_adequacy_on_BART' is a correct model identifier listed on 'https://huggingface.co/models'
    
    - or 'prithivida/parrot_adequacy_on_BART' is the correct path to a directory containing a config.json file
    

    Can you guide me on how this can be resolved?

    Regards, Paritosh

    invalid 
    opened by katreparitosh 2
  • Issue with loading saved models

    Hi, I'm trying to save and load the tokenizer and model. I use the following to save them:

    tokenizer = AutoTokenizer.from_pretrained("prithivida/informal_to_formal_styletransfer")
    tokenizer.save_pretrained('./data/style_tokenizer')
    model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/informal_to_formal_styletransfer")
    model.save_pretrained('./data/style_model')
    

    But when I try to load them, from the local path, I get the following error:

    OSError: Can't load config for '../data/style_tokenizer'. Make sure that:
    
    - '../data/style_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'
    
    - or '../data/style_tokenizer' is the correct path to a directory containing a config.json file
    

    This somewhat makes sense, since no config.json is created when saving the tokenizer.

    Any idea how I can save/load the tokenizer and model?

    opened by Naviden 1
  • Code to train the model

    Hey, can you please share the code where you train the models? We have tasks similar to the ones you solve, but in other domains. It would be very helpful for us. Do you fine-tune only T5, or do you make additional changes to the T5 fine-tuning? Thanks

    question 
    opened by ivan-bulka 1
  • cant create a Styleformer(style=n)

    It keeps throwing the same error (with a different request ID each time): OSError: There was a specific connection error when trying to load prithivida/informal_to_formal_styletransfer: <class 'requests.exceptions.HTTPError'> (Request ID: K9-6-Ks5uMEai7cOcQ3gC)

    opened by TalSchiff 0
  • Unable to create the styleformer instance

    OSError: prithivida/parrot_adequacy_on_BART is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'

    I'm using the latest version and seeing the above issue. I was wondering if anything has changed on the Hugging Face models front?

    opened by ks2002119 0
  • How to do inferencing using multiple GPU's for styleformer

    I am using this model to do inference on 1 million data points using A100 GPUs (4 of them). I am launching an inference.py script using Google's Vertex AI containers.

    How can I make the inference code utilise all 4 GPUs so that inference is super fast?

    Here is the code I use in inference.py:

    from styleformer import Styleformer
    import warnings
    warnings.filterwarnings("ignore")
    
    # style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
    sf = Styleformer(style = 1) 
    import torch
    def set_seed(seed):
      torch.manual_seed(seed)
      if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    
    set_seed(1212)
    
    source_sentences = [
    "I would love to meet attractive men in town",
    "Please leave the room now",
    "It is a delicious icecream",
    "I am not paying this kind of money for that nonsense",
    "He is on cocaine and he cannot be trusted with this",
    "He is a very nice man and has a charming personality",
    "Let us go out for dinner",
    "We went to Barcelona for the weekend. We have a lot of things to tell you.",
    ]   
    
    for source_sentence in source_sentences:
        # inference_on = [0=Regular model On CPU, 1= Regular model On GPU, 2=Quantized model On CPU]
        target_sentence = sf.transfer(source_sentence, inference_on=1, quality_filter=0.95, max_candidates=5)
        print("[Formal] ", source_sentence)
        if target_sentence is not None:
            print("[Casual] ",target_sentence)
        else:
            print("No good quality transfers available !")
        print("-" *100)     
    
    opened by pratikchhapolika 6
  • Sentiment Transfer

    Love the library!

    Was hoping to do sentiment transfer, but I see that it has not yet been integrated. Any pointers towards off-the-shelf models that can do that?

    opened by JosephGatto 1