PythonTextObfuscator

Overview

Takes a string and runs it through a requested number of different languages in Google Translate, returning nonsense.

Requirements:

python3

For the Selenium Obfuscator:

    -Selenium
    
    -Firefox
    
    -Geckodriver

In the Selenium Obfuscator:

-The major benefit is that it can translate Excel documents; the downside is that after 10 or so document translations, Google blocks your IP for a while.

-Translation is generally slower and more limited with Selenium, since a browser tab is used to scrape the data. Also beware of RAM usage.

-It may no longer be supported in the future due to these drawbacks.

In the Urllib Obfuscator:

-Translation is generally faster and uses very few resources, since only HTML is downloaded through a request. Multiprocessing also allows simultaneous requests and can be used to the full extent without worrying about RAM usage (see the sketch after this list).

    -Split by length is faster and uses fewer requests (better for longer texts).

    -Split by newline is slower and uses more requests, but adds much more translation variety.

-Reminder: Since Google has a URL request limit, you'll need to switch VPN locations when the request limit is hit.

    -Don't worry too much, though: it takes quite a few requests to reach that point, and the block only lasts for around an hour.
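Below is a minimal sketch of this request-based approach. The translate.google.com/m endpoint, the result-container regex, and the language chain are illustrative assumptions rather than the project's actual code; only the urllib + multiprocessing pattern described above is taken from this page.

    # Hedged sketch: the endpoint, regex, and language chain are assumptions.
    import re
    import urllib.parse
    import urllib.request
    from multiprocessing import Pool

    def translate_once(text, source="auto", target="fr"):
        """Download the Google Translate mobile page and pull the result out of the HTML."""
        url = ("https://translate.google.com/m?sl={}&tl={}&q={}"
               .format(source, target, urllib.parse.quote(text)))
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        html = urllib.request.urlopen(req).read().decode("utf-8")
        match = re.search(r'class="result-container">([^<]+)<', html)
        return match.group(1) if match else text

    def obfuscate(chunk):
        """Run one chunk through a short chain of languages and back to English."""
        lang_from = "auto"
        for lang_to in ("fr", "ja", "zu", "en"):
            chunk = translate_once(chunk, source=lang_from, target=lang_to)
            lang_from = lang_to
        return chunk

    if __name__ == "__main__":
        chunks = ["Hello there, world.", "This sentence will become nonsense."]
        with Pool(processes=len(chunks)) as pool:  # simultaneous requests, one per chunk
            print(pool.map(obfuscate, chunks))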

Comments
  • Attempt to decode JSON with unexpected mimetype: text/plain

    I'm not sure what's causing this, as the last time I tried this release, this issue was not present. If it's accessing content server-side, then it might be that the server has had a config change resulting in it returning a different mimetype?

    I get the error message below consistently in the console, with %2E being added to the end of the URL each time. It does seem like some translation does happen; in this case, I inputted "Test", and the URL ended with "Hlola".

    https://translate.alefvanoon.xyz/api/v1/zu/mi/Hlola%2E 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('https://translate.alefvanoon.xyz/api/v1/zu/mi/Hlola')

    From what I've gathered looking online, the issue lies in either line 13, line 469, or both.

    return (await response.json())['translation'].replace('/','⁄')

    text = (await response.json())['translation'].replace('/','⁄')

    Some of the solutions I found online suggested adding "content_type=None" or "content_type='text/plain'" to the json() call, but this only seemed to cause further issues for me.
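    For reference, a minimal sketch of the content_type workaround those solutions describe, using the endpoint and language codes from the error message above; the surrounding code is an illustrative assumption, not the project's actual implementation:

        # aiohttp raises the "unexpected mimetype" error when the server replies
        # with text/plain; content_type=None skips that check and parses anyway.
        import asyncio
        import aiohttp

        async def translate(session, source, target, text):
            url = f"https://translate.alefvanoon.xyz/api/v1/{source}/{target}/{text}"
            async with session.get(url) as response:
                data = await response.json(content_type=None)
            return data['translation'].replace('/', '⁄')

        async def main():
            async with aiohttp.ClientSession() as session:
                print(await translate(session, "zu", "mi", "Hlola"))

        asyncio.run(main())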

    opened by UltraHylia 2
  • Program Freezes Up and Looping Error

    When you have Chinese (Simplified) and/or Chinese (Traditional) enabled in the language selector, the program can freeze and an error loops in the console. It happens no matter what other languages are enabled.

    https://user-images.githubusercontent.com/60769253/197659506-38871035-e311-4710-9eb9-ac2d7387841f.mp4

    opened by DerpTaco99921 0
Releases(v0.4)
  • v0.4(Feb 2, 2022)

    Rebuilt from the ground up with a new GUI and translation method.

    Changes:

    -Improved GUI.

    -Translations are retrieved from a front-end to Google Translate called Lingva, which removes the issue of being blocked for making too many requests.

    -Translations are done in an asynchronous function using aiohttp instead of a process pool, which is optimal for large bulk translations (see the sketch after the additions list below).

    -Removed selenium obfuscation.

    Additions:

    -Importing and saving text files.

    -Language Selector to activate or deactivate any individual language.

    -Language setting for the result.

    -Three different split methods:

        -Initial
            -Text is split by length before being passed into the obfuscate function.
            -Faster, as fewer requests are made.
            -Different languages for each piece.
            -Tabs not preserved.

        -Continuous
            -Text is split by length inside the obfuscate function.
            -Faster, as fewer requests are made.
            -Same languages for each piece.
            -Tabs not preserved.

        -Newline
            -Text is split by newlines and tabs.
            -Slower, as more requests are made.
            -Every single line is translated with different languages.
            -Tabs preserved.

    -Translation Generator, which creates a .csv file containing multiple translations of the same text:

        -Repeat mode obfuscates the original text each time, adding the result in each new column.

        -Continue mode obfuscates the result of each subsequent obfuscation, adding the result in each new column.
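    Below is a minimal sketch of the asynchronous approach for the Newline split method. The Lingva instance, language chain, and helper names are illustrative assumptions; only the API path shape (/api/v1/source/target/query returning a 'translation' field) and the aiohttp-instead-of-process-pool pattern come from this page.

        # Hedged sketch: the lingva.ml instance and language chain are assumptions.
        import asyncio
        import urllib.parse
        import aiohttp

        LINGVA = "https://lingva.ml/api/v1/{}/{}/{}"

        async def translate(session, source, target, text):
            url = LINGVA.format(source, target, urllib.parse.quote(text))
            async with session.get(url) as resp:
                return (await resp.json(content_type=None))['translation']

        async def obfuscate_line(session, line, chain=("fr", "ja", "en")):
            source = "auto"
            for target in chain:
                line = await translate(session, source, target, line)
                source = target
            return line

        async def obfuscate(text):
            async with aiohttp.ClientSession() as session:
                # One task per line; all requests run concurrently.
                lines = await asyncio.gather(
                    *(obfuscate_line(session, line) for line in text.splitlines()))
            return "\n".join(lines)

        print(asyncio.run(obfuscate("First line to mangle.\nSecond line as well.")))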

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.4.zip(15.75 KB)
  • v0.3.1c-r2(Dec 23, 2021)

  • v0.3.1c(Dec 23, 2021)

    Newlines no longer get messed up in the Urllib Obfuscator. Added a choice to split by length or by newlines.

        -Split by length is faster and uses fewer requests (better for longer texts).

        -Split by newline is slower and uses more requests, but adds much more translation variety.

    Reminder: Since Google has a URL request limit, you'll need to switch VPN locations when the request limit is hit.

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.3.1c.zip(51.63 KB)
  • v0.3.1b(Dec 23, 2021)

  • v0.3.1a(Dec 23, 2021)

  • v0.3(Dec 23, 2021)

    I made massive improvements to the speed of the obfuscation thanks to learning about urllib.

    For example, I translated the same ~2300-character string of text 10 times in both the old and new versions; the old one took 38.8 seconds, while the new one took only 6.8 seconds (roughly a 5.7× speedup).

    In addition, the capacity to handle a much larger number of characters is greatly increased, as it no longer requires open Firefox tabs eating up RAM.

    As a test I translated the entire Among Us Wikipedia page 50 times (a character count of over 60 thousand!), and it took only 114 seconds to finish translating. Using the old obfuscator I wouldn't be able to translate more than half that amount, and it would take ages to complete (10 minutes or more).

    Unfortunately, the Excel Obfuscator is removed from this version until I can figure out how to get it to work with urllib; if I can't, I'll probably add it back with Selenium.

    At least if you couldn't get Selenium to work on your computer for the previous versions, you don't have to worry about installing it for this one.

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.3.zip(5.73 KB)
  • v0.2.2(Dec 23, 2021)

  • v0.2.1b(Dec 23, 2021)

  • v0.2.1a(Dec 23, 2021)

    Fixed TimeoutExceptions for string translation (textbox input) obfuscation. You can now do as many translations as you want without worrying about encountering an error. The same goes for the number of characters (as long as your PC can handle it, of course). Excel translations remain unchanged, since I can't do anything about Google's document translation limit, so just switch VPN locations as usual after 10 translations with the Excel Obfuscator.

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.2.1.zip(5.88 KB)
  • v0.2(Dec 23, 2021)

  • v0.1b(Dec 23, 2021)

  • v0.1a(Dec 23, 2021)
