DeepAmandine is an artificial intelligence you can talk to for hours without noticing the difference from a human.

Overview

DeepAmandine

This is an artificial intelligence based on GPT-3 that you can chat with; it is very friendly and makes a lot of jokes. We wish you a good experience and hope you have fun.

screen_1
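
The released source is not reproduced here, but below is a minimal, hypothetical sketch of how a single chat turn against a GPT-3-style completion endpoint could look. It assumes the classic (pre-1.0) openai Python package, an OPENAI_API_KEY environment variable, and an invented helper name ask_amandine; none of these details are confirmed by the project.

import os
import openai

# Assumption: the classic (pre-1.0) openai package and an API key in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_amandine(history: str, user_message: str) -> str:
    """Append the user's message to the running transcript and return Amandine's reply."""
    prompt = f"{history}\nUser: {user_message}\nAmandine:"
    response = openai.Completion.create(
        engine="text-davinci-003",  # hypothetical engine choice
        prompt=prompt,
        max_tokens=150,
        temperature=0.8,            # a little randomness helps with the jokes
        stop=["\nUser:"],           # stop before the next user turn
    )
    return response.choices[0].text.strip()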


Installation and usage

- To use the Android version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-android-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-android-v1.0-beta.pyc

- To use the Desktop version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-desktop-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-desktop-v1.0-beta.pyc

Examples of use

You can select the language in which to chat with the AI.

screen_1

You can then choose a username to use in the chat.

screen_2

Once all the steps have been completed, you can start talking to the AI.

screen_3
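
The exact prompts differ per release, but here is a hypothetical sketch of that startup flow (language, username, then a chat loop). It reuses the ask_amandine helper from the sketch in the Overview and invents the prompt strings, so treat it as an illustration rather than the shipped code.

def main() -> None:
    # Ask for the two settings the screenshots show: a language and a username.
    language = input("Language (en/fr): ").strip() or "en"
    username = input("Username: ").strip() or "User"

    # Seed the transcript so the model knows who it is and which language to use.
    history = (
        f"The following is a conversation in '{language}' between {username} "
        "and Amandine, a friendly AI that loves to make jokes."
    )

    print("Type 'quit' to end the conversation.")
    while True:
        message = input(f"{username}> ")
        if message.strip().lower() == "quit":
            break
        reply = ask_amandine(history, message)  # helper from the Overview sketch
        history += f"\nUser: {message}\nAmandine: {reply}"
        print(f"Amandine> {reply}")

if __name__ == "__main__":
    main()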

Releases
  • v1.0-beta (Jan 15, 2022)
    - Source code (tar.gz)
    - Source code (zip)
    - DeepAmandine-android-v1.0-beta.pyc (3.17 KB)
    - DeepAmandine-desktop-v1.0-beta.pyc (3.43 KB)
    - requirements.txt (13 bytes)

Owner

BuyWithCrypto (Blockchain & Fintech)