Spectrum

Spectrum is an AI that uses deep learning to generate rap song lyrics.

View Demo
Report Bug
Request Feature
Open In Colab

About The Project

Spectrum is an AI that uses deep learning to generate rap song lyrics.

Built With

This project is built using Python, TensorFlow, and Flask.

Getting Started

Installation

# clone the repo
git clone https://github.com/YigitGunduc/Spectrum.git

# install requirements
cd Spectrum
pip install -r requirements.txt

Training

# navigate to the Spectrum/AI folder 
cd Spectrum/AI

# pass optional arguments and run train.py
python3 train.py [-h] [--epochs EPOCHS] [--save_at SAVE_AT] [--verbose VERBOSE]
                 [--rnn_neurons RNN_NEURONS] [--embed_dim EMBED_DIM]
                 [--dropout DROPOUT] [--num_layers NUM_LAYERS]
                 [--learning_rate LEARNING_RATE]

All arguments are optional; if you leave them out, the model constructs itself with the default parameters.
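For example, a run that sets a few of the flags explicitly (the values here are illustrative, matching the defaults listed under Hyperparameters):

python3 train.py --epochs 30 --rnn_neurons 256 --embed_dim 64 --dropout 0.3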

Generating Text from Trained Model

Call eval.py from the command line with the seed text as an argument:

python3 eval.py --seed SEEDTEXT

or

from model import Generator

SEED = 'hello'  # any seed text to start generation from

# load the pretrained weights
model = Generator()
model.load_weights('../models/model-5-epochs-256-neurons.h5')

# generate 1000 characters of lyrics starting from the seed
generated_text = model.predict(start_seed=SEED, gen_size=1000)
print(generated_text)
  • If you tweaked the model's hyperparameters while training, initialize the model with those same parameters before loading the weights, as sketched below.
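A minimal sketch of such an initialization (the keyword names are assumed to mirror the train.py flags, so check Generator's signature in model.py; the checkpoint path is hypothetical):

from model import Generator

# hyperparameters must match the values used at training time;
# keyword names below are assumed to mirror the train.py flags
model = Generator(rnn_neurons=512, embed_dim=128, dropout=0.3, num_layers=2)
model.load_weights('../models/my-custom-model.h5')  # hypothetical checkpoint path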

Running the Web-App Locally

# navigate to the Spectrum folder 
cd Spectrum

# run app.py
python3 app.py

# check out http://0.0.0.0:8080
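Once the server is running, you can also query it programmatically; a small sketch, assuming app.py serves the same /api/generate route locally as the hosted API described below:

import requests

# query the locally running Flask app (route assumed to match the hosted API)
response = requests.get("http://localhost:8080/api/generate/hello")
print(response.json()["lyrics"])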

API

Spectrum has a free web API; you can send requests to it as shown below.

import requests

response = requests.get("https://spectrumapp.herokuapp.com/api/generate/SEEDTEXT")

# raw response
print(response.json())

# cleaned-up response
print(response.json()["lyrics"])

Hyperparameters

epochs = 30
batch size = 128
number of layers = 2 (hidden) + 1 (output)
number of RNN units = 256
dropout prob = 0.3
embedding dimensions = 64
optimizer = Adam
loss = sparse categorical crossentropy

These hyperparameters are the best I have found, but be careful when tuning them: this model can overfit or underfit quite easily. In my experiments, GRUs perform better than LSTMs.
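For reference, here is a minimal Keras sketch consistent with these hyperparameters and with the model summary in the next section (an illustration, not the project's actual model.py; stateful generation with batch size 1 is assumed):

import tensorflow as tf

VOCAB_SIZE = 100   # matches the Dense output size in the summary below
EMBED_DIM = 64
RNN_NEURONS = 256
DROPOUT = 0.3

# two stateful GRU layers plus a dense output layer, batch size 1 for generation
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, batch_input_shape=[1, None]),
    tf.keras.layers.GRU(RNN_NEURONS, return_sequences=True, stateful=True, dropout=DROPOUT),
    tf.keras.layers.GRU(RNN_NEURONS, return_sequences=True, stateful=True, dropout=DROPOUT),
    tf.keras.layers.Dense(VOCAB_SIZE),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)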

Info About the Model

>>> from model import Generator
>>> model = Generator()
>>> model.load_weights('../models/model-5-epochs-256-neurons.h5')
>>> model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (1, None, 64)             6400      
_________________________________________________________________
gru (GRU)                    (1, None, 256)            247296    
_________________________________________________________________
gru_1 (GRU)                  (1, None, 256)            394752    
_________________________________________________________________
dense (Dense)                (1, None, 100)            25700     
=================================================================
Total params: 674,148
Trainable params: 674,148
Non-trainable params: 0
_________________________________________________________________

>>> model.hyperparams()
Hyper Parameters
+--------------------------+
|rnn_neurons   |        256|
|embed_dim     |         64|
|learning_rate |     0.0001|
|dropout       |        0.3|
|num_layers    |          2|
+--------------------------+
>>>

Roadmap

See the open issues for a list of proposed features (and known issues).

Results

WARNING: The generated texts below contain offensive language; stop reading here if you would rather not see it. Everything below was generated by Spectrum.

Seed : today

Prediction : 

If that don't, yeah
Weint off the music
It's like a fired-enother foar fool straight for an exactly
Nigga why I id my Door Merican muthafucka

Ng answered by need for blazy hard
The family wish fans dishes rolled up
How better just wanna die
Match all about the moment in I glory
Fire is that attention is the flop and pipe those peokin' distriors
Bitch I been hard and I'm like the Scales me and we're going to school like all-off of the allegit to get the bitches
Yeah kinda too legit back into highin'
A year have it would plobably want

And we all bustin' the conscious in the cusfuckers won't ha
Quite warkie and it's blow, and what? I cannot love him,
Alugal Superman, and the revolution likes migh
I ain't still not I uest the neighborhoo
Powers all too bad show, you crite your bac
When I say way too fathom
If you wanna revell, money, where your face we'll blin
Pulf me very, yo, they pull out for taught nothin' off
I pass a with a nigga hang some, pleas
Fuck me now, it's a

======================================================================
Seed : hello

Prediction : 

hellow motherfucker
You wanna talk on the pockets on Harlotom
I'm legit some more than Volumon
Ridicalab knowledge is blessin' some of your honierby man
We just bust the Flud joke with shoulders on the Statue
Lecock it on everybody want your dices to speak
While she speak cents look back to Pops
He was a nigga when I got behind pictures any Lil Sanvanas
Used to in her lady yaught they never had a bitch
He'll break the jird little rappers kill your children is

I'm prayin' back to ready for that bitch just finished And mised to the gamr
Every eyes on and about that getting common
I'm going to attractived with its
I just went by the crowd get the promise to buy the money-a star big down
Can one sall 'em in me tryna get them days that's how I can break the top
Well, that's hug her hands he screaming like a fucking hip-hop but put a Blidze like rhymin'
Yeah I slack like a Job let your cops got a generres
These West of it today flamping this
Black Kuttle crib, said "Ju Conlie, hold up, fuck the

======================================================================
Seed : bestfriend

Prediction : 

bestfriend
Too much time we tonight
The way I know is a please have no self-back when I be for the fucking weed and a game
What the fuck we wanna be working on the streets make it like a stay down the world is from the head of the real brain
Chain don't come back to the grass
My dick is the one to tell you I'm the fuck
So see me we gon' be fans when you had to hear the window you come to the dick when a little cooleng and I was calling what the fuck is it good as the crown
And I'm representing you finally waitin' in your girl
This is the corner with my brother
I'm just a damn door and the real motherfuckers come got the point my shit is the money on the world

I get it then the conscious that's why I cripp
I might take my own shit so let me have a bad bitch
I'm just had and make the fuck is in the single of the window
I think I ain't got the world is all my gone be mine
They ain't like the half the best between my words
And I'm changing with the heads of the speech
Fuck a bunch of best of a fuck

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.
