UPMT

Generate fine-tuning samples, fine-tune the model, and generate samples by transferring Note On events.

See main.py as an example:

from model import PopMusicTransformer
import argparse
import tensorflow as tf
import os
import pickle
import numpy as np
from glob import glob
parser = argparse.ArgumentParser(description='')
parser.add_argument('--prompt_path', dest='prompt_path', default='./test/prompt/test_input.mid', help='path of prompt')
parser.add_argument('--output_path', dest='output_path', default='./test/output/test_generate.mid', help='path of the output')
parser.add_argument('--favorite_path', dest='favorite_path', default='./test/favorite/test_favorite.mid', help='path of favorite')
parser.add_argument('--trainingdata_path', dest='trainingdata_path', default='./test/data/training.pickle', help='path of favorite training data')
parser.add_argument('--output_checkpoint_folder', dest='output_checkpoint_folder', default='./test/checkpoint/', help='path of the output checkpoint folder')
parser.add_argument('--alpha', default=0.1, help='weight of events')
parser.add_argument('--temperature', default=300, help='sampling temperature')
parser.add_argument('--topk', default=5, help='sampling topk')
parser.add_argument('--smpi', default=[-2,-2,-1,-2,-2,2,2,5], help='signature music pattern interval')

parser.add_argument('--type', dest='type', default='generateno', help='generateno or pretrain or prepare')

args = parser.parse_args()


def main(_):
    # allow_soft_placement lets TensorFlow 1.x fall back to CPU when a GPU kernel is unavailable
    tfconfig = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=tfconfig) as sess:
        if args.type == 'prepare':
            # step 1: convert the favorite MIDI file(s) into fine-tuning training data
            midi_paths = glob('./test/favorite'+'/*.mid')
            model = PopMusicTransformer(
                checkpoint='./test/model',
                is_training=False)
            model.prepare_data(
                        midi_paths=midi_paths)
        elif args.type == 'generateno':
            # step 3: generate a new MIDI file from the prompt by transferring Note On events
            model = PopMusicTransformer(
                checkpoint='./test/model',
                is_training=False)
            model.generate_noteon(
                        temperature=float(args.temperature),
                        topk=int(args.topk),
                        output_path=args.output_path,
                        smpi=np.array(args.smpi),
                        prompt=args.prompt_path)
        elif args.type == 'pretrain':
            # step 2: fine-tune the pre-trained model on the prepared training data
            training_data = pickle.load(open(args.trainingdata_path,"rb"))
            if not os.path.exists(args.output_checkpoint_folder):
                os.mkdir(args.output_checkpoint_folder)
            model = PopMusicTransformer(
                checkpoint='./test/model',
                is_training=True)
            model.finetune(
                training_data=training_data,
                alpha=float(args.alpha),
                favoritepath=args.favorite_path,
                output_checkpoint_folder=args.output_checkpoint_folder)

if __name__ == '__main__':
    tf.app.run()
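
A typical run of the script above might look like the following; this is a minimal sketch assuming main.py sits at the repository root, the pre-trained checkpoint is under ./test/model, and the default paths defined by the argparse options are used:

# step 1: prepare fine-tuning data from the favorite MIDI file(s) in ./test/favorite
python main.py --type prepare

# step 2: fine-tune the pre-trained model on the prepared training data
python main.py --type pretrain --alpha 0.1

# step 3: generate a new MIDI file from the prompt by transferring Note On events
python main.py --type generateno --temperature 300 --topk 5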

Thanks to https://github.com/YatingMusic/remi for the open-source code.
