A PyTorch implementation of the paper: Choi, Woosung, et al. "Investigating U-Nets with Various Intermediate Blocks for Spectrogram-based Singing Voice Separation." 21st International Society for Music Information Retrieval Conference (ISMIR), 2020.

Overview

Investigating U-NETS With Various Intermediate Blocks For Spectrogram-based Singing Voice Separation

A PyTorch implementation of the paper "Investigating U-Nets With Various Intermediate Blocks For Spectrogram-based Singing Voice Separation" (ISMIR 2020).

Installation

conda install pytorch=1.6 cudatoolkit=10.2 -c pytorch
conda install -c conda-forge ffmpeg librosa
conda install -c anaconda jupyter
pip install musdb museval pytorch_lightning effortless_config wandb pydub nltk spacy 
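
To confirm the environment is usable before moving on, here is a quick import check (a minimal sanity check; nothing here is specific to this repository):

import torch
import pytorch_lightning as pl

print("torch:", torch.__version__)              # 1.6.x if installed as above
print("CUDA available:", torch.cuda.is_available())
print("pytorch_lightning:", pl.__version__)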

Dataset

  1. Download MUSDB18
  2. Unzip the files
  3. We recommend using the wav file mode for faster data preparation:
    musdbconvert path/to/musdb-stems-root path/to/new/musdb-wav-root
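
Once converted, you can check that the wav dataset loads (a minimal sketch using the musdb package installed above; replace the root path with your own):

import musdb

# Point at the converted wav root from step 3; is_wav=True matches --musdb_is_wav True.
mus = musdb.DB(root="path/to/new/musdb-wav-root", is_wav=True, subsets="train")
print(len(mus.tracks), "training tracks")
track = mus.tracks[0]
print(track.name, track.audio.shape, track.rate)  # audio: (n_samples, 2) at 44100 Hz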

Demonstration: A Pretrained Model (TFC_TDF_Net (large))

Colab Link

Tutorial

1. Activate your conda environment

conda activate yourcondaname

2. Training a default U-Net with TFC_TDF blocks

python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --target_name vocals --mode train --gpus 4 --distributed_backend ddp --sync_batchnorm True --pin_memory True --num_workers 32 --precision 16 --run_id debug --optimizer adam --lr 0.001 --save_top_k 3 --patience 100 --min_epochs 1000 --max_epochs 2000 --n_fft 2048 --hop_length 1024 --num_frame 128  --train_loss spec_mse --val_loss raw_l1 --model tfc_tdf_net  --spec_est_mode mapping --spec_type complex --n_blocks 7 --internal_channels 24  --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --min_bn_units 16 --tfc_tdf_activation relu  --first_conv_activation relu --last_activation identity --seed 2020

3. Evaluation

After training is done, checkpoints are saved in the following directory:

etc/modelname/run_id/*.ckpt
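
For the tutorial settings above (--model tfc_tdf_net, --run_id debug), you could list the saved checkpoints like this (a sketch; the directory names follow your own flags):

from pathlib import Path

# etc/<modelname>/<run_id>/*.ckpt, using the tutorial's model and run_id
for ckpt in sorted(Path("etc/tfc_tdf_net/debug").glob("*.ckpt")):
    print(ckpt.name)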

For evaluation, run:

python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --target_name vocals --mode eval --gpus 1 --pin_memory True --num_workers 64 --precision 32 --run_id debug --batch_size 4 --n_fft 2048 --hop_length 1024 --num_frame 128 --train_loss spec_mse --val_loss raw_l1 --model tfc_tdf_net --spec_est_mode mapping --spec_type complex --n_blocks 7 --internal_channels 24 --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --min_bn_units 16 --tfc_tdf_activation relu --first_conv_activation relu --last_activation identity --log wandb --ckpt vocals_epoch=891.ckpt

Below is the result.

wandb:          test_result/agg/vocals_SDR 6.954695
wandb:   test_result/agg/accompaniment_SAR 14.3738075
wandb:          test_result/agg/vocals_SIR 15.5527
wandb:   test_result/agg/accompaniment_SDR 13.561705
wandb:   test_result/agg/accompaniment_ISR 22.69328
wandb:   test_result/agg/accompaniment_SIR 18.68421
wandb:          test_result/agg/vocals_SAR 6.77698
wandb:          test_result/agg/vocals_ISR 12.45371

4. Interactive Report (wandb)

wandb report

Intermediate Blocks

Please see this document.

How to use

1. Training

1.1. Intermediate-Block-Independent Parameters

1.1.A. General Parameters
  • --musdb_root: path to MUSDB18
  • --musdb_is_wav: whether the path contains wav files or not
  • --filed_mode: whether to use filed mode or not; recommended for faster data preparation.
  • --target_name: one of vocals, drum, bass, other
1.1.B. Training Environment
  • --mode: train or eval
  • --gpus: number of gpus
    • (WARN) gpus > 1 might be problematic when evaluating models.
  • --distributed_backend: use this option only when using multiple gpus. One of ddp, dp, ...; we recommend ddp.
  • --sync_batchnorm: set to True only when using ddp
  • --pin_memory
  • --num_workers
  • --precision: 16 or 32
  • --dev_mode: whether to use development mode or not. Dev mode is much faster because it uses only a small subset of the dataset.
  • --run_id: (optional) directory name for storing logs etc.; if not given, a timestamp is used.
  • --log: True for the default pytorch lightning logger; wandb is also available.
  • --seed: random seed for a deterministic result.
1.1.C. Training Hyperparameters
  • --batch_size trivial :)
  • --optimizer adam, rmsprop, etc
  • --lr learning rate
  • --save_top_k: save the training state for the best k epochs (criterion: validation loss)
  • --patience early stop control parameter. see pytorch lightning docs.
  • --min_epochs trivial :)
  • --max_epochs trivial :)
  • --model
    • tfc_tdf_net
    • tfc_net
    • tdc_net
1.1.D. Fourier Parameters
  • --n_fft
  • --hop_length
  • --num_frame: number of frames (time slices); see the worked example after this list
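
As a worked example of what these settings imply (assuming MUSDB18's 44.1 kHz sample rate and non-centered STFT framing; the repository's exact chunking may differ):

# Approximate audio span of one training chunk under the default settings.
n_fft, hop_length, num_frame = 2048, 1024, 128
sr = 44100  # MUSDB18 sample rate
chunk_samples = hop_length * (num_frame - 1) + n_fft
print(f"{chunk_samples} samples ~ {chunk_samples / sr:.2f} s")  # 132096 samples ~ 3.00 s
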
1.1.E. Criterion
  • --train_loss: spec_mse, raw_l1, etc...
  • --val_loss: spec_mse, raw_l1, etc...

1.2. U-net Parameters

  • --n_blocks: number of intermediate blocks. Must be an odd integer. (default=7)
  • --input_channels: (see the helper sketch after this list)
    • if you use a two-channeled complex-valued spectrogram, then 4
    • if you use a two-channeled magnitude spectrogram, then 2
  • --internal_channels: number of internal channels (default=24)
  • --first_conv_activation: (default='relu')
  • --last_activation: (default='sigmoid')
  • --t_down_layers: list of layers where the time resolution is doubled/halved. If None, down/up-sampling is applied to every layer. (default=None)
  • --f_down_layers: list of layers where the frequency resolution is doubled/halved. If None, down/up-sampling is applied to every layer. (default=None)
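
The --input_channels rule above can be written down directly (a hypothetical helper for illustration; the repository derives this from its own flags):

def input_channels(spec_type: str, n_audio_channels: int = 2) -> int:
    # Complex spectrograms contribute real and imaginary parts per audio channel.
    return n_audio_channels * (2 if spec_type == "complex" else 1)

assert input_channels("complex") == 4    # two-channeled complex-valued spectrogram
assert input_channels("magnitude") == 2  # two-channeled magnitude spectrogram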

1.3. SVS Framework

  • --spec_type: type of spectrogram. ['complex', 'magnitude']

  • --spec_est_mode: spectrogram estimation method. ['mapping', 'masking']

  • CaC Framework

    • you can use the CaC framework [1] by setting
      • --spec_type complex --spec_est_mode mapping --last_activation identity
  • Mag-only Framework

    • if you want to use the traditional magnitude-only estimation with sigmoid, then try
      • --spec_type magnitude --spec_est_mode masking --last_activation sigmoid
    • you can also change the last activation as follows
      • --spec_type magnitude --spec_est_mode masking --last_activation relu
  • Alternatives

    • you can build an SVS framework with any combination of these parameters
    • e.g. --spec_type complex --spec_est_mode masking --last_activation tanh
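
To make the CaC combination concrete, below is a minimal sketch of how a stereo waveform becomes a 4-channel complex-as-channels input (illustrative only; assumes PyTorch >= 1.7 for return_complex and is not the repository's exact preprocessing):

import torch

waveform = torch.randn(2, 132096)  # dummy stereo audio: (audio_channels, samples)
spec = torch.stft(waveform, n_fft=2048, hop_length=1024,
                  window=torch.hann_window(2048), return_complex=True)
# spec: (2, 1025, 130) complex -> (2, 1025, 130, 2) real -> (4, 1025, 130) channels
cac = torch.view_as_real(spec).permute(0, 3, 1, 2).reshape(4, spec.shape[1], -1)
print(cac.shape)  # torch.Size([4, 1025, 130])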

1.4. Block-dependent Parameters

1.4.A. TDF Net
  • --bn_factor: bottleneck factor $bn$ (default=16)
  • --min_bn_units: when the target frequency dimension is too small, this value is used instead of $\frac{f}{bn}$ (default=16)
  • --bias: (default=False)
  • --tdf_activation: activation function of each block (default=relu)
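
A rough sketch of the TDF idea under the bottleneck rule above (illustrative; the repository's actual module differs in details such as normalization):

import torch
import torch.nn as nn

class TDFSketch(nn.Module):
    # Time-Distributed Fully-connected block: two linear layers applied along
    # the frequency axis, with bn_units = max(f // bn_factor, min_bn_units).
    def __init__(self, f, bn_factor=16, min_bn_units=16, bias=False):
        super().__init__()
        bn_units = max(f // bn_factor, min_bn_units)
        self.block = nn.Sequential(
            nn.Linear(f, bn_units, bias=bias), nn.ReLU(),
            nn.Linear(bn_units, f, bias=bias), nn.ReLU(),
        )

    def forward(self, x):  # x: (batch, channels, time, freq)
        return self.block(x)

x = torch.randn(1, 24, 128, 1024)
print(TDFSketch(f=1024)(x).shape)  # torch.Size([1, 24, 128, 1024])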

1.4.B. TDC Net
  • --n_internal_layers: number of 1-d CNNs in a block (default=5)
  • --kernel_size_f: kernel size along the frequency dimension (default=3)
  • --tdc_activation: activation function of each block (default=relu)

1.4.C. TFC Net
  • --n_internal_layers: number of 2-d CNNs in a block (default=5)
  • --kernel_size_t: kernel size along the time dimension (default=3)
  • --kernel_size_f: kernel size along the frequency dimension (default=3)
  • --tfc_activation: activation function of each block (default=relu)
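
A similarly rough sketch of a TFC block (a plain stack of 2-d convolutions over the time-frequency plane; the repository's block is densely connected, which this sketch omits):

import torch
import torch.nn as nn

class TFCSketch(nn.Module):
    # Time-Frequency Convolutions: n_internal_layers 2-d convs with ReLU,
    # padded so the (time, freq) shape is preserved.
    def __init__(self, channels=24, n_internal_layers=5, kt=3, kf=3):
        super().__init__()
        layers = []
        for _ in range(n_internal_layers):
            layers += [nn.Conv2d(channels, channels, (kt, kf),
                                 padding=(kt // 2, kf // 2)),
                       nn.ReLU()]
        self.block = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, channels, time, freq)
        return self.block(x)

x = torch.randn(1, 24, 128, 1024)
print(TFCSketch()(x).shape)  # torch.Size([1, 24, 128, 1024])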

1.4.D. TFC_TDF Net
  • --n_internal_layers: number of 2-d CNNs in a block (default=5)
  • --kernel_size_t: kernel size along the time dimension (default=3)
  • --kernel_size_f: kernel size along the frequency dimension (default=3)
  • --tfc_tdf_activation: activation function of each block (default=relu)
  • --bn_factor: bottleneck factor $bn$ (default=16)
  • --min_bn_units: when the target frequency dimension is too small, this value is used instead of $\frac{f}{bn}$ (default=16)
  • --tfc_tdf_bias: (default=False)
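
Conceptually, a TFC-TDF block chains the two sketches above: a TFC followed by a TDF whose output is added back residually (a rough composition for intuition, reusing TFCSketch and TDFSketch from above; see the paper for the exact wiring):

import torch.nn as nn

class TFCTDFSketch(nn.Module):
    # TFC followed by a residual TDF branch.
    def __init__(self, channels=24, f=1024, n_internal_layers=5):
        super().__init__()
        self.tfc = TFCSketch(channels, n_internal_layers)
        self.tdf = TDFSketch(f)

    def forward(self, x):  # x: (batch, channels, time, freq)
        x = self.tfc(x)
        return x + self.tdf(x)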

1.4.E. TDC_RNN Net
  • --n_internal_layers: number of 1-d CNNs in a block (default=5)
  • --kernel_size_f: kernel size along the frequency dimension (default=3)
  • --bn_factor_rnn: (default=16)
  • --num_layers_rnn: (default=1)
  • --bias_rnn: bool, (default=False)
  • --min_bn_units_rnn: (default=16)
  • --bn_factor_tdf: (default=16)
  • --bias_tdf: bool, (default=False)
  • --tdc_rnn_activation: (default='relu')

Known bug: a CUDA error occurs when running the TDC_RNN net with precision 16.

Reproducible Experimental Results

  • TFC_TDF_large
    • parameters
    --musdb_root ../repos/musdb18_wav
    --musdb_is_wav True
    --filed_mode True
    
    --gpus 4
    --distributed_backend ddp
    --sync_batchnorm True
    
    --num_workers 72
    --train_loss spec_mse
    --val_loss raw_l1
    --batch_size 12
    --precision 16
    --pin_memory True
    --save_top_k 3
    --patience 200
    --run_id debug_large
    --log wandb
    --min_epochs 2000
    --max_epochs 3000
    
    --optimizer adam
    --lr 0.001
    
    --model tfc_tdf_net
    --n_fft 4096
    --hop_length 1024
    --num_frame 128
    --spec_type complex
    --spec_est_mode mapping
    --last_activation identity
    --n_blocks 9
    --internal_channels 24
    --n_internal_layers 5
    --kernel_size_t 3 
    --kernel_size_f 3 
    --tfc_tdf_bias True
    --seed 2020
    
    
    • training
    python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --gpus 4 --distributed_backend ddp --sync_batchnorm True --num_workers 72 --train_loss spec_mse --val_loss raw_l1 --batch_size 24 --precision 16 --pin_memory True --save_top_k 3 --patience 200 --run_id debug_large --log wandb --min_epochs 2000 --max_epochs 3000 --optimizer adam --lr 0.001 --model tfc_tdf_net --n_fft 4096 --hop_length 1024 --num_frame 128 --spec_type complex --spec_est_mode mapping --last_activation identity --n_blocks 9 --internal_channels 24 --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --tfc_tdf_bias True --seed 2020
    • evaluation result (epoch 2007)
      • SDR 8.029
      • ISR 13.708
      • SIR 16.409
      • SAR 7.533

Interactive Report (wandb)

wandb report

You can cite this paper as follows:

@inproceedings{choi_2020,
  author    = {Choi, Woosung and Kim, Minseok and Chung, Jaehwa and Lee, Daewon and Jung, Soonyoung},
  booktitle = {21st International Society for Music Information Retrieval Conference},
  editor    = {ISMIR},
  month     = {October},
  title     = {Investigating U-Nets with various intermediate blocks for spectrogram-based singing voice separation},
  year      = {2020}
}

Reference

[1] Woosung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, and Soonyoung Jung, "Investigating U-Nets with various intermediate blocks for spectrogram-based singing voice separation," in 21st International Society for Music Information Retrieval Conference, ISMIR, October 2020.
