Who calls the shots? Rethinking Few-Shot Learning for Audio (WASPAA 2021)

Overview

This repo contains the source code for the paper "Who calls the shots? Rethinking Few-Shot Learning for Audio" (WASPAA 2021).

Table of Contents

  • Dataset
  • Experiment
  • Reference
  • Citation

Dataset

Models in this work are trained on FSD-MIX-CLIPS, an open dataset of programmatically mixed audio clips with a controlled level of polyphony and signal-to-noise ratio (SNR). We use single-labeled clips from FSD50K as the source material for the foreground sound events and Brownian noise as the background to generate 281,039 10-second strongly-labeled soundscapes with Scaper. We refer to this (intermediate) dataset of 10s soundscapes as FSD-MIX-SED. Each soundscape contains n events from n different sound classes, where n ranges from 1 to 5. We then extract 614,533 1s clips centered on each sound event in the soundscapes of FSD-MIX-SED to produce FSD-MIX-CLIPS.

Due to the large size of the dataset, instead of releasing the raw audio files, we release the source material (a subset of FSD50K) and soundscape annotations in JAMS format, which can be used to reproduce FSD-MIX-SED with Scaper. All clips in FSD-MIX-CLIPS are extracted from FSD-MIX-SED. Therefore, for FSD-MIX-CLIPS, instead of releasing duplicated audio content, we provide annotations that specify the filename in FSD-MIX-SED and the start time (in seconds) of each 1-second clip.
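For illustration, extracting one such 1-second clip from a reproduced soundscape might look like the following sketch (the filename and field names here are hypothetical; the released filelists encode the same information):

import soundfile as sf

def extract_clip(soundscape_path, start_time, duration=1.0):
    # Read only the 1-second window of the soundscape, without loading the full file
    info = sf.info(soundscape_path)
    start = int(start_time * info.samplerate)
    audio, sr = sf.read(soundscape_path, start=start,
                        stop=start + int(duration * info.samplerate))
    return audio, sr

# e.g., a clip annotated as starting at 3.5s in one soundscape (placeholder filename)
clip, sr = extract_clip("FSD_MIX_SED.audio/soundscape_00001.wav", 3.5)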

To reproduce FSD-MIX-SED:

  1. Download all files from Zenodo.
  2. Extract the .tar.gz files. You will get:
  • FSD_MIX_SED.annotations: 281,039 annotation files, 35GB
  • FSD_MIX_SED.source: 10,296 single-labeled audio clips, 1.9GB
  • FSD_MIX_CLIPS.annotations: 5 annotation files for each class/data split
  • vocab.json: a list of 89 class names; in the following experiments each class is labeled by its index in this list. 0-58: base, 59-73: novel-val, 74-88: novel-test (see the sketch below).
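A small sketch of this label convention (assuming vocab.json stores the ordered list of 89 class names):

import json

with open("vocab.json") as f:
    vocab = json.load(f)  # ordered list of 89 class names

base_classes = vocab[:59]         # indices 0-58
novel_val_classes = vocab[59:74]  # indices 59-73
novel_test_classes = vocab[74:]   # indices 74-88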

We will use FSD_MIX_SED.annotations and FSD_MIX_SED.source to reproduce the audio data in FSD_MIX_SED, then use this audio together with FSD_MIX_CLIPS.annotations for the training and evaluation below.

  3. Install Scaper
  4. Generate soundscapes from the JAMS files by running the command below. Set annpath and audiopath to the extracted folders, and savepath to the desired location for the output audio files.
python ./data/generate_soundscapes.py \
--annpath PATH-TO-FSD_MIX_SED.annotations \
--audiopath PATH-TO-FSD_MIX_SED.source \
--savepath PATH-TO-SAVE-OUTPUT

Note that this will generate 281,039 audio files, ~450 GB in total, in the folder FSD_MIX_SED.audio under the chosen savepath.
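Under the hood, each JAMS annotation can be rendered back to audio with Scaper's generate_from_jams. A minimal sketch of the per-file step (paths are placeholders, and whether foreground and background share one source folder is an assumption; the script handles naming and directory layout):

import os
import scaper

ann_dir = "FSD_MIX_SED.annotations"  # placeholder paths
src_dir = "FSD_MIX_SED.source"
out_dir = "FSD_MIX_SED.audio"

for fname in os.listdir(ann_dir):
    if not fname.endswith(".jams"):
        continue
    # Re-synthesize the soundscape described by this JAMS file,
    # pointing Scaper at the released source material
    scaper.generate_from_jams(
        os.path.join(ann_dir, fname),
        os.path.join(out_dir, fname.replace(".jams", ".wav")),
        fg_path=src_dir, bg_path=src_dir)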

If you want to generate the foreground material (FSD_MIX_SED.source) directly from FSD50K instead of downloading it, run

python ./data/preprocess_foreground_sounds.py \
--fsdpath PATH-TO-FSD50K \
--outpath PATH_TO_SAVE_OUTPUT

Experiment

We provide the source code to train the best-performing embedding model (pretrained OpenL3 + FC) and three different few-shot methods for predicting both base and novel class data.

Preprocessing

Once the audio files are reproduced, we pre-compute OpenL3 embeddings of the clips in FSD-MIX-CLIPS and save them.

  1. Install OpenL3
  2. Set the paths to the downloaded FSD_MIX_CLIPS.annotations and the generated FSD_MIX_SED.audio, then run
python get_openl3emb_and_filelist.py \
--annpath PATH-TO-FSD_MIX_CLIPS.annotations \
--audiopath PATH-TO-FSD_MIX_SED.audio \
--savepath PATH_TO_SAVE_OUTPUT

This generates 614,533 .pkl files, each containing one embedding. A set of filelists will also be saved under the current folder.
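For reference, computing a single embedding with OpenL3 might look like this sketch (the content type, input representation, and embedding size are assumptions here; the script above fixes the actual settings):

import pickle
import openl3
import soundfile as sf

audio, sr = sf.read("clip.wav")  # one 1-second clip
# One embedding per clip; the settings below are assumptions, not the script's exact choices
emb, ts = openl3.get_audio_embedding(audio, sr, content_type="env",
                                     input_repr="mel256", embedding_size=512,
                                     hop_size=1.0, verbose=False)
with open("clip_emb.pkl", "wb") as f:
    pickle.dump(emb.squeeze(), f)  # matches the one-embedding-per-.pkl layout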

Environment

Create conda environment from the environment.yml file and activate it.

Note that you only need the environment if you want to train/evaluate the models. For reproducing the dataset, see Dataset.

conda env create -f environment.yml
conda activate dfsl

Training

  • Training configuration can be specified using config files in ./config
  • Model checkpoints will be saved in the folder ./experiments, and tensorboard data will be saved in the folder ./run

1. Base classifier

First, to train the base classifier on base classes, run

python train.py --config openl3CosineClassifier --openl3
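The base classifier is a cosine-similarity classification head on top of the OpenL3 embedding (plus an FC layer). A minimal PyTorch sketch of such a head; the dimensions and the learnable scale are assumptions, not the exact values from the config:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, emb_dim=512, n_classes=59, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.scale = nn.Parameter(torch.tensor(scale))  # learnable temperature

    def forward(self, x):
        # Score = scaled cosine similarity between the embedding and each class weight
        return self.scale * F.normalize(x, dim=-1) @ F.normalize(self.weight, dim=-1).t()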

2. Few-shot weight generator for DFSL

Once the base model is trained, we can train the few-shot weight generator for DFSL by running

python train.py --config openl3CosineClassifierGenWeightAttN5 --openl3

By default, DFSL is trained with 5 support examples (n=5). To train DFSL with a different n, run

# n=10
python train.py --config openl3CosineClassifierGenWeightAttN10 --openl3

# n=20
python train.py --config openl3CosineClassifierGenWeightAttN20 --openl3

# n=30
python train.py --config openl3CosineClassifierGenWeightAttN30 --openl3
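For intuition, DFSL's attention-based weight generator builds each novel class weight from the averaged support embeddings plus an attention-weighted mixture of the learned base-class weights, following Gidaris & Komodakis. A rough sketch under assumed shapes; the actual generator in this repo may differ in details:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FewShotWeightGenerator(nn.Module):
    def __init__(self, emb_dim, base_weights):
        super().__init__()
        self.base_weights = base_weights                 # (n_base, emb_dim), from the base classifier
        self.phi_avg = nn.Parameter(torch.ones(emb_dim))
        self.phi_att = nn.Parameter(torch.ones(emb_dim))
        self.query = nn.Linear(emb_dim, emb_dim, bias=False)

    def forward(self, support):                          # support: (n_pos, emb_dim)
        z = F.normalize(support, dim=-1)
        w_avg = z.mean(dim=0)                            # averaging-based term
        # Attention from each support example over the base class weights
        att = torch.softmax(
            self.query(z) @ F.normalize(self.base_weights, dim=-1).t(), dim=-1)
        w_att = (att @ self.base_weights).mean(dim=0)    # attention-based term
        return self.phi_avg * w_avg + self.phi_att * w_att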

Evaluation

We evaluate the trained models on test data from both base and novel classes. For each novel class, we need to sample a support set. Run the command below to split the original filelist for the test classes into test_support_filelist.pkl and test_query_filelist.pkl.

python get_test_support_and_query.py

  • Here we consider monophonic support examples with mixed (random) SNR. Code to run evaluation with polyphonic support examples at specific low/high SNRs will be released soon.

For evaluation, we compute features for both base and novel test data, then make predictions and compute metrics in a joint label space. The computed features, model predictions, and metrics will be saved in the folder ./experiments. We consider 3 few-shot methods to predict novel classes. To test different numbers of support examples, set a different n_pos in the following commands.

1. Prototype

# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifier --openl3

# Get and save model predictions; run this multiple times (niter) to account for the random selection of novel examples.
python pred.py --config=openl3CosineClassifier --openl3 --niter 100 --n_base 59 --n_novel 15 --n_pos 5

# Compute and save evaluation metrics based on the model predictions
python metrics.py --config=openl3CosineClassifier --openl3 --n_base 59 --n_novel 15 --n_pos 5
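With the prototype method, each novel class weight is simply the mean of its (normalized) support embeddings, concatenated with the trained base weights to classify in the joint label space. A small sketch (function and variable names are assumptions):

import torch
import torch.nn.functional as F

def build_joint_weights(base_weights, support_sets):
    # support_sets: one (n_pos, emb_dim) tensor per novel class
    prototypes = torch.stack(
        [F.normalize(s, dim=-1).mean(dim=0) for s in support_sets])
    # Joint label space: 59 base classes followed by 15 novel classes
    return torch.cat([base_weights, prototypes], dim=0)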

2. DFSL

# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifierGenWeightAttN5 --openl3

# Get and save model predictions; run this multiple times (niter) to account for the random selection of novel examples.
python pred.py --config=openl3CosineClassifierGenWeightAttN5 --openl3 --niter 100 --n_base 59 --n_novel 15 --n_pos 5

# Compute and save evaluation metrics based on the model predictions
python metrics.py --config=openl3CosineClassifierGenWeightAttN5 --openl3 --n_base 59 --n_novel 15 --n_pos 5

3. Logistic regression

Train a binary logistic regression model for each novel class. Note that we need to sample n_neg examples from the base training data as negative examples. The default n_neg is 100. We also ran a hyperparameter search for n_neg on the validation data as n_pos varies from 5 to 30:

  • n_pos=5, n_neg=100
  • n_pos=10, n_neg=500
  • n_pos=20, n_neg=1000
  • n_pos=30, n_neg=5000
# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifier --openl3

# Train binary logistic regression models, predict test data, and compute metrics
python logistic_regression.py --config=openl3CosineClassifier --openl3 --niter 10 --n_base 59 --n_novel 15 --n_pos 5 --n_neg 100
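A minimal per-class sketch of this baseline with scikit-learn (negative sampling simplified; function and variable names are assumptions):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_novel_class(support_embs, base_embs, n_neg=100, seed=0):
    rng = np.random.default_rng(seed)
    neg = base_embs[rng.choice(len(base_embs), size=n_neg, replace=False)]
    X = np.concatenate([support_embs, neg])
    y = np.concatenate([np.ones(len(support_embs)), np.zeros(n_neg)])
    # One binary classifier per novel class; the support set supplies the positives
    return LogisticRegression(max_iter=1000).fit(X, y)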

Reference

This code is built upon the implementation from FewShotWithoutForgetting.

Citation

Please cite our paper if you find the code or dataset useful for your research.

Y. Wang, N. J. Bryan, J. Salamon, M. Cartwright, and J. P. Bello, "Who calls the shots? Rethinking Few-Shot Learning for Audio," IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2021.
