ARCA23K Baseline System

Overview

This is the source code for the baseline system associated with the ARCA23K dataset. Details about ARCA23K and the baseline system can be found in our DCASE2021 paper [1].

Requirements

This software requires Python >=3.8. To install the dependencies, run:

poetry install

or:

pip install -r requirements.txt

You are also free to use another package manager (e.g. Conda).

The ARCA23K and FSD50K datasets are also required. For convenience, bash scripts are provided to download the datasets automatically. These scripts require bash, curl, and unzip. Run the following commands from the root directory of the project:

$ scripts/download_arca23k.sh
$ scripts/download_fsd50k.sh

This will download the datasets to a directory called _datasets/. When running the software, the --arca23k_dir and --fsd50k_dir options (refer to the Usage section) can be used to specify the location of the datasets. This is only necessary if the dataset paths are different from the default.
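For example, if the datasets are stored somewhere other than _datasets/ (the paths below are placeholders), the locations can be passed explicitly:

python baseline/train.py arca23k --arca23k_dir /data/ARCA23K --fsd50k_dir /data/FSD50K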

Usage

The general usage pattern is:

python <script> [-f PATH] <args...> [options...]

The command-line options can also be specified in configuration files. The path of a configuration file is passed to the program using the --config_file (or -f) command-line option. This option can be used multiple times. Options passed on the command line override those in the config file(s). See default.ini for an example of a config file. Note that default.ini does not need to be specified on the command line and should not be modified.
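As an illustration, a hypothetical config file might look like the following. The section name and the chosen values are only an example; refer to default.ini for the actual layout and the full list of options.

; my_config.ini (hypothetical example)
[training]
n_epochs = 50
batch_size = 64
lr = 0.0005

It can then be passed to a script with the -f option:

python baseline/train.py arca23k -f my_config.ini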

Training

To train a model, run:

python baseline/train.py DATASET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--frac NUM] [--sample_rate NUM] [--block_length NUM] [--hop_length NUM] [--features SPEC] [--cache_features BOOL] [--model {vgg9a,vgg11a}] [--weights_path PATH] [--label_noise DICT] [--n_epochs N] [--batch_size N] [--lr NUM] [--lr_scheduler SPEC] [--partition SPEC] [--seed N] [--cuda BOOL] [--n_workers N] [--overwrite BOOL]

The DATASET argument accepts the following values:

  • arca23k - Train using the ARCA23K dataset.
  • arca23k-fsd - Train using the ARCA23K-FSD dataset.
  • mixed-p - Train using a mixture of ARCA23K and ARCA23K-FSD. Replace p with a fraction giving the proportion of ARCA23K examples in the training set (see the example after this list).
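
For example, to train on an even mixture of the two datasets (assuming p is written as a decimal fraction, which is an assumption based on the option description):

python baseline/train.py mixed-0.5 --experiment_id my_mixed_experiment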

The --experiment_id option is used to differentiate experiments. It determines where the output files are saved relative to the path given by the --work_dir option. When running multiple trials, either use the --seed option to specify a different random seed for each trial or set it to a negative number to disable seeding. Otherwise, the learned models will be identical across trials.

Example:

python baseline/train.py arca23k --experiment_id my_experiment
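
To run multiple trials of the same configuration, vary the experiment ID and seed (the IDs and seed values below are placeholders):

python baseline/train.py arca23k --experiment_id trial_1 --seed 1000
python baseline/train.py arca23k --experiment_id trial_2 --seed 2000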

Prediction

To compute predictions, run:

python baseline/predict.py DATASET SUBSET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--clean BOOL] [--sample_rate NUM] [--block_length NUM] [--features SPEC] [--cache_features BOOL] [--weights_path PATH] [--batch_size N] [--partition SPEC] [--n_workers N] [--seed N] [--cuda BOOL]

The SUBSET argument must be set to one of training, validation, or test.

Example:

python baseline/predict.py arca23k test --experiment_id my_experiment
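
Predictions can also be computed for the other subsets, and the --output_name option can be used to control the name of the output file (the value below is a placeholder):

python baseline/predict.py arca23k validation --experiment_id my_experiment --output_name validation_predictions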

Evaluation

To evaluate the predictions, run:

python baseline/evaluate.py DATASET SUBSET [-f FILE] [--experiment_id LIST] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--cached BOOL]

The SUBSET argument must be set to one of training, validation, or test.

Example:

python baseline/evaluate.py arca23k test --experiment_id my_experiment

Citing

If you wish to cite this work, please cite the following paper:

[1] T. Iqbal, Y. Cao, A. Bailey, M. D. Plumbley, and W. Wang, “ARCA23K: An audio dataset for investigating open-set label noise”, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 2021, Barcelona, Spain, pp. 201–205.

BibTeX:

@inproceedings{Iqbal2021,
    author = {Iqbal, T. and Cao, Y. and Bailey, A. and Plumbley, M. D. and Wang, W.},
    title = {{ARCA23K}: An audio dataset for investigating open-set label noise},
    booktitle = {Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)},
    pages = {201--205},
    year = {2021},
    address = {Barcelona, Spain},
}