ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels

Overview

ROCKET + MINIROCKET

ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels.

Data Mining and Knowledge Discovery / arXiv:1910.13051 (preprint)

Most methods for time series classification that attain state-of-the-art accuracy have high computational complexity, requiring significant training time even for smaller datasets, and are intractable for larger datasets. Additionally, many existing methods focus on a single type of feature such as shape or frequency. Building on the recent success of convolutional neural networks for time series classification, we show that simple linear classifiers using random convolutional kernels achieve state-of-the-art accuracy with a fraction of the computational expense of existing methods. Using this method, it is possible to train and test a classifier on all 85 'bake off' datasets in the UCR archive in < 2 h, and it is possible to train a classifier on a large dataset of more than one million time series in approximately 1 h.
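
To make the transform concrete, the sketch below shows, in simplified form, what a single random kernel contributes: a dilated convolution of the input series, summarised by two features, the maximum value and the proportion of positive values (PPV). This is an illustrative reimplementation only (no padding, simplified sampling), not the optimised Numba code in rocket_functions.py.

import numpy as np

def random_kernel(input_length, rng):
    # sample one random kernel, roughly as described in the paper (simplified)
    length = rng.choice([7, 9, 11])
    weights = rng.normal(0, 1, length)
    weights = weights - weights.mean()                   # mean-centre the weights
    bias = rng.uniform(-1, 1)
    max_exponent = np.log2((input_length - 1) / (length - 1))
    dilation = int(2 ** rng.uniform(0, max_exponent))    # exponentially-sampled dilation
    return weights, bias, dilation

def kernel_features(x, weights, bias, dilation):
    # convolve one series with one dilated kernel; return (ppv, max)
    length = len(weights)
    activations = []
    for i in range(len(x) - (length - 1) * dilation):
        window = x[i : i + length * dilation : dilation]
        activations.append(np.dot(window, weights) + bias)
    activations = np.asarray(activations)
    return (activations > 0).mean(), activations.max()

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 10, 150))                      # toy time series
features = np.concatenate([kernel_features(x, *random_kernel(len(x), rng))
                           for _ in range(100)])         # 100 kernels -> 200 features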

Please cite as:

@article{dempster_etal_2020,
  author = {Dempster, Angus and Petitjean, Fran\c{c}ois and Webb, Geoffrey I},
  title = {ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels},
  year = {2020},
  journal = {Data Mining and Knowledge Discovery},
  doi = {10.1007/s10618-020-00701-z}
}

sktime

An implementation of ROCKET (with basic multivariate capability) is available through sktime. See the examples.
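
For reference, a minimal sktime-based sketch might look like the following; the import path below (sktime.transformations.panel.rocket) is assumed and may differ between sktime versions, so check the linked examples for the authoritative usage.

import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket  # path may vary by sktime version

[...] # load data in sktime's panel format, etc.

# fit the ROCKET transform on the training set, then transform both sets
rocket = Rocket(num_kernels = 10_000)
rocket.fit(X_training)
X_training_transform = rocket.transform(X_training)
X_test_transform = rocket.transform(X_test)

# train and apply a ridge classifier on the transformed features
classifier = RidgeClassifierCV(alphas = np.logspace(-3, 3, 10))
classifier.fit(X_training_transform, Y_training)
predictions = classifier.predict(X_test_transform)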

MINIROCKET *NEW*

MINIROCKET is up to 75× faster than ROCKET on larger datasets.
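
The standalone interface is similar in spirit to rocket_functions.py. The sketch below assumes the fit / transform functions exposed by the companion repository's minirocket.py (and that the input is float32, as noted there); see that repository for the authoritative example.

import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from minirocket import fit, transform  # from the MINIROCKET repository

[...] # load data, etc. (MINIROCKET expects float32 input)

# fit the transform parameters on the training set, then transform both sets
parameters = fit(X_training)
X_training_transform = transform(X_training, parameters)
X_test_transform = transform(X_test, parameters)

classifier = RidgeClassifierCV(alphas = np.logspace(-3, 3, 10))
classifier.fit(X_training_transform, Y_training)
predictions = classifier.predict(X_test_transform)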

Results

UCR Archive

Scalability

Code

rocket_functions.py

Requirements

  • Python;
  • Numba;
  • NumPy;
  • scikit-learn (or equivalent).

Example

import numpy as np
from rocket_functions import generate_kernels, apply_kernels
from sklearn.linear_model import RidgeClassifierCV

[...] # load data, etc.

# generate random kernels
kernels = generate_kernels(X_training.shape[-1], 10_000)

# transform training set and train classifier
X_training_transform = apply_kernels(X_training, kernels)
# note: the normalize argument was removed in scikit-learn 1.2;
# with recent versions, drop it and standardise the features instead (e.g. StandardScaler)
classifier = RidgeClassifierCV(alphas = np.logspace(-3, 3, 10), normalize = True)
classifier.fit(X_training_transform, Y_training)

# transform test set and predict
X_test_transform = apply_kernels(X_test, kernels)
predictions = classifier.predict(X_test_transform)

Reproducing the Experiments

reproduce_experiments_ucr.py

Arguments:
-d --dataset_names : txt file of dataset names
-i --input_path    : parent directory for datasets
-o --output_path   : path for results
-n --num_runs      : number of runs (optional, default 10)
-k --num_kernels   : number of kernels (optional, default 10,000)

Examples:
> python reproduce_experiments_ucr.py -d bakeoff.txt -i ./Univariate_arff -o ./
> python reproduce_experiments_ucr.py -d additional.txt -i ./Univariate_arff -o ./ -n 1 -k 1000

reproduce_experiments_scalability.py

Arguments:
-tr --training_path : training dataset (csv)
-te --test_path     : test dataset (csv)
-o  --output_path   : path for results
-k  --num_kernels   : number of kernels

Examples:
> python reproduce_experiments_scalability.py -tr training.csv -te test.csv -o ./ -k 100
> python reproduce_experiments_scalability.py -tr training.csv -te test.csv -o ./ -k 1000
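
For the scalability experiments, the paper trains a logistic regression classifier by gradient descent rather than RidgeClassifierCV, since cross-validated ridge regression does not scale to millions of training examples. As a rough stand-in sketch (not the script's actual implementation), scikit-learn's SGDClassifier with a logistic loss can play the same role on the transformed features:

from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# linear (logistic-regression-style) classifier trained by stochastic gradient descent;
# use loss = "log" on scikit-learn versions older than 1.1
classifier = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss = "log_loss", alpha = 1e-4),
)
classifier.fit(X_training_transform, Y_training)
predictions = classifier.predict(X_test_transform)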

Acknowledgements

We thank Professor Eamonn Keogh and all the people who have contributed to the UCR time series classification archive. Figures in our paper showing the ranking of different classifiers and variants of ROCKET were produced using code from Ismail Fawaz et al. (2019).
