A Python package for time series augmentation

Overview

tsaug

tsaug is a Python package for time series augmentation. It offers a set of augmentation methods for time series, as well as a simple API to connect multiple augmenters into a pipeline.

See https://tsaug.readthedocs.io for complete documentation.

Installation

Prerequisites: Python 3.5 or later.

It is recommended to install the most recent stable release of tsaug from PyPI.

pip install tsaug

Alternatively, you can install from the source code. This will give you the latest, but possibly unstable, version of tsaug.

git clone https://github.com/arundo/tsaug.git
cd tsaug/
git checkout develop
pip install ./

Examples

A first-time user may start with the two examples in the quickstart section of the documentation.

Examples of every individual augmenter can also be found in the documentation.

For full references of implemented augmentation methods, please refer to the References section.
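
As a quick, hedged illustration (adapted from the augmenter pipeline shown in the comments below; the array shapes and parameter values are illustrative assumptions, not prescribed by the package), a pipeline can be built and applied like this:

import numpy as np
from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse

# Illustrative batch: 10 series of 400 time steps each, with a per-timestep mask.
X = np.random.randn(10, 400)
Y = np.zeros((10, 400), dtype=int)

my_augmenter = (
    TimeWarp() * 5  # random time warping, 5 augmented copies per input series
    + Crop(size=300)  # random crop to subsequences of length 300
    + Quantize(n_levels=[10, 20, 30])  # random quantization to 10-, 20-, or 30-level sets
    + Drift(max_drift=(0.1, 0.5)) @ 0.8  # with 80% probability, random drift up to 10%-50%
    + Reverse() @ 0.5  # with 50% probability, reverse the sequence
)

X_aug, Y_aug = my_augmenter.augment(X, Y)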

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Please see Contributing for more details.

License

tsaug is licensed under the Apache License 2.0. See the LICENSE file for details.

Comments
  • How to cite this repo?

    Basically the title. I used this awesome repo and I would like to cite it in my paper. How do I do that? If you could provide a BibTeX entry, that would be great.

    question 
    opened by kowshikthopalli 2
  • Default _Augmentor arguments will raise an error

    While working on #1 I found that the default args for initializing an _Augmentor object could lead to the code trying to call None when expecting a function.

    See: https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L5 https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L6

    and

    https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L47

    I know that it's not intended to be initialized without an augmenter function, but I was wondering if you want to explicitly prevent this error here (see the sketch after this item).

    Or is something else supposed to be happening?

    bug 
    opened by roycoding 1
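
    A minimal standalone sketch of the failure mode described above (hypothetical names, not tsaug's actual code): a None default for a callable argument only fails at call time, with a TypeError.

        # Hypothetical illustration, not tsaug's implementation.
        class Augmentor:
            def __init__(self, augmenter_func=None):
                self.augmenter_func = augmenter_func

            def run(self, x):
                # With the default, this raises:
                # TypeError: 'NoneType' object is not callable
                return self.augmenter_func(x)

        Augmentor().run([1, 2, 3])
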
  • can't find the deepad python package

    The quickstart notebook https://github.com/arundo/tsaug/blob/master/docs/quickstart.ipynb contains the line from deepad.visualization import plot. Where can you find the deepad package to install?

    opened by xsqian 1
  • Missing function calls in documentation

    Hi!

    I noticed that the documentation is actually missing a few important notes.

    For instance, the first example contains this snippet:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> plot(X, Y)
    

    and shows a chart, which suggests that it is rendered immediately after calling the plot function.

    In the configurations I've seen and worked on, the plot function does not render any chart immediately. Instead, it returns Tuple[matplotlib.figure.Figure, matplotlib.axes._axes.Axes]. This means we need to take the first element of the returned tuple and call .show() on it, so the example should rather be:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> figure, _ = plot(X, Y)
    >>> figure.show()
    

    I can create a pull request with these corrections if you're open to contributions.

    opened by 15bubbles 0
  • Static random augmentation across multiple time series

    Hello,

    I have a use case where I apply temporal augmentation with the same random anchor across multiple time series within a segmented object. I.e., I want certain augmentations to vary across objects, but remain constant within objects.

    In TimeWarp, e.g., I've added an optional keyword argument (static_rand):

        def __init__(
             self,
             n_speed_change: int = 3,
             max_speed_ratio: Union[float, Tuple[float, float], List[float]] = 3.0,
             repeats: int = 1,
             prob: float = 1.0,
             seed: Optional[int] = _default_seed,
             static_rand: Optional[bool] = False
         ):
    

    which is used by:

             if self.static_rand:                                                                                                                      
                 anchor_values = rand.uniform(low=0.0, high=1.0, size=self.n_speed_change + 1)
                 anchor_values = np.tile(anchor_values, (N, 1))
             else:
                 anchor_values = rand.uniform(
                     low=0.0, high=1.0, size=(N, self.n_speed_change + 1)
                 )
    

    Thus, instead of having N time series with different random anchor_values, I generate N time series with the same anchor value.

    I use this approach with TimeWarp and Drift. Would this be of any interest as a PR, or does it sound too specific?

    Thanks for the nice library.

    opened by jgrss 0
  • _Augmenter should be exposed properly as tsaug.Augmenter

    Might be related to https://github.com/arundo/tsaug/issues/1

    In the current state of the package, the _Augmenter class is an internal class that should not be used outside of the package itself... but it's also the base class for all usable classes from tsaug. This makes it very weird to type "generic" functions outside of tsaug, e.g.

    # this should not appear in normal Python code
    from tsaug._augmenters.base import _Augmenter
    
    def apply_transformation(aug: _Augmenter):
        ...
    

    The _Augmenter class should be exposed as tsaug.Augmenter so that it can be used for proper typing outside of the tsaug package (a sketch of such a re-export follows this item).

    help wanted 
    opened by Holt59 0
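
    A minimal sketch of the re-export proposed above (assuming the import path shown in the snippet; the alias name Augmenter is the proposal, not an existing attribute):

        # in tsaug/__init__.py (proposed)
        from ._augmenters.base import _Augmenter as Augmenter

        # downstream code could then type against the public name:
        import tsaug

        def apply_transformation(aug: "tsaug.Augmenter"):
            ...
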
  • Equivalence in transformation names

    Hello

    I'm very interested in using the tsaug library in my personal project.

    I have read the paper "Data Augmentation of Wearable Sensor Data for Parkinson's Disease Monitoring using Convolutional Neural Networks" and I'm quite confused about the names of the transformations.

    What are the equivalents in the tsaug library of the transformations Jittering, Scaling, Rotation, Permutation, and MagWarp mentioned in this paper?

    Also, I have read the blog post "https://www.arundo.com/arundo_tech_blog/tsaug-an-open-source-python-package-for-time-series-augmentation", and I didn't find the equivalents for RandomMagnify, RandomJitter, etc.

    Could you help me with these questions?

    Best regards

    Oscar

    question 
    opened by ogreyesp 1
  • ValueError: The numbers of series in X and Y are different.

    The shape of X is (54, 337) and the shape of y is (54,), but I am getting this error. I am using the following code:

    from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse
    my_augmenter = (
        TimeWarp() * 5  # random time warping 5 times in parallel
        + Crop(size=300)  # random crop subsequences with length 300
        + Quantize(n_levels=[10, 20, 30])  # random quantize to 10-, 20-, or 30- level sets
        + Drift(max_drift=(0.1, 0.5)) @ 0.8  # with 80% probability, random drift the signal up to 10% - 50%
        + Reverse() @ 0.5  # with 50% probability, reverse the sequence
    )
    data, labels = my_augmenter.augment(data, labels)
    
    question 
    opened by talhaanwarch 3
  • How to augment multi_variate time series data?

    I noticed that while augmenting multivariate time series data, the augmented data is concatenated along axis 0 instead of being added along a new (third) axis. Suppose the data shape is (18, 1000); after augmentation it becomes (72, 1000), but I believe it should be (4, 18, 1000). Does simply reshaping with data.reshape(4, 18, 1000) resolve the problem or not? (See the sketch after this item.)

    question 
    opened by talhaanwarch 2
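
    As a hedged illustration of the reshape question above (pure NumPy, not tied to tsaug's internal ordering): a plain reshape recovers a (copies, series, time) layout only if the augmented copies are stacked one whole copy after another along axis 0.

        import numpy as np

        # Pretend 4 augmented copies of 18 series (length 1000) were stacked copy by copy.
        X = np.arange(18 * 1000, dtype=float).reshape(18, 1000)
        stacked = np.concatenate([X + i for i in range(4)], axis=0)  # shape (72, 1000)

        recovered = stacked.reshape(4, 18, 1000)
        assert np.array_equal(recovered[2], X + 2)  # copy 2 comes back intact

    If the output interleaves copies differently, a reshape alone would scramble the series, so it is worth checking the ordering before relying on it.
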
Releases: v0.2.1