A Python package for time series augmentation

Overview

tsaug


tsaug is a Python package for time series augmentation. It offers a set of augmentation methods for time series, as well as a simple API to connect multiple augmenters into a pipeline.

See https://tsaug.readthedocs.io for the complete documentation.

Installation

Prerequisites: Python 3.5 or later.

It is recommended to install the most recent stable release of tsaug from PyPI.

pip install tsaug

Alternatively, you can install from source. This will give you the latest, but potentially unstable, development version of tsaug.

git clone https://github.com/arundo/tsaug.git
cd tsaug/
git checkout develop
pip install ./

Examples

A first-time user may start with the two examples in the documentation.

Examples of every individual augmenter can also be found in the documentation.

For full references of implemented augmentation methods, please refer to References.
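
For a quick feel of the API, below is a minimal sketch of a pipeline built with the connector operators (the augmenters and parameters are illustrative choices, and the random input array stands in for real data):

import numpy as np
from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse

# X is a batch of 10 series with 100 time points each, shape (N, T)
X = np.random.randn(10, 100)

my_augmenter = (
    TimeWarp() * 5                        # 5 random time-warped copies per series
    + Crop(size=80)                       # random crop to subsequences of length 80
    + Quantize(n_levels=[10, 20, 30])     # random quantize to 10-, 20-, or 30-level sets
    + Drift(max_drift=(0.1, 0.5)) @ 0.8   # with 80% probability, random drift up to 10%-50%
    + Reverse() @ 0.5                     # with 50% probability, reverse the sequence
)

X_aug = my_augmenter.augment(X)           # expected shape: (50, 80)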

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Please see Contributing for more details.

License

tsaug is licensed under the Apache License 2.0. See the LICENSE file for details.

Comments
  • How to cite this repo?

    Basically the title. I used this awesome repo and I would like to cite it in my paper. How do I do that? If you could provide a BibTeX entry, that would be great.

    question 
    opened by kowshikthopalli 2
  • Default _Augmentor arguments will raise an error

    While working on #1 I found that the default args for initializing an _Augmentor object could lead to the code trying to call None when expecting a function.

    See: https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L5 https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L6

    and

    https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L47

    I know that it's not intended to be initialized without an augmenter function, but I was wondering if you want to explicitly prevent an error here.

    Or is something else supposed to be happening?
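
    For what it's worth, a minimal sketch of the kind of explicit guard being asked about (the _Augmentor signature and augmenter_func name here are hypothetical, not taken from the actual source):

    # hypothetical sketch, not the actual tsaug source
    class _Augmentor:
        def __init__(self, augmenter_func=None):
            if augmenter_func is None:
                raise TypeError("_Augmentor requires an augmenter function")
            self.augmenter_func = augmenter_func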

    bug 
    opened by roycoding 1
  • can't find the deepad python package

    The quickstart notebook https://github.com/arundo/tsaug/blob/master/docs/quickstart.ipynb contains the import from deepad.visualization import plot. Where can you find the deepad package to install?

    opened by xsqian 1
  • Missing function calls in documentation

    Hi!

    I noticed that the documentation is actually missing a few important notes.

    For instance, the first example contains the following snippet:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> plot(X, Y)
    

    and shows a chart, which suggests that it is rendered immediately after calling the plot function.

    In the configurations I've seen and worked on, the plot function does not render any chart immediately. Instead it returns a Tuple[matplotlib.figure.Figure, matplotlib.axes._axes.Axes]. This means we need to take the first element of the returned tuple and call .show() on it, so the example should rather be:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> figure, _ = plot(X, Y)
    >>> figure.show()
    

    I can create a pull request with these corrections if you're open to contributions.

    opened by 15bubbles 0
  • Static random augmentation across multiple time series

    Hello,

    I have a use case where I apply temporal augmentation with the same random anchor across multiple time series within a segmented object. I.e., I want certain augmentations to vary across objects, but remain constant within objects.

    In TimeWarp, e.g., I've added an optional keyword argument (static_rand):

        def __init__(
            self,
            n_speed_change: int = 3,
            max_speed_ratio: Union[float, Tuple[float, float], List[float]] = 3.0,
            repeats: int = 1,
            prob: float = 1.0,
            seed: Optional[int] = _default_seed,
            static_rand: Optional[bool] = False,
        ):
    

    which is used by:

        if self.static_rand:
            anchor_values = rand.uniform(
                low=0.0, high=1.0, size=self.n_speed_change + 1
            )
            anchor_values = np.tile(anchor_values, (N, 1))
        else:
            anchor_values = rand.uniform(
                low=0.0, high=1.0, size=(N, self.n_speed_change + 1)
            )
    

    Thus, instead of having N time series with different random anchor_values, I generate N time series with the same anchor values.
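
    For illustration, usage with the proposed keyword would look like this (static_rand is the keyword proposed above, not part of the released tsaug API):

    import numpy as np
    from tsaug import TimeWarp

    X = np.random.randn(8, 200)  # a batch of 8 series, 200 time points each

    # static_rand is the proposed keyword, not part of released tsaug
    warp = TimeWarp(n_speed_change=3, static_rand=True)
    X_aug = warp.augment(X)  # all 8 series warped with the same random anchors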

    I use this approach with TimeWarp and Drift. Would this be of any interest as a PR, or does it sound too specific?

    Thanks for the nice library.

    opened by jgrss 0
  • _Augmenter should be exposed properly as tsaug.Augmenter

    Might be related to https://github.com/arundo/tsaug/issues/1

    In the current state of the package, the _Augmenter class is an internal class that should not be used outside of the package itself... but it's also the base class for all usable classes from tsaug. This makes it very weird to type "generic" functions outside of tsaug, e.g.

    # this should not appear in a normal Python code
    from tsaug._augmenters.base import _Augmenter
    
    def apply_transformation(aug: _Augmenter):
        ...
    

    The _Augmenter class should be exposed as tsaug.Augmenter so that it can be used for proper typing outside of the tsaug package.
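
    With the class exposed, the same function could be typed without importing internals (a sketch; Augmenter is the proposed public name, not an existing attribute of the package):

    # assumes the proposed public alias tsaug.Augmenter exists
    from tsaug import Augmenter

    def apply_transformation(aug: Augmenter):
        ...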

    help wanted 
    opened by Holt59 0
  • Equivalence in transformation names

    Hello

    I'm very interested in using and applying the tsaug library in my personal project.

    I have read the paper "Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks" and I'm quite confused about the names of the transformations.

    What are the equivalents in the tsaug library of the transformations Jittering, Scaling, Rotation, Permutation, and MagWarp mentioned in this paper?

    Also, I have read the blog post "https://www.arundo.com/arundo_tech_blog/tsaug-an-open-source-python-package-for-time-series-augmentation", and I didn't find the equivalents of RandomMagnify, RandomJitter, etc.

    Could you help me with these doubts?

    Best regards

    Oscar

    question 
    opened by ogreyesp 1
  • ValueError: The numbers of series in X and Y are different.

    The shape of X is (54, 337) and the shape of Y is (54,), but I am getting this error. I am using the following code:

    from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse
    my_augmenter = (
        TimeWarp() * 5  # random time warping 5 times in parallel
        + Crop(size=300)  # random crop subsequences with length 300
        + Quantize(n_levels=[10, 20, 30])  # random quantize to 10-, 20-, or 30- level sets
        + Drift(max_drift=(0.1, 0.5)) @ 0.8  # with 80% probability, random drift the signal up to 10% - 50%
        + Reverse() @ 0.5  # with 50% probability, reverse the sequence
    )
    X_aug, Y_aug = my_augmenter.augment(X, Y)
    
    question 
    opened by talhaanwarch 3
  • How to augment multi_variate time series data?

    I noticed that while augmenting multivariate time series data, the augmented data is concatenated along axis 0 instead of being added to a new (third) axis. Suppose the data shape is (18, 1000); after augmentation it turns out to be (72, 1000), but I believe it should be (4, 18, 1000). Does simply reshaping with data.reshape(4, 18, 1000) resolve the problem or not?
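
    For reference, a sketch of how a single multivariate series can be passed so that its channels are augmented together rather than treated as separate series, assuming the (N, T, C) input convention described in the tsaug documentation (the array names are illustrative):

    import numpy as np
    from tsaug import TimeWarp

    data = np.random.randn(18, 1000)      # one series: 18 channels, 1000 time points
    X = data.T[np.newaxis, :, :]          # rearrange to (N, T, C) = (1, 1000, 18)
    X_aug = (TimeWarp() * 4).augment(X)   # expected shape: (4, 1000, 18)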

    question 
    opened by talhaanwarch 2
Releases

v0.2.1