sequitur

Overview

sequitur is a library that lets you create and train an autoencoder for sequential data in just two lines of code. It implements three autoencoder architectures in PyTorch, along with a predefined training loop. sequitur is ideal for working with sequential data ranging from single and multivariate time series to videos, and is geared toward those who want to get started quickly with autoencoders.

import torch
from sequitur.models import LINEAR_AE
from sequitur import quick_train

train_seqs = [torch.randn(4) for _ in range(100)] # 100 sequences of length 4
encoder, decoder, _, _ = quick_train(LINEAR_AE, train_seqs, encoding_dim=2, denoise=True)

encoder(torch.randn(4)) # => torch.tensor([0.19, 0.84])

Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms.
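For instance, the fixed-size encodings can be fed straight into a clustering algorithm. Below is a minimal sketch using scikit-learn's KMeans; note that scikit-learn is not a sequitur dependency and is assumed here purely for illustration:

import torch
from sklearn.cluster import KMeans
from sequitur.models import LSTM_AE
from sequitur import quick_train

# Train on 100 sequences of ten 3D vectors, then cluster the encodings
train_seqs = [torch.randn(10, 3) for _ in range(100)]
encoder, _, encodings, _ = quick_train(LSTM_AE, train_seqs, encoding_dim=7)

# Stack the per-sequence encodings into a [100, 7] feature matrix
features = torch.stack([z.detach() for z in encodings]).numpy()
labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)  # one cluster label per sequence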

Installation

Requires Python 3.x and PyTorch 1.2.x

You can install sequitur with pip:

$ pip install sequitur

Getting Started

1. Prepare your data

First, you need to prepare a set of example sequences to train an autoencoder on. This training set should be a list of torch.Tensors, where each tensor has shape [num_elements, *num_features]. So, if each example in your training set is a sequence of 10 5x5 matrices, then each example would be a tensor with shape [10, 5, 5].
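For example, a training set of 100 such sequences could be built like this (random data is used here just for illustration):

import torch

# 100 examples, each a sequence of ten 5x5 matrices => shape [10, 5, 5]
train_set = [torch.randn(10, 5, 5) for _ in range(100)]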

2. Choose an autoencoder

Next, you need to choose an autoencoder model. If you're working with sequences of numbers (e.g. time series) or 1D vectors (e.g. word vectors), then you should use the LINEAR_AE or LSTM_AE model. For sequences of 2D matrices (e.g. videos) or 3D matrices (e.g. fMRI scans), you'll want to use CONV_LSTM_AE. Each model is a PyTorch module, and can be imported like so:

from sequitur.models import CONV_LSTM_AE

More details about each model are in the "Models" section below.

3. Train the autoencoder

From here, you can either initialize the model yourself and write your own training loop (a sketch of one appears at the end of this section), or import the quick_train function and plug in the model, training set, and desired encoding size, like so:

import torch
from sequitur.models import CONV_LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 5, 5) for _ in range(100)]
encoder, decoder, _, _ = quick_train(CONV_LSTM_AE, train_set, encoding_dim=4)

After training, quick_train returns the encoder and decoder models, which are PyTorch modules that can encode and decode new sequences. These can be used like so:

x = torch.randn(10, 5, 5)
z = encoder(x) # Tensor with shape [4]
x_prime = decoder(z) # Tensor with shape [10, 5, 5]
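If you'd rather write the training loop yourself, the sketch below shows one way to do it, using LSTM_AE for concreteness and only the documented encoder and decoder attributes. It mirrors quick_train's setup (MSE loss, Adam optimizer), but the hyperparameters are illustrative rather than library defaults:

import torch
from sequitur.models import LSTM_AE

model = LSTM_AE(input_dim=3, encoding_dim=7)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

train_set = [torch.randn(10, 3) for _ in range(100)]
for epoch in range(50):
    total_loss = 0.0
    for x in train_set:
        optimizer.zero_grad()
        z = model.encoder(x)                    # encoding with shape [7]
        x_prime = model.decoder(z, seq_len=10)  # reconstruction with shape [10, 3]
        loss = criterion(x_prime, x)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch + 1}: avg loss = {total_loss / len(train_set):.4f}")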

API

Training your Model

quick_train(model, train_set, encoding_dim, verbose=False, lr=1e-3, epochs=50, denoise=False, **kwargs)

Lets you train an autoencoder with just one line of code. Useful if you don't want to create your own training loop. Training involves learning a vector encoding of each input sequence, reconstructing the original sequence from the encoding, and calculating the loss (mean-squared error) between the reconstructed input and the original input. The autoencoder weights are updated using the Adam optimizer.

Parameters:

  • model (torch.nn.Module): Autoencoder model to train (imported from sequitur.models)
  • train_set (list): List of sequences (each a torch.Tensor) to train the model on; has shape [num_examples, seq_len, *num_features]
  • encoding_dim (int): Desired size of the vector encoding
  • verbose (bool, optional (default=False)): Whether or not to print the loss at each epoch
  • lr (float, optional (default=1e-3)): Learning rate
  • epochs (int, optional (default=50)): Number of epochs to train for
  • denoise (bool, optional (default=False)): Whether to corrupt each input sequence with noise during training, so the model is trained as a denoising autoencoder
  • **kwargs: Parameters to pass into model when it's instantiated

Returns:

  • encoder (torch.nn.Module): Trained encoder model; takes a sequence (as a tensor) as input and returns an encoding of the sequence as a tensor of shape [encoding_dim]
  • decoder (torch.nn.Module): Trained decoder model; takes an encoding (as a tensor) and returns a decoded sequence
  • encodings (list): List of tensors corresponding to the final vector encodings of each sequence in the training set
  • losses (list): List of average MSE values at each epoch
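Putting it together, here is a quick sketch of how all four return values might be used (shapes follow the documentation above):

import torch
from sequitur.models import LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 3) for _ in range(100)]
encoder, decoder, encodings, losses = quick_train(LSTM_AE, train_set, encoding_dim=7)

print(encodings[0].shape)        # => torch.Size([7]), one encoding per training sequence
print(losses[-1])                # average MSE at the final epoch
z = encoder(torch.randn(10, 3))  # encode a new, unseen sequence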

Models

Every autoencoder inherits from torch.nn.Module and has an encoder attribute and a decoder attribute, both of which also inherit from torch.nn.Module.

Sequences of Numbers

LINEAR_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Consists of fully-connected layers stacked on top of each other. Can only be used if you're dealing with sequences of numbers, not vectors or matrices.

Parameters:

  • input_dim (int): Size of each input sequence
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, the following builds a linear autoencoder that compresses sequences of 10 numbers into 4-dimensional encodings:

import torch
from sequitur.models import LINEAR_AE

model = LINEAR_AE(
  input_dim=10,
  encoding_dim=4,
  h_dims=[8, 6],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10) # Sequence of 10 numbers
z = model.encoder(x) # z.shape = [4]
x_prime = model.decoder(z) # x_prime.shape = [10]

Sequences of 1D Vectors

LSTM_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Autoencoder for sequences of vectors, built from stacked LSTMs. It can be trained on sequences of varying length.

Parameters:

  • input_dim (int): Size of each sequence element (vector)
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, the following builds an LSTM autoencoder that compresses sequences of 3D vectors into 7-dimensional encodings:

import torch
from sequitur.models import LSTM_AE

model = LSTM_AE(
  input_dim=3,
  encoding_dim=7,
  h_dims=[64],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10, 3) # Sequence of 10 3D vectors
z = model.encoder(x) # z.shape = [7]
x_prime = model.decoder(z, seq_len=10) # x_prime.shape = [10, 3]
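Since the model handles sequences of varying length, the same encoder and decoder can be reused across lengths, as in this quick sketch (continuing from the model above):

for n in (5, 10, 20):
    x = torch.randn(n, 3)
    z = model.encoder(x)                   # z.shape = [7], regardless of n
    x_prime = model.decoder(z, seq_len=n)  # x_prime.shape = [n, 3]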

Sequences of 2D/3D Matrices

CONV_LSTM_AE(input_dims, encoding_dim, kernel, stride=1, h_conv_channels=[1], h_lstm_channels=[])

Autoencoder for sequences of 2D or 3D matrices/images, loosely based on the CNN-LSTM architecture described in "Beyond Short Snippets: Deep Networks for Video Classification." It uses a CNN to encode each image in an input sequence as a vector, and then an LSTM to encode the resulting sequence of vectors.

Parameters:

  • input_dims (tuple): Shape of each 2D or 3D image in the input sequences
  • encoding_dim (int): Size of the vector encoding
  • kernel (int or tuple): Size of the convolving kernel; use tuple to specify a different size for each dimension
  • stride (int or tuple, optional (default=1)): Stride of the convolution; use tuple to specify a different stride for each dimension
  • h_conv_channels (list, optional (default=[1])): List of hidden channel sizes for the convolutional layers
  • h_lstm_channels (list, optional (default=[])): List of hidden channel sizes for the LSTM layers

Example:

import torch
from sequitur.models import CONV_LSTM_AE

model = CONV_LSTM_AE(
  input_dims=(50, 100),
  encoding_dim=16,
  kernel=(5, 8),
  stride=(3, 5),
  h_conv_channels=[4, 8],
  h_lstm_channels=[32, 64]
)

x = torch.randn(22, 50, 100) # Sequence of 22 50x100 images
z = model.encoder(x) # z.shape = [16]
x_prime = model.decoder(z, seq_len=22) # x_prime.shape = [22, 50, 100]