Code for the paper "Next Generation Reservoir Computing"

Overview

This is the code for the results and figures in our paper "Next Generation Reservoir Computing". The scripts are written in Python and require recent versions of NumPy, SciPy, and matplotlib. If you are using a Python environment like Anaconda, these are likely already installed.
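A quick way to check is to try importing them:

python3 -c "import numpy, scipy, matplotlib"

If this command exits without an ImportError, the dependencies are available.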

Python Virtual Environment

If you are not using Anaconda, or want to run this code on the command line in vanilla Python, you can create a virtual environment with the required dependencies by running:

python3 -m venv env
./env/bin/pip install -r requirements.txt

This will install the most recent versions of the requirements available to you. If you wish to use the exact versions we used, install from requirements-exact.txt instead.
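For example, to install the pinned versions into the same virtual environment:

./env/bin/pip install -r requirements-exact.txt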

You can then run the individual scripts, for example:

./env/bin/python DoubleScrollNVAR-RK23.py

Comments
  • Generalized Performance

    I modified the code given in this repo to what I think is a more generalized version (below), where the input is an array containing points generated by any sort of process. It gives a perfect result on predicting sine functions, but on a constant linear trend it gives absolutely terrible, nonsense performance. By my understanding, that is simply the nature of reservoir computing: it can't handle a trend. Is that correct?

    I would also appreciate any other insight you might have on the generalization of this function. Thanks!

    import numpy as np
    import pandas as pd
    
    
    def load_linear(long=False, shape=None, start_date: str = "2021-01-01"):
        """Create a dataset of just zeroes for testing edge case."""
        if shape is None:
            shape = (500, 5)
        df_wide = pd.DataFrame(
            np.ones(shape), index=pd.date_range(start_date, periods=shape[0], freq="D")
        )
        df_wide = (df_wide * list(range(0, shape[1]))).cumsum()
        if not long:
            return df_wide
        else:
            df_wide.index.name = "datetime"
            df_long = df_wide.reset_index(drop=False).melt(
                id_vars=['datetime'], var_name='series_id', value_name='value'
            )
            return df_long
    
    
    def load_sine(long=False, shape=None, start_date: str = "2021-01-01"):
        """Create a dataset of just zeroes for testing edge case."""
        if shape is None:
            shape = (500, 5)
        df_wide = pd.DataFrame(
            np.ones(shape),
            index=pd.date_range(start_date, periods=shape[0], freq="D"),
            columns=range(shape[1])
        )
        X = pd.to_numeric(df_wide.index, errors='coerce', downcast='integer').values
    
        def sin_func(a, X):
            return a * np.sin(1 * X) + a
        for column in df_wide.columns:
            df_wide[column] = sin_func(column, X)
        if not long:
            return df_wide
        else:
            df_wide.index.name = "datetime"
            df_long = df_wide.reset_index(drop=False).melt(
                id_vars=['datetime'], var_name='series_id', value_name='value'
            )
            return df_long
    
    
    def predict_reservoir(df, forecast_length, warmup_pts, k=2, ridge_param=2.5e-6):
        # k = number of time delay taps
        # pass in traintime_pts to limit as .tail() for huge datasets?
    
        n_pts = df.shape[1]
        # handle short data edge case
        min_train_pts = 10
        max_warmup_pts = n_pts - min_train_pts
        if warmup_pts >= max_warmup_pts:
            # keep at least one warmup point so the warmup_pts - 1 indexing below stays valid
            warmup_pts = max_warmup_pts if max_warmup_pts > 0 else 1
    
        traintime_pts = n_pts - warmup_pts   # round(traintime / dt)
        warmtrain_pts = warmup_pts + traintime_pts
        testtime_pts = forecast_length + 1  # round(testtime / dt)
        maxtime_pts = n_pts  # round(maxtime / dt)
    
        # input dimension
        d = df.shape[0]
        # size of the linear part of the feature vector
        dlin = k * d
        # size of nonlinear part of feature vector
        dnonlin = int(dlin * (dlin + 1) / 2)
        # total size of feature vector: constant + linear + nonlinear
        dtot = 1 + dlin + dnonlin
    
        # create an array to hold the linear part of the feature vector
        x = np.zeros((dlin, maxtime_pts))
    
        # fill in the linear part of the feature vector for all times
        for delay in range(k):
            for j in range(delay, maxtime_pts):
                x[d * delay : d * (delay + 1), j] = df[:, j - delay]
    
        # create an array to hold the full feature vector for training time
        # (use ones so the constant term is already 1)
        out_train = np.ones((dtot, traintime_pts))
    
        # copy over the linear part (shift over by one to account for constant)
        out_train[1 : dlin + 1, :] = x[:, warmup_pts - 1 : warmtrain_pts - 1]
    
        # fill in the non-linear part
        cnt = 0
        for row in range(dlin):
            for column in range(row, dlin):
                # shift by one for constant
                out_train[dlin + 1 + cnt] = (
                    x[row, warmup_pts - 1 : warmtrain_pts - 1]
                    * x[column, warmup_pts - 1 : warmtrain_pts - 1]
                )
                cnt += 1
    
        # ridge regression: train W_out to map out_train to the one-step difference x[t] - x[t-1]
        W_out = (
            (x[0:d, warmup_pts:warmtrain_pts] - x[0:d, warmup_pts - 1 : warmtrain_pts - 1])
            @ out_train[:, :].T
            @ np.linalg.pinv(
                out_train[:, :] @ out_train[:, :].T + ridge_param * np.identity(dtot)
            )
        )
    
        # create a place to store feature vectors for prediction
        out_test = np.ones(dtot)  # full feature vector
        x_test = np.zeros((dlin, testtime_pts))  # linear part
    
        # copy over initial linear feature vector
        x_test[:, 0] = x[:, warmtrain_pts - 1]
    
        # do prediction
        for j in range(testtime_pts - 1):
            # copy linear part into whole feature vector
            out_test[1 : dlin + 1] = x_test[:, j]  # shift by one for constant
            # fill in the non-linear part
            cnt = 0
            for row in range(dlin):
                for column in range(row, dlin):
                    # shift by one for constant
                    out_test[dlin + 1 + cnt] = x_test[row, j] * x_test[column, j]
                    cnt += 1
            # fill in the delay taps of the next state
            x_test[d:dlin, j + 1] = x_test[0 : (dlin - d), j]
            # do a prediction
            x_test[0:d, j + 1] = x_test[0:d, j] + W_out @ out_test[:]
        return x_test[0:d, 1:]
    
    
    # note: the data is transposed to (series, time), the opposite of my usual shape
    data_pts = 7000
    series = 3
    forecast_length = 10
    df_sine = load_sine(long=False, shape=(data_pts, series)).transpose().to_numpy()
    df_sine_train = df_sine[:, :-10]
    df_sine_test = df_sine[:, -10:]
    prediction_sine = predict_reservoir(df_sine_train, forecast_length=forecast_length, warmup_pts=150, k=2, ridge_param=2.5e-6)
    print(f"sine MAE {np.mean(np.abs(df_sine_test - prediction_sine))}")
    
    df_linear = load_linear(long=False, shape=(data_pts, series)).transpose().to_numpy()
    df_linear_train = df_linear[:, :-10]
    df_linear_test = df_linear[:, -10:]
    prediction_linear = predict_reservoir(df_linear_train, forecast_length=forecast_length, warmup_pts=150, k=2, ridge_param=2.5e-6)
    print(f"linear MAE {np.mean(np.abs(df_linear_test - prediction_linear))}")
    
    
    opened by winedarksea 2
  • Link to your paper

    I'm documenting here the link to your paper. I couldn't find it in the readme:


    Next generation reservoir computing
    Daniel J. Gauthier, Erik Bollt, Aaron Griffith & Wendson A. S. Barbosa
    Nature Communications, volume 12, Article number: 5564 (2021)
    https://www.nature.com/articles/s41467-021-25801-2

    opened by impredicative 1
Releases(v1.0)
Owner
OSU QuantInfo Lab
Daniel Gauthier's Research Group