
Learning to Execute (L2E)

Official code base for completely reproducing all results reported in

I. Schubert, D. Driess, O. Oguz, and M. Toussaint: Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics. NeurIPS (2021)

Installation

Initialize submodules:

git submodule init
git submodule update

Install rai-python

For rai-python, it is recommended to use this docker image.

If you want to install rai-python manually, follow instructions here. You will also need to install PhysX, ideally following these instructions.

Install gym-physx

Modify the path to rai-python/rai/rai/ry in gym-physx/gym_physx/envs/physx_pushing_env.py depending on your installation. Then install gym-physx using pip:

cd gym-physx
pip install .
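
As an optional sanity check that the package installed and that the rai-python path resolves, you can try importing it (the module name gym_physx is assumed from the package directory):

python -c "import gym_physx"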

Install gym-obstacles

If you also want to run the 2D maze example with moving obstacles introduced in Section A.3, install gym-obstacles:

cd gym-obstacles
pip install .

Install our fork of stable-baselines3

cd stable-baselines3
pip install .
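
To double-check that the fork (rather than a previously installed PyPI release) is what ended up in your environment, pip can report the installed version and location:

pip show stable-baselines3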

Reproduce figures

l2e/l2e/ contains the code to reproduce the results in the paper.

Each figure aggregates the results of multiple experiments; which experiments go into which figure is defined in plot_results.json.

Experiments are defined in config_$EXPERIMENT.json.

Intermediate and final results are saved to $scratch_root/$EXPERIMENT/ (configure $scratch_root in each config_$EXPERIMENT.json as well as in plot_results.json).
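
Since the same $scratch_root appears in several files, it can help to confirm that it is set consistently before launching anything; this assumes the JSON key is spelled like the variable above:

grep -H "scratch_root" config_*.json plot_results.json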

Step-by-step instructions to reproduce figures:

  1. Depending on the experiment, use the following training scripts:

    1. For the RL runs ($EXPERIMENT=l2e* and $EXPERIMENT=her*)

      ./train.sh $EXPERIMENT
    2. For the Inverse Model runs ($EXPERIMENT=im_plan_basic and $EXPERIMENT=im_plan_obstacle_training)

      First collect data:

      ./imitation_data.sh $EXPERIMENT

      Then train the inverse model:

      ./imitation_learning.sh $EXPERIMENT
    3. For the Direct Execution runs ($EXPERIMENT=plan_basic and $EXPERIMENT=plan_obstacle)

      No training stage is needed here.

    ./train.sh $EXPERIMENT will launch multiple screens with multiple independent runs of $EXPERIMENT. The number of runs is configured using $AGENTS_MIN and $AGENTS_MAX in config_$EXPERIMENT.json.

    ./imitation_data.sh will launch $n_data_collect_workers workers for collecting data, and ./imitation_learning.sh will launch $n_training_workers independent training runs.

  2. Evaluate results

    ./evaluate.sh $EXPERIMENT

    python evaluate.py $EXPERIMENT launches multiple screens, one for each agent trained in step 1, automatically scans for new training output, and only evaluates model checkpoints that have not been evaluated yet.

  3. Plot results

    After all experiments are finished, create plots using

    python plot_results.py

    This will create all data figures contained in the paper. Figures are saved to l2e/figs/ (configurable in plot_results.json).
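
As an end-to-end illustration, a complete pass for the Inverse Model experiment named in step 1 could look as follows (a minimal sketch, assuming config_im_plan_basic.json and plot_results.json have already been pointed at your $scratch_root):

./imitation_data.sh im_plan_basic       # collect imitation data
./imitation_learning.sh im_plan_basic   # train inverse models on the collected data
./evaluate.sh im_plan_basic             # evaluate all new model checkpoints
python plot_results.py                  # write the figures to l2e/figs/

The individual workers run in detached screen sessions, so screen -ls shows what is currently active.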
