Dynamic Environments with Deformable Objects (DEDO)

Overview


DEDO is a lightweight and customizable suite of environments with deformable objects. It is aimed at researchers in the machine learning, reinforcement learning, robotics, and computer vision communities. The suite provides a set of everyday tasks that involve deformables, such as hanging cloth, dressing a person, and buttoning buttons. We provide examples for integrating two popular reinforcement learning libraries: StableBaselines3 and RLlib. We also provide reference implementations for training various Variational Autoencoder variants with our environment. DEDO is easy to set up and has few dependencies; it is highly parallelizable and supports a wide range of customizations, such as loading custom objects and textures and adjusting material properties.
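
For instance, since each environment is lightweight, several copies can be run in parallel with standard vectorization wrappers. The sketch below uses SubprocVecEnv from StableBaselines3 (mentioned above) and assumes that importing dedo registers the environments with gym:

import gym
import dedo  # assumed to register the DEDO environments with gym on import
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    return gym.make('HangGarment-v0')

if __name__ == '__main__':
    # Run 4 simulations in parallel, one per subprocess.
    vec_env = SubprocVecEnv([make_env for _ in range(4)])
    obs = vec_env.reset()
    obs, rewards, dones, infos = vec_env.step(
        [vec_env.action_space.sample() for _ in range(4)])
    vec_env.close()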

Note: updates for this repo are in progress (until the presentation at NeurIPS 2021 in mid-December).

@inproceedings{dedo2021,
  title={Dynamic Environments with Deformable Objects},
  author={Rika Antonova and Peiyang Shi and Hang Yin and Zehang Weng and Danica Kragic},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year={2021},
}

Table of Contents:
Installation
Getting Started
Tasks
Use with RL
Use with VAE
Customization

Please refer to the Wiki for the full documentation

Installation

Optional initial step: create a new conda environment with conda create --name dedo python=3.7 and activate it with conda activate dedo. Conda is not strictly needed; alternatives like virtualenv can be used, and a direct install without a virtual environment is fine as well.

git clone https://github.com/contactrika/dedo
cd dedo
pip install numpy  # important: necessary for compiling PyBullet with NumPy support
pip install -e .

Python 3.7 is recommended: on some OS + CPU combinations we found that pip could not compile PyBullet with NumPy enabled under Python 3.8. To enable recording/logging videos, install ffmpeg:

sudo apt-get install ffmpeg

See more in the Installation Guide in the wiki

Getting started

To get started, run the following command to visualize a task through a hard-coded policy.

python -m dedo.demo --env=HangGarment-v1 --viz --debug
  • dedo.demo is the demo module
  • --env=HangGarment-v1 specifies the environment
  • --viz enables the GUI
  • --debug outputs additional information in the console
  • --cam_resolution 400 specifies the size of the output window
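
These environments follow the standard gym interface, so they can also be driven programmatically. Below is a minimal interaction loop; it is a sketch that assumes importing dedo registers the environments with gym:

import gym
import dedo  # assumed to register the DEDO environments with gym on import

env = gym.make('HangGarment-v1')
obs = env.reset()
for _ in range(100):
    act = env.action_space.sample()  # random placeholder policy
    obs, rwd, done, info = env.step(act)
    if done:
        break
env.close()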

See more in Usage-guide

Tasks

See more in Task Overview

We provide a set of 10 tasks involving deformable objects; most tasks contain 5 handmade deformable objects. There are also two procedurally generated tasks, ButtonProc and HangProcCloth, in which the deformable objects are procedurally generated. Furthermore, to improve generalization, the v0 version of each task randomizes textures and meshes.

All tasks have -v1 and -v2 versions with a particular choice of meshes and textures that is not randomized. Most tasks have versions up to -v5 with additional mesh and texture variations.

Tasks with procedurally generated cloth (ButtonProc and HangProcCloth) generate random cloth objects for all versions (but randomize textures only in v0).

HangBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangBag-v1 --viz

HangBag-v0: selects one of 108 bag meshes; randomized textures

HangBag-v[1-3]: three bag versions with textures shown below:

images/imgs/hang_bags_annotated.jpg

HangGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangGarment-v1 --viz

HangGarment-v0: hang garment with randomized textures (a few examples below):

HangGarment-v[1-5]: 5 apron meshes and texture combos shown below:

images/imgs/hang_garments_5.jpg

HangGarment-v[6-10]: 5 shirt meshes and texture combos shown below:

images/imgs/hang_shirts_5.jpg

HangProcCloth

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangProcCloth-v1 --viz

HangProcCloth-v0: random textures, procedurally generated cloth with 1 or 2 holes.

HangProcCloth-v[1-2]: same, but with either 1 or 2 holes

images/imgs/hang_proc_cloth.jpg

Buttoning

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Button-v1 --viz

ButtonProc-v0: randomized textures and procedurally generated cloth with 2 holes, randomized hole/button positions.

ButtonProc-v[1-2]: procedurally generated cloth with 1 or 2 holes.

images/imgs/button_proc.jpg

Button-v0: randomized textures, but fixed cloth and button positions.

Button-v1: fixed cloth and button positions with one texture (see image below):

images/imgs/button.jpg

Hoop

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Hoop-v1 --viz

Hoop-v0: randomized textures

Hoop-v1: pre-selected textures

images/imgs/hoop_and_lasso.jpg

Lasso

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Lasso-v1 --viz

Lasso-v0: randomized textures

Lasso-v1: pre-selected textures

DressBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressBag-v1 --viz

DressBag-v0, DressBag-v[1-5]: demo for -v1 shown below

images/imgs/dress_bag.jpg

Visualizations of the 5 backpack mesh and texture variants for DressBag-v[1-5]:

images/imgs/backpack_meshes.jpg

DressGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressGarment-v1 --viz

DressGarment-v0, DressGarment-v[1-5]: demo for -v1 shown below

images/imgs/dress_garment.jpg

Mask

python -m dedo.demo_preset --env=Mask-v1 --viz

Mask-v0, Mask-v[1-5]: a few texture variants shown below:

images/imgs/dress_garment.jpg

RL Examples

dedo/run_rl_sb3.py gives an example of how to train an RL algorithm from Stable Baselines 3:

python -m dedo.run_rl_sb3 --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
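
If you prefer to call Stable Baselines 3 directly rather than going through dedo.run_rl_sb3, a minimal training sketch could look like the following (PPO is used purely as an example algorithm and is not necessarily what run_rl_sb3 defaults to; importing dedo is assumed to register the environments with gym):

import gym
import dedo  # assumed to register the DEDO environments with gym on import
from stable_baselines3 import PPO

env = gym.make('HangGarment-v0')
model = PPO('MlpPolicy', env, verbose=1, tensorboard_log='/tmp/dedo')
model.learn(total_timesteps=10_000)  # small budget, for illustration only
model.save('/tmp/dedo/ppo_hang_garment')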

dedo/run_rllib.py gives an example of how to train an RL algorithm using RLlib:

python -m dedo.run_rllib --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
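
Along the same lines, here is a rough sketch of driving RLlib directly, using the older ray.tune.run API that was current around this release; the environment is wrapped via register_env because RLlib expects an env creator (the registered name below is an illustrative assumption):

import gym
import ray
from ray import tune
from ray.tune.registry import register_env
import dedo  # assumed to register the DEDO environments with gym on import

register_env('dedo_hang_garment', lambda cfg: gym.make('HangGarment-v0'))
ray.init()
tune.run(
    'PPO',
    config={'env': 'dedo_hang_garment', 'framework': 'torch'},
    stop={'timesteps_total': 10_000},  # small budget, for illustration only
    local_dir='/tmp/dedo',
)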

For documentation, please refer to the Arguments Reference page in the wiki

To launch Tensorboard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

SVAE Examples

dedo/run_svae.py gives an example of how to train various flavors of VAE:

python -m dedo.run_svae --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug

To launch Tensorboard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

Customization

To load a custom object, first add an entry to DEFORM_INFO in task_info.py. The key should be the .obj file path relative to data/:

DEFORM_INFO = {
    ...
    # An example of info for a custom item.
    'bags/custom.obj': {
        'deform_init_pos': [0, 0.47, 0.47],
        'deform_init_ori': [np.pi/2, 0, 0],
        'deform_scale': 0.1,
        'deform_elastic_stiffness': 1.0,
        'deform_bending_stiffness': 1.0,
        'deform_true_loop_vertices': [
            [0, 1, 2, 3]  # placeholder, since we don't know the true loops
        ],
    },
}

Then you can use the --override_deform_obj flag:

python -m dedo.demo --env=HangBag-v0 --cam_resolution 200 --viz --debug \
    --override_deform_obj bags/custom.obj

For items not in DEFORM_INFO you will need to specify sensible defaults, for example:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
  --override_deform_obj=generated_cloth/generated_cloth.obj \
  --deform_init_pos 0.02 0.41 0.63 --deform_init_ori 0 0 1.5708

Example of scaling up the custom mesh objects:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
   --override_deform_obj=generated_cloth/generated_cloth.obj \
   --deform_init_pos 0.02 0.41 0.55 --deform_init_ori 0 0 1.5708 \
   --deform_scale 2.0 --anchor_init_pos -0.10 0.40 0.70 \
   --other_anchor_init_pos 0.10 0.40 0.70

See more in Customization Wiki

Additional Assets

The BGarment dataset is adapted from the Berkeley Garment Library.

The Sewing dataset is adapted from Generating Datasets of 3D Garments with Sewing Patterns.


Comments
  • Adding Point Cloud Observations to DEDO

    This PR adds point cloud (pcd) rendering to DEDO. Summary of changes:

    • Point cloud data is extracted from the sim environment based on a set of object IDs that we want to retain.
    • Depth cameras are instantiated using a cameraConfig class, which abstracts out the various camera configurations needed.
    • The cameraConfig class loads camera configs from JSON (for easy loading & sharing of camera configs), or directly by instantiation (if you know how you want to dynamically set your camera).
    • Some sample JSON camera configs are provided (4 total)
    • Unprojecting from depth image to point cloud is vectorized, so rendering point cloud observations adds negligible runtime to overall pipeline process time (should benchmark this?).
    • The original deform_env had to be adjusted so that the deformable object would have ID 0. For some reason, PyBullet only renders the deformable if this is true.

    Known issues:

    • The floor has disappeared from the visual.
    opened by edwin-pan
  • Enables base motion on fetch robot with 1 anchor

    Changes allow the fetch robot to move towards the hanger with an apron.

    Google Doc that explains the changes: https://docs.google.com/document/d/18_9K29K4N6atvtqUxIqKhq6Bt0YSPhQgfWldUWdyvLM/edit?usp=sharing

    There are some TODOs related to removing some hardcoded values and improving the results.

    opened by Nishantjannu
Releases (v0.1)
  • v0.1(Jan 11, 2022)

    This is the initial release of the code and functionality presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks in December 2021.

    Source code(tar.gz)
    Source code(zip)