SceneCollisionNet

This repo contains the code for "Object Rearrangement Using Learned Implicit Collision Functions", an ICRA 2021 paper. For more information, please visit the project website.

License

This repo is released under the NVIDIA Source Code License. For business inquiries, please contact [email protected]. For press and other inquiries, please contact Hector Marinez at [email protected].

Install and Setup

Clone and install the repo (we recommend a virtual environment, especially if training or benchmarking, to avoid dependency conflicts):

git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .
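
If you want to use a virtual environment as suggested, one minimal way to set one up (assuming Python 3 with the built-in venv module; the environment name below is arbitrary) is:

# create and activate a fresh environment before installing (name is arbitrary)
python3 -m venv scenecollisionnet-env
source scenecollisionnet-env/bin/activate
pip install --upgrade pip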

These commands install the minimum dependencies needed for generating a mesh dataset and for training/benchmarking through Docker. If you instead wish to train or benchmark without Docker, please first install an appropriate version of PyTorch and the corresponding version of PyTorch Scatter for your system (a sample install is sketched after these commands), then execute:

git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .[train]

If benchmarking, replace train in the last command with bench.
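
For reference, a PyTorch + PyTorch Scatter install targeting CUDA 10.2 might look like the sketch below; the versions and wheel index URL are assumptions, so check the official PyTorch and PyTorch Scatter installation instructions for the combination that matches your driver and CUDA setup.

# example only: versions and wheel index URL are assumptions for a CUDA 10.2 setup
pip install torch==1.8.1
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html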

To roll out the object rearrangement MPPI policy in a simulated tabletop environment, first download Isaac Gym and place it in the extern folder within this repo. Next, follow the installation instructions above for training, but replace the train option with policy.
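
Concretely, this setup might look like the following sketch; the Isaac Gym archive name is an assumption, so substitute whichever release you downloaded from NVIDIA.

# extract the downloaded Isaac Gym release into extern/ (archive name is an assumption)
tar -xzf IsaacGym_Preview_2_Package.tar.gz -C extern/
# install with the policy extras instead of train
pip install -e .[policy]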

To download the pretrained weights for benchmarking or policy rollout, run bash scripts/download_weights.sh.

Generating a Mesh Dataset

To save time during training/benchmarking, meshes are preprocessed and mesh stable poses are calculated offline. SceneCollisionNet was trained using the ACRONYM dataset. To use this dataset for training or benchmarking, download the ShapeNetSem meshes here (note: you must first register for an account) and the ACRONYM grasps here. Next, build Manifold (an external library included as a submodule):

./scripts/install_manifold.sh

Then, use the following script to generate a preprocessed version of the ACRONYM dataset:

python tools/generate_acronym_dataset.py /path/to/shapenetsem/meshes /path/to/acronym datasets/shapenet

If you have your own set of meshes, run:

python tools/generate_mesh_dataset.py /path/to/meshes datasets/your_dataset_name

Note that this dataset will not include grasp data, which is not needed for training or benchmarking SceneCollisionNet, but is used for rolling out the MPPI policy.

Training/Benchmarking with Docker

First, install Docker and nvidia-docker2 following the instructions here. Pull the SceneCollisionNet docker image from DockerHub (tag scenecollisionnet) or build locally using the provided Dockerfile (docker build -t scenecollisionnet .). Then, use the appropriate configuration .yaml file in cfg to set training or benchmarking parameters (note that cfg file paths are relative to the Docker container, not the local machine) and run one of the commands below (replacing paths with your local paths as needed; -v requires absolute paths).
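
Before launching the containers below, it can help to confirm that the NVIDIA container runtime is working; a common smoke test (the CUDA base image tag here is an assumption) is:

# verify GPU access from inside a container (image tag is an assumption)
docker run --gpus all --rm nvidia/cuda:10.2-base nvidia-smi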

Train a SceneCollisionNet

Edit cfg/train_scenecollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_scenecollisionnet_docker.sh

Train a RobotCollisionNet

Edit cfg/train_robotcollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_robotcollisionnet_docker.sh

Benchmark a SceneCollisionNet

Edit cfg/benchmark_scenecollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:ro -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_scenecollisionnet_docker.sh

Benchmark a RobotCollisionNet

Edit cfg/benchmark_robotcollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_robotcollisionnet_docker.sh

Loss Plots

To get loss plots while training, run:

docker exec -d <container_name> python3 tools/loss_plots.py /models/<model_name>/log.csv

Benchmark FCL or SDF Baselines

Edit cfg/benchmark_baseline.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/benchmark_results:/benchmark:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/benchmark_baseline_docker.sh

Training/Benchmarking without Docker

First, install system dependencies. The system dependencies listed assume an Ubuntu 18.04 install with NVIDIA drivers >= 450.80.02 and CUDA 10.2. You can adjust the dependencies accordingly for different driver/CUDA versions. Note that the NVIDIA drivers come packaged with EGL, which is used during training and benchmarking for headless rendering on the GPU.

System Dependencies

See the Dockerfile for a full list. For training/benchmarking, you will need the following packages (an example install command follows the list):

python3-dev
python3-pip
ninja-build
libcudnn8=8.1.1.33-1+cuda10.2
libcudnn8-dev=8.1.1.33-1+cuda10.2
libsm6
libxext6
libxrender-dev
freeglut3-dev
liboctomap-dev
libfcl-dev
gifsicle
libfreetype6-dev
libpng-dev
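
For example, on Ubuntu 18.04 these can typically be installed with apt; note that the libcudnn8 packages come from NVIDIA's CUDA apt repository, which must be configured first.

sudo apt-get update
# general build/rendering dependencies
sudo apt-get install -y python3-dev python3-pip ninja-build libsm6 libxext6 libxrender-dev freeglut3-dev liboctomap-dev libfcl-dev gifsicle libfreetype6-dev libpng-dev
# cuDNN packages (pinned versions from the list above; requires the NVIDIA CUDA apt repository)
sudo apt-get install -y libcudnn8=8.1.1.33-1+cuda10.2 libcudnn8-dev=8.1.1.33-1+cuda10.2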

Python Dependencies

Follow the instructions above to install the necessary dependencies for your use case (either the train, bench, or policy options).

Train a SceneCollisionNet

Edit cfg/train_scenecollisionnet.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/train_scenecollisionnet.py

Train a RobotCollisionNet

Edit cfg/train_robotcollisionnet.yaml, then run:

python tools/train_robotcollisionnet.py

Benchmark a SceneCollisionNet

Edit cfg/benchmark_scenecollisionnet.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/benchmark_scenecollisionnet.py

Benchmark a RobotCollisionNet

Edit cfg/benchmark_robotcollisionnet.yaml, then run:

python tools/benchmark_robotcollisionnet.py

Benchmark FCL or SDF Baselines

Edit cfg/benchmark_baseline.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/benchmark_baseline.py

Policy Rollout

To view a rearrangement MPPI policy rollout in a simulated Isaac Gym tabletop environment, run the following command (note that this requires a local machine with an available GPU and display):

python tools/rollout_policy.py --self-coll-nn weights/self_coll_nn --scene-coll-nn weights/scene_coll_nn --control-frequency 1

The full set of options for this command can be viewed using the --help command line argument and set with the corresponding flags. If you get RuntimeError: CUDA out of memory, try reducing the horizon (--mppi-horizon, default 40), the number of trajectories (--mppi-num-rollouts, default 200), or the number of collision steps (--mppi-collision-steps, default 10); note that this may affect policy performance.
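
For instance, a lower-memory rollout using the flags above might look like the following; the reduced values are arbitrary examples.

python tools/rollout_policy.py --self-coll-nn weights/self_coll_nn --scene-coll-nn weights/scene_coll_nn --control-frequency 1 --mppi-horizon 20 --mppi-num-rollouts 100 --mppi-collision-steps 5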

Citation

If you use this code in your own research, please consider citing:

@inproceedings{danielczuk2021object,
  title={Object Rearrangement Using Learned Implicit Collision Functions},
  author={Danielczuk, Michael and Mousavian, Arsalan and Eppner, Clemens and Fox, Dieter},
  booktitle={Proc. IEEE Int. Conf. Robotics and Automation (ICRA)},
  year={2021}
}