PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking

Citing PoseRBPF

If you find the PoseRBPF code useful, please consider citing:

@inproceedings{deng2019pose,
author    = {Xinke Deng and Arsalan Mousavian and Yu Xiang and Fei Xia and Timothy Bretl and Dieter Fox},
title     = {PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking},
booktitle = {Robotics: Science and Systems (RSS)},
year      = {2019}
}
@inproceedings{deng2020self,
author    = {Xinke Deng and Yu Xiang and Arsalan Mousavian and Clemens Eppner and Timothy Bretl and Dieter Fox},
title     = {Self-supervised 6D Object Pose Estimation for Robot Manipulation},
booktitle = {International Conference on Robotics and Automation (ICRA)},
year      = {2020}
}

Installation

git clone https://github.com/NVlabs/PoseRBPF.git --recursive
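If you cloned without the --recursive flag, fetch the submodules afterwards with:

git submodule update --init --recursive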

Install dependencies:

  • Install Anaconda according to the official website.
  • Create the virtual environment with pose_rbpf_env.yml:
conda env create -f pose_rbpf_env.yml
conda activate pose_rbpf_env
  • Compile the YCB Renderer according to the instructions.
  • Compile the utility functions with:
sh build.sh

Download

Download files as needed. Extract the CAD models under the cad_models directory, and extract the model weights under the checkpoints directory.
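For example, assuming the checkpoint archives named in the testing sections below (the CAD model archive name here is only illustrative):

tar -xzf ycb_rgbd_full.tar.gz -C ./checkpoints/
tar -xzf ycb_models.tar.gz -C ./cad_models/    # hypothetical archive name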

A quick demo on the YCB Video Dataset


  • The demo shows tracking 003_cracker_box on YCB Video Dataset.
  • Run the script download_demo.sh to download the checkpoint (434 MB), CAD models (743 MB), 2D detections (13 MB), and the necessary data (3 GB) for the demo:
./scripts/download_demo.sh
  • Your files should then be organized as follows:
├── ...
├── PoseRBPF
|   |── cad_models
|   |   |── ycb_models
|   |   └── ...
|   |── checkpoints
|   |   |── ycb_ckpts_roi_rgbd
|   |   |── ycb_codebooks_roi_rgbd
|   |   |── ycb_configs_roi_rgbd
|   |   └── ... 
|   |── detections
|   |   |── posecnn_detections
|   |   |── tless_retina_detections 
|   |── config                      # configuration files for training and DPF
|   |── networks                    # auto-encoder networks
|   |── pose_rbpf                   # particle filters
|   └── ...
|── YCB_Video_Dataset               # to store ycb data
|   |── cameras
|   |── data 
|   |── image_sets 
|   |── keyframes 
|   |── poses
|   └── ...
└── ...
  • Run the demo with 003_cracker_box. The results will be stored in ./results/:
./scripts/run_demo.sh

Online Real-world Pose Estimation using ROS


  • Due to the incompatibility between ROS Kinetic and Python 3, the ROS node only runs with Python 2.7. First create the virtual environment with pose_rbpf_env_py2.yml:
conda env create -f pose_rbpf_env_py2.yml
conda activate pose_rbpf_env_py2
  • Compile the YCB Renderer according to the instructions.
  • Compile the utility functions with:
sh build.sh
  • Make sure you can run the demo above first.
  • Install ROS Kinetic if it is not already installed:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
  • Update the Python packages:
conda install -c auto catkin_pkg
pip install -U rosdep rosinstall_generator wstool rosinstall six vcstools
pip install msgpack
pip install empy
  • Source ROS (every time before launching the node):
source /opt/ros/kinetic/setup.bash
  • Initialize rosdep:
sudo rosdep init
rosdep update

Single object tracking demo:

  • Download demo rosbag:
./scripts/download_ros_demo.sh
  • Run the PoseCNN node (download the PoseCNN weights first; roscore must be running in another terminal):
./scripts/run_ros_demo_posecnn.sh
  • Run PoseRBPF node for RGB-D tracking (with roscore running in another terminal):
./scripts/run_ros_demo.sh
  • (Optional) For RGB tracking run this command instead:
./scripts/run_ros_demo_rgb.sh
  • Run RVIZ in the PoseRBPF directory:
rosrun rviz rviz -d ./ros/tracking.rviz
  • Once you see *** PoseRBPF Ready ... in the PoseRBPF terminal, play the rosbag in another terminal; you should then see the tracking demo:
rosbag play ./ros_data/demo_single.bag

Multiple object tracking demo:

  • Download demo rosbag:
./scripts/download_ros_demo_multiple.sh
  • Run the PoseCNN node (download the PoseCNN weights first; roscore must be running in another terminal):
./scripts/run_ros_demo_posecnn.sh
  • Run the PoseRBPF node with RGB auto-encoder weights trained with self-supervision:
./scripts/run_ros_demo_rgb_multiple_ssv.sh
  • (Optional) Run PoseRBPF node with RGB-D Auto-encoder weights instead:
./scripts/run_ros_demo_multiple.sh
  • (Optional) Run PoseRBPF node with RGB Auto-encoder weights instead:
./scripts/run_ros_demo_rgb_multiple.sh
  • Run RVIZ in the PoseRBPF directory:
rosrun rviz rviz -d ./ros/tracking.rviz
  • Once you see *** PoseRBPF Ready ... in the PoseRBPF terminal, play the rosbag in another terminal; you should then see the tracking demo:
rosbag play ./ros_data/demo_multiple.bag

Note that PoseRBPF takes some time to initialize each object before tracking. You can pause the rosbag playback by pressing the space bar during initialization, then press space again to resume tracking.
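Alternatively, rosbag can start playback in a paused state, so you can let initialization finish before resuming:

rosbag play --pause ./ros_data/demo_multiple.bag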

Testing on the YCB Video Dataset

  • Download checkpoints from the Google Drive folder (ycb_rgbd_full.tar.gz or ycb_rgb_full.tar.gz) and unzip them into the checkpoints directory.
  • Download all the data in the YCB Video Dataset so the ../YCB_Video_Dataset/data folder contains all the sequences.
  • Run RGB-D tracking (use 002_master_chef_can as an example here):
sh scripts/test_ycb_rgbd/val_ycb_002_rgbd.sh 0 1
  • Run RGB tracking (use 002_master_chef_can as an example here):
sh scripts/test_ycb_rgb/val_ycb_002_rgb.sh 0 1

Testing on the T-LESS Dataset

  • Download checkpoints from the Google Drive folder (tless_rgbd_full.tar.gz or tless_rgb_full.tar.gz) and unzip them into the checkpoints directory.
  • Download all the data in the T-LESS Dataset so the ../TLess/ folder contains all the sequences.
  • Download all the models for the T-LESS objects from the Google Drive folder.
  • Your files should then be organized as follows:
├── ...
├── PoseRBPF
|   |── cad_models
|   |   |── ycb_models
|   |   |── tless_models
|   |   └── ...
|   |── checkpoints
|   |   |── tless_ckpts_roi_rgbd
|   |   |── tless_codebooks_roi_rgbd
|   |   |── tless_configs_roi_rgbd
|   |   └── ... 
|   |── detections
|   |   |── posecnn_detections
|   |   |── tless_retina_detections 
|   |── config                      # configuration files for training and DPF
|   |── networks                    # auto-encoder networks
|   |── pose_rbpf                   # particle filters
|   └── ...
|── YCB_Video_Dataset               # to store ycb data
|   |── cameras  
|   |── data 
|   |── image_sets 
|   |── keyframes 
|   |── poses               
|   └── ...   
|── TLess                            # to store tless data
|   |── t-less_v2
|   |   |── test_primesense
|   |   └── ... 
|   └── ...        
└── ...
  • Run RGB-D tracking (use obj_01 as an example here):
sh scripts/test_tless_rgbd/val_tless_01_rgbd.sh 0 1
  • Run RGB tracking (use obj_01 as an example here):
sh scripts/test_tless_rgb/val_tless_01_rgb.sh 0 1

Testing on the DexYCB Dataset

  • Download checkpoints from the Google Drive folder (ycb_rgbd_full.tar.gz or ycb_rgb_full.tar.gz) and unzip them into the checkpoints directory.

  • Download the DexYCB dataset from here.

  • Download PoseCNN results on the DexYCB dataset from here.

  • Create symlinks for the DexYCB dataset and the PoseCNN results ($ROOT is the PoseRBPF checkout; $dex_ycb_data and $results_posecnn_data point to the downloaded data):

    cd $ROOT/data/DEX_YCB
    ln -s $dex_ycb_data data
    ln -s $results_posecnn_data results_posecnn
  • Install PyTorch PoseCNN layers according to the instructions here.

  • Run RGB-D tracking:

    ./scripts/test_dex_rgbd/dex_ycb_test_rgbd_s0.sh $GPU_ID
    
  • Run RGB tracking:

    ./scripts/test_dex_rgb/dex_ycb_test_rgb_s0.sh $GPU_ID
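In both scripts, $GPU_ID presumably selects the CUDA device index; for example, to evaluate RGB-D tracking on GPU 0:

    ./scripts/test_dex_rgbd/dex_ycb_test_rgbd_s0.sh 0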
    

Training

  • Download the Microsoft COCO 2017 validation images from here for data augmentation (see the snippet after this list).
  • Store the val2017 folder under ../coco/.
  • Run the training example for 002_master_chef_can from the YCB objects. Training should run on a single NVIDIA TITAN Xp GPU:
sh scripts/train_ycb_rgbd/train_script_ycb_002.sh
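To fetch the COCO images referenced in the first step, one option is the official COCO download server (~1 GB archive):

wget http://images.cocodataset.org/zips/val2017.zip
mkdir -p ../coco
unzip val2017.zip -d ../coco/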

Acknowledgements

We have adapted part of the RoI align code from maskrcnn-benchmark.

License

PoseRBPF is licensed under the NVIDIA Source Code License - Non-commercial.
