Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation

Overview

SUO-SLAM

This repository hosts the code for our CVPR 2022 paper "Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation". ArXiv link.

Citation

If you use any part of this repository in an academic work, please cite our paper as:

@inproceedings{Merrill2022CVPR,
  Title      = {Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation},
  Author     = {Nathaniel Merrill and Yuliang Guo and Xingxing Zuo and Xinyu Huang and Stefan Leutenegger and Xi Peng and Liu Ren and Guoquan Huang},
  Booktitle  = {2022 Conference on Computer Vision and Pattern Recognition (CVPR)},
  Year       = {2022},
  Address    = {New Orleans, USA},
  Month      = jun,
}

Installation

This codebase was tested on Ubuntu 18.04. To use the BOP rendering (i.e., for keypoint labeling), install
sudo apt install libfreetype6-dev libglfw3

You will also need a Python environment that contains the required packages. To see what packages we used, check out the list in requirements.txt. They can be installed via pip install -r requirements.txt.
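For example, one way to set this up is with a virtual environment (a sketch; the environment name is arbitrary, and conda or any other environment manager works just as well):

$ python3 -m venv suo_slam_env
$ source suo_slam_env/bin/activate
$ pip install -r requirements.txt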

Preparing Data


Datasets

To be able to run the training and testing (i.e. single view or with SLAM), first decide on a place to download the data to. The disk will need a few hundred GB of space for all the data (at least 150GB for download and more to extract). All of our code expects the data to be in a local directory ./data, but you can of course symlink this to another location (perhaps with more disk space). So, first of all, in the root of this repo run

$ mkdir data

or to symlink to an external location

$ ln -s /path/to/drive/with/space/ ./data

You can pick and choose what data you want to download (for example, just T-LESS or YCBV). Note that all YCBV and T-LESS downloads have our keypoint labels packaged along with the data. Download the following Google Drive links into ./data and extract them.
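As a sketch, assuming the downloaded archives are zip files sitting in ./data, extraction could look like:

$ cd data
$ for f in *.zip; do unzip "$f"; done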

When all is said and done, the tree should look like this

$ cd ./data && tree --filelimit 3
.
├── bop_datasets
│   ├── tless 
│   └── ycbv 
├── saved_detections
└── VOCdevkit
    └── VOC2012

Pre-trained models

You can download the pretrained models anywhere, but I like to keep them in the results directory that is written to during training.
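For example (the checkpoint file name here is hypothetical; use whatever model file you actually downloaded):

$ mkdir -p results
$ mv ~/Downloads/pretrained_ycbv.pt ./results/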

Training


First set the default arguments in ./lib/args.py for your username if desired, then execute

$ ./train.py

with the appropriate arguments for your filesystem. You can also run

$ ./train.py -h

for a full list of arguments and their meanings. Some important args are batch_size, which is the number of images loaded for each training batch. Note that there may be a variable number of objects in each image, and the objects are all stacked together into one big batch to run the network, so the actual batch size being run may be several times batch_size. In order to keep batch_size reasonably large, we provide another arg called truncate_obj, which, as the help says, truncates the object batch to this number if it exceeds it. We recommend that you start with a large batch size so that you can find the maximum truncate_obj for your GPUs, then reduce the batch size until there are few to no warnings about too many objects being truncated.
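As a concrete sketch (the flag names mirror the args described above, but the values are only illustrative; check ./train.py -h for the exact names and defaults):

$ ./train.py --batch_size 8 --truncate_obj 64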

Evaluation


Before you can evaluate in a single-view or SLAM fashion, you will need to build the third-party libraries for PnP and graph optimization. First make sure that you have the Ceres solver installed, then run

$ ./build_thirdparty.sh
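If Ceres is not already installed, on Ubuntu it is typically available from the package manager (a sketch; building Ceres from source per its own documentation also works):

$ sudo apt install libceres-dev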

Reproducing Results

To reproduce the results of the paper with the pretrained models, check out the scripts under the scripts directory:

eval_all_tless.sh  eval_all_ycbv.sh  make_video.sh

These will reproduce most of the results in the paper as well as any video clips you want. You may have to change the first few lines of each script for your filesystem. Note that these scripts also show the proper arguments if you want to run directly from the command line.
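For example, to reproduce the YCBV results from the repository root (a sketch, assuming the data and pretrained models are already in place and the script has been edited for your filesystem):

$ bash scripts/eval_all_ycbv.sh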

Note that for the T-LESS dataset, we use the third-party BOP toolkit to get the VSD error recall, which will show up in the final terminal output as "Mean object recall" among other numbers.

Labeling


Overview

We manually label keypoints on the CAD models so that some keypoints carry semantic meaning. For the full list of keypoint meanings, see the dedicated README.

We provide our landmark labeling tool in the script manual_keypoints.py. This same script can also be used to render a visualization of the keypoints with the --viz option.
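For example, to open the keypoint visualization (a sketch; run the script with -h for any additional arguments, such as which dataset or object to load):

$ ./manual_keypoints.py --viz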

The script will show a panel of views of the same object, each oriented slightly differently. The idea is that you pick the same keypoint multiple times to ensure correctness and to get a better label by averaging multiple samples.

The script will also print the following directions to follow in the terminal.

============= Welcome ===============
Select the keypoints with a left click!
Use the "wasd" to turn the objects.
Press "i" to zoom in and "o" to zoom out.
Make sure that the keypoint colors match between all views.
Messed up? Just press 'u' to undo.
Press "Enter" to finish and save the keypoints
Press "Esc" to just quit

Once you have pressed "Enter", you will get to an inspection panel.

The unscaled mean keypoints are shown on the left, and the ones scaled by covariance on the right, where the ellipses are the Gaussian 3-sigma bounds projected onto the image. If the covariance is too large, or the mean is out of place, then you may have messed up. Again, the program will print out these directions to the terminal:

Inspect the results!
Use the "wasd" to turn the object.
Press "i" to zoom in and "o" to zoom out.
Press "Esc" to go back, "Enter" to accept (saving keypoints and viewpoint for vizualization).
Please pick a point on the object!

If the result looks good, press "Enter"; if not, press "Esc" to go back. Also make sure that, when you are done, you rotate and scale the object into the best "view pose" (with the front facing the camera and the top facing up), as this pose is used by both the above visualization and the actual training code to determine the best symmetry to pick for an initial detection.

Labeling Tips

Even though there are 8 panels, you don't need to fill out all 8. Each keypoint just needs at least 3 samples to estimate the covariance.

We recommend that you label the same keypoint (say keypoint i) on all the object renderings first, then go to the inspection panel each time, so that you can easily undo a mistake for keypoint i with the "u" key without losing any work. Otherwise, if you label each object rendering completely, you may have to undo a lot of labels that were not mistakes.

Also, if you want to label a point that lies in a void of the CAD model, like the top center of the bowl, then you can use the multiple samples to your advantage and choose samples that average to the desired result, since the labels are required to land on the actual CAD model in the labeling tool.

