Elevation Mapping cupy

Overview

This is a ROS package for elevation mapping on GPU. The code is written in Python and uses CuPy for GPU computation.

* plane segmentation is coming soon.

Citing

Takahiro Miki, Lorenz Wellhausen, Ruben Grandia, Fabian Jenelten, Timon Homberger, Marco Hutter
Elevation Mapping for Locomotion and Navigation using GPU, arXiv preprint: https://arxiv.org/abs/2204.12876

@misc{https://doi.org/10.48550/arxiv.2204.12876,
  doi = {10.48550/ARXIV.2204.12876},
  url = {https://arxiv.org/abs/2204.12876},
  author = {Miki, Takahiro and Wellhausen, Lorenz and Grandia, Ruben and Jenelten, Fabian and Homberger, Timon and Hutter, Marco},
  keywords = {Robotics (cs.RO), FOS: Computer and information sciences},
  title = {Elevation Mapping for Locomotion and Navigation using GPU},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Installation

CUDA & cuDNN

The tested versions are CUDA 10.2 and 11.6.

Install CUDA and cuDNN by following the official NVIDIA installation guides.

Python dependencies

You will need numpy, scipy, shapely, and cupy.

For the traversability filter, you need either pytorch or chainer (see below).

Optionally, opencv-python is needed for the inpainting filter.

Install numpy, scipy, shapely, and opencv-python with the following command.

pip3 install -r requirements.txt

Cupy

CuPy should be installed with the wheel matching your CUDA version. (On Jetson, only building from source, i.e. plain pip install cupy, worked.)

For CUDA 10.2: pip install cupy-cuda102
For CUDA 11.0: pip install cupy-cuda110
For CUDA 11.1: pip install cupy-cuda111
For CUDA 11.2: pip install cupy-cuda112
For CUDA 11.3: pip install cupy-cuda113
For CUDA 11.4: pip install cupy-cuda114
For CUDA 11.5: pip install cupy-cuda115
For CUDA 11.6: pip install cupy-cuda116
(Install CuPy from source): pip install cupy
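
To confirm that CuPy was installed correctly and can see the GPU, a quick sanity check like the following can help (a minimal sketch; it assumes a working CUDA driver and a visible device):

import cupy as cp

# Report the number of visible CUDA devices and run a trivial computation on the GPU.
print("CUDA devices:", cp.cuda.runtime.getDeviceCount())
x = cp.arange(9, dtype=cp.float32).reshape(3, 3)
print("Sum computed on GPU:", float(x.sum()))  # expected: 36.0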

Traversability filter

You can choose either pytorch or chainer to run the CNN-based traversability filter.
Install it by following the official documentation.

Pytorch uses ~2 GB more GPU memory than Chainer, but runs a bit faster.
Use the parameter use_chainer to select which backend to use.
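
If you pick the PyTorch backend, it is worth verifying that it can actually use the GPU before launching the node (a minimal sketch, assuming PyTorch is already installed):

import torch

# The CNN-based traversability filter needs a CUDA-capable device.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))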

ROS package dependencies

sudo apt install ros-noetic-pybind11-catkin
sudo apt install ros-noetic-grid-map-core ros-noetic-grid-map-msgs

On Jetson

CUDA & cuDNN

CUDA and cuDNN can be installed via apt; they come with nvidia-jetpack. The tested version is JetPack 4.5 with L4T 32.5.0.

Python dependencies

On Jetson, you need the PyTorch wheel built for its CPU architecture (aarch64):

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
pip3 install Cython
pip3 install numpy==1.19.5 torch-1.8.0-cp36-cp36m-linux_aarch64.whl

Also, you need to install cupy with

pip3 install cupy

This builds the package from source, so it will take some time.

ROS dependencies

sudo apt install ros-melodic-pybind11-catkin
sudo apt install ros-melodic-grid-map-core ros-melodic-grid-map-msgs

Also, on Jetson you need a Fortran compiler (it should already be installed).

sudo apt install gfortran

If the Jetson is set up with JetPack 4.5 and ROS Melodic, the following package is additionally required:

git clone git@github.com:ros/filters.git -b noetic-devel

Usage

Build

catkin build elevation_mapping_cupy

Errors

If you get an error such as

CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
  Could NOT find PythonInterp: Found unsuitable version "2.7.18", but
  required is at least "3" (found /usr/bin/python)

build with the following option:

catkin build elevation_mapping_cupy -DPYTHON_EXECUTABLE=$(which python3)

Run

Basic usage.

roslaunch elevation_mapping_cupy elevation_mapping_cupy.launch

Run TurtleBot example

First, install the TurtleBot3 simulation packages.

sudo apt install ros-noetic-turtlebot3*

Then, you can run the example.

export TURTLEBOT3_MODEL=waffle
roslaunch elevation_mapping_cupy turtlesim_example.launch

To control the robot with a keyboard, open a new terminal window and run

export TURTLEBOT3_MODEL=waffle
roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch

Velocity inputs can be sent to the robot by pressing the keys a, w, d, x. To stop the robot completely, press s.

Subscribed Topics

  • topics specified in pointcloud_topics in parameters.yaml ([sensor_msgs/PointCloud2])

    The distance measurements. (A minimal example publisher is sketched after this list.)

  • /tf ([tf/tfMessage])

    The transformation tree.
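
Any node that publishes a standard sensor_msgs/PointCloud2 can feed the map. The sketch below publishes a dummy cloud from Python; the topic name and frame_id are placeholders and must match your pointcloud_topics entry and your TF tree:

import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

rospy.init_node("dummy_cloud_publisher")
pub = rospy.Publisher("/points", PointCloud2, queue_size=1)  # placeholder topic name
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # frame_id is a placeholder; it must be connected to the map frame via /tf.
    header = Header(stamp=rospy.Time.now(), frame_id="sensor_frame")
    points = [[0.5, 0.0, 0.1], [0.6, 0.1, 0.1]]  # x, y, z in the sensor frame
    pub.publish(pc2.create_cloud_xyz32(header, points))
    rate.sleep()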

Published Topics

The topics are published as configured via rosparam.
You can specify which layers to publish and at what rate (fps).

Under publishers, you can specify the topic_name, layers, basic_layers, and fps.

publishers:
  your_topic_name:
    layers: ['list_of_layer_names', 'layer1', 'layer2']             # Choose from 'elevation', 'variance', 'traversability', 'time' + plugin layers
    basic_layers: ['list of basic layers', 'layer1']                # basic_layers for valid cell computation (e.g. Rviz): Choose a subset of `layers`.
    fps: 5.0                                                        # Publish rate. Use smaller value than `map_acquire_fps`.

Example setting in config/parameters.yaml.

  • elevation_map_raw ([grid_map_msgs/GridMap])

    The entire elevation map.

  • elevation_map_recordable ([grid_map_msgs/GridMap])

    The entire elevation map, published at a slower rate for visualization and logging.

  • elevation_map_filter ([grid_map_msgs/GridMap])

    The maps filtered by the plugins.
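
To consume a published map from Python, subscribe to the GridMap message; each entry in msg.layers has a matching Float32MultiArray in msg.data (a minimal sketch; the topic name is an assumption and depends on your launch file and namespace):

import rospy
from grid_map_msgs.msg import GridMap


def callback(msg):
    # Layer names and their flattened cell data are stored in parallel lists.
    if "elevation" in msg.layers:
        idx = msg.layers.index("elevation")
        cells = msg.data[idx].data  # flattened float32 values, NaN for empty cells
        rospy.loginfo("elevation layer: %d cells", len(cells))


rospy.init_node("elevation_map_listener")
rospy.Subscriber("elevation_mapping/elevation_map_raw", GridMap, callback, queue_size=1)
rospy.spin()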

Plugins

You can create your own plugin to process the elevation map and publish it as a layer in the GridMap message.

Let's look at the example.

First, create your plugin file in elevation_mapping_cupy/script/plugins/ and save it as example.py.

import cupy as cp
from typing import List
from .plugin_manager import PluginBase


class NameOfYourPlugin(PluginBase):
    def __init__(self, add_value: float = 1.0, **kwargs):
        super().__init__()
        self.add_value = float(add_value)

    def __call__(self, elevation_map: cp.ndarray, layer_names: List[str],
                 plugin_layers: cp.ndarray, plugin_layer_names: List[str]) -> cp.ndarray:
        # Process maps here.
        # You can also use the other plugins' data through plugin_layers.
        new_elevation = elevation_map[0] + self.add_value
        return new_elevation

Then, add your plugin settings to config/plugin_config.yaml:

example:                                      # Use the same name as your file name.
  enable: True                                # Whether to load this plugin.
  fill_nan: True                              # Fill invalid cells of the elevation layer with NaN.
  is_height_layer: True                       # Whether this is a height layer (such as elevation) or not (such as traversability).
  layer_name: "example_layer"                 # The layer name.
  extra_params:                               # These params are passed to the plugin class on initialization.
    add_value: 2.0                            # Example param

Finally, add your layer name to publishers in config/parameters.yaml. You can create a new topic or add it to existing topics.

  plugin_example:   # Topic name
    layers: ['elevation', 'example_layer']
    basic_layers: ['example_layer']
    fps: 1.0        # The plugin is called with this fps.
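
To get a feel for the data a plugin receives, you can call a plugin-style function directly with dummy CuPy arrays. The sketch below mirrors the example plugin's logic; the layer names, layer count, and map size are placeholders, not the package's actual internal layout:

import cupy as cp
from typing import List


def example_plugin(elevation_map: cp.ndarray, layer_names: List[str],
                   plugin_layers: cp.ndarray, plugin_layer_names: List[str],
                   add_value: float = 2.0) -> cp.ndarray:
    # Same idea as the example plugin above: shift the elevation layer by a constant.
    return elevation_map[layer_names.index("elevation")] + add_value


# Placeholder shapes: 4 internal layers, 200 x 200 cells, one plugin layer.
elevation_map = cp.zeros((4, 200, 200), dtype=cp.float32)
layer_names = ["elevation", "variance", "traversability", "time"]
plugin_layers = cp.zeros((1, 200, 200), dtype=cp.float32)
out = example_plugin(elevation_map, layer_names, plugin_layers, ["example_layer"])
print(out.shape)  # (200, 200)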