Cross View SLAM

Overview

This is the associated code and dataset repository for our paper

I. D. Miller et al., "Any Way You Look at It: Semantic Crossview Localization and Mapping With LiDAR," in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2397-2404, April 2021, doi: 10.1109/LRA.2021.3061332.

See also our accompanying video

XView demo video

Compilation

We release the localization portion of the system, which can be integrated with a LiDAR-based mapper of the user's choice. The system requires ROS and should be built as a catkin package. We have tested with ROS Melodic on Ubuntu 18.04. Note that we require GCC 9 or greater as well as Intel TBB.

Datasets

Our datasets

We release our own datasets from around University City in Philadelphia and Morgantown, PA. They can be downloaded here. Ucity2 was taken several months after Ucity, and both follow the same path. These datasets are in rosbag format and include the following topics:

  • /lidar_rgb_calib/painted_pc is the semantically labelled, motion-compensated pointcloud. Classes are encoded as a per-point color, with each channel equal to the class ID (see the decoding sketch after this list). Classes are based on Cityscapes and listed below.
  • /os1_cloud_node/imu is raw IMU data from the Ouster OS1-64.
  • /quad/front/image_color/compressed is a compressed RGB image from the forward-facing camera.
  • /subt/global_pose is the global pose estimate from UPSLAM.
  • /subt/integrated_pose is the integrated pose estimate from UPSLAM. It differs from the above in that it does not take loop closures into account, and it is used as the motion prior for the localization filter.
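
As a rough sketch of how the per-point class IDs can be recovered, the Python snippet below reads the painted cloud from a bag. The bag filename and the 'rgb' field name are assumptions, so confirm the actual PointCloud2 field layout (e.g. with rostopic echo) before relying on it.

# Sketch: recover per-point class IDs from the painted pointcloud.
# 'ucity.bag' and the 'rgb' field name are assumptions; inspect the bag
# to confirm the actual field layout.
import struct

import rosbag
import sensor_msgs.point_cloud2 as pc2

with rosbag.Bag('ucity.bag') as bag:
    for _, msg, _ in bag.read_messages(topics=['/lidar_rgb_calib/painted_pc']):
        for x, y, z, rgb in pc2.read_points(msg, field_names=('x', 'y', 'z', 'rgb')):
            # The float 'rgb' field packs three color bytes; every channel
            # holds the class ID, so reading any single byte suffices.
            packed = struct.unpack('I', struct.pack('f', rgb))[0]
            class_id = packed & 0xFF
        break  # one message is enough for this illustration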

Please note that the UPSLAM odometry was generated purely from LiDAR without semantics, and is provided to act as a loose motion prior. It should not be used as ground truth.

If you require access to the raw data for your work, please reach out directly at iandm (at) seas (dot) upenn (dot) edu.

KITTI

We provide a derivative of the excellent kitti2bag tool in the scripts directory, modified to use semantics from SemanticKITTI. To use this tool, you will need to download the raw synced + rectified data from KITTI as well as the SemanticKITTI data. Your final directory structure should look like

2011_09_30
  2011_09_30_drive_0033_sync  
    image_00
      timestamps.txt
      data
    image_01
      timestamps.txt
      data
    image_02
      timestamps.txt
      data
    image_03
      timestamps.txt
      data
    labels
      000000.label
      000001.label
      ...
    oxts
      dataformat.txt
      timestamps.txt
      data
    velodyne_points
      timestamps_end.txt  
      timestamps_start.txt
      timestamps.txt
      data
  calib_cam_to_cam.txt  
  calib_imu_to_velo.txt  
  calib_velo_to_cam.txt

You can then run ./kitti2bag.py -t 2011_09_30 -r 0033 raw_synced /path/to/kitti in order to generate a rosbag usable with our system.
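
The .label files use the standard SemanticKITTI format: one little-endian uint32 per LiDAR point, with the semantic class in the lower 16 bits and the instance ID in the upper 16. A minimal Python sketch for inspecting one (the path is a placeholder):

# Minimal sketch: inspect a SemanticKITTI .label file.
import numpy as np

labels = np.fromfile('labels/000000.label', dtype=np.uint32)
semantic = labels & 0xFFFF  # per-point semantic class
instance = labels >> 16     # per-point instance ID
print('classes present:', np.unique(semantic))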

Classes

Class   Label
2       Building
7       Vegetation
13      Vehicle
100     Road/Parking Lot
102     Ground/Sidewalk
255     Unlabelled
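
For convenience when post-processing the labelled clouds, here is the same mapping as a small Python dictionary (a direct transcription of the table above):

# Class-ID-to-label mapping, transcribed from the table above.
CLASS_LABELS = {
    2: 'Building',
    7: 'Vegetation',
    13: 'Vehicle',
    100: 'Road/Parking Lot',
    102: 'Ground/Sidewalk',
    255: 'Unlabelled',
}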

Usage

We provide a launch file for KITTI and for our datasets. To run, simply launch the appropriate launch file and play the bag. Note that the first time the system runs on a map, or whenever the map data or relevant parameters change, it will take several minutes to generate the processed map TDF. The result is cached, so subsequent runs with unchanged parameters start quickly. The system startup should look something like

[ INFO] [1616266360.083650372]: Found cache, checking if parameters have changed
[ INFO] [1616266360.084357050]: No cache found, loading raster map
[ INFO] [1616266360.489371763]: Computing distance maps...
[ INFO] [1616266360.489428570]: maps generated
[ INFO] [1616266360.597603324]: transforming coord
[ INFO] [1616266360.641200529]: coord rotated
[ INFO] [1616266360.724551466]: Sample grid generated
[ INFO] [1616266385.379985385]: class 0 complete
[ INFO] [1616266439.390797168]: class 1 complete
[ INFO] [1616266532.004976919]: class 2 complete
[ INFO] [1616266573.041695479]: class 3 complete
[ INFO] [1616266605.901935236]: class 4 complete
[ INFO] [1616266700.533124618]: class 5 complete
[ INFO] [1616266700.537600570]: Rasterization complete
[ INFO] [1616266700.633949062]: maps generated
[ INFO] [1616266700.633990791]: transforming coord
[ INFO] [1616266700.634004336]: coord rotated
[ INFO] [1616266700.634596830]: maps generated
[ INFO] [1616266700.634608101]: transforming coord
[ INFO] [1616266700.634618110]: coord rotated
[ INFO] [1616266700.634666000]: Initializing particles...
[ INFO] [1616266700.710166543]: Particles initialized
[ INFO] [1616266700.745398596]: Setup complete

ROS Topics

  • /cross_view_slam/gt_pose Input, ground truth localization, which, if provided, is drawn on the map for visualization. Not used by the filter.
  • /cross_view_slam/pc Input, the pointwise-labelled pointcloud
  • /cross_view_slam/motion_prior Input, the prior odometry (from some LiDAR odometry system)
  • /cross_view_slam/map Output image of map with particles
  • /cross_view_slam/scan Output image visualization of flattened polar LiDAR scan
  • /cross_view_slam/pose_est Estimated pose of the robot with uncertainty; not published until convergence (see the subscriber sketch after this list).
  • /cross_view_slam/scale Estimated scale of the map in px/m, not published until convergence
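
As a sketch only, a minimal Python listener for the converged pose is shown below. The message type is an assumption (PoseWithCovarianceStamped matches a pose with uncertainty), so verify it with rostopic info /cross_view_slam/pose_est before use.

# Hypothetical listener sketch; the message type is an assumption, not
# confirmed by this README.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_pose(msg):
    p = msg.pose.pose.position
    rospy.loginfo('converged pose estimate: x=%.2f y=%.2f', p.x, p.y)

if __name__ == '__main__':
    rospy.init_node('xview_pose_listener')
    rospy.Subscriber('/cross_view_slam/pose_est', PoseWithCovarianceStamped, on_pose)
    rospy.spin()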

ROS Parameters

  • raster_res Resolution at which to rasterize the SVG; 1 is typically fine.
  • use_raster If true, load pre-rasterized map images; otherwise load the map SVG. When the SVG is loaded, raster images are automatically generated in the accompanying folder.
  • map_path Path to map file.
  • svg_res Resolution of the map in px/m. If not specified, the localizer will try to estimate it.
  • svg_origin_x Origin of the map in pixel coordinates, x value. Used only for ground truth visualization.
  • svg_origin_y Origin of the map in pixel coordinates, y value. Used only for ground truth visualization.
  • use_motion_prior If true, use the provided motion estimate; otherwise, use a zero-velocity prior.
  • num_particles Number of particles to use in the filter.
  • filter_pos_cov Motion prior uncertainty in position.
  • filter_theta_cov Motion prior uncertainty in bearing.
  • filter_regularization Gamma in the paper; see the paper for details.
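
These parameters are normally set in the launch file. Purely as an illustration, a few could also be set from Python before the node starts; the /cross_view_slam namespace and the values below are assumptions, not recommended settings.

# Illustrative only: parameter names come from the list above, but the
# namespace and values are assumptions, not recommended settings.
import rospy

rospy.set_param('/cross_view_slam/num_particles', 3000)
rospy.set_param('/cross_view_slam/use_motion_prior', True)
rospy.set_param('/cross_view_slam/filter_pos_cov', 0.1)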

Citation

If you find this work or datasets helpful, please cite

@ARTICLE{9361130,
author={I. D. {Miller} and A. {Cowley} and R. {Konkimalla} and S. S. {Shivakumar} and T. {Nguyen} and T. {Smith} and C. J. {Taylor} and V. {Kumar}},
journal={IEEE Robotics and Automation Letters},
title={Any Way You Look at It: Semantic Crossview Localization and Mapping With LiDAR},
year={2021},
volume={6},
number={2},
pages={2397-2404},
doi={10.1109/LRA.2021.3061332}}