DLL: Direct Lidar Localization


Summary

This package presents DLL, a direct map-based localization technique using a 3D LIDAR for its application to aerial robots. DLL implements a point cloud to map registration based on non-linear optimization of the distance between the points and the map, thus requiring neither features nor point correspondences. Given an initial pose, the method is able to track the pose of the robot by refining the predicted pose from odometry. The method performs much better than Monte-Carlo localization methods and achieves precision comparable to other optimization-based approaches while running one order of magnitude faster. The method is also robust under odometric errors.
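
In rough terms, DLL searches for the sensor pose T that minimizes the map distance field D evaluated at the transformed LIDAR points p_i. The following is a sketch of the cost implied by the summary above, not necessarily the exact residual used in the code:

\[
T^{*} = \arg\min_{T} \sum_{i} D\left(T \, p_i\right)^{2}
\]

Since D is precomputed as a grid (the Distance Field described below), each residual evaluation reduces to an interpolated grid lookup, which is what makes the optimization fast.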

DLL is fully integrated in the Robot Operating System (ROS) and follows the general ROS localization approach: DLL uses the sensor data to compute the transform that best fits the robot odometry TF into the map. Although an odometry system is recommended for fast and accurate localization, DLL also performs well without odometry information if the robot moves smoothly.

[Video: DLL experimental results in different setups]

Software dependencies

There are no hard dependencies except Google Ceres Solver and ROS.
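
For reference, on Debian/Ubuntu systems Ceres is available from the standard package archives; a minimal sketch (the package name may differ on other distributions):

# Debian/Ubuntu package name; adjust for your distribution
$ sudo apt-get install libceres-dev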

Hardware requirements

DLL has been tested on a 10th-generation Intel i7 processor with 16 GB of RAM. No graphics card is needed. The optimization is currently configured to be single threaded. You can reduce the processing time by about 33% simply by increasing the number of threads used by Ceres Solver.

Compilation

Download this source code into the src folder of your catkin workspace:

$ cd catkin_ws/src
$ git clone https://github.com/robotics-upo/dll

Compile the project:

$ cd catkin_ws
$ catkin_make
$ source devel/setup.bash
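
If the build succeeds, the package should be visible to standard ROS tools such as rospack:

$ rospack find dll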

How to use DLL

You can find several example launch files in the launch directory. The module needs the following input information:

  • A map of the environment. This map is provided as a .bt file.
  • An initial position of the robot within the map.
  • The base_link to odom TF. If the sensor is not in the base_link frame, the corresponding sensor to base_link TF must also be provided (see the example after this list).
  • A 3D point cloud from the sensor. This information can be provided by a 3D LIDAR or a 3D camera.
  • IMU information, used to get the roll and pitch angles. If you do not have an IMU, DLL will take the roll and pitch estimates from odometry as ground truth.
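
For instance, a fixed sensor to base_link TF can be published with the standard tf2_ros static publisher; the lidar frame name and the 0.1 m vertical offset below are purely illustrative:

# x y z yaw pitch roll parent child -- the values here are hypothetical
$ rosrun tf2_ros static_transform_publisher 0 0 0.1 0 0 0 base_link lidar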

Once launched, DLL will publish a TF between map and odom that aligns the sensor point cloud to the map.
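
You can inspect this correction at runtime with the standard tf tools:

$ rosrun tf tf_echo map odom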

When a new map is provided, DLL will compute the Distance Field grid. This file is automatically generated on startup if it does not exist. Once generated, it is stored in the same path as the .bt map, so it does not need to be recomputed in future executions.
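
Deleting the cached grid therefore forces a recompute on the next startup; judging by the comments below, the .grid file name mirrors the map name:

# grid file name assumed to mirror the map name, as in the issues below
$ rm maps/airsim.grid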

As an example, you can download 5 datasets from the Service Robotics Laboratory repository (https://robotics.upo.es/datasets/dll/). The example launch files are prepared and configured to work with these bags, and you can see the different parameters of the method in them. Notice that, except for mbzirc.bag, these bags do not include an odometry estimation. For this reason, as an easy workaround, the launch files publish a fake odometry that is just the identity transform. DLL is faster and more accurate when a good odometry is available.
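
A typical session might then look as follows; airsim1.launch is the launch file mentioned in the comments below, so check the launch directory for the exact names and edit the bag path inside the launch file first:

# file names and paths are examples; adapt them to your setup
$ source ~/catkin_ws/devel/setup.bash
$ roslaunch dll airsim1.launch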

Cite

DLL has been accepted for publication in IROS 2021.

F. Caballero and L. Merino. "DLL: Direct LIDAR Localization. A map-based localization approach for aerial robots". In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.

You can download a preliminary version of the paper from arXiv.

Comments
  • Using Livox mid 70 get bad result

    Hi, I use a Livox Mid-70 with wheel odometry and an IMU, but the localization result is not good; the robot pose always "jumps" while running. Any idea how to get a better result (a stable, smooth, continuous path)?

    opened by gongyue666 9
  • Run other datasets

    Hello! I saved a .ot file in dll/maps and set <arg name="map" default="myown.ot" />, but when I run the program it shows "NULL otcomap". How come? Where else do I need to set the path?

    opened by MIke-1118 6
  • tested the given bag failed

    Hi, thanks for your great work! I have downloaded the given bag to test DLL, but when I launch the launch file it always shows this error:

        Octomap loaded
        Map size:
          x: 37.2 to 92.75
          y: 41.95 to 95.65
          z: -10.4 to 0.15
        Res: 0.05
        Error opening file /home/whx/study/dll_ws/src/dll/maps/airsim.grid for reading
        Computing 3D occupancy grid. This will take some time...
        [ INFO] [1640669470.668451692, 1614448809.604375476]: Progress: 0.000000 %
        [ INFO] [1640669471.163893210, 1614448810.107720910]: Progress: 0.021567 %
        [ INFO] [1640669471.668560708, 1614448810.612384198]: Progress: 0.039648 %
        [ INFO] [1640669472.172075265, 1614448811.115887848]: Progress: 0.053874 %
        [ INFO] [1640669472.680451449, 1614448811.624293216]: Progress: 0.065055 %
        [ INFO] [1640669473.184041975, 1614448812.127884273]: Progress: 0.073926 %
        ...
        [bag_player-2] process has finished cleanly
        log file: /home/whx/.ros/log/5879e12a-679f-11ec-9f57-c0e43482dfff/bag_player-2*.log

    I have noticed there is a closed issue which talks about this, so I repeated the same test many times, but it didn't work.

    I hope someone can help me solve the problem.

    Best wishes

    opened by numb0824 2
  • open map file failed

    Thanks for your great work! I want to run your code using just roslaunch dll airsim1.launch, with the path to the .bag corrected. But I get the following error: [screenshot from 2021-11-30 10-16-11]. Could you help me solve the problem? Thanks.

    opened by huangsiyuan0717 2
  • Transform of input map

    Hello!

    I'd first like to thank you for this work, it's very interesting!

    I have a question regarding the internal representation of the map: when looking through the code I noticed that you subtract the minimum values from each axis of the points. I suppose this is relevant for the method? I got some (obviously) poor results when I assumed the input map and the internal representation were the same.

    I think it would be nice to make this clearer in the readme, or potentially to add a transform between the original map and the internal representation, such that the initial position set in the launch file could be relative to the original map.

    opened by MartinEekGerhardsen 3
Releases (v1.1)
  • v1.1 (Mar 22, 2022)

    Improved memory allocation and solver parameterization

    • Added a use_yaw_increments parameter that uses the yaw increments from the IMU since the last LIDAR update as the initial guess for the optimizer. This is a good choice when the robot performs very fast yaw rotations.
    • Added online computation of the grid trilinear interpolation. This reduces DLL memory requirements by a factor of approximately 7.
    • Added parameters to set the solver's max iterations and max threads.
    • Added a comprehensive error message when the .grid file is not found.
  • v1.0 (Mar 22, 2022)

    Initial Commit

    • This version contains the source code related to the IROS paper detailed in the README.
    • Some cleaning has been done to make it simpler to understand.
Owner
Service Robotics Lab
Service Robotics, Autonomous Robot Navigation, Machine Learning, Social Robotics