Learning High-Speed Flight in the Wild

This repo contains the code associated with the paper Learning High-Speed Flight in the Wild. For more information, please check the project webpage.

(Cover image)

Paper, Video, and Datasets

If you use this code in an academic context, please cite the following publication:

Paper: Learning High-Speed Flight in the Wild

Video (Narrated): YouTube

Datasets: Zenodo

Science Paper: DOI

@inproceedings{Loquercio2021Science,
  title={Learning High-Speed Flight in the Wild},
  author={Loquercio, Antonio and Kaufmann, Elia and Ranftl, Ren{\'e} and M{\"u}ller, Matthias and Koltun, Vladlen and Scaramuzza, Davide},
  booktitle={Science Robotics},
  year={2021},
  month={October},
}

Installation

Requirements

The code was tested with Ubuntu 20.04, ROS Noetic, Anaconda v4.8.3, and gcc/g++ 7.5.0. Different OS and ROS versions are possible but not supported.

Before you start, make sure that your compiler versions match gcc/g++ 7.5.0. To do so, use the following commands:

sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 100
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
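
To verify that the switch took effect, you can check the versions the system now reports:

gcc --version
g++ --version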

Step-by-Step Procedure

Use the following commands to create a new catkin workspace and a virtual environment with all the required dependencies.

export ROS_VERSION=noetic
mkdir agile_autonomy_ws
cd agile_autonomy_ws
export CATKIN_WS=./catkin_aa
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color
cd src

git clone git@github.com:uzh-rpg/agile_autonomy.git
vcs-import < agile_autonomy/dependencies.yaml
cd rpg_mpl_ros
git submodule update --init --recursive
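
# If the vcs-import command above is missing, it is provided by the vcstool package
# (one possible way to install it; adapt to your setup)
sudo apt install python3-vcstool   # or: pip install vcstool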

# Install extra dependencies (you might need more depending on your OS)
sudo apt-get install libqglviewer-dev-qt5

# Install external libraries for rpg_flightmare
sudo apt install -y libzmqpp-dev libeigen3-dev libglfw3-dev libglm-dev

# Install dependencies for rpg_flightmare renderer
sudo apt install -y libvulkan1 vulkan-utils gdb

# Add environment variables (Careful! Modify path according to your local setup)
echo 'export RPGQ_PARAM_DIR=/home/<path_to_your_workspace>/catkin_aa/src/rpg_flightmare' >> ~/.bashrc
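
# Optional sanity check: re-source the config and make sure the variable points at the rpg_flightmare folder
source ~/.bashrc
echo $RPGQ_PARAM_DIR
ls $RPGQ_PARAM_DIR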

Now open a new terminal and type the following commands.

# Build and re-source the workspace
catkin build
. ../devel/setup.bash

# Create your learning environment
roscd planner_learning
conda create --name tf_24 python=3.7
conda activate tf_24
conda install tensorflow-gpu
pip install rospkg==1.2.3 pyquaternion open3d opencv-python
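
As an optional sanity check (not part of the original instructions), you can verify that TensorFlow imports correctly inside the environment and whether it sees a GPU; an empty device list simply means inference will run on the CPU:

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"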

Now download the flightmare standalone available at this link, extract it, and put it in the flightrender folder.
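
For illustration only, assuming the archive downloaded as standalone.tar into ~/Downloads (both the file name and the folder location are assumptions, adapt them to your download and workspace layout), the extraction could look like this:

cd catkin_aa/src/rpg_flightmare/flightrender   # assumed location of the flightrender folder
tar -xvf ~/Downloads/standalone.tar            # hypothetical archive name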

Let's Fly!

Once you have installed the dependencies, you will be able to fly in simulation with our pre-trained checkpoint. You don't necessarily need a GPU for execution. Note that if the network cannot run at 15 Hz or faster, you won't be able to fly successfully.

Launch the simulation! Open a terminal and type:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Run the network in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python test_trajectories.py --settings_file=config/test_settings.yaml

Change execution speed or environment

You can change the average speed at which the policy will fly as well as the environment type by changing the following files.

Environment Change:

rosed agile_autonomy flightmare.yaml

Set either spawn_trees or spawn_objects to true. Doing both at the same time is possible but would make the environment too dense for navigation. Also adapt the spacings parameter in test_settings.yaml to the chosen environment.
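
If you want to double-check the current values without opening an editor, a quick grep works; this assumes flightmare.yaml lives somewhere inside the agile_autonomy package, as the rosed command above implies:

grep -n "spawn_trees\|spawn_objects" $(find $(rospack find agile_autonomy) -name flightmare.yaml)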

Speed Change:

rosed agile_autonomy default.yaml

Set test_time_velocity and maneuver_velocity to the desired speed. Note that the checkpoint we provide works for all speeds in the range [1,10] m/s. However, to reach the best performance at a specific speed, consider finetuning the checkpoint at that speed (see the training instructions below).

Train your own navigation policy

There are two ways to train your own policy: one easy and one more involved. The trained checkpoint can then be used to control a physical platform (if you have one!).

Use pre-collected dataset

The first method, requiring the least effort, is to use a dataset that we pre-collected. The dataset can be found at this link. This dataset was used to train the model we provide and was collected at an average speed of 7 m/s. To use it, adapt the file train_settings.yaml to point to the train and test folders and run:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python train.py --settings_file=config/train_settings.yaml

Feel free to ablate the impact of each parameter!

Collect your own dataset

You can use the following commands to generate data in simulation and train your model on it. Note that training a policy from scratch can require a lot of data, and depending on the speed of your machine this could take several days. Therefore, we always recommend finetuning the provided checkpoint to your use case. As a rule of thumb, you need a dataset of comparable size to ours to train a policy from scratch, but only about a tenth of that to finetune.

Generate data

To train or finetune a policy, use the following commands.

Launch the simulation in one terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Launch data collection (with DAgger) in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python dagger_training.py --settings_file=config/dagger_settings.yaml

It is possible to change parameters (number of rollouts, DAgger constants, tracking a global trajectory, etc.) in the file dagger_settings.yaml. Keep in mind that if you change the network or its input, you will need to adapt test_settings.yaml for compatibility.

When training from scratch, follow a pre-computed global trajectory to obtain consistent labels. To activate this, set the flag perform_global_planning to true in both default.yaml and label_generation.yaml. Note that this will make the simulation slower (a global plan has to be computed at each iteration). The network will not have access to this global plan, but only to the straight (possibly in-collision) reference.
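
For example, both files can be opened with the same rosed mechanism used earlier in this README (if label_generation.yaml lives in a different package of the workspace, adjust the package name accordingly):

rosed agile_autonomy default.yaml            # set perform_global_planning: true
rosed agile_autonomy label_generation.yaml   # set perform_global_planning: true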

Visualize the Data

You can visualize the generated trajectories in open3d using the visualize_trajectories.py script.

python visualize_trajectories.py --data_dir /PATH/TO/rollout_21-09-21-xxxx --start_idx 0 --time_steps 100 --pc_cutoff_z 2.0 --max_traj_to_plot 100

The result should look more or less like the following:

(Labels visualization)

Test the Network

To test the network you trained, adapt test_settings.yaml with the new checkpoint path. Consider setting the flag perform_global_planning in default.yaml back to false to make the simulation faster. Then follow the instructions in the section above (Let's Fly!) to test.

Acknowledgements

We would like to thank Yunlong Song and Selim Naji for their help with the implementation of the simulation environment. The code for global planning is strongly inspired by Search-based Motion Planning for Aggressive Flight in SE(3).
