Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks

Overview

This is a PyTorch Lightning implementation of the paper "Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks".

Given a sequence of P past point clouds (shown in red) at time T, the goal is to predict the F future scans (shown in blue).

Table of Contents

  1. Publication
  2. Data
  3. Installation
  4. Training
  5. Testing
  6. Visualization
  7. Download
  8. License

Overview of our architecture

Publication

If you use our code in your academic work, please cite the corresponding paper:

@inproceedings{mersch2021corl,
  author = {B. Mersch and X. Chen and J. Behley and C. Stachniss},
  title = {{Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks}},
  booktitle = {Proc.~of the Conf.~on Robot Learning (CoRL)},
  year = {2021},
}

Data

Download the KITTI Odometry data from the official website.

Installation

Source Code

Clone this repository and run

cd point-cloud-prediction
git submodule update --init

to install the Chamfer distance submodule. It is originally taken from here, with some modifications that allow using it as a submodule. All parameters are stored in config/parameters.yaml.
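
If you have not cloned the repository yet, you can also fetch the submodule directly while cloning (the repository URL below is a placeholder):

git clone --recurse-submodules https://github.com/<user>/point-cloud-prediction.git
cd point-cloud-prediction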

Dependencies

In this project, we use CUDA 10.2. All other dependencies are managed with Python Poetry and can be found in the poetry.lock file. If you want to use Python Poetry (recommended), install it with:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -

Install Python dependencies with Python Poetry

poetry install

and activate the virtual environment in the shell with

poetry shell

Export Environment Variables to the Dataset

We process the data in advance to speed up training. The preprocessing is automatically done if GENERATE_FILES is set to true in config/parameters.yaml. The environment variable PCF_DATA_RAW points to the directory containing the train/val/test sequences specified in the config file. It can be set with

export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences

and the destination of the processed files PCF_DATA_PROCESSED is set with

export PCF_DATA_PROCESSED=/desired/path/to/processed/data/
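
If you want both variables to persist across shell sessions, you can append the exports to your shell profile (a minimal sketch, assuming bash):

echo 'export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences' >> ~/.bashrc
echo 'export PCF_DATA_PROCESSED=/desired/path/to/processed/data/' >> ~/.bashrc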

Training

Note: If you have not pre-processed the data yet, you need to set GENERATE_FILES: True in config/parameters.yaml. After the data has been processed once, you can set GENERATE_FILES: False to skip this step.
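
If you prefer to toggle the flag from the command line, a sed one-liner works as well (a sketch that assumes the key appears exactly as GENERATE_FILES: True in config/parameters.yaml):

# disable preprocessing again once the data has been generated
sed -i 's/GENERATE_FILES: True/GENERATE_FILES: False/' config/parameters.yaml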

The training script can be run by

python pcf/train.py

using the parameters defined in config/parameters.yaml. Pass the --help flag to see more options, such as resuming from a checkpoint or initializing the weights from a pre-trained model. A directory will be created in pcf/runs, which makes it easier to distinguish between different runs and avoids overwriting existing logs. The script saves everything (the used config, logs, and checkpoints) to a path pcf/runs/COMMIT/EXPERIMENT_DATE_TIME consisting of the current git commit ID (this allows you to check out the commit used for training), the specified experiment ID (pcf by default), and the date and time.
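
For example, to list all available options:

python pcf/train.py --help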

Example: pcf/runs/7f1f6d4/pcf_20211106_140014

7f1f6d4: Git commit ID

pcf_20211106_140014: Experiment ID, date and time

Testing

Test your model by running

python pcf/test.py -m COMMIT/EXPERIMENT_DATE_TIME

where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: Use the flag -s if you want to save the predicted point clouds for visualization, and -l if you want to test the model on a smaller amount of data.

Example

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014

or

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014 -l 5 -s

if you want to test the model on 5 batches and save the resulting point clouds.

Visualization

After passing the -s flag to the testing script, the predicted range images will be saved as .svg files in pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/range_view_predictions. The predicted point clouds are saved to pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds. You can visualize them by running

python pcf/visualize.py -p pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds

Five past range images, five future ground truth range images, and our five predicted future range images.

Last received point cloud at time T and the predicted next 5 future point clouds. Ground truth points are shown in red and predicted points in blue.

Download

You can download our best-performing model from the paper here. Just extract the zip file into pcf/runs.
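
For example, assuming the downloaded archive is called pretrained.zip (the file name is a placeholder):

unzip pretrained.zip -d pcf/runs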

License

This project is free software made available under the MIT License. For details see the LICENSE file.

Owner

Photogrammetry & Robotics Lab at the University of Bonn