Open3DSOT

Overview

A general python framework for single object tracking in LiDAR point clouds, based on PyTorch Lightning.

The official code release of BAT and MM Track.

Features

  • Modular design. It is easy to configure the model and the training/testing behavior through a single .yaml file.
  • DDP support for both training and testing.
  • Support for all common tracking datasets (KITTI, NuScenes, Waymo Open Dataset).

📣 One tracking paper has been accepted to CVPR2022 (Oral)! 👇

Trackers

This repository includes the implementation of the following models:

MM-Track (CVPR2022 Oral)

[Paper] [Project Page]

MM-Track is the first motion-centric tracker for LiDAR SOT; it robustly handles distractors and drastic appearance changes in complex driving scenes. Unlike previous methods, MM-Track is a matching-free two-stage tracker that localizes the target by explicitly modeling the relative target motion across frames.
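
As a rough illustration of what "matching-free" and "motion-centric" mean, the sketch below tracks by accumulating per-frame relative motions instead of matching appearance templates. All names here are hypothetical placeholders, not MM-Track's actual API:

    # Hypothetical sketch of motion-centric tracking (not MM-Track's real API).
    def track_sequence(frames, init_box, predict_motion, refine_box):
        """frames: list of (N, 3) point clouds; init_box: (x, y, z, theta)."""
        box, results = init_box, [init_box]
        for prev_pts, curr_pts in zip(frames[:-1], frames[1:]):
            # Stage 1: regress the target's relative rigid motion between frames.
            dx, dy, dz, dtheta = predict_motion(prev_pts, curr_pts, box)
            x, y, z, theta = box
            box = (x + dx, y + dy, z + dz, theta + dtheta)
            # Stage 2: refine the coarsely moved box in the current frame.
            box = refine_box(curr_pts, box)
            results.append(box)
        return results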

BAT (ICCV2021)

[Paper] [Results]

Official implementation of BAT. BAT uses BBox information to compensate for the information loss of incomplete scans. It augments the target template with box-aware features that efficiently and effectively improve appearance matching.
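
The sketch below shows one plausible form of box-aware features, namely per-point distances to the eight corners and the center of the target BBox, as an illustration of the idea; consult the paper and code for BAT's exact definition:

    import numpy as np

    def box_aware_features(points, corners, center):
        """points: (N, 3); corners: (8, 3); center: (3,) -> (N, 9) distances."""
        keypoints = np.vstack([corners, center[None, :]])    # (9, 3) box key points
        diff = points[:, None, :] - keypoints[None, :, :]    # (N, 9, 3)
        return np.linalg.norm(diff, axis=-1)                 # one distance per key point

    # Augmented template: original xyz plus 9 box-aware channels -> (N, 12).
    # template = np.hstack([points, box_aware_features(points, corners, center)])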

P2B (CVPR2020)

[Paper] [Official implementation]

A third-party implementation of P2B. Our implementation achieves better results than the official code release. P2B adapts SiamRPN to 3D point clouds by integrating a point-wise correlation operator with a point-based RPN (VoteNet).
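
A minimal stand-in for such a point-wise correlation step is the cosine similarity between every search-area feature and every template feature, sketched below (a simplification, not P2B's exact operator):

    import torch
    import torch.nn.functional as F

    def pointwise_correlation(template_feat, search_feat):
        """template_feat: (M, C); search_feat: (N, C) -> (N, M) similarities."""
        t = F.normalize(template_feat, dim=-1)  # unit-norm template features
        s = F.normalize(search_feat, dim=-1)    # unit-norm search features
        return s @ t.t()                        # cosine similarity map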

Setup

Installation

  • Create the environment

    git clone https://github.com/Ghostish/Open3DSOT.git
    cd Open3DSOT
    conda create -n Open3DSOT  python=3.6
    conda activate Open3DSOT
    
  • Install pytorch

    conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
    

    Our code is well tested with pytorch 1.4.0 and CUDA 10.1, but other platforms may also work; follow the official PyTorch instructions to install a different version. Note: In order to reproduce the reported results with the provided checkpoints, please use CUDA 10.x.
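
    A quick sanity check that the installed versions match:

    import torch
    print(torch.__version__)          # expect 1.4.0
    print(torch.version.cuda)         # expect 10.1
    print(torch.cuda.is_available())  # should be True on a GPU machine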

  • Install other dependencies:

    pip install -r requirement.txt
    

    Install the nuscenes-devkit if you want to use the NuScenes dataset:

    pip install nuscenes-devkit
    

KITTI dataset

  • Download the data for velodyne, calib and label_02 from KITTI Tracking.
  • Unzip the downloaded files.
  • Put the unzipped files under the same parent folder as follows; a snippet for loading the velodyne scans is given after the listing.
    [Parent Folder]
    --> [calib]
        --> {0000-0020}.txt
    --> [label_02]
        --> {0000-0020}.txt
    --> [velodyne]
    --> [velodyne]
        --> [0000-0020] folders with velodyne .bin files
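
Each velodyne .bin file stores one scan as a flat float32 array of (x, y, z, intensity) values, so a single frame can be loaded with:

    import numpy as np

    # KITTI velodyne scans: flat float32 array of (x, y, z, intensity) per point.
    points = np.fromfile("velodyne/0000/000000.bin", dtype=np.float32).reshape(-1, 4)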
    

NuScenes dataset

  • Download the dataset from the download page
  • Extract the downloaded files and make sure you have the following structure:
    [Parent Folder]
      samples  -  Sensor data for keyframes.
      sweeps   -  Sensor data for intermediate frames.
      maps     -  Folder for all map files: rasterized .png images and vectorized .json files.
      v1.0-*   -  JSON tables that include all the metadata and annotations. Each split (trainval, test, mini) is provided in a separate folder.

Note: We use the train_track split to train our model and test it with the val split; both splits are officially provided by NuScenes. During testing, we skip the sequences whose first given bbox contains no points.
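
If the nuscenes-devkit is installed, the extracted data can be sanity-checked with a quick load (the version and dataroot values here are placeholders to adjust for your setup):

    from nuscenes.nuscenes import NuScenes

    # Point dataroot at the parent folder described above.
    nusc = NuScenes(version='v1.0-trainval', dataroot='/path/to/nuscenes', verbose=True)
    print(len(nusc.sample))  # number of annotated keyframes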

Waymo dataset

  • Download and prepare the dataset following the instructions of CenterPoint.
    [Parent Folder]
      tfrecord_training
      tfrecord_validation
      train  -  all training frames and annotations
      val    -  all validation frames and annotations
      infos_train_01sweeps_filter_zero_gt.pkl
      infos_val_01sweeps_filter_zero_gt.pkl
  • Prepare the SOT dataset. Data from a specific category and split will be merged into a single file (e.g., sot_infos_vehicle_train.pkl); a quick way to inspect the result is shown after the command.

    python datasets/generate_waymo_sot.py
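
The merged info file is an ordinary pickle, so a minimal inspection looks like this (the filename follows the example above; the exact element structure depends on the generation script):

    import pickle

    # Load the merged SOT info file produced by the script above.
    with open("sot_infos_vehicle_train.pkl", "rb") as f:
        infos = pickle.load(f)
    print(type(infos))  # typically a container of per-frame records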

Quick Start

Training

To train a model, you must specify a .yaml file with the --cfg argument. The .yaml file contains all the configurations of the dataset and the model. Currently, we provide four .yaml files under the cfgs directory. Note: Before running the code, you will need to edit the .yaml file and set the path argument to the correct root of the dataset, as in the sketch below.
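Only the path key is the one referenced above; treat this fragment as illustrative rather than the exact schema of the provided files:

    # Illustrative config fragment; only `path` is the key this README asks you to edit.
    path: /path/to/kitti/tracking   # dataset root

With path set, launch training: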

python main.py --gpu 0 1 --cfg cfgs/BAT_Car.yaml  --batch_size 50 --epoch 60 --preloading

After you start training, you can start Tensorboard to monitor the training process:

tensorboard --logdir=./ --port=6006

By default, the trainer runs a full evaluation on the whole test split after every training epoch. You can set --check_val_every_n_epoch to a larger number to speed up training. The --preloading flag preloads the training samples into memory to save training time; remove this flag if you don't have enough memory.
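
For example, to run the full evaluation only once every five epochs (all other flags as in the training command above):

python main.py --gpu 0 1 --cfg cfgs/BAT_Car.yaml --batch_size 50 --epoch 60 --preloading --check_val_every_n_epoch 5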

Testing

To test a trained model, specify the checkpoint location with the --checkpoint argument and pass the --test flag:

python main.py --gpu 0 1 --cfg cfgs/BAT_Car.yaml  --checkpoint /path/to/checkpoint/xxx.ckpt --test

Reproduction

Model          Category     Success   Precision   Checkpoint
BAT-KITTI      Car          65.37     78.88       pretrained_models/bat_kitti_car.ckpt
BAT-NuScenes   Car          40.73     43.29       pretrained_models/bat_nuscenes_car.ckpt
BAT-KITTI      Pedestrian   45.74     74.53       pretrained_models/bat_kitti_pedestrian.ckpt

Three trained BAT models for KITTI and NuScenes datasets are provided in the pretrained_models directory. To reproduce the results, simply run the code with the corresponding .yaml file and checkpoint. For example, to reproduce the tracking results on KITTI Car, just run:

python main.py --gpu 0 1 --cfg cfgs/BAT_Car.yaml  --checkpoint ./pretrained_models/bat_kitti_car.ckpt --test

Acknowledgment

  • This repo is built upon P2B and SC3D.
  • Thanks to Erik Wijmans for his pytorch implementation of PointNet++.

License

This repository is released under the MIT License (see the LICENSE file for details).
