Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences

1. Introduction

This project accompanies the paper Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences. It concerns single object tracking (SOT) of objects in point cloud sequences.

The input to the algorithm is the starting location (in the form of a 3D bounding box) of an object and the point cloud sequence of the scene. Our tracker then (1) provides the bounding box on each subsequent point cloud frame, and (2) recovers dense shapes by aggregating the point clouds along the track. We also explore uses in other applications, such as simulating LiDAR scans for data augmentation.

Please check our YouTube video for a 1-minute demonstration, or this link for the Bilibili version.

This README file describes the most basic usages of our code base. For more details, please refer to:

  • Data Preprocessing: It describes how to convert the raw data in the Waymo dataset into more convenient forms that can be used by our algorithms.
  • Benchmark: It explains the selection of tracklets and the construction of our benchmark. Note that the benchmark information is already in ./benchmark/ and you may use it directly. The code in this part is for verification purposes.
  • Design: This documentation explains the design of our implementation. Reading it is useful for understanding our tracker implementation and modifying it for your own purposes.
  • Model Configs: We use config.yaml to specify the behavior of the tracker. Please refer to this documentation for a detailed explanation.
  • Toolkit: Along with this project, we also provide several code snippets for visualizing the tracking results. This file discusses the toolkits we have created.

2. SOT API and Inference

2.1 Installation

Our code has been thoroughly tested with python=3.6. For more detailed dependencies, please refer to the Environment section below.

We wrap our code into a library sot_3d, which users may install via the following command. The advantage of this installation is that the behavior of sot_3d stays synchronized with your modifications.

pip install -e ./

2.2 Tracking API

The main API tracker_api is in main.py. In the default case, it takes the model configuration, the beginning bounding box, and a data loader as input, and outputs the tracking result as specified below. Some additional guidelines on this API are:

  • data_loader is an iterator reading the data. On each iteration, it returns a dictionary with the keys pc (point cloud) and ego (the transformation matrix to the world coordinate) as compulsory. An example of data_loader is in example_loader.
  • To compare the tracking results with the ground truth along with tracking, provide the input argument gts and import the function compare_to_gt and the data type sot_3d.data_protos.BBox. The gts are a list of BBox.
  • We also provide a handy tool for visualization. Please import from sot_3d.visualization import Visualizer2D and frame_result_visualization for frame-level BEV visualization.
import sot_3d
from sot_3d.data_protos import BBox
from sot_3d.visualization import Visualizer2D


def tracker_api(configs, id, start_bbox, start_frame, data_loader, track_len, gts=None, visualize=False):
    """
    Args:
        configs: model configuration read from config.yaml
        id (str): each tracklet has an id
        start_bbox ([x, y, z, yaw, l, w, h]): the beginning location of this id
        start_frame (int): the index of the frame where tracking starts
        data_loader (an iterator): iterator returning data of each incoming frame
        track_len: number of frames in the tracklet
        gts (list of BBox, optional): ground-truth boxes for comparison along with tracking
        visualize (bool, optional): whether to visualize the frame-level results
    Return:
        {
            frame_number0: {'bbox0': previous frame result, 'bbox1': current frame result, 'motion': estimated motion}
            frame_number1: ...
            ...
            frame_numberN: ...
        }
    """

2.3 Evaluation API

The API for evaluation is in evaluation/evaluation.py. tracklet_acc and tracklet_rob compute the accuracy and robustness given the ious in a tracklet, and metrics_from_bboxes handles the case where the inputs are raw bounding boxes. Note that the bounding boxes are in the format of sot_3d.data_protos.BBox.

def tracklet_acc(ious):
    ...
    """ the accuracy for a tracklet
    """

def tracklet_rob(ious, thresholds):
    ...
    """ compute the robustness of a tracklet
    """

def metrics_from_bboxes(pred_bboxes, gts):
    ...
    """ Compute the accuracy and robustness of a tracklet
    Args:
        pred_bboxes (list of BBox)
        gts (list of BBox)
    Return:
        accuracy, robustness, length of tracklet
    """

3. Building Up the Benchmark

Our LiDAR-SOT benchmark selects 1172 tracklets from the validation set of Waymo Open Dataset. These tracklets satisfy the requirements of mobility, length, and meaningful initialization.

The information about the selected tracklets is in ./benchmark/. Each json file stores the ids, segment names, and frame intervals of the selected tracklets. For replicating the construction of this benchmark, please refer to this documentation.
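
As a quick sanity check, the sketch below loads the benchmark list with the standard json module. The exact schema of each entry is not documented here, so the snippet only prints the raw structure; inspect it and adapt your loop accordingly.

import json

with open('./benchmark/bench_list.json', 'r') as f:
    bench_list = json.load(f)

# Print the container type and size first to learn the schema before iterating.
print(type(bench_list), len(bench_list))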

4. Steps for Inference/Evaluation on the Benchmark

4.1 Data Preparation

Please follow the guidelines in Data Preprocessing. Suppose your root directory is DATA_ROOT.

4.2 Running on the Benchmark

The command for running inference is as follows. Note that there are also some other arguments; please refer to main.py for more details.

python main.py \
    --name NAME \                         # The NAME for your experiment.
    --bench_list your_tracklet_list \     # The path for your benchmark tracklets. By default at ./benchmark/bench_list.json.
    --data_folder DATA_ROOT \             # The location to store your datasets.
    --result_folder result_folder \       # Where you store the results of each tracklet.
    --process process_number              # Use multiple processes to split the dataset and accelerate inference.

After this, you may access the result for tracklet ID as demonstrated below. Inside the json files, bbox0 and bbox1 indicate the estimated bounding boxes in frames frame_index - 1 and frame_index.

-- result_folder
   -- NAME
       -- summary
           -- ID.json
               {
                   frame_index0: {'bbox0': ..., 'bbox1': ..., 'motion': ..., 
                                  'gt_bbox0': ..., 'gt_bbox1': ..., 'gt_motion': ..., 
                                  'iou2d': ..., 'iou3d': ...}
                   frame_index1: ...
                   frame_indexN: ...
               }
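
The sketch below reads one tracklet's summary json. The placeholder names (result_folder, NAME, ID) mirror the command-line arguments above, and how the bounding boxes are serialized inside the json is an assumption, so convert the raw entries with sot_3d.data_protos.BBox utilities as needed.

import json
import os

result_folder, name, tracklet_id = 'result_folder', 'NAME', 'ID'   # placeholders
summary_path = os.path.join(result_folder, name, 'summary', '%s.json' % tracklet_id)

with open(summary_path, 'r') as f:
    summary = json.load(f)

# json keys are strings; sort them numerically to walk the frames in order.
for frame_index in sorted(summary.keys(), key=int):
    frame_result = summary[frame_index]
    print(frame_index, frame_result['bbox1'], frame_result.get('iou3d'))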

4.3 Evaluation

For computing the accuracy and robustness of tracklets, use the following code:

cd evaluation
python evaluation.py \
    --name NAME \                                 # the name of the experiment
    --result_folder result_folder \               # result folder
    --data_folder DATA_ROOT \                     # root directory storing the dataset
    --bench_list_folder benchmark_list_folder \   # directory for benchmark tracklet information, by default the ./benchmark/
    --iou                                         # use this flag if the iou has already been computed during inference
    --process process_number                      # use multiprocessing to accelerate the evaluation, especially when computing the iou

For the evaluation of shapes, use the following code:

cd evaluation
python evaluation.py \
    --name NAME \                                 # the name of the experiment
    --result_folder result_folder \               # result folder
    --data_folder DATA_ROOT \                     # root directory storing the dataset
    --bench_list_folder benchmark_list_folder \   # directory for benchmark tracklet information, by default the ./benchmark/
    --process process_number                      # Use multiple processes to split the dataset and accelerate evaluation.

5. Environment

This repository has been tested and run using python=3.6.

For inference on the dataset using our tracker, the following libraries are compulsory:

numpy, scikit-learn, numba, scipy

If the evaluation with ground-truth is involved, please install the shapely library for the computation of iou.

shapely (for iou computation)

The data preprocessing on Waymo needs:

waymo_open_dataset

Our visualization toolkit needs:

matplotlib, open3d, pangolin

6. Citation

If you find our paper or repository useful, please consider citing:

@article{pang2021model,
    title={Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences},
    author={Pang, Ziqi and Li, Zhichao and Wang, Naiyan},
    journal={arXiv preprint arXiv:2103.06028},
    year={2021}
}