Baselines for TrajNet++

Overview

TrajNet++: The Trajectory Forecasting Framework

PyTorch implementation of Human Trajectory Forecasting in Crowds: A Deep Learning Perspective

[Figure: docs/train/cover.png]

TrajNet++ is a large-scale, interaction-centric trajectory forecasting benchmark comprising explicit agent-agent scenarios. Our framework provides proper indexing of trajectories by defining a hierarchy of trajectory categorization. In addition, we provide an extensive evaluation system so that the gathered methods can be compared fairly. In our evaluation, we go beyond the standard distance-based metrics and introduce novel metrics that measure a model's ability to emulate pedestrian behavior in crowds. Finally, we provide code implementations of more than 10 popular human trajectory forecasting baselines.

Data Setup

The detailed step-by-step procedure for setting up the TrajNet++ framework can be found here

Converting External Datasets

To convert external datasets into the TrajNet++ framework, refer to this guide

Training Models

LSTM

The training script and its help menu: python -m trajnetbaselines.lstm.trainer --help

Run Example

## Our Proposed D-LSTM
python -m trajnetbaselines.lstm.trainer --type directional --augment

## Social LSTM
python -m trajnetbaselines.lstm.trainer --type social --augment --n 16 --embedding_arch two_layer --layer_dims 1024
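
For intuition, the directional encoder (--type directional) used by D-LSTM pools the relative dynamics of neighbours around the primary pedestrian. The snippet below is only a conceptual sketch of such a feature (relative position and relative velocity per neighbour); the tensor names are hypothetical and this is not the repository's implementation:

import torch

def directional_features(primary_pos, primary_vel, neighbour_pos, neighbour_vel):
    """Relative position and velocity of each neighbour w.r.t. the primary pedestrian.

    primary_pos, primary_vel: (2,) tensors; neighbour_pos, neighbour_vel: (n, 2) tensors.
    """
    rel_pos = neighbour_pos - primary_pos         # where each neighbour is
    rel_vel = neighbour_vel - primary_vel         # how each neighbour moves relative to us
    return torch.cat([rel_pos, rel_vel], dim=-1)  # (n, 4) per-neighbour feature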

GAN

The training script and its help menu: python -m trajnetbaselines.sgan.trainer --help

Run Example

## Social GAN (L2 Loss + Adversarial Loss)
python -m trajnetbaselines.sgan.trainer --type directional --augment

## Social GAN (Variety Loss only)
python -m trajnetbaselines.sgan.trainer --type directional --augment --d_steps 0 --k 3
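
The --k flag sets the number of samples drawn for the variety (best-of-k) loss of Social GAN, which penalizes only the sampled future closest to the ground truth. A minimal sketch of that idea, with hypothetical tensor shapes (not the repository's exact code):

import torch

def variety_loss(pred_samples, gt):
    """Best-of-k L2 loss for one pedestrian.

    pred_samples: (k, pred_len, 2) sampled future trajectories
    gt:           (pred_len, 2) ground-truth future trajectory
    """
    per_step = torch.norm(pred_samples - gt.unsqueeze(0), dim=-1)  # (k, pred_len)
    per_sample = per_step.mean(dim=-1)                             # average error of each sample
    return per_sample.min()                                        # keep only the best sample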

Evaluation

The evaluation script and its help menu: python -m evaluator.trajnet_evaluator --help

Run Example

## TrajNet++ evaluator (saves model predictions; useful for submission to the TrajNet++ benchmark)
python -m evaluator.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

## Fast Evaluator (does not save model predictions)
python -m evaluator.fast_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

More details regarding the TrajNet++ evaluator are provided here

Evaluation on the data splits is based on the trajectory categorization defined by the TrajNet++ framework

Results

Unimodal comparison of interaction-encoder designs on interacting trajectories of the TrajNet++ real-world dataset. Errors reported are ADE/FDE in meters and collisions in mean % (std. dev. %) across 5 independent runs. Our goal is to reduce collisions in model predictions without compromising the distance-based metrics. A minimal sketch of how these metrics can be computed follows the table.

Method          ADE/FDE (m)   Collisions (%)
LSTM            0.60/1.30     13.6 (0.2)
S-LSTM          0.53/1.14      6.7 (0.2)
S-Attn          0.56/1.21      9.0 (0.3)
S-GAN           0.64/1.40      6.9 (0.5)
D-LSTM (ours)   0.56/1.22      5.4 (0.3)
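
For reference, a minimal sketch of how these metrics are commonly computed: ADE averages the displacement error over the prediction horizon, FDE takes the error at the final step, and a collision is counted when two pedestrians come closer than a small threshold at the same time step. The threshold value and array names below are illustrative assumptions, not the evaluator's exact implementation:

import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (pred_len, 2) trajectories in meters."""
    err = np.linalg.norm(pred - gt, axis=-1)   # per-step displacement error
    return err.mean(), err[-1]                 # (ADE, FDE)

def collides(traj_a, traj_b, threshold=0.1):
    """True if two pedestrians get closer than `threshold` meters at any shared time step."""
    return bool((np.linalg.norm(traj_a - traj_b, axis=-1) < threshold).any())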

Interpreting Forecasting Models

[Animation: docs/train/LRP.gif]

Visualizations of the decision-making of social interaction modules using layer-wise relevance propagation (LRP). The darker the yellow circle, the greater the weight the primary pedestrian (blue) assigns to the corresponding neighbour (yellow).

Code implementation for explaining trajectory forecasting models using LRP can be found here

Benchmarking Models

We host the TrajNet++ challenge on AICrowd, allowing researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. In the spirit of crowdsourcing, we encourage researchers to submit their predicted sequences to our benchmark, so that trajectory forecasting models keep improving at tackling increasingly challenging scenarios.

Citation

If you find this code useful in your research, please cite:

@article{Kothari2020HumanTF,
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective},
  author={Parth Kothari and S. Kreiss and Alexandre Alahi},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.03639}
}
Comments
  • Problem training lstm

    Hi, while trying to train Social LSTM I encountered this warning: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate

    Which is weird because the older version of the repo works fine with the same dataset.

    I also tried switching to PyTorch 1.0.0, but it doesn't work either because of Flatten: AttributeError: module 'torch.nn' has no attribute 'Flatten'

    Can you please tell me what's going wrong? Thanks
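
    For reference, the call order the warning asks for looks like the following minimal, runnable sketch (placeholder model, optimizer, and data; not the repository's trainer):

    import torch

    model = torch.nn.Linear(2, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

    for epoch in range(3):
        for _ in range(5):           # stand-in for the batch loop
            optimizer.zero_grad()
            loss = model(torch.randn(8, 2)).pow(2).mean()
            loss.backward()
            optimizer.step()         # update the weights first
        scheduler.step()             # then advance the learning-rate schedule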

    opened by sanmoh99 8
  • Data normalization? Minor errors and minor suggestions

    Hi,

    First of all congratulations on this fruitful work.

    Then, I have a technical question. It seems that you don't normalize the data in any of the steps. Why, then, did you choose standard Gaussian noise? It should produce samples with high variance with respect to k.

    After downloading and installing the social force simulator, I ran the trainer and it threw an error: ModuleNotFoundError: No module named 'socialforce.fieldofview'

    After changing to: from socialforce.field_of_view import FieldOfView

    Everything worked fine.

    opened by tmralmeida 2
  • Can't compute collision percentages for Kalman Filter baseline

    Hello. Hope everyone that is reading this is doing well.

    I was trying to run the trajnet evaluation code for the Kalman filter implementation, but I get "-1" for the Col-I metric.

    From what I read in #15 , this is because the number of predicted tracks for the neighbours is not equal to the number of ground-truth tracks. Upon closer inspection, I was obtaining additional elements in the list of tracks that corresponded to empty lists (no actual positions).

    While I'm not sure why this happened, I think it might be related to this issue, where the start and end frames for different scenes are not completely separate in the data converted with the TrajNet++ dataset code (https://github.com/vita-epfl/trajnetplusplusdataset).

    Can someone confirm that this is the case? I'm assuming I'm not the only one to have come across this issue. I could make a script to perform such a separation and see if that is the actual problem. If I don't find any existing code to do so, I suppose that's my best option.

    opened by pedro-mgb 2
  • Issue about plot_log.py

    Dear author, when I use plot_log.py, only the resulting accuracy picture is blank. Its name is xx.val.png. (A screenshot was attached here.) What should I do to make the accuracy show up correctly? Thank you for your reply.

    opened by xieyunjiao 2
  • Issue about fast_evaluator and trajnet_evaluator

    Hello, I've been using TrajNet++ to evaluate trained models recently. Whether I use fast_evaluator or trajnet_evaluator, my Col-I is always -1. I read that part of the code, and the condition for Col-I to be computed is num_gt_neigh == num_predicted_neigh. But I don't know how to modify the code to compute Col-I. Thank you very much for answering my questions.

    opened by xieyunjiao 2
  • RuntimeError: CUDA error: out of memory

    Hi, when I run trajnet_evaluator.py after training with CUDA, I get: RuntimeError: CUDA error: out of memory

    Is this a problem on my end, or can this code only be run on the CPU?

    opened by 396559551 2
  • No module named 'socialforce' ??

    Hi, first of all, thank you for sharing this great work.

    "python -m trajnetbaselines.lstm.trainer --type directional --augment" I just ran this command but I have faced the below error. No module named 'socialforce'

    Is there something I should do install or include? Thank you,

    opened by moonsh 2
  • Problem running Sgan model

    Hello, I've tried to run the code and encountered an error regarding the layer_dims parameter. The help section says to pass it like an array ([--layer_dims [LAYER_DIMS [LAYER_DIMS ...]]]), but I still can't train the model.

    I run the following command: python -m trajnetbaselines.sgan.trainer --batch_size 1 --lr 1e-3 --obs_length 9 --pred_length 12 --type 'social' --norm_pool --layer_dims 10 10

    and get this error:

    Traceback (most recent call last):
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 533, in <module>
        main()
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 529, in main
        trainer.loop(train_scenes, val_scenes, train_goals, val_goals, args.output, epochs=args.epochs, start_epoch=start_epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 73, in loop
        self.train(train_scenes, train_goals, epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 141, in train
        loss, _ = self.train_batch(scene, scene_goal, step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 210, in train_batch
        rel_output_list, outputs, scores_real, scores_fake = self.model(observed, goals, prediction_truth, step_type=step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 77, in forward
        rel_pred_scene, pred_scene = self.generator(observed, goals, prediction_truth, n_predict)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 283, in forward
        hidden_cell_state = self.adding_noise(hidden_cell_state)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 154, in adding_noise
        noise = torch.zeros(self.noise_dim, device=hidden_cell_state.device)
    AttributeError: 'tuple' object has no attribute 'device'

    I would appreciate it if you could tell me where I went wrong, or if you could give an example command that trains the model.

    Thanks in advance

    opened by sanmoh99 2
  • Challenge submission: "No more retries left"

    I just submitted a zip file with my predictions to the AIcrowd challenge. However, the submission failed with the message: "No more retries left". What does this mean?

    opened by S-Hauri 1
  •  FDE score of 1.14 with social LSTM

    Hi! I am trying to get the FDE score of 1.14 with social LSTM.

    Did you train on the whole (with cff) training dataset? How many epochs? And with which parameters?

    Thanks in advance. Many greetings

    opened by Mirorrn 1
  • Fix for Kalman filter to also output trajectories of neighbours

    Summary

    Minor but important fix in Kalman filter model, to also output trajectories of neighbours

    Content

    The variable that contained the neighbour predictions (neighbour_tracks; see line 8 of the changed file for its initialization) was being overwritten, so those tracks ended up being lost. This PR removes the line in which the variable is overwritten.

    Effect

    This was causing the KF to output only the trajectory of the primary pedestrian, which made the computation of the Col-I metric impossible.

    Related PRs/Issues

    This (partially) addresses #19.

    opened by pedro-mgb 1
  • Generative loss stuck

    Hi,

    Regarding the Social GAN model: while playing with your code, I found something that I couldn't understand.

    E.g., while running:

    python -m trajnetbaselines.sgan.trainer --k 1
    

    This means we are running a vanilla GAN in which the generator outputs one sample (the most common GAN setting, without the L2 loss). In this setting, the GAN loss stays at 1.38 throughout training, so the vanilla GAN (with only the adversarial loss) is not capable of modelling the data.

    My question is: to what extent are we taking advantage of the GAN framework? It seems that we are only training an LSTM predictor (when running under the aforementioned conditions).

    opened by tmralmeida 0
  • Inclusion of Social Anchor model as baseline?

    Hello!

    I just saw a release of a recent paper for Interpretable Social Anchors for Human Trajectory Forecasting in Crowds, and it seems like a very intuitive idea for modelling crowd behaviour.

    I was wondering if there will be an open-source version of the model available in the future, and whether it might be added to this repository's list of baselines.

    Thank you!

    opened by pedro-mgb 2
Releases: v1.0
Owner: VITA lab at EPFL (Visual Intelligence for Transportation)