PyTorch code for the paper "FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras"


FIERY

This is the PyTorch implementation for inference and training of the future prediction bird's-eye view network as described in:

FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras

Anthony Hu, Zak Murez, Nikhil Mohan, Sofía Dudas, Jeffrey Hawke, Vijay Badrinarayanan, Roberto Cipolla and Alex Kendall

preprint (2021)
Blog post

FIERY future prediction
Multimodal future predictions by our bird’s-eye view network.
Top two rows: RGB camera inputs. The predicted future trajectories and segmentations are projected to the ground plane in the images.
Bottom row: future instance prediction in bird’s-eye view in a 100m×100m capture size around the ego-vehicle, which is indicated by a black rectangle in the center.

If you find our work useful, please consider citing:

@inproceedings{fiery2021,
  title     = {{FIERY}: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras},
  author    = {Anthony Hu and Zak Murez and Nikhil Mohan and Sofía Dudas and
               Jeffrey Hawke and Vijay Badrinarayanan and Roberto Cipolla and Alex Kendall},
  booktitle = {arXiv preprint},
  year      = {2021}
}

Setup

  • Create the conda environment by running conda env create.

🏄 Prediction

🔥 Pre-trained models

All the configs are in the folder fiery/configs.

| Config | Dataset | Past context | Future horizon | BEV size | IoU | VPQ |
|--------|---------|--------------|----------------|----------|-----|-----|
| baseline.yml | NuScenes | 1.0s | 2.0s | 100m×100m (50cm res.) | 37.0 | 29.5 |
| lyft/baseline.yml | Lyft | 0.8s | 2.0s | 100m×100m (50cm res.) | 36.6 | 29.5 |
| literature/pon_setting.yml | NuScenes | 0.0s | 0.0s | 100m×50m (25cm res.) | 39.9 | - |
| literature/lift_splat_setting.yml | NuScenes | 0.0s | 0.0s | 100m×100m (50cm res.) | 36.7 | - |
| literature/fishing_setting.yml | NuScenes | 1.0s | 2.0s | 32.0m×19.2m (10cm res.) | 58.5 | - |
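
For reference, each BEV size maps directly to a grid shape: extent divided by resolution gives the number of cells per axis, so 100m at 50cm resolution is a 200×200 grid (matching the 1, 1, 200, 200 tensors that appear in the comments below). A quick sanity check in Python:

```python
def bev_grid_cells(extent_m: float, resolution_m: float) -> int:
    """Number of BEV grid cells along one axis."""
    return round(extent_m / resolution_m)

assert bev_grid_cells(100.0, 0.50) == 200  # baseline.yml and lift_splat_setting.yml
assert bev_grid_cells(100.0, 0.25) == 400  # pon_setting.yml
assert bev_grid_cells(32.0, 0.10) == 320   # fishing_setting.yml
```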

🏊 Training

To train the model from scratch on NuScenes:

  • Run python train.py --config fiery/configs/baseline.yml DATASET.DATAROOT ${NUSCENES_DATAROOT}
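
The trailing DATASET.DATAROOT ${NUSCENES_DATAROOT} pair overrides a config value from the command line. A minimal sketch of this key-value override pattern, assuming a yacs-style config system (the internals of fiery/config.py may differ):

```python
from yacs.config import CfgNode as CN

# Hypothetical miniature config mirroring the CLI override style of train.py.
cfg = CN()
cfg.DATASET = CN()
cfg.DATASET.DATAROOT = ''

# Equivalent of appending `DATASET.DATAROOT /path/to/nuscenes` to the command.
cfg.merge_from_list(['DATASET.DATAROOT', '/path/to/nuscenes'])
print(cfg.DATASET.DATAROOT)  # /path/to/nuscenes
```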

🙌 Credits

Big thanks to Piotr Sokólski (@pyetras) for the panoptic metric implementation, and to Hannes Liik (@hannesliik) for the awesome future trajectory visualisation on the ground plane.

Comments
  • loss < 0

    Hi, thanks for your great work. I have a question about the loss: when I trained the model on my own data, the loss was < 0 at epoch 0. Is this normal? Config: baseline.yml from the project.

    opened by YiJiangYue 6
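
    One plausible explanation (an assumption based on the "uncertainty" setting mentioned in a later comment, not a confirmed answer from the authors): when task losses are combined with learnt homoscedastic uncertainty, the total includes additive log-variance terms and can legitimately dip below zero:

    ```python
    import torch

    # Uncertainty-weighted multi-task loss in the Kendall et al. style:
    #   total = sum_i exp(-s_i) * L_i + s_i,  with s_i = log(sigma_i^2) learnt.
    # Once a task loss L_i becomes small, its learnt s_i can go negative
    # enough that the summand, and hence the total, drops below zero.
    task_loss = torch.tensor(0.01)  # a well-trained task loss
    s = torch.tensor(-2.0)          # learnt log-variance
    summand = torch.exp(-s) * task_loss + s
    print(summand)                  # ~ -1.93: negative, yet nothing is broken
    ```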
  • All losses become NaN after about 1 epoch of training

    Hi,

    Thank you for sharing this great work!

    When I ran the training code, all losses became NaN after about one epoch. The problem reproduces every time I run training (I have tested it three times).

    I followed the same anaconda environment setup and used the same hyper-parameters. (The only difference is that my PyTorch version is 1.7.1 while yours is 1.7.0; all other modules are the same as yours.)

    Please share your idea about this problem, if you have any. Thanks!

    opened by jwookyoo 6
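
    Not an official fix, but two standard PyTorch debugging steps for a run like this: anomaly detection to locate the op that first produces a NaN, and gradient clipping to rule out an exploding update:

    ```python
    import torch

    # Report the backward op that first produces a NaN/Inf (slow; debug only).
    torch.autograd.set_detect_anomaly(True)

    # Clipping gradients before the optimiser step rules out exploding updates.
    model = torch.nn.Linear(4, 1)  # stand-in for the real network
    loss = model(torch.randn(8, 4)).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    ```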
  • Question about the projection_to_birds_eye_view function

    Congratulations on your great work!

    I want to follow your work for future research, and I have some questions about the released code:

    In the fiery.py file, could you provide more details about the get_geometry function and the projection_to_birds_eye_view function? I'm confused about how they actually work, especially the code shown in the red box below.

    Thank you very much. Looking forward to your reply!

    opened by taylover-pei 5
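
    For readers with the same question, the broad idea in lift-splat-style models is as follows (a sketch of the general technique, not the repo's exact code): get_geometry assigns each camera feature a 3D point from its depth bin and the camera intrinsics/extrinsics, and the projection step bins those points into BEV cells, sum-pooling features that land in the same cell:

    ```python
    import torch

    # Sum-pool lifted camera features into a BEV grid with scatter-add.
    n_points, channels, bev = 1000, 8, 200
    feats = torch.randn(n_points, channels)           # lifted camera features
    cells = torch.randint(0, bev * bev, (n_points,))  # flattened BEV cell per point
    bev_feat = torch.zeros(bev * bev, channels)
    bev_feat.index_add_(0, cells, feats)              # features in one cell sum up
    bev_feat = bev_feat.view(bev, bev, channels).permute(2, 0, 1)  # [C, H, W]
    ```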
  • AttributeError: 'FigureCanvasTkAgg' object has no attribute 'renderer'

    Hello, I recently found your great work and wanted to try the "Visualisation" part locally to check the results, but after running python visualise.py --checkpoint ${CHECKPOINT_PATH} my terminal printed out the error quoted in the title.

    I tried to solve it by searching on Google, but that did not help. Could you help me if you know how to solve it? Many thanks.

    opened by Ianpengg 3
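
    A common workaround (an assumption about the cause, not a confirmed fix): the interactive Tk canvas only creates its renderer once the figure has been drawn, so either force a draw before touching the renderer, or switch to the non-interactive Agg backend before pyplot is imported:

    ```python
    import matplotlib
    matplotlib.use('Agg')             # avoid the Tk canvas entirely
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    fig.canvas.draw()                 # creates the renderer
    image = fig.canvas.buffer_rgba()  # safe only after a draw
    ```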
  • May I know where the checkpoint is getting saved?

    I don't see anywhere that the checkpoint is saved, and while resuming training I get the error "size mismatch for model.temporal_model.model.1.aggregation.0.conv.weight".

    opened by pranavi77 2
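
    For what it's worth (an assumption about the setup, since the repo may configure its own checkpoint callback): PyTorch Lightning's default behaviour is to write checkpoints under the trainer's root directory:

    ```python
    import pytorch_lightning as pl

    # Without a custom ModelCheckpoint callback, Lightning saves checkpoints to
    # <default_root_dir>/lightning_logs/version_<N>/checkpoints/.
    trainer = pl.Trainer(default_root_dir='/tmp/fiery_runs', max_epochs=1)
    print(trainer.default_root_dir)
    ```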
  • The result of FIERY Static

    If I want to reproduce the FIERY Static result of Setting 2 in Table I of your paper, should I use the config configs/single_timeframe.yml? When I train the network from scratch with this config, evaluate.py reports an IoU of 39.2, whereas the paper reports 35.8. Does another parameter need to be changed to make the network take one frame as input and output the segmentation of the present frame?

    opened by DFLyan 2
  • Question about panoptic_metrics function

    Hi,

    Would you be able to explain how the panoptic_metrics function works? (Code linked here: https://github.com/wayveai/fiery/blob/master/fiery/metrics.py#L137) In particular, I wonder why 'void' is included in 'combine_mask', and why 'background' is changed from 0 to 1.

    Also, the code under the comment "# hack for bincounting 2 arrays together" is hard to understand. (Code linked here: https://github.com/wayveai/fiery/blob/master/fiery/metrics.py#L168)

    Thank you!

    opened by jwookyoo 2
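
    On the "bincounting 2 arrays together" part, the underlying trick is a standard one (a sketch of the general idea, not the repo's exact code): encode two id arrays into a single integer array so that one torch.bincount call yields their joint histogram, i.e. the overlap matrix between predicted and ground-truth instances:

    ```python
    import torch

    n_pred, n_gt = 3, 5                          # hypothetical id counts
    pred_ids = torch.tensor([0, 1, 1, 2, 2, 2])  # per-pixel predicted instance id
    gt_ids = torch.tensor([0, 1, 2, 2, 2, 2])    # per-pixel ground-truth id

    combined = pred_ids * n_gt + gt_ids          # unique code per (pred, gt) pair
    counts = torch.bincount(combined, minlength=n_pred * n_gt)
    overlap = counts.reshape(n_pred, n_gt)       # overlap[i, j] = |pred i ∩ gt j|
    ```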
  • Is future_egopose necessary for inference?

    Thanks for your great work. I have a small question about the future ego-pose during inference. It seems tricky, because flow prediction is a module that runs before motion planning: in a real system, the prediction module has no way of knowing the future ego-pose. Yet the code seems to make the future ego-pose irreplaceable at inference time; when I set it to None, inference does not work.

    opened by synsin0 1
  • Clarification on evaluation

    Hello and many thanks for your work and sharing your code.

    I have a question regarding the way you compute your IoU metric and how it compares against Lift-splat.

    You use stat_scores_multiple_classes from PyTorch Lightning's metrics to compute the IoU. Correct me if I am wrong, but by default this method uses a threshold of 0.5.

    On the other hand, get_batch_iou in Lift-Splat uses a threshold of 0: pred = (preds > 0) (https://github.com/nv-tlabs/lift-splat-shoot/blob/master/src/tools.py).

    Wouldn't this have an impact on the evaluation results, and thus on how you compare to them?

    opened by F-Barto 1
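
    One detail worth noting (a general observation, not a claim about either codebase's internals): thresholding raw logits at 0 and thresholding sigmoid probabilities at 0.5 select exactly the same pixels, so the two conventions only diverge if the 0.5 threshold is applied to something other than sigmoid outputs:

    ```python
    import torch

    # sigmoid(x) > 0.5 if and only if x > 0, so both masks are identical.
    logits = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
    assert torch.equal(logits > 0, torch.sigmoid(logits) > 0.5)
    ```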
  • Question on deleting unused layers and self.downsample

    Hi, I couldn't understand how the self.downsample parameter is set (why 8 or 16, and how it affects upsampling_in_channels), or why delete_unused_layers is required in the encoder model. I searched the efficientnet-pytorch implementation and couldn't find any reference to this operation. Could you briefly explain why it is required? Thank you!

    opened by benhgm 1
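
    A sketch of the usual pattern behind such encoders (an assumption about the intent, not the repo's exact code): the backbone is only run up to the stride-16 stage, so deeper layers can be deleted to save memory, and the stride-16 map is upsampled and concatenated with the stride-8 map, which is what determines upsampling_in_channels:

    ```python
    import torch
    import torch.nn.functional as F

    # Typical efficientnet-b0 channel widths at strides 8 and 16.
    c8, c16 = 40, 112
    f8 = torch.randn(1, c8, 28, 28)    # downsample-8 feature map
    f16 = torch.randn(1, c16, 14, 14)  # downsample-16 feature map

    up = F.interpolate(f16, scale_factor=2, mode='bilinear', align_corners=False)
    fused = torch.cat([up, f8], dim=1)  # c16 + c8 -> upsampling_in_channels
    print(fused.shape)                  # torch.Size([1, 152, 28, 28])
    ```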
  • Question about instance_flow

    Thanks for your excellent work! I have some questions about instance_flow. The relevant code is:

    ```python
    warped_instance_seg = {}
    # t0,f01-->t1; t1,f12-->t2; t2,f23-->t3
    # t1,f10-->t0; t2,f21-->t1
    for t in range(1, seq_len):
        warped_inst_t = warp_features(
            instance_img[t].unsqueeze(0).unsqueeze(1).float(),  # 1, 1, 200, 200
            future_egomotion_inv[t - 1].unsqueeze(0),
            mode='nearest',
            spatial_extent=spatial_extent)
        warped_instance_seg[t] = warped_inst_t[0, 0]
    ```

    In your paper, "Finally, we obtain feature flow labels by comparing the position of the instance centers of gravity between two consecutive timesteps." I think the code converts t to t-1, not t-1 to t. How does this yield the feature flow? I'm really confused about it and look forward to your reply.

    opened by qfwysw 1
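
    To probe the warp direction empirically, here is a minimal self-contained BEV warp in the same spirit as warp_features (a sketch with hypothetical names, not the repo's implementation): apply a planar transform with affine_grid/grid_sample and check which frame the warped pixel lines up with:

    ```python
    import torch
    import torch.nn.functional as F

    def warp_bev(x, dx, dy, yaw):
        """Warp BEV map x [B, C, H, W] by a planar motion (dx, dy in
        normalised grid units, yaw in radians), nearest-neighbour sampling."""
        cos, sin = torch.cos(yaw), torch.sin(yaw)
        theta = torch.stack([
            torch.stack([cos, -sin, dx], dim=-1),
            torch.stack([sin, cos, dy], dim=-1),
        ], dim=-2)                                # [B, 2, 3] affine matrices
        grid = F.affine_grid(theta, size=list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, mode='nearest', align_corners=False)

    bev = torch.zeros(1, 1, 200, 200)
    bev[0, 0, 100, 100] = 1.0                     # a single instance pixel
    out = warp_bev(bev, torch.tensor([0.1]), torch.tensor([0.0]),
                   torch.tensor([0.0]))
    print(out[0, 0].nonzero())                    # where the pixel landed
    ```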
  • Bad results when evaluating pretrained checkpoints

    Hi, thanks for your great work. I followed the instructions in README.md to extract the nuScenes dataset and ran evaluate.py with the official pretrained checkpoint (https://github.com/wayveai/fiery/releases/download/v1.0/fiery.ckpt), but got the following output:

    iou 53.5 & 28.6
    pq  39.8 & 18.0
    sq  69.4 & 66.3
    rq  57.4 & 27.1

    Is there something wrong? These numbers seem much lower than the results you report.

    opened by huangzhengxiang 1
  • Dear author, the total loss value < 0, is it normal?

    Dear author, I just ran the code without any changes, and during training the total loss was < 0.

    It looks weird. Is it caused by the setting of the "uncertainty"? Is it normal? Really, thanks.

    opened by emilyemliyM 0
  • PyTorch Lightning hangs the machine and the run is finally killed

    Thanks for your great work. I'd like to reproduce the training process, but I encountered an error: with multi-GPU distributed training, the logging looks normal at first, but then the remote server hangs, the connection resets, and the process is finally killed. My remote server is a standalone machine with 4x RTX 3090. Are there any issues with PyTorch Lightning distributed training that may cause this failure?

    opened by synsin0 1