Argoverse 2 API

Official GitHub repository for the Argoverse 2 family of datasets.

If you have any questions or run into any problems with either the data or API, please feel free to open a GitHub issue!

TL;DR

  • Install the API: pip install av2
  • Read the instructions in DOWNLOAD.md to download the data.

Getting Started

Setup

The easiest way to install the API is via pip by running the following command:

pip install av2
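
If the install succeeded, the package should import cleanly. A minimal smoke test (assuming nothing beyond a standard Python 3.8+ environment):

```python
# Minimal smoke test: import the package and print the installed version.
from importlib.metadata import version

import av2  # noqa: F401  (verifies the package imports cleanly)

print(version("av2"))  # e.g. 0.2.1
```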

Datasets

The Argoverse 2 family consists of four distinct datasets:

| Dataset Name                   | Scenarios | Camera Imagery | Lidar | Maps | Additional Information             |
|--------------------------------|-----------|----------------|-------|------|------------------------------------|
| Sensor                         | 1,000     | ✓              | ✓     | ✓    | Sensor Dataset README              |
| Lidar                          | 20,000    |                | ✓     | ✓    | Lidar Dataset README               |
| Motion Forecasting             | 250,000   |                |       | ✓    | Motion Forecasting Dataset README  |
| Map Change (Trust, but Verify) | 1,045     | ✓              | ✓     | ✓    | Map Change Dataset README          |

Please see DOWNLOAD.md for detailed instructions on how to download each dataset.
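
For orientation, downloads are served from a public S3 bucket via s5cmd. A sketch based on the commands that appear in the comments below (see DOWNLOAD.md for the exact bucket prefixes):

```bash
# Sketch: mirror the TbV dataset from the public bucket into a local directory.
# --no-sign-request avoids needing AWS credentials for the public bucket.
s5cmd --no-sign-request cp "s3://argoai-argoverse/av2/tbv/*" target-directory/
```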

Map API

Please refer to the map README for additional details about the common format for vector and raster maps that we employ across all AV2 datasets.
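
As a quick illustration, loading a per-log map and querying lane-segment centerlines might look like the sketch below; the directory path is hypothetical, and the names (ArgoverseStaticMap, from_map_dir, get_lane_segment_centerline) are those that appear in the issues further down this page:

```python
from pathlib import Path

from av2.map.map_api import ArgoverseStaticMap

# Hypothetical path to one log's map directory.
log_map_dirpath = Path("av2/sensor/val/<log_id>/map")

avm = ArgoverseStaticMap.from_map_dir(log_map_dirpath, build_raster=False)

# Query the vector map, e.g. the 3D centerline of each lane segment.
for lane_segment in avm.get_scenario_lane_segments():
    centerline = avm.get_lane_segment_centerline(lane_segment.id)  # (N, 3) waypoints
```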

Compatibility Matrix

| Python Version | linux | macOS | windows |
|----------------|-------|-------|---------|
| 3.8            | ✓     | ✓     | ✓       |
| 3.9            | ✓     | ✓     | ✓       |
| 3.10           | ✓     | ✓     | ✓       |

Testing

All incoming pull requests are tested using nox as part of the CI process. This ensures that the latest version of the API is always stable on all supported platforms. You can run the full suite of automated checks and tests locally using the following command:

nox -r

Contributing

Have a cool feature you'd like to add? Found an unhandled corner case? The Argoverse team welcomes contributions from the open-source community. Please open a PR using the provided template!

Citing

Please use the following citation when referencing the Argoverse 2 Sensor, Lidar, or Motion Forecasting Datasets:

@INPROCEEDINGS { Argoverse2,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

Use the following citation when referencing the Argoverse 2 Map Change Dataset:

@INPROCEEDINGS { TrustButVerify,
  author = {John Lambert and James Hays},
  title = {Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

License

All code provided within this repository is released under the MIT license and bound by the Argoverse terms of use; please see LICENSE and NOTICE for additional details.

Comments
  • Downloading the tbv dataset.

    I'm trying to download the tbv dataset, and there seem to be two sets of instructions for doing so. Do these two methods produce the same result?

    One here:

    1. https://github.com/argoai/argoverse2-api/blob/main/DOWNLOAD.md

       s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory

    And another here:

    2. https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md

       SHARD_DIR={DESIRED PATH FOR TAR.GZ files}
       s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}

    When I try 1, I get the error: "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with the '-numworkers' parameter."

    When I try 2, I get the error: "Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."

    Method 1 does download about half of the dataset before failing, while method 2 doesn't initiate the download at all. I will probably continue with 1, though 2 is probably faster. I'm using Ubuntu 18.04.
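
    A possible workaround for the first error, as a sketch (-numworkers is the flag the error message itself suggests; ulimit raises the shell's open-file limit):

    ```bash
    # Raise the open-file limit for this shell, then retry with fewer workers.
    ulimit -n 4096
    s5cmd --no-sign-request --numworkers 16 cp "s3://argoai-argoverse/av2/tbv/*" target-directory
    ```

    The second error usually means s5cmd is trying to sign the requests; adding --no-sign-request (as in method 1) may resolve it.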
    opened by tom-bu 13
  • What is the format of the submission for 3D object detection competition?

    The Submission Guidelines say nothing about the submission format; could you give more details, or provide a sample submission? Thank you very much!

    opened by fangjin-cool 7
  • questions for visualization

    Dear all:

    When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the failing path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather', while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'. Is there a parameter in the program that needs to be adjusted, or is it something else? Hoping for your reply, and thanks so much.

    opened by tommygojerry 5
  • Argoverse 2.0 vs Argoverse 1.1 API

    Hi folks,

    I am trying to run my model on Argoverse 2.0; it was previously trained using 1.1 and its corresponding API. However, after installing and cloning the API to check the tutorials, dataloaders, etc., this API looks much smaller than Argoverse 1.1's, and the organization also seems different (e.g., where are the CSVs with the trajectories?). Where can I find all the required documentation?

    opened by Cram3r95 5
  • Lane label annotation method inquiry

    Hi, since there is no information about how the lane markings are labeled in the Argoverse 2 dataset, I wonder whether these lane-marking labels are annotated in the originally collected point cloud (labeling in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.

    Hope you can help me figure this out; thanks in advance :)

    question 
    opened by Mollylulu 4
  • Similarity argoverse 1 / argoverse 2

    Hey, the Argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of AV1 and AV2 in the respective cities, how similar would you consider them? In short: would you say training on Argoverse 2 covers all the relevant data to perform well on Argoverse 1? I would be particularly interested in the motion forecasting dataset. Looking forward to your answer! Thanks a lot!

    question 
    opened by odunkel 4
  • Motion forecasting: Focal agent not always observed over the full scenario length

    Hey everyone,

    I had a look at the motion forecasting dataset, and there seems to be an issue with the trajectories of the focal agent. According to the paper, the focal agent should always be observed over the full 11 seconds, which corresponds to 110 observations: "Within each scenario, we mark a single track as the 'focal agent'. Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2)"

    However, this is not the case for some scenarios (~3% of the scenarios). One example: Scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a trajectory with only 104 observations.
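
    One way to check such a scenario, as a sketch (assuming the av2 motion forecasting serialization API; the parquet path below is hypothetical):

    ```python
    from pathlib import Path

    from av2.datasets.motion_forecasting import scenario_serialization

    # Hypothetical path to the scenario file mentioned above.
    scenario_path = Path("val/0215552f-6951-47e5-8cf6-3d1351d28957.parquet")

    scenario = scenario_serialization.load_argoverse_scenario_parquet(scenario_path)
    focal = next(t for t in scenario.tracks if t.track_id == scenario.focal_track_id)
    print(len(focal.object_states))  # expected 110 (11 s at 10 Hz); reportedly 104 here
    ```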

    Can you reproduce my problem? Is this intended or can we expect this to be fixed in the near future?

    Looking forward to hearing from you!

    Best regards

    SchDevel

    bug 
    opened by SchDevel 4
  • How to evaluate 3D object detection on validation split?

    Thanks for your excellent work! I would like to know how to evaluate 3D object detection on the validation split. I notice there is a PR about this; when will the stable version be released? I am looking forward to it!

    opened by Abyssaledge 4
  • Is it possible to extract the route information?

    Hi, thank you for providing the outstanding dataset.

    I am particularly interested in the motion dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?

    opened by panda2020-sky 4
  • Error with generate_sensor_dataset_visualizations.py

    Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2, I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory. The test set has no labels, so why is it not filtered out in the code? What is the correct command to run this script? Thanks.

    question 
    opened by DuZzzs 3
  • Follow up for https://github.com/argoai/av2-api/issues/77

    Hi,

    Sorry for the delay. Thank you for your help! I went through the dataset API and was able to isolate individual point clouds.

    [Image: Joint (L), Top (R)]

    [Image: Top (L), Bottom (R)]

    Does this look sensible? Here is the code snippet:

    ```python
    import numpy as np
    from pathlib import Path

    # SensorDataloader comes from the av2 sensor dataset API;
    # `settings` is my own config object holding the dataset root.
    from av2.datasets.sensor.sensor_dataloader import SensorDataloader

    dataset = SensorDataloader(Path(settings.argoverse_dataset_root),
                               with_annotations=True, with_cache=True)
    for index, data_frame in enumerate(dataset):
        sweep = data_frame.sweep                        # lidar sweep
        annotations = data_frame.annotations            # cuboid annotations (boxes)
        pose = data_frame.timestamp_city_SE3_ego_dict   # ego poses

        # Get the lidar: both sensors combined into a single point cloud.
        pcl_joint = sweep.xyz

        # Append reflectances and laser numbers as extra columns.
        pcl_joint = np.hstack([pcl_joint,
                               np.expand_dims(sweep.intensity, -1),
                               np.expand_dims(sweep.laser_number, -1)])

        # Laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar.
        r_up = np.where(pcl_joint[:, -1] < 32)
        pcl_up = pcl_joint[r_up]      # top lidar point cloud

        r_down = np.where(pcl_joint[:, -1] >= 32)
        pcl_down = pcl_joint[r_down]  # bottom lidar point cloud
    ```

    Please let me know if this is the correct way, just to be sure.

    Best regards,
    Sambit

    opened by SM1991CODES 2
  • centerline of static map

    I noticed that there are two ways to get the centerline of a lane_segment. First, we can read the data from the raw map file. Second, we can use the ArgoverseStaticMap method get_lane_segment_centerline. I want to know the difference between these two methods.
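
    Roughly, the two approaches look like the sketch below (the raw-file layout and the lane id are hypothetical; get_lane_segment_centerline is the method named above):

    ```python
    import json
    from pathlib import Path

    from av2.map.map_api import ArgoverseStaticMap

    log_map_dirpath = Path("path/to/log/map")  # hypothetical

    # Method 1: read the geometry straight out of the raw vector map JSON.
    vector_map_path = next(log_map_dirpath.glob("log_map_archive_*.json"))
    vector_data = json.loads(vector_map_path.read_text())
    raw_lanes = vector_data["lane_segments"]  # per-lane geometry exactly as stored on disk

    # Method 2: go through the map API, which returns a processed centerline.
    avm = ArgoverseStaticMap.from_map_dir(log_map_dirpath, build_raster=False)
    centerline = avm.get_lane_segment_centerline(lane_segment_id=1234)  # hypothetical id
    ```

    If the raw file stores only lane boundaries, the API method would be inferring and resampling the centerline from them, which could account for any difference between the two.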

    opened by ChevinB 0
  • Interestingness score

    Hey,

    You explain the interestingness score only roughly in your paper and the supplementary material. Are you planning to share more details about the process of selecting interesting scenarios, or is this functionality confidential?

    I am looking forward to your answer.

    Best regards

    opened by odunkel 0
  • Path issue in from_map_dir function of map_api

    The vector_data_json_path variable seems to resolve to the wrong path (when a relative path is passed, as in the Map_Tutorial notebook).

    Setting it to just vector_data_fname works for me, instead of log_map_dirpath / vector_data_fname.
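
    A guess at what is happening, as a hypothetical sketch (not the actual map_api code):

    ```python
    from pathlib import Path

    log_map_dirpath = Path("some/relative/log/map")  # relative path, as in the tutorial

    # If vector_data_fname is produced by globbing inside log_map_dirpath, it already
    # carries the directory prefix, so re-joining duplicates it:
    vector_data_fname = next(log_map_dirpath.glob("log_map_archive_*.json"))
    broken = log_map_dirpath / vector_data_fname  # some/relative/log/map/some/relative/...
    works = vector_data_fname                     # already the full relative path
    ```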

    Could you check it out, please?

    Thanks!

    opened by Shivanshu17 1
  • Pytorch Dataloader.

    PR Summary

    Testing

    In order to ensure this PR works as intended, it is:

    • [ ] unit tested.
    • [ ] other or not applicable (additional detail/rationale required)

    Compliance with Standards

    As the author, I certify that this PR conforms to the following standards:

    • [ ] Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
    • [ ] A well-written summary explains what was done and why it was done.
    • [ ] The PR is adequately tested and the testing details and links to external results are included.
    opened by benjaminrwilson 0
  • timestamps_ns in motion forecast dataset

    I tried to convert timestamps_ns assuming Unix epoch format, and all scenarios seem to map to dates in the year 1980. Has there been any deliberate anonymization of the timestamps, or am I doing the conversion wrong?

    Thanks in advance!
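
    For reference, the conversion itself is straightforward; a sketch, using a lidar timestamp that appears elsewhere on this page as the example value:

    ```python
    from datetime import datetime, timezone

    ts_ns = 315967919259399000  # example timestamp from this page
    print(datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc))
    # -> a date in early January 1980, matching the observation above
    ```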

    opened by sun1612 0
Releases (v0.2.1)
  • v0.2.1 (Jun 2, 2022)

    What's Changed

    • Add UNKNOWN lane mark type to map schema by @wqi in https://github.com/argoai/av2-api/pull/58
    • Competition announcements by @benjaminrwilson in https://github.com/argoai/av2-api/pull/57
    • Add additional 3D object detection submission details. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/63

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.2.0...v0.2.1

  • v0.2.0 (May 5, 2022)

    • Evaluation code is now available for 3D object detection and motion forecasting.

    What's Changed

    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/6
    • Add gifs to TbV readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/10
    • Fix broken link to Argoverse website in motion forecasting readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/13
    • add support for rendering LaneMarkType.SOLID_DASH_WHITE in EgoViewMapRenderer by @senselessdev1 in https://github.com/argoai/av2-api/pull/9
    • Replace TbV gifs to illustrate map changes more clearly by @senselessdev1 in https://github.com/argoai/av2-api/pull/15
    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/16
    • Fix typo in Sensor Dataset readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/19
    • Improve TbV Download Instructions by @senselessdev1 in https://github.com/argoai/av2-api/pull/14
    • Add city distribution for logs to Sensor Dataset Readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/22
    • Clarify which datasets certain tutorials apply to by @senselessdev1 in https://github.com/argoai/av2-api/pull/24
    • Add get_city_name() method to dataloader, to fetch name of city where a log was captured. by @senselessdev1 in https://github.com/argoai/av2-api/pull/27
    • Small formatting fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/33
    • Fix map tutorial issues. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/35
    • Update ci.yml by @benjaminrwilson in https://github.com/argoai/av2-api/pull/5
    • 3D Object Detection Evaluation by @benjaminrwilson in https://github.com/argoai/av2-api/pull/31
    • Add converter between AV2 city coordinate systems, and WGS84 and UTM by @senselessdev1 in https://github.com/argoai/av2-api/pull/28
    • Add get_ordered_log_lidar_timestamps() method to Sensor / TbV dataloa… by @senselessdev1 in https://github.com/argoai/av2-api/pull/29
    • Add TbV log clustering by scene (i.e. spatial location). by @senselessdev1 in https://github.com/argoai/av2-api/pull/26
    • 3D Detection Eval docstrings + typing fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/40
    • Add integration test to verify that TbV download was successful by @senselessdev1 in https://github.com/argoai/av2-api/pull/23
    • Sensor Dataset Visualization by @benjaminrwilson in https://github.com/argoai/av2-api/pull/39
    • Add dataclass for AV2 MF challenge submissions by @wqi in https://github.com/argoai/av2-api/pull/41
    • Add Brier metrics to motion forecasting evaluation module by @wqi in https://github.com/argoai/av2-api/pull/44
    • Detection evaluation tweaks by @benjaminrwilson in https://github.com/argoai/av2-api/pull/48
    • v0.1.0 -> v0.1.1 by @benjaminrwilson in https://github.com/argoai/av2-api/pull/49
    • Update setup.cfg to add pypi metadata by @wqi in https://github.com/argoai/av2-api/pull/51
    • Update init.py by @benjaminrwilson in https://github.com/argoai/av2-api/pull/52

    New Contributors

    • @benjaminrwilson made their first contribution in https://github.com/argoai/av2-api/pull/6
    • @senselessdev1 made their first contribution in https://github.com/argoai/av2-api/pull/10
    • @wqi made their first contribution in https://github.com/argoai/av2-api/pull/41

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.1.0...v0.2.0

  • v0.1.0 (Mar 17, 2022)
