Code artifacts for the submission "Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems"

Overview


Demos

Testbed

Real-world Environment

Virtual Environment (Unity)

Sim2Real and Real2Sim translations by CycleGAN

Self-driving cars

The same DNN model deployed on a real-world electric vehicle and in a virtual simulated world

Visual Odometry

Real-time XTE predictions in the real-world with visual odometry

Corruptions (left) and Adversarial Examples (right)

Requisites

Python 3, git (64-bit), miniconda 3.7 (64-bit). To modify the simulator (optional): Unity 2019.3.0f1.

Software setup: We used PyCharm Professional 2020.3, a Python IDE by JetBrains, with Python 3.7.

Hardware setup: Training the DNN models (self-driving cars) and CycleGAN on our datasets is computationally expensive, so we recommend a machine with a GPU. In our setting, we ran the experiments on a machine equipped with an AMD Ryzen 5 processor, 8 GB of memory, and an NVIDIA GeForce RTX 2060 GPU with 6 GB of dedicated memory. Our trained models are available here.
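Before training, you can sanity-check that the deep-learning backend actually sees the GPU. A minimal check, assuming a TensorFlow 2.x backend (the exact backend version depends on your Donkey Car installation):

# Check GPU visibility (assumes TensorFlow 2.x).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)  # an empty list means CPU-only (much slower) training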

Donkey Car

We used Donkey Car v. 3.1.5. Make sure you correctly install the Donkey Car software, the necessary simulator software, and our simulator (macOS only):

* git clone https://github.com/autorope/donkeycar.git
* git checkout a91f88d
* conda env remove -n donkey
* conda env create -f install/envs/mac.yml
* conda activate donkey
* pip install -e .\[pc\]
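To verify the installation, check from within the activated donkey environment that the package imports and reports the expected version (a quick sanity check; hedged, not part of the original instructions):

# Verify the Donkey Car installation (run inside the activated donkey env).
import donkeycar as dk
print(dk.__version__)  # should print 3.1.5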

XTE Predictor for real-world driving images

Data for the XTE predictor must be collected manually (or our datasets can be used). Alternatively, data can be collected with the simulator by:

  1. Launching the Simulator.
  2. Selecting a log directory by clicking the 'log dir' button.
  3. Selecting a preferred resolution (default is 320x240).
  4. Launching the Sandbox Track scene.
  5. Driving the car with the 'Joystick/Keyboard w Rec' button.

This will generate a dataset of simulated images and the respective XTEs (labels). The simulated images must then be converted using a CycleGAN network trained for sim2real translation.
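For illustration, the conversion step could look like the following sketch, assuming a trained sim2real generator saved as a Keras model and pixels normalized to [-1, 1] (the paths and the model file name are hypothetical):

# Hedged sketch: apply a trained sim2real CycleGAN generator to simulated frames.
# Paths and the model file name are assumptions; adapt them to your setup.
from pathlib import Path

import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

generator = load_model("models/sim2real_generator.h5", compile=False)

for sim_path in sorted(Path("logs/sim_images").glob("*.jpg")):
    img = np.asarray(Image.open(sim_path).resize((320, 240))).astype("float32")
    img = img / 127.5 - 1.0                      # scale pixels to [-1, 1]
    fake_real = generator.predict(img[None])[0]  # add/remove the batch dimension
    out = ((fake_real + 1.0) * 127.5).clip(0, 255).astype("uint8")
    Image.fromarray(out).save(Path("logs/real_images") / sim_path.name)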

Once the dataset of converted images and XTEs is available, use the train_xte_predictor.py notebook to train the XTE predictor.
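The notebook trains a regression model from (converted) images to XTE values. A minimal sketch of such a regressor, assuming a CSV listing image paths and XTE labels (the CSV layout, column names, and file names are assumptions, not the repository's format):

# Hedged sketch: train a CNN regressor that predicts XTE from a camera frame.
import numpy as np
import pandas as pd
from PIL import Image
from tensorflow.keras import layers, models

df = pd.read_csv("logs/xte_labels.csv")  # assumed columns: image_path, xte
X = np.stack([np.asarray(Image.open(p).resize((320, 240))) for p in df["image_path"]])
X = X.astype("float32") / 255.0
y = df["xte"].values.astype("float32")

model = models.Sequential([
    layers.Input(shape=(240, 320, 3)),
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),                      # single continuous output: the XTE
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, validation_split=0.2)
model.save("models/xte_predictor.h5")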

Self-Driving Cars

Manual driving

Connection

Donkey Car needs a static IP so that we can connect to it over SSH:

ssh jetsonnano@<car-static-ip>
Pwd: <password>

Joystick Pairing

ds4drv &

PS4 controller: press and hold PS + Share; the controller starts blinking while pairing. If you get [error][bluetooth] Unable to connect to detected device: Failed to set operational mode: [Errno 104] Connection reset by peer, try again. When the LED turns green, the connection is established.

python manage.py drive --js  // does not open the web UI
python manage.py drive  // opens the web UI for setting a maximum throttle value

X -> E-Stop (negative acceleration)
Share -> switch driving mode [user, local, local_angle]

Enjoy!

Press and hold PS for 10 s to turn the controller off.

Training

python train.py --model <model_name>.h5 --tub <tub_path> --type <model_type> [--aug]

Testing (nominal conditions)

For autonomous driving:

python manage.py drive --model [models/<model_name>]

Go to http://10.21.13.35:8887/drive and select “Local Pilot (d)”.

Testing (corrupted conditions)

python manage.py drive --model [models/<model_name>] [--corruption=<corruption_type>] [--severity=<severity_level>] [--delay=<delay>]
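For intuition, the corruptions are severity-parameterized image transformations. A hedged sketch of one such corruption (Gaussian noise, in the style of common image-corruption benchmarks; the severity scale values are illustrative, not the repository's):

# Hedged sketch: a severity-parameterized corruption in the ImageNet-C style.
import numpy as np

def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    # Add zero-mean Gaussian noise; higher severity means a larger std (illustrative scale).
    scale = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    noisy = image.astype("float32") / 255.0 + np.random.normal(size=image.shape, scale=scale)
    return (noisy.clip(0.0, 1.0) * 255.0).astype("uint8")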

Testing (adversarial conditions)

python manage.py drive --model [models/<model_name>] [--useadversarial] [--advimage=<adv_image_path>] [--severity=<severity_level>] [--delay=<delay>]
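--advimage points to a precomputed adversarial image. As an illustration of how such images are commonly generated (FGSM; not necessarily the method used in this repository), a minimal sketch against a trained Keras driving model, assuming a single-tensor output, inputs scaled to [0, 1], and hypothetical file names:

# Hedged sketch: craft an FGSM-style adversarial example for a trained driving model.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("models/mypilot.h5", compile=False)  # hypothetical name

def fgsm(image: np.ndarray, target: np.ndarray, epsilon: float = 0.01) -> np.ndarray:
    # Perturb the image in the gradient-sign direction that increases the loss.
    x = tf.convert_to_tensor(image[None], dtype=tf.float32)
    t = tf.convert_to_tensor(target[None], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.reduce_mean(tf.square(model(x, training=False) - t))  # MSE to a target output
    grad = tape.gradient(loss, x)
    adv = tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)
    return adv.numpy()[0]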