CRISCE: Automatically Generating Critical Driving Scenarios From Car Accident Sketches

Overview


This document describes how to install and use CRISCE (CRItical SCEnario), the tool developed by Jasim Ahmed and others for automatically generating simulations of car crashes from accident sketches using BeamNG.tech.

Repo Organization

.
├── Datasets
│   ├── CIREN
│   └── SYNTH
├── LICENSE
├── PythonRobotics
├── README.md
├── beamngpy-1.18-py2.py3-none-any.whl
├── crisce
└── requirements.txt

The crisce folder contains the source code of the tool. The Datasets folder contains the sample accident sketches compatible with the tool: CIREN contains sketches from NHTSA; SYNTH contains synthetic sketches that we manually created from pre-existing car crash simulations in BeamNG.tech.

beamngpy-1.18-py2.py3-none-any.whl is the wheel file necessary to manually install beamngpy, the Python API to BeamNG.tech. TL;DR: the package available on PyPI is broken.

requirements.txt lists the Python packages needed to install the tool, in the usual format accepted by pip.
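For reference, pip's requirements format is one package per line, optionally pinned to a specific version. The entries below only illustrate the format; they are not the actual contents of the repository's requirements.txt:

# illustrative entries only -- see requirements.txt in the repo for the real dependencies
some-package==1.2.3
another-package>=2.0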

Dependencies

CRISCE is a tool written in Python; therefore, it requires a working Python installation. Specifically, we tested CRISCE with Python 3.7.10.

CRISCE uses the BeamNG.tech simulator to run the driving simulations. Therefore, BeamNG.tech must be installed as well.

Note: the version of the simulator used by CRISCE is BeamNG.research v1.7.0.1.

BeamNG.tech is free for research use and can be requested from BeamNG GmbH by submitting the form at the following link: https://register.beamng.tech/

NOTE: BeamNG.tech currently supports only Windows, hence CRISCE cannot be used on other platforms (e.g., Linux/Mac) unless you resort to full system virtualization. For example, we tested CRISCE using the commercial tool Parallels Desktop running on a MacBook Pro. Performance will not be the same, but at least it gets the job done.

Installation

Installing BeamNG.tech

After successfully registering at https://register.beamng.tech/, you should receive an email with instructions on how to access the software and a registration key (i.e., tech.key).

Please download BeamNG.research v1.7.0.1 and unzip it somewhere on your system.

ATTENTION: BeamNG cannot handle paths containing spaces and special characters, so please install it in a location that meets these requirements. We suggest something simple, like C:\BeamNG.research_v1.7.0.1.

We refer to this folder as <BEAMNG_HOME>.

Store a copy of the tech.key file in a folder somewhere on your system and rename this copy to research.key. BeamNG uses this folder to cache content and simulation data.

ATTENTION: BeamNG cannot handle paths containing spaces and special characters, so please store the registration file in a location that meets these requirements. We suggest something simple, like C:\BeamNG_user.

We refer to this folder as <BEAMNG_USER>.
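As a concrete PowerShell sketch of these two steps, assuming the simulator archive was downloaded to your Downloads folder as BeamNG.research_v1.7.0.1.zip together with the tech.key file (adjust the file names and paths to what you actually received):

# Unzip the simulator into a path without spaces or special characters (this becomes <BEAMNG_HOME>)
Expand-Archive -Path $HOME\Downloads\BeamNG.research_v1.7.0.1.zip -DestinationPath C:\BeamNG.research_v1.7.0.1

# Create the user folder (this becomes <BEAMNG_USER>) and store the key there as research.key
New-Item -ItemType Directory -Path C:\BeamNG_user
Copy-Item $HOME\Downloads\tech.key C:\BeamNG_user\research.key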

Installing CRISCE and its Dependencies

We exemplify the installation and usage of CRISCE using Windows PowerShell; you can use other systems (e.g., PyCharm), but in that case you need to adapt the commands below.

Before starting, check that you have installed the right version of Python:

python.exe -V
    Python 3.7.10

To install CRISCE, we suggest creating a virtual environment using venv. You can also use conda or similar tools, but in that case you need to adapt the commands below to fit your setup.

Move to CRISCE's root folder (i.e., where this file is) and create a new virtual environment:

python.exe -m venv .venv

Activate the virtual environment and upgrade pip, setuptools, and wheel:

.venv\Scripts\activate
py.exe -m pip install --upgrade pip
pip install setuptools wheel --upgrade

Install the python dependencies listed in the requirements.txt:

pip install -r requirements.txt

At this point, we need to install beamngpy from the provided wheel file:

pip install beamngpy-1.18-py2.py3-none-any.whl
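As an optional sanity check (not part of the original instructions), you can verify that the package imports correctly inside the active virtual environment:

py.exe -c "import beamngpy; print('beamngpy OK')"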

Finally, we need to make sure the code of PythonRobotics is there:

git submodule init
git submodule update
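If you want to double-check that the PythonRobotics submodule was actually fetched, you can inspect its status (this step is optional):

git submodule status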

At this point, you should be ready to go.

Confirm that CRISCE is installed using the following command from the root folder of this repo:

py.exe crisce/app.py --help

This command must produce an output similar to:

Usage: app.py [OPTIONS] COMMAND [ARGS]...

Options:
  --log-to PATH  Location of the log file. If not specified logs appear on the
                 console
  --debug        Activate debugging (results in more logging)  [default:
                 (Disabled)]
  --help         Show this message and exit.

Commands:
  generate

Running CRISCE

The current release of CRISCE allows you to generate a BeamNG simulation of a car crash from a single sketch using the generate command. This command accepts several parameters, which you can list by invoking:

py.exe crisce/app.py generate --help

Usage: app.py generate [OPTIONS]

Options:
  --accident-sketch PATH        Input accident sketch for generating the
                                simulation  [required]
  --dataset-name [CIREN|SYNTH]  Name of the dataset the accident comes from.
                                [required]
  --output-to PATH              Folder to store outputs. It will created if
                                not present. If omitted we use the accident
                                folder.
  --beamng-home PATH            Home folder of the BeamNG.research simulator
                                [required]
  --beamng-user PATH            User folder of the BeamNG.research simulator
                                [required]
  --help                        Show this message and exit.

The following commands show how you can generate a simulation of a real car crash (i.e., from a sketch in the CIREN dataset) and of a simulated crash (i.e., from a sketch in the SYNTH dataset). The difference between the two datasets is that for sketches of real car crashes we have information about the expected impact, while for synthetic sketches this information is missing.

For example, to create a simulation from the following sketch (i.e., CIREN-99817):

[Accident sketch CIREN-99817]

you can run the following command (after replacing <BEAMNG_HOME> and <BEAMNG_USER> with the appropriate values):

py.exe crisce/app.py generate --accident-sketch .\Datasets\CIREN\99817\ --dataset-name CIREN `
    --beamng-home <BEAMNG_HOME> --beamng-user <BEAMNG_USER>
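For instance, assuming the simulator and the registration key were placed in the suggested locations (C:\BeamNG.research_v1.7.0.1 and C:\BeamNG_user), the invocation would look like this:

py.exe crisce/app.py generate --accident-sketch .\Datasets\CIREN\99817\ --dataset-name CIREN `
    --beamng-home C:\BeamNG.research_v1.7.0.1 --beamng-user C:\BeamNG_user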

To create a simulation from the following synthetic sketch (i.e., fourway_1):

[Accident sketch fourway_1]

you can run the following command:

py.exe crisce/app.py generate --accident-sketch ./Datasets/SYNTH/fourway_1 --dataset-name SYNTH `
    --beamng-home <BEAMNG_HOME> --beamng-user <BEAMNG_USER>

Reporting

The generate command produces a number of intermediate outputs that show the progress of the computation, and it prints a measure of the accuracy of the generated simulation on the console:

Quality_of_environment = 33.0, quality_of_crash = 17.0, quality_of_trajecory = 19.009199327937655
Crash Simulation Accuracy =  69.00919932793765 %

Note that the overall accuracy is the sum of the three quality components (33.0 + 17.0 + 19.01 ≈ 69.01%). The intermediate results, instead, are stored under the sketch folder (in an output subfolder) or in the folder configured via the --output-to parameter.
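For example, to collect all the intermediate results in a dedicated folder instead of the sketch folder, you can add the --output-to option (the destination path below is only an illustration):

py.exe crisce/app.py generate --accident-sketch .\Datasets\CIREN\99817\ --dataset-name CIREN `
    --beamng-home <BEAMNG_HOME> --beamng-user <BEAMNG_USER> --output-to .\output\CIREN-99817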

For the CIREN-99817 case, for example, these are the intermediate results produced by CRISCE:

output/
├── car
│   ├── 0_mask_result_b.jpg
│   ├── 0_mask_result_r.jpg
│   ├── 1_blend_masks_r_b.jpg
│   ├── 1_blend_masks_res.jpg
│   ├── 2_opening_morph.jpg
│   ├── 3_AABB_OBB.jpg
│   ├── 4_crash_point_visualization.jpg
│   ├── 5_triangle_extraction.jpg
│   ├── 6_angles_for_vehicles.jpg
│   ├── 7_sequence_of_movements.jpg
│   ├── 8_twelve_point_model_sides.jpg
│   └── 9_crash_point_on_vehicles.jpg
├── kinematics
│   ├── 0_distorted_control_points.jpg
│   ├── 1_distorted_control_points.jpg
│   ├── 2_distorted_trajectory.jpg
│   ├── 2_original_trajectory.jpg
│   ├── 3_distorted_trajectory.jpg
│   └── 3_original_trajectory.jpg
├── road
│   ├── 0_gray_image.jpg
│   ├── 1_blur_image.jpg
│   ├── 2_threshold_image.jpg
│   ├── 3_dilate_image.jpg
│   ├── 4_erode_image.jpg
│   ├── 5_Contour_Viz_image.jpg
│   ├── 6_midpoints_of_lane.jpg
│   ├── 7_distortion_mapping.jpg
│   └── 8_final_result.jpg
├── simulation
│   ├── 0_sim_plot_road.jpg
│   ├── 1_sim_initial_pos_dir.jpg
│   ├── 2_sim_bbox_traj.jpg
│   ├── 3_crisce_beamng_efficiency.jpg
│   ├── 3_crisce_efficiency.jpg
│   └── 4_trace_veh_BBOX.jpg
└── summary.json
Owner

Chair of Software Engineering II, Uni Passau