MetaDrive: Composing Diverse Scenarios for Generalizable Reinforcement Learning

Overview


MetaDrive is a driving simulator with the following key features:

  • Compositional: It supports generating infinite scenes with various road maps and traffic settings for research on generalizable RL.
  • Lightweight: It is easy to install and run, and can reach up to 300 FPS on a standard PC.
  • Realistic: Accurate physics simulation and multiple sensory inputs, including Lidar, RGB images, top-down semantic maps, and first-person view images.

🛠 Quick Start

Install MetaDrive via:

git clone https://github.com/decisionforce/metadrive.git
cd metadrive
pip install -e .

or

pip install metadrive-simulator

Note that the program is tested on both Linux and Windows. Some control and display issues on macOS remain to be resolved.

You can verify the installation of MetaDrive by running the test script:

# Go to a folder that has no sub-folder named metadrive
python -m metadrive.examples.profile_metadrive

Note: do not run the above command in a folder that contains a sub-folder named ./metadrive.

🚕 Examples

Run the following command to launch a simple driving scenario with auto-drive mode on. Press W, A, S, D to drive the vehicle manually.

python -m metadrive.examples.drive_in_single_agent_env

Run the following command to launch a safe-driving scenario, which includes more complex obstacles and yields a cost signal.

python -m metadrive.examples.drive_in_safe_metadrive_env
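In this setting, the environment reports a per-step cost in addition to the reward, which constrained RL methods can use as a training signal. A minimal sketch for reading it from a script, assuming SafeMetaDriveEnv is exported from metadrive.envs and that the cost appears under info["cost"] (verify both against your installed version):

from metadrive.envs import SafeMetaDriveEnv

# A hedged sketch: accumulate the per-step cost alongside the reward.
# The info["cost"] key name is an assumption based on the safe-RL setting.
env = SafeMetaDriveEnv()
obs = env.reset()
episode_cost = 0.0
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    episode_cost += info.get("cost", 0.0)
    if done:
        break
env.close()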

You can also launch a multi-agent scenario as follows:

python -m metadrive.examples.drive_in_multi_agent_env --env roundabout

or launch and render with the pygame frontend:

python -m metadrive.examples.drive_in_multi_agent_env --pygame_render --env roundabout

The env argument can be one of the following (a Python sketch for constructing these environments directly follows the list):

  • roundabout (default)
  • intersection
  • tollgate
  • bottleneck
  • parkinglot
  • pgmap
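
The same multi-agent scenarios can be constructed directly in Python. A minimal sketch, assuming the MultiAgentRoundaboutEnv class exported from metadrive.envs.marl_envs and the dict-style action space used by the example scripts (verify both against your installed version):

from metadrive.envs.marl_envs import MultiAgentRoundaboutEnv

# A hedged sketch: step a multi-agent roundabout with random actions.
env = MultiAgentRoundaboutEnv(config=dict(num_agents=4))
obs = env.reset()
for _ in range(1000):
    # each agent receives its own [steering, throttle/brake] action
    obs, reward, done, info = env.step(env.action_space.sample())
    if done["__all__"]:  # RLlib-style flag marking the episode end
        obs = env.reset()
env.close()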

Run the procedural generation example to create a new map:

python -m metadrive.examples.procedural_generation

Note that the above four scripts cannot be run on a headless machine. Please refer to the installation guide in the documentation for more information on running MetaDrive on a headless machine.

Run the following command to draw the generated maps from procedural generation:

python -m metadrive.examples.draw_maps

To build the RL environment in a Python script, you can simply follow the OpenAI Gym format:

import metadrive  # Import this package to register the environment!
import gym

env = gym.make("MetaDrive-v0", config=dict(use_render=True))
# env = metadrive.MetaDriveEnv(config=dict(environment_num=100))  # Or build environment from class
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # Use random policy
    env.render()
    if done:
        env.reset()
env.close()
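
Since maps are procedurally generated, generalization experiments typically train and evaluate on disjoint sets of scenarios. A minimal sketch using the environment_num and start_seed config keys from the documentation (exact key names may differ across versions):

import metadrive

# Train on 100 generated maps and hold out 50 maps with a disjoint seed range.
# Note: some versions enforce a single engine instance per process, so create
# and close one environment at a time.
train_env = metadrive.MetaDriveEnv(config=dict(environment_num=100, start_seed=0))
test_env = metadrive.MetaDriveEnv(config=dict(environment_num=50, start_seed=1000))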

🏫 Documentation

Find more details in the MetaDrive documentation.

📎 References

Work in progress!


Comments
  • Reproducibility Problem


    Hello,

I am trying to create custom scenarios. For that, I created a custom map similar to vis_a_small_town.py, and I am using the drive_in_multi_agent_env.py example. The environment is defined as follows:

    env = envs[env_cls_name](
        {
            "use_render": True,  # if not args.pygame_render else False,
            "manual_control": True,
            "crash_done": False,
            # "agent_policy": ManualControllableIDMPolicy,
            "num_agents": total_agent_number,
            # "prefer_track_agent": "agent3",
            "show_fps": True,
            "vehicle_config": {
                "lidar": {"num_others": total_agent_number},
                "show_lidar": False,
            },
            "target_vehicle_configs": {
                "agent{}".format(i): {
                    # "spawn_lateral": i * 2,
                    "spawn_longitude": i * 10,
                    # "spawn_lane_index": 0,
                    "vehicle_model": vehicle_model_list[i],
                    # "max_engine_force": 1,
                    "max_speed": 100,
                }
                for i in range(5)
            },
        }
    )
    

I have a list that consists of steering and throttle_brake values. The scenario consists of almost 1600 steps. I assign these values, indexed by step number, inside env.step:

    o, r, d, info = env.step({
        'agent0': [agent0.steering, agent0.pedal],
        'agent1': [agent1.steering, agent1.pedal],
        'agent2': [agent2.steering, agent2.pedal],
        'agent3': [agent3.steering, agent3.pedal],
        'agent4': [agent4.steering, agent4.pedal],
        'agent5': [agent5.steering, agent5.pedal],
        'agent6': [agent6.steering, agent6.pedal],
        'agent7': [agent7.steering, agent7.pedal],
        'agent8': [agent8.steering, agent8.pedal],
        'agent9': [agent9.steering, agent9.pedal],
    })
    

At the end of the command list, the loop counter is reset to zero and the commands are reused. The vehicles' locations, speeds, and headings are also reset to the initial values stored in a dictionary:

    def initialize_vehicles(env):

        global total_agent_number

        for i in range(total_agent_number):
            agent_str = "agent" + str(i)
            env.vehicles[agent_str].set_heading_theta(vehicles_initial_values[agent_str]['initial_heading_theta'])
            env.vehicles[agent_str].set_position([vehicles_initial_values[agent_str]['initial_position_x'], vehicles_initial_values[agent_str]['initial_position_y']])  # x, y from the first block of the map
            env.vehicles[agent_str].set_velocity(env.vehicles[agent_str].velocity_direction, vehicles_initial_values[agent_str]['initial_velocity'])
    

I want to reproduce the scenario to test my main algorithm. However, the problem is that the vehicles do not act the same way in every run of the scenario. I checked my commands for the vehicles using:

    env.vehicles["agent0"].steering,env.vehicles["agent0"].throttle_brake

The vehicle commands are the same for each repetition of the scenario.

When I don't use a loop and start MetaDrive from the terminal, I mostly see the same behavior from the cars; I tested almost 10 times. But in the loop case, the cars start to act differently after the first loop.

Reproducibility is a huge concern for me. Is it something about the physics engine? Are there any configuration parameters for the engine?
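
    For reference, a minimal sketch of the seeding route rather than teleporting vehicles back: environment_num and start_seed are documented config keys, while the force_seed argument to reset() is an assumption that depends on the installed MetaDrive version:

    import metadrive

    # A hedged sketch: pin the environment to a single scenario so every
    # episode restarts from an identical state.
    env = metadrive.MetaDriveEnv(config=dict(environment_num=1, start_seed=0))
    obs = env.reset(force_seed=0)  # restart the exact same scenario (if supported)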

    Thanks!!

    opened by BedirhanKeskin 9
  • Add more description for Waymo dataset


    What changes do you make in this PR?

    • Please describe why you create this PR

    Checklist

    • [ ] I have merged the latest main branch into current branch.
    • [ ] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 6
  • Constant FPS mode


    Is there a way to set a constant FPS mode? I tried env.engine.force_fps.toggle(); after that, env.engine.force_fps.fps shows 50, but the visualization shows 10-16 FPS in the top right corner. Is there any other way? Thanks in advance!

    opened by bbenja 6
  • What is neighbours_distance ?


    Hello, what is neighbours_distance, and how does it differ from the distance defined inside the Lidar config? They are inside MULTI_AGENT_METADRIVE_DEFAULT_CONFIG. I guess the unit is meters?

    opened by BedirhanKeskin 5
  • I encountered an error at an unknown location during runtime

    Hello,

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: wglGraphicsPipe (all display modules loaded.)

    opened by shushushulian 4
  • about panda3d


    When I run python -m metadrive.examples.drive_in_safe_metadrive_env with use_render=True, the output is: Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: glxGraphicsPipe (1 aux display modules not yet loaded.)

    opened by benicioolee 4
  • RGB Camera returns time-buffered grayscale images


    Hi, I am running a vanilla MetaDriveEnv with the rgb camera sensor.

    veh_config = dict(
        image_source="rgb_camera",
        rgb_camera=(IMG_DIM, IMG_DIM))
    

    I wanted to see the images the sensor was producing, so I was saving a few of them:

    from PIL import Image
    import numpy as np

    action = np.array([0, 0])
    obs, reward, done, info = env.step(action)
    img = Image.fromarray(np.array(obs['image'] * 256, np.uint8))
    img.save("test.jpeg")
    

    I noticed that the images all looked grayscale, and upon further inspection I found the following behavior. Suppose we want (N,N) images, which should be represented as arrays of size (N,N,3):

    Step 0: image[:,:,0] = zeros(N,N) ; image[:,:,1] = zeros(N,N) ; image[:,:,2] = zeros(N,N)
    Step 1: image[:,:,0] = zeros(N,N) ; image[:,:,1] = zeros(N,N) ; image[:,:,2] = m1
    Step 2: image[:,:,0] = zeros(N,N) ; image[:,:,1] = m1 ; image[:,:,2] = m2
    Step 3: image[:,:,0] = m1 ; image[:,:,1] = m2 ; image[:,:,2] = m3

    where m1, m2, m3 are (N,N) matrices.

    So the images are in reality displaying 3 different timesteps, with the color channels carrying the time information (R = t-2, G = t-1, B = t). That is why the images look mostly gray: the values are identical almost everywhere, except where there is movement (contours), where the lines look colorful and strange.
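
    A hedged workaround sketch: if the observation really stacks frames over the color channels as described (R = t-2, G = t-1, B = t), the newest grayscale frame is the last channel; the stacking order here is an assumption, not confirmed behavior:

    import numpy as np

    # Extract the most recent frame from the channel-stacked observation.
    img = np.asarray(obs["image"])  # shape (N, N, 3), values in [0, 1]
    latest_frame = img[:, :, -1]    # grayscale frame for the current step t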

    Apologies if this is expected behavior, and I just had some configuration incorrect.


    opened by EdAlexAguilar 4
  • Fix close and reset issue


    What changes do you make in this PR?

    • Please describe why you create this PR

    close #191

    Checklist

    • [x] I have merged the latest main branch into current branch.
    • [x] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 4
  • Selection of parameter in Rllib training for SAC agent in MetaDriveEnv and SafeMetaDriveEnv


    What are the proper buffer size / batch size / entropy coefficient for SAC to reproduce the results? I find it hard to reproduce the results in SafeMetaDriveEnv. In https://arxiv.org/pdf/2109.12674.pdf, does the reported success rate of SAC in Table 1 refer to the training success rate (and no collision, i.e. safe_rl_env=True)?

    opened by HenryLHH 4
  • Suggestion to run multiple instances in parallel?

    First of all, I would like to express my gratitude for this great project. I really like the feature-rich and lightweight nature of MetaDrive as a driving simulator for reinforcement learning.

    I am wondering what the recommended way is to run multiple MetaDrive instances in parallel (each with a single ego-car agent)? This seems to be a common use case for reinforcement learning training. I am currently running a batch of MetaDrive simulators, each wrapped in its own process, which does seem to carry the overhead of extra resources and communication/synchronization.
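
    A minimal sketch of this pattern using gym's built-in async vectorization (whether this composes cleanly with MetaDrive's engine is an assumption that depends on the installed gym/metadrive versions):

    import gym
    import metadrive  # noqa: F401  -- registers the MetaDrive-v0 environments

    def make_env(rank):
        # give each worker its own seed so scenarios do not overlap
        return lambda: gym.make("MetaDrive-v0", config=dict(start_seed=rank))

    vec_env = gym.vector.AsyncVectorEnv([make_env(i) for i in range(8)])
    obs = vec_env.reset()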

    Another problem I encountered when running multiple instances (say, 60 instances on a single machine), each in its own process, is that I get a lot of warnings like this:

    ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection terminated
    (the same warning is repeated many times)

    I guess this has something to do with audio. It happens even though I am running the TopDown environment, which should not involve sound. Did you see those warnings when running multiple instances as well?

    Also, is there a plan to have a vectorized batch version?

    Thanks!

    opened by breakds 4
  • Rendering FPS of example script is too low


    When I run python -m metadrive.examples.drive_in_single_agent_env, I find the FPS is about 4. I used the nvidia-smi command and found that my 2060 GPU was not being used.

    I also see some warnings like: WARNING:root: It seems you don't install our cython utilities yet! Please reinstall MetaDrive via .........

    opened by feidieufo 4
  • Errors when running metadrive.tests.scripts.generate_video_for_image_obs


    In the metadrive directory, I ran python -m metadrive.tests.scripts.generate_video_for_image_obs, and it reported the error below (the same error appears on every run):

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    :display(warning): Unable to load libpandagles2.so: No error.
    Known pipe types: (all display modules loaded.)
    Traceback (most recent call last):
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/queenie/Documents/metadrive/metadrive/tests/scripts/generate_video_for_image_obs.py", line 157, in <module>
        env.reset()
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 333, in reset
        self.lazy_init()  # it only works the first time when reset() is called to avoid the error when render
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 234, in lazy_init
        self.engine = initialize_engine(self.config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/engine_utils.py", line 11, in initialize_engine
        cls.singleton = cls(env_global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/base_engine.py", line 28, in __init__
        EngineCore.__init__(self, global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/core/engine_core.py", line 135, in __init__
        super(EngineCore, self).__init__(windowType=self.mode)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 339, in __init__
        self.openDefaultWindow(startDirect=False, props=props)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1021, in openDefaultWindow
        self.openMainWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1056, in openMainWindow
        self.openWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 766, in openWindow
        win = func()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 752, in <lambda>
        callbackWindowDict=callbackWindowDict)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 818, in _doOpenWindow
        self.makeDefaultPipe()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 648, in makeDefaultPipe
        "No graphics pipe is available!\n"
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/directnotify/Notifier.py", line 130, in error
        raise exception(errorString)
    Exception: No graphics pipe is available!
    Your Config.prc file must name at least one valid panda display library via load-display or aux-display.

    opened by YouSonicAI 2
  • opencv-python-headless in requirements seems to create conflict?


    It sometimes has a different version from opencv-python, which can cause issues. It is only used in top-down rendering. Can we change this dependency to opencv-python?

    opened by pengzhenghao 0
Releases: MetaDrive-0.2.6.0

Owner: DeciForce: Crossroads of Machine Perception and Autonomy
Research on Unifying Machine Perception and Autonomy in Zhou Group