A modular, open, and non-proprietary toolkit that harnesses deep learning to provide core robotic functionalities


About

The aim of the OpenDR project is to develop a modular, open, and non-proprietary toolkit for core robotic functionalities, harnessing deep learning to provide advanced perception and cognition capabilities and thereby meeting the general requirements of robotics applications in the areas of healthcare, agri-food, and agile production. OpenDR provides the means to link robotics applications both to software libraries (deep learning frameworks, e.g., PyTorch and TensorFlow) and to the operating environment (ROS). OpenDR focuses on the AI and Cognition core technology in order to provide tools that make robotic systems cognitive, giving them the ability to:

  1. interact with people and environments by developing deep learning methods for human-centric and environment-active perception and cognition,
  2. learn and categorize by developing deep learning tools for training and inference in common robotics settings, and
  3. make decisions and derive knowledge by developing deep learning tools for cognitive robot action and decision making.

As a result, the OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics, where sensing and actuation are closely coupled with cognitive systems, thus contributing to two further core technologies beyond AI and Cognition. OpenDR aims to develop, train, deploy, and evaluate deep learning models that improve the technical capabilities of the core technologies beyond the current state of the art.

Installing OpenDR Toolkit

OpenDR can be installed in the following ways:

  1. By cloning this repository (CPU/GPU support)
  2. Using pip (CPU only)
  3. Using Docker (CPU/GPU support)

You can find detailed installation instructions in the documentation.

Using OpenDR toolkit

OpenDR provides an intuitive and easy-to-use Python interface, a C API for performance-critical applications, a wealth of usage examples and supporting tools, as well as ready-to-use ROS nodes. OpenDR is built to support the Webots Open Source Robot Simulator, and it closely follows industry standards such as the ONNX model format and the OpenAI Gym interface. You can find detailed documentation in the OpenDR wiki, as well as in the tools index.
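
As a quick illustration of the Python interface, the following sketch runs single-image pose estimation. It is a minimal sketch, assuming the LightweightOpenPoseLearner tool and the download/load conventions described in the documentation; exact names and arguments may differ between versions.

    from opendr.engine.data import Image
    from opendr.perception.pose_estimation import LightweightOpenPoseLearner

    # Create the learner and fetch pretrained weights (model name assumed).
    learner = LightweightOpenPoseLearner(device="cpu")
    learner.download(path=".", verbose=True)
    learner.load("openpose_default")

    # Run inference on a single image and print the detected poses.
    img = Image.open("input.jpg")
    poses = learner.infer(img)
    for pose in poses:
        print(pose)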

Roadmap

OpenDR has the following roadmap:

  • v1.0 (2021): Baseline deep learning tools for core robotic functionalities
  • v2.0 (2022): Optimized lightweight and high-resolution deep learning tools for robotics
  • v3.0 (2023): Active perception-enabled deep learning tools for improved robotic perception

How to contribute

Please follow the instructions provided in the wiki.

Acknowledgments

The OpenDR project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871449.

Comments
  • Install scripts, bdist_wheel, x86 docker and instructions


    This PR adds the following:

    • [x] Scripts to install the OpenDR toolkit on clean Ubuntu 20.04 systems (even when running from a minimal image, e.g., Docker ones)
    • [x] setup.py to correctly install OpenDR package
    • [x] Corrected scripts to activate OpenDR venv environment
    • [x] Scripts to create bdist wheels for cpu only usage
    • [x] Dockerfile for assembling cpu-only OpenDR inference
    • [x] Readme listing different installation options
    • [x] Update wiki to reflect the changes made in this PR

    This PR also adds the missing __init__.py files in the toolkit and a __version__ variable, following typical Python usage.
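
    As a small illustration of the __version__ convention this PR adopts (a sketch; only the standard top-level attribute is assumed):

        import opendr

        # The __version__ attribute exposes the installed package version.
        print(opendr.__version__)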

    test sources test tools 
    opened by passalis 38
  • Upgrade to CUDA 11.2 and improve GPU support


    This PR upgrades the toolkit to CUDA 11.2. This also ensures that the toolkit will be compatible with NVIDIA 30xx GPUs. For PyTorch we are using precompiled packages that bundle CUDA 11.1; this does not affect the system-wide CUDA version.

    This PR also improves testing on GPUs and fixes some documentation issues regarding the use of the OPENDR_DEVICE variable.
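
    To verify the distinction between the bundled and system-wide CUDA versions mentioned above, a minimal check using standard PyTorch calls (nothing OpenDR-specific is assumed):

        import torch

        # CUDA version compiled into the installed PyTorch wheel (e.g. '11.1').
        print(torch.version.cuda)
        # Whether a usable GPU is visible to this build.
        print(torch.cuda.is_available())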

    Tasks to be performed

    • [x] Change Dockerfile to use CUDA 11.2
    • [x] Update PyTorch and mxnet
    • [x] Update detectron
    • [x] Update DCNv2
    • [x] Make sure that pip installation does not need any kind of update
    • [x] Update the documentation if needed

    We need to restore the GitHub branch in the Dockerfile prior to merging.

    test sources test tools test release 
    opened by passalis 33
  • Synthetic multi view facial generator


    This is a PR for the synthetic multi-view facial image generator, which will be a standalone OpenDR tool that generates data (facial images) for procedures such as training.

    test sources test tools 
    opened by ekakalet 33
  • Mobile rl


    Hi everyone,

    This is an initial version with our approach on mobile manipulation based on our paper (https://arxiv.org/abs/2101.05325). It's not completely ready to be merged yet, but should already include all the main parts.

    • It implements the LearnerRL interface (a sketch of this interface appears after the questions below)
    • Formatted according to PEP-8, .clang-format
    • Most unnecessary functionality should already be removed
    • Includes a first version of the documentation, including examples to train and to evaluate provided checkpoints

    But there are also a few questions from my side, mainly because this is a project that relies on a Python 3-based interface for the user and an environment implemented in C++, which additionally draws on functionality from ROS (mainly MoveIt).

    • At the moment I am keeping the C++ source and header files within the module, combined with its own CMakeLists.txt (i.e., within src/control/mobile_manipulation/). Is that appropriate?
    • How should I define ROS and C++ dependencies? The module provides environments for several robotic platforms (PR2, Tiago, HSR). This means that to compile or run the module the user needs (i) a ROS installation (developed and tested for Melodic), (ii) a separate catkin_ws for each robot, and (iii) to launch a launch file before running the Python scripts. Because of this, I feel it makes sense not to require every user of other OpenDR modules to install these, but rather to specify them as dependencies of this module. Some of the robot-specific dependencies should furthermore be compiled in separate catkin workspaces. The model checkpoints are tiny (3x 3 MB) and currently located directly in the git repo. Is that OK for such small files?
    • Licenses: this module includes slightly modified launch files from openly available ROS packages (~/robots_worlds/[pr2/hsr/tiago]). Do these have to be marked or treated specially somehow?
    • This was developed as part of WP5.2 Deep Navigation. As there is no navigation folder and this approach can be seen as a combination of navigation and control, I have located it within control for now. Let me know in case I should move it elsewhere.

    Any help on the above would be much appreciated. Other comments on what is already here are welcome as well.
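
    For readers unfamiliar with the interface referenced above, here is a minimal sketch of an RL learner skeleton. The base-class import path and method set are assumed from the toolkit's learner conventions, and the class name is hypothetical:

        from opendr.engine.learners import LearnerRL

        class MobileManipulationLearner(LearnerRL):  # hypothetical name
            def fit(self, env):
                # Train the policy against the provided environment.
                raise NotImplementedError

            def eval(self, env):
                # Evaluate the current policy and return metrics.
                raise NotImplementedError

            def infer(self, observation):
                # Map a single observation to an action.
                raise NotImplementedError

            def save(self, path):
                # Persist checkpoints and metadata to disk.
                raise NotImplementedError

            def load(self, path):
                # Restore a previously saved checkpoint.
                raise NotImplementedError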

    Some remaining to-dos for myself to remember:

    • update checkpoints
    • test that gazebo evaluation works
    • test that examples in readme work
    test sources test tools 
    opened by dHonerkamp 31
  • Skeleton based action recognition


    This PR adds two learners (which train and evaluate a baseline model and three proposed models) for skeleton-based human action recognition.

    • A new data type named SkeletonSequence is added to engine.data, and a new target class named ActionCategory is added to engine.target.

    • The learners' implementation follows the provided template, and sufficient tests are provided for all the functions that will be directly called by the user, including fit(), eval(), infer(), save(), load(), optimize(), multi_stream_eval(), and network_builder().
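
    As a rough usage sketch of such a learner, following the fit/eval pattern listed above (the learner name, constructor arguments, and dataset paths here are assumptions for illustration):

        from opendr.engine.datasets import ExternalDataset
        from opendr.perception.skeleton_based_action_recognition import SpatioTemporalGCNLearner

        # Hypothetical configuration; real constructor arguments may differ.
        learner = SpatioTemporalGCNLearner(device="cpu")

        # Point the learner at a prepared dataset (path/format are placeholders).
        train_set = ExternalDataset(path="./data/nturgbd", dataset_type="NTURGBD")
        val_set = ExternalDataset(path="./data/nturgbd", dataset_type="NTURGBD")

        learner.fit(dataset=train_set, val_dataset=val_set)
        print(learner.eval(val_set))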

    test sources test tools 
    opened by negarhdr 28
  • ROS2 workspace and example nodes


    This PR contains a new ROS2 (Foxy Fitzroy) workspace located in the projects directory; for now, it serves to gather and finalize all ROS2 nodes in a unified PR. Docstrings have been added, but there is no documentation or READMEs yet. This description will be updated with any additions.

    Contents:

    1. opendr_perception python package
      • Contains a pose estimation node
      • Contains a fall detection node
      • Contains object detection 2D CenterNet/DETR/SSD/YOLOv3 nodes
      • Contains a face detection RetinaFace node
      • Contains a face recognition node
      • Contains a semantic segmentation BiseNet node
      • The former subscriber tester node has been removed; testing can be performed as described in steps 9 and 10 of Building and Running below
    2. opendr_ros2_bridge python package
      • Contains bridge.py, which includes a class with methods to convert images, poses, etc. from and to ROS2 messages (see the sketch below)
      • This uses cv_bridge which is included in the vision_opencv package
    3. opendr_ros2_messages CMake package

    The logic behind the structuring of the packages and nodes is similar to OpenDR's ROS1 packages/nodes.
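
    A sketch of the bridge pattern described above, converting a ROS2 image message to an OpenDR image via cv_bridge (the class is a simplified stand-in for the real opendr_ros2_bridge class, and the array layout expected by the OpenDR Image constructor is an assumption):

        from cv_bridge import CvBridge
        from opendr.engine.data import Image
        from sensor_msgs.msg import Image as ImageMsg

        class ROS2Bridge:  # simplified stand-in for the real bridge class
            def __init__(self):
                self._cv_bridge = CvBridge()

            def from_ros_image(self, msg: ImageMsg) -> Image:
                # Decode the ROS2 message to a numpy array, then wrap it.
                cv_image = self._cv_bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
                return Image(cv_image)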

    Below you can find instructions to install, build, and run the nodes for testing. Note that I did everything on a system with ROS1 already installed.

    I faced many issues along the way that might reappear in a fresh install of ROS2, so if any problems or errors occur while following the instructions, please get in touch with me; it may save you some time.

    Installation

    • To install ROS2 I followed this tutorial (section 2), which installs the 'foxy' release of ROS2. (Note that on '(7) configure environment variables', you need to replace dashing with foxy.)
    • Edit: At this point you might need to run sudo apt-get install ros-foxy-vision-msgs, as discussed below
    • Install colcon, basically just sudo apt install python3-colcon-common-extensions
    • Install ros2 usb cam to test with a local webcam. In my case I use ros2 run usb_cam usb_cam_node_exe to run it after installation, which seems to work fine

    Building and Running

    1. Navigate to your OpenDR installation and activate it as usual
    2. Navigate to workspace root, opendr_ws_2 directory
    3. Install cv_bridge via the instructions in its README, excluding the last step (build). There seems to be no need to build it, as it will get built along with the rest of the packages later.
    4. Navigate to the workspace root (opendr_ws_2) as the previous step leaves you inside vision_opencv dir
    5. Run colcon build
    6. Run . install/setup.bash
    7. Run ros2 run opendr_perception pose_estimation to start the pose estimation node (or any other existing node)
    8. In a new terminal run ros2 run usb_cam usb_cam_node_exe to grab images from a webcam
    9. In a new terminal run ros2 run rqt_image_view rqt_image_view and select the corresponding topic to view the image result
    10. In a new terminal run ros2 topic echo opendr/poses to view the pose message. Note that it is not really human-readable in that form; it should be read in another node and converted into an OpenDR pose object to gain access to human-friendly print methods (see the sketch below).

    * If you are using conda, check out Illia's comment down below. Thanks @iliiliiliili !
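
    A minimal sketch of such a reader node: the rclpy boilerplate is standard, while the message type of the opendr/poses topic and the conversion step are assumptions based on the packages described above (vision_msgs is installed in the steps above; the real node would convert via opendr_ros2_bridge):

        import rclpy
        from rclpy.node import Node
        from vision_msgs.msg import Detection2DArray  # assumed type of the pose topic

        class PoseEchoNode(Node):
            def __init__(self):
                super().__init__("pose_echo")
                # Subscribe to the topic published by the pose estimation node.
                self.create_subscription(Detection2DArray, "opendr/poses", self.callback, 1)

            def callback(self, msg):
                # A real node would convert msg back to an OpenDR Pose object
                # via the opendr_ros2_bridge before printing it.
                self.get_logger().info(f"received {len(msg.detections)} pose(s)")

        rclpy.init()
        rclpy.spin(PoseEchoNode())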

    To be added

    ROS2 nodes to be added according to what ROS1 nodes exist already:


    Perception package:

    • [x] Object detection 2D detr (update from original author) (#296)
    • [x] Video activity recognition (#323)
    • [x] RGBD hand gesture recognition (#341)
    • [x] Panoptic segmentation EfficientPS (#270)
    • [x] Heart anomaly detection (#337)
    • [x] Speech command recognition (#340)
    • [x] Audiovisual emotion recognition (#342)
    • [x] Skeleton based action recognition (#344)
    • [x] Landmark-based facial expression recognition (#345)
    • [x] Image-based facial emotion estimation (new tool #264, #346)
    • [x] Object detection 2D gem (#295)
    • [x] Object detection 2D YOLOv5 (added in #360, I will directly add the ROS2 node on ros2 branch)
    • [x] Object detection 2D Nanodet (added in #278, I will directly add the ROS2 node on ros2 branch)
    • [x] Object tracking 2D SiamRPN (added in #367, WIP ROS2 node on ros2 branch)
    • [x] High resolution pose estimation (added in #356, I will directly add the ROS2 node on ros2 branch)
    • [x] Image dataset (#319)
    • [x] Point cloud dataset (#319)
    • [x] Object detection 3D voxel (#319)
    • [x] Object tracking 2D deep sort (#319)
    • [x] Object tracking 2D fair mot (#319)
    • [x] Object tracking 3D ab3dmot (#319)

    Data generation package:

    • [x] Synthetic facial recognition (#288)

    Simulation package:

    • [x] Human model generation client/service (#291)

    Planning package:

    • [x] End to end planner (this is new for ROS1 too) (originally #286; a new PR, #358, was opened for ROS2)

    Edit 1: Updated the last steps of the instructions as well as the contents list as per the latest changes. Edit 2: Added information in the contents list about the new opendr_ros2_messages and added a TODO list for the remaining nodes.

    enhancement test sources test release 
    opened by tsampazk 24
  • Fer va estimation


    This PR adds image-based facial expression recognition and valence-arousal estimation. It includes a learner, unit tests, a demo, documentation, and a ROS node. This is a replacement for a previous PR which had conflicts with other tools.

    test sources test tools 
    opened by negarhdr 23
  • Panoptic segmentation


    Hi, this PR adds the EfficientPS network. The original repo can be found here.


    Todos:

    • [x] Upload pre-trained models to OpenDR server and adjust the URLs in efficient_ps_learner.py.
    • [x] Add unit tests
    • [x] Add documentation to /docs/reference
    • [x] Merge Heatmap implementation with the version proposed in #100. (Installing CUDA in the GitHub CI will not be resolvable, since the code requires GPUs; see comment.)

    Known issues:

    • (resolved) Reason for failing tests: third-party dependencies assume an existing PyTorch installation, since they attempt to import torch in their setup.py
    test sources test tools 
    opened by vniclas 21
  • End to end planning


    Hi All,

    This is an initial version of our method for end-to-end local planning. It's not completely ready to be merged yet, but it should already include all the main parts.

    • It implements the LearnerRL interface
    • Formatted according to PEP-8
    • Includes a first version of the documentation

    Remaining to-dos for myself:

    • Tests for code
    test sources test tools 
    opened by halil93ibrahim 21
  • Mobile rl 2


    Creating a new PR due to a force push. See #68 for the initial discussion. To recap the open points from the initial PR:

    • the license on the Tiago URDF -> contacted PAL
    • the unit tests -> currently blocked by the missing linting support for typing
    test sources test tools 
    opened by dHonerkamp 19
  • rosnode - rgbd_hand_gesture_recognition.py - parameter


    Parameters need to be consistent with the other tools that use argparse.

    I am using OpenDR installed on my computer, on the develop branch. I am feeding the RGB camera topic and the depth_image topic like this: [image]

    I cannot get an output from the /opendr/gestures topic. Is the depth_image topic different from the one that you are using?

    opened by thomaspeyrucain 16
  • Fix package creator


    The creation was successful; however, the root package wasn't uploaded due to a missing newline in packages.txt. I've uploaded the missing one manually; this is just to ensure everything is fine.
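
    A small sketch of the kind of defensive parsing that avoids this class of bug (the file name comes from the description above; the helper itself is hypothetical):

        def read_package_list(path="packages.txt"):
            # splitlines() tolerates a missing trailing newline, and the
            # filter drops stray blank lines, so no package is silently lost.
            with open(path) as f:
                return [line.strip() for line in f.read().splitlines() if line.strip()]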

    test sources test release 
    opened by ad-daniel 1
  • C api implementations


    The following PR contains:

    1. More tools in the C API.
    2. New data structures for tensor manipulation in C.
    3. A better JSON parser with arrays and floats.
    4. Docs.
    5. Small changes in face_recognition and nanodet_jit (naming parameters as described in the wiki).
    6. Tests for the new data structures and tools.
    7. Python API bug fixes in OpenPose and FairMOT for ONNX optimization.
    enhancement test sources 
    opened by ManosMpampis 1
  • Several tools have deprecation warnings, especially those relying on numpy


    As emerged in https://github.com/opendr-eu/opendr/pull/381, without an upper restriction on numpy, version 1.24.0 may be installed, in which several deprecations have expired. Even when things work, several deprecation warnings are printed when running the tests. Both issues should be addressed.
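
    A typical instance of a deprecation that expired in numpy 1.24 (a general numpy fact, not a specific OpenDR call site): the np.float alias was removed, so the commented line below now raises AttributeError.

        import numpy as np

        # x = np.float(3.0)                   # removed in numpy 1.24 (deprecated since 1.20)
        x = float(3.0)                        # the builtin works everywhere
        arr = np.zeros(4, dtype=np.float64)   # or an explicit dtype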

    bug 
    opened by ad-daniel 0
  • ROS1 Object Tracking 2D DeepSort error with input from webcam


    I was unable to find it documented, so I am opening a new issue with the following error for the DeepSort ROS1 node:

    [ERROR] [1671020211.255272]: bad callback: <bound method ObjectTracking2DDeepSortNode.callback of <__main__.ObjectTracking2DDeepSortNode object at 0x7edf5c2f28>>
    Traceback (most recent call last):
      File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
        cb(msg)
      File "/opendr/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py", line 105, in callback
        tracking_boxes = self.learner.infer(image_with_detections, swap_left_top=True)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py", line 289, in infer
        result = self.tracker.infer(image, frame_id, swap_left_top=swap_left_top)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/algorithm/deep_sort_tracker.py", line 81, in infer
        bbox_xywh[:, 3:] *= 1.2
    IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
    

    What I found is that the node works properly when provided with images from the image_dataset_node, but it throws this error when taking input from a webcam.
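
    From the traceback, bbox_xywh appears to arrive as a 1-D array when the webcam pipeline yields a single detection, so the 2-D column slice fails. A sketch of a guard that avoids the indexing error (the variable name comes from the traceback; where exactly to place the guard is an assumption):

        import numpy as np

        # Ensure bbox_xywh always has shape (N, 4) before column slicing,
        # so bbox_xywh[:, 3:] *= 1.2 also works for a single box.
        bbox_xywh = np.array([50.0, 60.0, 20.0, 40.0])  # 1-D example input
        bbox_xywh = np.atleast_2d(bbox_xywh)
        bbox_xywh[:, 3:] *= 1.2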

    bug 
    opened by tsampazk 2
  • ROS2 Node for EfficientLPS


    Hi all,

    This PR adds the ROS2 node for EfficientLPS and should be merged after #359. It includes the EfficientLPS node and a PointCloud2 publisher node. It will remain a draft until #359 is merged.

    test sources test tools 
    opened by aselimc 0
Releases (v2.0.0)
  • v2.0.0 (Dec 30, 2022)

    Released on December 31st, 2022.

    New Features:

    • Added YOLOv5 as an inference-only tool (#360).
    • Added Continual Transformer Encoders (#317).
    • Added Continual Spatio-Temporal Graph Convolutional Networks tool (#370).
    • Added AmbiguityMeasure utility tool (#361).
    • Added SiamRPN 2D tracking tool (#367).
    • Added Facial Emotion Estimation tool (#264).
    • Added High resolution pose estimation tool (#356).
    • Added ROS2 nodes for all included tools (#256).
    • Added missing ROS nodes and homogenized the interface across the tools (#305).

    Bug Fixes:

    • Fixed BoundingBoxList, TrackingAnnotationList, BoundingBoxList3D and TrackingAnnotationList3D confidence warnings (#365).
    • Fixed undefined image_id and segmentation for COCO BoundingBoxList (#365).
    • Fixed Continual X3D ONNX support (#372).
    • Fixed several issues with ROS nodes and improved performance (#305).
  • v1.1.1 (Jun 30, 2022)

  • v1.1 (Jun 14, 2022)

    Released on June 14th, 2022.

    New Features:

    • Added end-to-end planning tool (https://github.com/opendr-eu/opendr/pull/223).
    • Added the seq2seq-nms module, along with other custom NMS implementations for 2D object detection (https://github.com/opendr-eu/opendr/pull/232).

    Enhancements:

    • Added support for modular pip packages allowing tools to be installed separately (https://github.com/opendr-eu/opendr/pull/201).
    • Simplified the installation process for pip by including the appropriate post-installation scripts (https://github.com/opendr-eu/opendr/pull/201).
    • Improved the structure of the toolkit by moving io from utils to engine.helper (https://github.com/opendr-eu/opendr/pull/201).
    • Added support for post-install scripts and opendr dependencies in .ini files (https://github.com/opendr-eu/opendr/pull/201).
    • Updated toolkit to support CUDA 11.2 and improved GPU support (https://github.com/opendr-eu/opendr/pull/215).
    • Added a standalone pose-based fall detection tool (https://github.com/opendr-eu/opendr/pull/237).

    Bug Fixes:

    • Updated the wheel building pipeline to include missing files and removed unnecessary dependencies (https://github.com/opendr-eu/opendr/pull/200).
    • panoptic_segmentation/efficient_ps: updated dataset preparation scripts to create correct validation ground truth (https://github.com/opendr-eu/opendr/pull/221).
    • panoptic_segmentation/efficient_ps: added specific configuration files for the provided pretrained models (https://github.com/opendr-eu/opendr/pull/221).
    • c_api/face_recognition: pass key by const reference in json_get_key_string() (https://github.com/opendr-eu/opendr/pull/221).
    • pose_estimation/lightweight_open_pose: fixed the height check in transformations.py according to the original tool repo (https://github.com/opendr-eu/opendr/pull/242).
    • pose_estimation/lightweight_open_pose: fixed two bugs where ONNX optimization failed on specific learner parameterization (https://github.com/opendr-eu/opendr/pull/242).

    Dependency Updates:

    • heart anomaly detection: upgraded scikit-learn runtime dependency from 0.21.3 to 0.22 (https://github.com/opendr-eu/opendr/pull/198).
    • Relaxed all dependencies to allow future versions of non-critical tools to be used (https://github.com/opendr-eu/opendr/pull/201).
  • v1.0 (Dec 31, 2021)

    This is the first public version of the OpenDR toolkit, which provides baseline deep learning tools for core robotic functionalities. The first version includes (among others):

    • an intuitive and easy-to-use Python interface
    • a wealth of usage examples and supporting tools
    • ready-to-use ROS nodes
    • a partial C API

    You can find detailed installation instructions in the OpenDR repository, while detailed documentation can be found in the OpenDR wiki.
