PICO
Overview


PICO is an algorithm for applying Reinforcement Learning (RL) to Multi-Agent Path Finding (MAPF) tasks. It is developed by the Multi-Agent Artificial Intelligence Lab (MAIL) at East China Normal University and the AI Research Institute at Geekplus Technology Co., Ltd. PICO is built on the framework of PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning, and focuses on avoiding collisions during planning rather than on manual post-processing after collisions occur. Exploiting decentralized communication and implicit priorities in these scenarios leads to better path finding. For more details, please refer to our paper Multi-Agent Path Finding with Prioritized Communication Learning, accepted at ICRA 2022.

Distributed Assembly

Reinforcement learning code to train multiple agents to collaboratively plan their paths in a 2D grid world.

Key Components of PICO

  • pico_training.py: Multi-agent training code. Training runs on the GPU by default; change the line "with tf.device("/gpu:0"):" to "with tf.device("/cpu:0"):" to train on the CPU instead (much slower), as in the sketch after this list. Researchers can also flexibly customize their configuration in this file.
  • mapf_gym.py: Multi-agent path planning gym environment, in which agents learn collective path planning.
  • pico_testing.py: Code to run systematic validation tests of PICO on environments loaded as .npy files from the saved_environments folder; results are written to a given folder (test_result by default).
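
For reference, the device selection in pico_training.py follows the standard TensorFlow 1.x pattern sketched below (only the device line is shown; the surrounding training code is omitted):

import tensorflow as tf

# GPU is the default in pico_training.py; change the device string to
# "/cpu:0" to train on the CPU instead (much slower).
with tf.device("/gpu:0"):
    pass  # the network construction and training ops are built here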

Installation

git clone https://github.com/mail-ecnu/PICO.git
cd PICO
conda env create -f conda_env.yml
conda activate PICO-dev

Before training or testing, compile the cpp_mstar code:

  • cd into the od_mstar3 folder.
  • python3 setup.py build_ext (may need --inplace as extra argument).
  • copy the .so object from build/lib.*/ to the root of the od_mstar3 folder.
  • Check the build by going back to the root of the git folder, launching python3, and running "import cpp_mstar", as in the sketch below.
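
A minimal check from the repository root (assuming the build and copy steps above succeeded) is:

import cpp_mstar  # compiled extension built from od_mstar3

# A successful import confirms the build; print where the module was loaded from.
print(cpp_mstar.__file__)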

Quick Examples

pico_training.py:

episode_count          = 0
MAX_EPISODE            = 20
EPISODE_START          = episode_count
gamma                  = .95 # discount rate for advantage estimation and reward discounting
#moved network parameters to ACNet.py
EXPERIENCE_BUFFER_SIZE = 128
GRID_SIZE              = 11 #the size of the FOV grid to apply to each agent
ENVIRONMENT_SIZE       = (10,20)#(10,70) the total size of the environment (length of one side)
OBSTACLE_DENSITY       = (0,0.3) #(0,0.5) range of densities
DIAG_MVMT              = False # Diagonal movements allowed?
a_size                 = 5 + int(DIAG_MVMT)*4
SUMMARY_WINDOW         = 10
NUM_META_AGENTS        = 3
NUM_THREADS            = 8 #int(multiprocessing.cpu_count() / (2 * NUM_META_AGENTS))
# max_episode_length     = 256 * (NUM_THREADS//8)
max_episode_length     = 256
NUM_BUFFERS            = 1 # NO EXPERIENCE REPLAY int(NUM_THREADS / 2)
EPISODE_SAMPLES        = EXPERIENCE_BUFFER_SIZE # 64
LR_Q                   = 2.e-5
ADAPT_LR               = True
ADAPT_COEFF            = 5.e-5 #the coefficient A in LR_Q/sqrt(A*steps+1) for calculating LR
load_model             = False
RESET_TRAINER          = False
gifs_path              = 'gifs'
from datetime import datetime
TIMESTAMP = "{0:%Y-%m-%dT%H-%M/}".format(datetime.now())

GLOBAL_NET_SCOPE       = 'global'

#Imitation options
PRIMING_LENGTH         = 2500    #0 number of episodes at the beginning to train only on demonstrations
DEMONSTRATION_PROB     = 0.5

Then

python pico_training.py
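
As noted in the ADAPT_COEFF comment above, enabling ADAPT_LR decays the learning rate as LR_Q / sqrt(ADAPT_COEFF * steps + 1). A minimal sketch of that schedule (with illustrative step counts) is:

import math

LR_Q        = 2.e-5
ADAPT_COEFF = 5.e-5

def adapted_lr(steps):
    # decayed learning rate: LR_Q / sqrt(ADAPT_COEFF * steps + 1)
    return LR_Q / math.sqrt(ADAPT_COEFF * steps + 1)

for steps in (0, 10000, 100000, 1000000):
    print(steps, adapted_lr(steps))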

Custom testing

Edit pico_testing.py according to the training settings. By default, the model is loaded from the model folder.

Then

python pico_testing.py
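
The test environments in saved_environments are plain .npy files (see Key Components above), so they can be inspected directly before running tests; the file name below is hypothetical:

import numpy as np

# hypothetical file name; substitute any .npy file from saved_environments/
env = np.load("saved_environments/example_env.npy", allow_pickle=True)
print(type(env), getattr(env, "shape", None))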

Requirements

  • Python 3.4
  • Cython 0.28.4
  • OpenAI Gym 0.9.4
  • Tensorflow 1.3.1
  • Numpy 1.13.3
  • matplotlib
  • imageio (for GIF creation)
  • tk
  • networkx (if using od_mstar.py and not the C++ version)

Citing our work

If you use this repo in your work, please consider citing the corresponding paper (first two authors contributed equally):

@InProceedings{lichen2022mapf,
  title =    {Multi-Agent Path Finding with Prioritized Communication Learning},
  author =   {Li*, Wenhao and Chen*, Hongjun and Jin, Bo and Tan, Wenzhe and Zha, Hongyuan and Wang, Xiangfeng},
  booktitle =    {ICRA},
  year =     {2022},
  pdf =      {https://arxiv.org/pdf/2202.03634},
  url =      {https://arxiv.org/abs/2202.03634},
}

License

Licensed under the MIT License.
