
Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents

This is a PyTorch implementation of our ICCV 2021 paper, Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents.

Project Webpage: https://shivanshpatel35.github.io/comon/

CoMON Task

In CoMON, an episode involves two heterogeneous agents: a disembodied agent with access to an oracle top-down map of the environment, and an embodied agent that navigates and interacts with the environment. The two agents communicate and collaborate to perform the MultiON task.

Communication Mechanisms

Architecture Overview

Installing dependencies:

This code is tested with Python 3.6.10, PyTorch v1.4.0, and CUDA V9.1.85.

Install PyTorch from https://pytorch.org/ according to your machine configuration.
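For reference, a pinned install matching the tested versions might look like the following; this exact wheel pair is an assumption, so prefer the command https://pytorch.org/ generates for your CUDA version:

pip install torch==1.4.0 torchvision==0.5.0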

This code uses older versions of habitat-sim and habitat-lab. Install them by running the following commands:

Installing habitat-sim:

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim 
git checkout ae6ba1cdc772f7a5dedd31cbf9a5b77f6de3ff0f
pip install -r requirements.txt
python setup.py install --headless # (for headless machines with GPU)
python setup.py install # (for machines with display attached)
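As an optional sanity check that the build is importable (habitat_sim is the Python module installed by habitat-sim):

python -c "import habitat_sim; print('habitat-sim imported successfully')"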

Installing habitat-lab:

git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
git checkout 676e593b953e2f0530f307bc17b6de66cff2e867
pip install -e .
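Likewise, a bare import confirms the habitat-lab install (habitat is the module name; if the __version__ attribute is absent in this older checkout, the import alone succeeding is sufficient):

python -c "import habitat; print(getattr(habitat, '__version__', 'habitat imported successfully'))"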

For Habitat installation issues, feel free to raise an issue in this repository or in the corresponding Habitat repository.

Setup

Clone the repository and install the requirements:

git clone https://github.com/saimwani/comon
cd comon
pip install -r requirements.txt

Downloading data and checkpoints

To evaluate pre-trained models and train new models, you will need to download the MultiON dataset, including the objects inserted into the scenes, and the model checkpoints for CoMON. Running download_multion_data.sh from the root directory (CoMON/) will download the data and extract it to the appropriate directories. Note that you are still required to download the Matterport3D scenes after you run the script (see the section on downloading Matterport3D scenes below).

bash download_multion_data.sh

Download multiON dataset

You do not need to complete this step if you have successfully run the download_multion_data.sh script above.

Run the following to download the multiON dataset and the cached oracle occupancy maps:

mkdir data
cd data
mkdir datasets
cd datasets
wget -O multinav.zip "http://aspis.cmpt.sfu.ca/projects/multion/multinav.zip"
unzip multinav.zip && rm multinav.zip
cd ../
wget -O objects.zip "http://aspis.cmpt.sfu.ca/projects/multion/objects.zip"
unzip objects.zip && rm objects.zip
wget -O default.phys_scene_config.json "http://aspis.cmpt.sfu.ca/projects/multion/default.phys_scene_config.json"
cd ../
mkdir oracle_maps
cd oracle_maps
wget -O map300.pickle "http://aspis.cmpt.sfu.ca/projects/multion/map300.pickle"
cd ../
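After the downloads finish, a quick sanity check of the resulting layout (paths follow directly from the commands above) should show the episode data and the cached map:

ls data/datasets/multinav   # multiON episode splits
ls oracle_maps              # expect map300.pickle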

Download Matterport3D scenes

The Matterport scene dataset and the multiON dataset should be placed in the data folder under the root directory (CoMON/) in the following format:

CoMON/
  data/
    scene_datasets/
      mp3d/
        1LXtFkjw3qL/
          1LXtFkjw3qL.glb
          1LXtFkjw3qL.navmesh
          ...
    datasets/
      multinav/
        3_ON/
          train/
            ...
          val/
            val.json.gz
        2_ON/
          ...
        1_ON/
          ...

Download Matterport3D data for Habitat by following the instructions mentioned here.
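Once the scenes are in place, a quick check against the layout above (the scene id is the example from the tree) should succeed:

ls data/scene_datasets/mp3d/1LXtFkjw3qL   # expect 1LXtFkjw3qL.glb and 1LXtFkjw3qL.navmesh
ls data/datasets/multinav/3_ON/val        # expect val.json.gz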

Usage

Pre-trained models

You do not need to complete this step if you have successfully run the download_multion_data.sh script above.

mkdir model_checkpoints

Download a model checkpoint for Unstructured communication (U-Comm) or Structured communication (S-Comm) setup as shown below.

For the U-Comm setup:

wget -O model_checkpoints/ckpt.1.pth "http://aspis.cmpt.sfu.ca/projects/comon/model_checkpoints/un_struc/ckpt.1.pth"

For the S-Comm setup:

wget -O model_checkpoints/ckpt.1.pth "http://aspis.cmpt.sfu.ca/projects/comon/model_checkpoints/struc/ckpt.1.pth"
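Optionally, confirm that the downloaded checkpoint deserializes cleanly before running evaluation (this assumes a standard PyTorch checkpoint file readable on CPU, which is typical for Habitat baselines):

python -c "import torch; ckpt = torch.load('model_checkpoints/ckpt.1.pth', map_location='cpu'); print(type(ckpt))"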

Evaluation

To evaluate a pretrained S-Comm agent, run this from the root folder (CoMON/):

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/comon.yaml --comm-type struc --run-type eval

For the U-Comm setup, replace struc with un-struc; the full command is shown below.
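Spelled out, the U-Comm evaluation command is:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/comon.yaml --comm-type un-struc --run-type eval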

Average evaluation metrics are printed to the console when evaluation ends. Detailed metrics are saved in the tb/eval/metrics directory.
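Because outputs are written under tb/, you can also browse them with TensorBoard (assuming TensorBoard is installed in your environment):

tensorboard --logdir tb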

Training

To train an S-Comm agent, run this from the root directory:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/comon.yaml --comm-type struc --run-type train

For U-Comm, replace struc with un-struc; the full command is shown below.
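That is, the U-Comm training command is:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/comon.yaml --comm-type un-struc --run-type train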

Citation

Shivansh Patel*, Saim Wani*, Unnat Jain*, Alexander Schwing, Svetlana Lazebnik, Manolis Savva, Angel X. Chang. Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents. In ICCV 2021.

Bibtex

@inproceedings{patel2021interpretation,
  Author = {Shivansh Patel and Saim Wani and Unnat Jain and Alexander Schwing and Svetlana Lazebnik and Manolis Savva and Angel X. Chang},
  Title = {Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents},
  Booktitle = {ICCV},
  Year = {2021}
}

Acknowledgements

This repository is built upon Habitat Lab.
