The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents

Overview

This is the code corresponding to The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents by Sarah Pratt, Luca Weihs, and Ali Farhadi.

Abstract:

The last few years have witnessed substantial progress in the field of embodied AI where artificial agents, mirroring biological counterparts, are now able to learn from interaction to accomplish complex tasks. Despite this success, biological organisms still hold one large advantage over these simulated agents: adaptation. While both living and simulated agents make decisions to achieve goals (strategy), biological organisms have evolved to understand their environment (sensing) and respond to it (physiology). The net gain of these factors depends on the environment, and organisms have adapted accordingly. For example, in a low-vision aquatic environment, some fish have evolved specific neurons which offer a predictable, but incredibly rapid, strategy to escape from predators. Mammals have lost these reactive systems, but they have much larger fields of view and brain circuitry capable of understanding many future possibilities. While traditional embodied agents manipulate an environment to best achieve a goal, we argue for an introspective agent, which considers its own abilities in the context of its environment. We show that different environments yield vastly different optimal designs, and increasing long-term planning is often far less beneficial than other improvements, such as increased physical ability. We present these findings to broaden the definition of improvement in embodied AI past increasingly complex models. Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.

Code

Training

To train the predator and prey, run the following command:

python train.py --planning PLANNING --speed SPEED --vision VISION

--planning accepts one of ['low', 'mid', 'high'].
--speed accepts one of ['veryslow', 'slow', 'average', 'fast', 'veryfast'].
--vision accepts one of ['short', 'medium', 'long'].

So an example command looks like this:

python train.py --planning high --speed average --vision long

Prey and predator weights are saved every 1000 gradient updates to a folder of the form log/planning_high_vision_long_speed_average (corresponding to the example training command above).
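
For reference, the sketch below shows one way the training flags above could be parsed and mapped to that checkpoint folder naming scheme. It is a minimal illustration only, not the actual argument handling in train.py; the helper names (parse_args, log_dir) are hypothetical.

# Illustrative sketch only; train.py's real argument handling may differ.
import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser(description="Train the predator and prey agents.")
    parser.add_argument("--planning", choices=["low", "mid", "high"], required=True)
    parser.add_argument("--speed", choices=["veryslow", "slow", "average", "fast", "veryfast"], required=True)
    parser.add_argument("--vision", choices=["short", "medium", "long"], required=True)
    return parser.parse_args()

def log_dir(args):
    # Folder name follows the pattern described above, e.g.
    # log/planning_high_vision_long_speed_average
    return os.path.join("log", f"planning_{args.planning}_vision_{args.vision}_speed_{args.speed}")

if __name__ == "__main__":
    args = parse_args()
    os.makedirs(log_dir(args), exist_ok=True)
    print(f"Checkpoints will be written to {log_dir(args)}")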

Evaluation

Our evaluation metric is the number of times that the predator is able to catch the prey in 10,000 steps. The location of the prey is randomly reset after it is caught by the predator (or after 400 steps to avoid outlier episodes).
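
As a rough illustration of this protocol, the sketch below counts catches over 10,000 environment steps, resetting the prey on capture or after 400 steps. The environment and policy interfaces (env.step, env.prey_caught, env.reset_prey, .act) are hypothetical placeholders, not the actual eval.py API.

# Illustrative sketch of the evaluation protocol described above.
TOTAL_STEPS = 10_000
MAX_EPISODE_STEPS = 400  # prey is reset after 400 steps to avoid outlier episodes

def count_catches(env, predator, prey, total_steps=TOTAL_STEPS):
    catches = 0
    steps_since_reset = 0
    obs = env.reset()
    for _ in range(total_steps):
        obs = env.step(predator.act(obs), prey.act(obs))
        steps_since_reset += 1
        if env.prey_caught():            # predator reached the prey
            catches += 1
            obs = env.reset_prey()       # prey respawns at a random location
            steps_since_reset = 0
        elif steps_since_reset >= MAX_EPISODE_STEPS:
            obs = env.reset_prey()       # cap episode length
            steps_since_reset = 0
    return catches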

To evaluate the training run, use the command:

python eval.py --prey-weights ./PATH/TO/PREY/WEIGHTS --predator-weights ./PATH/TO/PREDATOR/WEIGHTS --speed fast --vision short --planning high

which will output a string of the form:

Number of prey caught in 10,000 steps is NUMBER

To visualize a video of the evaluation run, use the flag --video

Prerequisite packages can be found in requirements.txt

If you found this repository useful, please consider citing:

@article{pratt2022introspective,
  title={The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents},
  author={Pratt, Sarah and Weihs, Luca and Farhadi, Ali},
  journal={arXiv preprint arXiv:2201.00411},
  year={2022}
}