Overview

Wilderness Scavenger: 3D Open-World FPS Game AI Challenge

This is a platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI.

Change Log

  • 2022-05-16: improved engine backend (Linux) with better stability (v1.0)
    • Check out Supported Platforms for download links.
    • Make sure to update to the latest engine version if you would like to use the depth map or enemy state features.
  • 2022-05-18: updated engine backend for Windows and MacOS (v1.0)

Competition Overview

With a focus on learning intelligent agents in open-world games, this year we are hosting a new contest called Wilderness Scavenger. In this new game, which features Battle Royale-style 3D open-world gameplay and random PCG-based world generation, participants must train agents to perform subtasks common to FPS games, such as navigation, scouting, and skirmishing. To win the competition, agents must perceive complex 3D environments accurately and learn to exploit various environmental structures (such as terrain, buildings, and plants) by developing flexible strategies to gain advantages over other competitors. Despite the difficulty of this goal, we hope that this new competition can serve as a cornerstone of research on AI for open-world games.

Features

  • A lightweight 3D open-world FPS game developed with the Unity3D game engine
  • Rendering-off game acceleration for fast training and evaluation
  • Large open-world environment allowing a high degree of freedom in agent behavior
  • Highly customizable game configuration with random supply distribution and dynamic refresh
  • PCG-based map generation with randomly spawned buildings, plants and obstacles (100 training maps)
  • Interactive replay tool for game record visualization

Basic Structures

We developed this repository to provide a training and evaluation platform for researchers interested in open-world FPS game AI. To get started quickly, a typical workspace structure when using this repository can be summarized as follows:

.
├── examples  # providing starter code examples and training baselines
│   ├── envs/...
│   ├── basic.py
│   ├── basic_track1_navigation.py
│   ├── basic_track2_supply_gather.py
│   ├── basic_track3_supply_battle.py
│   ├── baseline_track1_navigation.py
│   ├── baseline_track2_supply_gather.py
│   └── baseline_track3_supply_battle.py
├── inspirai_fps  # the game play API source code
│   ├── lib/...
│   ├── __init__.py
│   ├── gamecore.py
│   ├── raycast_manager.py
│   ├── simple_command_pb2.py
│   ├── simple_command_pb2_grpc.py
│   └── utils.py
└── fps_linux  # the engine backend (Linux)
    ├── UnityPlayer.so
    ├── fps.x86_64
    ├── fps_Data/...
    └── logs/...
  • fps_linux (must be manually downloaded and unzipped into your working directory): the Linux engine backend extracted from our game development project, containing all the game-related assets, binaries, and source code.
  • inspirai_fps: the Python gameplay API for agent training and testing, providing the core Game class and other useful tool classes and functions (see the sketch after this list).
  • examples: basic starter code for each game mode targeting each track of the challenge, plus our implementations of baseline solutions based on the ray.rllib reinforcement learning framework.
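
To make this structure concrete, here is a minimal usage sketch of the core Game class. It reuses calls shown elsewhere in this README (set_map_id, new_episode); the init, get_state, make_action, is_episode_finished, and close names are assumptions about the gameplay API, so check examples/basic.py for the exact interface.

from inspirai_fps import Game

def my_policy(state):
    return []  # placeholder: compute an action for the agent from its state

game = Game(engine_dir="../fps_linux", map_dir="../map_data")
game.set_map_id(1)                    # loads the valid locations of map 1 (see below)
game.init()                           # assumed: boots the engine backend
game.new_episode()                    # loads the map mesh and starts an episode
while not game.is_episode_finished():          # assumed episode-termination check
    state = game.get_state(agent_id=0)         # assumed per-agent observation accessor
    game.make_action({0: my_policy(state)})    # assumed mapping of agent_id -> action
game.close()                          # assumed: shuts down the engine backend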

Supported Platforms

We support multiple platforms with different engine backends: Linux, Windows, and MacOS.

Installation (from source)

To use the gameplay API, you need to first install the package inspirai_fps by following the commands below:

git clone https://github.com/inspirai/wilderness-scavenger
cd wilderness-scavenger
pip install .

We recommend installing this package with Python 3.8 (our development environment), so you may first create a virtual environment using conda and then finish the installation:

$ conda create -n WildScav python=3.8
$ conda activate WildScav
(WildScav) $ pip install .

Installation (from PyPI)

Note: the PyPI release may not always be kept up to date. We strongly recommend using the installation method above.

Alternatively, you can install the package directly from PyPI. Note that this only installs the gameplay API inspirai_fps, not the engine backend, so you still need to manually download the correct engine backend from the Supported Platforms section.

pip install inspirai-fps

Loading Engine Backend

To run the game successfully, make sure the game engine backend for your platform is downloaded and that the engine_dir parameter of the Game constructor is set correctly. For example, here is a code snippet from the script examples/basic.py:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--engine-dir", type=str, default="../fps_linux")
...
game = Game(..., engine_dir=args.engine_dir, ...)
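
If the backend is missing or engine_dir points to the wrong place, the game cannot start. As a purely illustrative sanity check (not part of the official API), you can verify the path early; fps.x86_64 is the Linux binary name from the workspace tree above:

import os

engine_dir = "../fps_linux"  # same default as the snippet above
binary = os.path.join(engine_dir, "fps.x86_64")  # Linux engine binary from the tree above
if not os.path.exists(binary):
    raise FileNotFoundError(f"engine backend not found at {binary}; download and unzip it first")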

Loading Map Data

To access features such as real-time depth map computation and randomized player spawning, you need to download the map data and load it into the Game. Once depth map rendering is turned on, the game server will automatically compute a depth map from the player's first-person perspective at each time step.

  1. Download the map data from Google Drive or Feishu and decompress the downloaded file to your preferred directory (e.g., <WORKDIR>/map_data).
  2. Set the map_dir parameter of the Game initializer accordingly.
  3. Set the map_id as you like.
  4. Turn on depth map computation.
  5. Turn on random start locations to spawn agents at random places.

Read the following code snippet in the script examples/basic.py as an example:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--map-id", type=int, default=1)
parser.add_argument("--use-depth-map", action="store_true")
parser.add_argument("--random-start-location", action="store_true")
parser.add_argument("--map-dir", type=str, default="../map_data")
...
game = Game(map_dir=args.map_dir, ...)
game.set_map_id(args.map_id)  # this will load the valid locations of the specified map
...
if args.use_depth_map:
    game.turn_on_depth_map()
    game.set_depth_map_size(380, 220, 200)  # width (pixels), height (pixels), depth_limit (meters)
...
if args.random_start_location:
    for agent_id in range(args.num_agents):
        game.random_start_location(agent_id, indoor=False)  # this will randomly spawn the player at a valid outdoor location, or indoor location if indoor is True
...
game.new_episode()  # start a new episode, this will load the mesh of the specified map
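
Once depth map rendering is on, each agent state is expected to carry a depth image of the configured size. The following continuation is a hedged sketch: the get_state signature and the depth_map attribute are assumptions about the API in inspirai_fps/gamecore.py, and the shape comment merely reflects the width and height set above.

# continuing from the snippet above, after game.new_episode()
state = game.get_state(agent_id=0)  # assumed per-agent state accessor
depth = state.depth_map             # assumed attribute: 2D array of distances in meters
print(depth.shape)                  # expected to match the configured height x width, e.g. (220, 380)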

Gameplay Visualization

We have also developed a replay visualization tool based on the Unity3D game engine. It is similar to the spectator mode common in multiplayer FPS games and allows users to interactively follow the gameplay. Users can view an agent's actions from different perspectives and switch between multiple agents or different viewing modes (e.g., first person, third person, free) to watch the entire game in a more immersive way. Participants can download the tool for their specific platforms here:

To use this tool, follow the instructions below:

  • Decompress the downloaded file anywhere you prefer.
  • Turn on the recording function with game.turn_on_record(). One record file will be saved at the end of each episode, as in the sketch below.
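
For instance (a minimal sketch; turn_on_record comes from the instruction above, and the rest mirrors the earlier snippets):

game.turn_on_record()  # enable replay recording
game.new_episode()     # a replay file is written when this episode ends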

Find the replay files under the engine directory according to your platform:

  • Linux: <engine_dir>/fps_Data/StreamingAssets/Replay
  • Windows: <engine_dir>\FPSGameUnity_Data\StreamingAssets\Replay
  • MacOS: <engine_dir>/Contents/Resources/Data/StreamingAssets/Replay

Copy the replay files you want to watch into the replay tool directory for your platform and start the replay tool; a scripted version of this copy step is sketched after the platform notes below.

For Windows users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/FPSGameUnity_Data/StreamingAssets/Replay
  • Run FPSGameUnity.exe to start the application.

For MacOS users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/Contents/Resources/Data/StreamingAssets/Replay
  • Run fps.app to start the application.
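
If you copy replays often, this step can be scripted. The sketch below copies Linux-engine replays into the Windows replay tool layout using the directory names listed above; the engine_dir and replayer_dir values are placeholders for your actual paths.

import pathlib
import shutil

engine_dir = pathlib.Path("../fps_linux")        # placeholder: your engine directory
replayer_dir = pathlib.Path("path/to/replayer")  # placeholder: your replay tool directory
src = engine_dir / "fps_Data" / "StreamingAssets" / "Replay"
dst = replayer_dir / "FPSGameUnity_Data" / "StreamingAssets" / "Replay"
dst.mkdir(parents=True, exist_ok=True)
for replay in src.glob("*.bin"):  # replay files, e.g. xxx.bin
    shutil.copy(replay, dst)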

In the replay tool, you can:

  • Select the record you want to watch from the drop-down menu and click PLAY to start playing the record.
  • During the replay, you can perform the following operations:
    • Press Tab: pause or resume
    • Press E: switch observation mode (first person, third person, free)
    • Press Q: switch between multiple agents
    • Press Esc: stop the replay and return to the main menu