Overview

Counter-Strike Deathmatch with Large-Scale Behavioural Cloning

Tim Pearce, Jun Zhu
Offline RL workshop, NeurIPS 2021
Paper: https://arxiv.org/abs/2104.04258
Four minute introduction video: https://youtu.be/rnz3lmfSHv0
Gameplay examples: https://youtu.be/KTY7UhjIMm4

Contents

  1. Code Overview
  2. Trained Models
  3. Datasets
  4. Requirements
  5. License
  6. Disclaimer
  7. Maintenance

Code Overview

We briefly describe the workflow: dataset capture -> data processing -> training -> testing.

  1. Use dm_record_data.py to scrape data as a spectator, or dm_record_data_me_wasd.py to record when actively playing. This creates .npy files with screenshots and metadata.
  2. Run dm_infer_actions.py on the .npy files from step 1 to infer actions from the metadata (an 'inverse dynamics model'). The inferred actions are appended to the same .npy files. The visualisations and stats produced by this script can be used to clean the data -- e.g. if the metadata from GSI and RAM disagrees for some variables, the data may be unreliable, and if there are long periods of immobility (vel_static), you may have been tracking a motionless player. We suggest deleting such files.
  3. Run dm_pretrain_process.py on the .npy files from step 2; this creates new .hdf5 files containing screenshots and onehot targets.
  4. (Optional.) Run tools_extract_metadata.py to pull the metadata out of the .npy files and save it as a new .npy file.
  5. Run dm_train_model.py to train a model.
  6. Use tools_create_stateful.py to create a 'stateful' version of the model.
  7. Script dm_run_agent.py runs the model in the CSGO environment.
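
Assuming the scripts are run from the repo root with the default settings in config.py, the full pipeline looks roughly like this (dm_record_data_me_wasd.py replaces step 1 when recording your own play):

  python dm_record_data.py          # step 1: record screenshots + metadata as a spectator
  python dm_infer_actions.py        # step 2: infer actions, append to the .npy files
  python dm_pretrain_process.py     # step 3: build .hdf5 training files
  python tools_extract_metadata.py  # step 4 (optional): strip metadata into separate .npy files
  python dm_train_model.py          # step 5: train the model
  python tools_create_stateful.py   # step 6: clone a 'stateful' test-time version
  python dm_run_agent.py            # step 7: run the agent in CSGO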

Brief overview of each script's purpose.

  • config.py : Contains key global hyperparameters and settings. Also several functions used across many scripts.
  • screen_input.py : Grabs a screenshot of the CSGO window, then does some cropping and downsampling. Uses win32gui, win32ui, win32con, OpenCV. See the capture sketch after this list.
  • key_output.py : Contains functions to send mouse clicks, set mouse position and key presses. Uses ctypes.
  • key_input.py : Contains functions to check for mouse location, mouse clicks and key presses. Uses win32api and ctypes.
  • meta_utils.py : Contains functions to set up RAM reading. Also contains a server class to connect to CSGO's GSI tool -- you should update the MYTOKENHERE variable if setting up GSI.
  • dm_hazedumper_offsets.py : Contains offsets to read variables directly from RAM. These are updated by a function in meta_utils.py.
  • dm_record_data.py : Records screenshots and metadata when in spectator mode. (Note these record scripts can be temperamental, and depend on hazedumper offsets being up to date.)
  • dm_record_data_me_wasd.py : Records screenshots and metadata when actively playing on local machine.
  • dm_infer_actions.py : For a recorded dataset, infers actions from the metadata and appends them to the same file. Also produces stats indicative of data quality, and includes functionality to rename files sequentially.
  • dm_pretrain_process.py : Takes .npy files from dm_infer_actions.py and creates twin .hdf5 files, ready for fast data loading, with inputs and labels as expected by the model. Also includes a function to strip the images from .npy files.
  • dm_train_model.py : Trains the neural network model. Either from scratch or from a previous checkpoint.
  • dm_run_agent.py : Runs a trained model in CSGO. Option to collect metadata (e.g. score) using IS_GSI=True.
  • tools_dataset_inspect.py : Minimal code to open different datasets and accompanying metadata files.
  • tools_extract_metadata.py : Cuts out the metadata (currvars, infer_a and helper_arr) from the original .npy files and saves it as a new .npy file without images. Aggregates 100 files together.
  • tools_dataset_stats.py : Collates metadata from the large online dataset, and produces summary stats and figures as in appendix A.
  • tools_create_stateful.py : This takes a trained 'non-stateful' model and saves a cloned 'stateful' version that should be used at test time (otherwise LSTM states reset between each forward pass). Required due to keras weirdness.
  • tools_map_coverage_analysis.py : Visualises map coverage of different agents and datasets, and computes the Earth mover's distance.
  • tools_view_save_egs.py : Opens and displays sequences of images from .hdf5 files and metadata from .npy files. Option to overlay the inferred actions. Option to save frames as .png.
  • console_csgo_setup.txt : Console commands for setting up CSGO levels.
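
As an illustration of what screen_input.py does, below is a minimal pywin32 capture sketch. The window title and the target resolution are assumptions for illustration, not the repo's exact values.

  import numpy as np
  import cv2
  import win32gui, win32ui, win32con

  hwnd = win32gui.FindWindow(None, 'Counter-Strike: Global Offensive')  # assumed window title
  left, top, right, bottom = win32gui.GetWindowRect(hwnd)
  width, height = right - left, bottom - top

  # copy the window contents into a bitmap via GDI
  hwnd_dc = win32gui.GetWindowDC(hwnd)
  mfc_dc = win32ui.CreateDCFromHandle(hwnd_dc)
  save_dc = mfc_dc.CreateCompatibleDC()
  bitmap = win32ui.CreateBitmap()
  bitmap.CreateCompatibleBitmap(mfc_dc, width, height)
  save_dc.SelectObject(bitmap)
  save_dc.BitBlt((0, 0), (width, height), mfc_dc, (0, 0), win32con.SRCCOPY)

  img = np.frombuffer(bitmap.GetBitmapBits(True), dtype=np.uint8).reshape(height, width, 4)

  # release GDI handles
  win32gui.DeleteObject(bitmap.GetHandle())
  save_dc.DeleteDC()
  mfc_dc.DeleteDC()
  win32gui.ReleaseDC(hwnd, hwnd_dc)

  # drop the alpha channel and downsample; 280x150 is an assumed size, not the repo's setting
  small = cv2.resize(np.ascontiguousarray(img[:, :, :3]), (280, 150))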

Trained Models

Four trained models are provided. There are 'non-stateful' (use during training) and 'stateful' (use at test time) versions of each.
Download link: https://drive.google.com/drive/folders/11n5Nxj6liSXKP6_vl-FjwfRHtWiSO6Ry?usp=sharing

  • ak47_sub_55k_drop_d4 : Pretrained on AK47 sequences only.
  • ak47_sub_55k_drop_d4_dmexpert_28 : Finetuned on expert deathmatch data.
  • ak47_sub_55k_drop_d4_aimexpertv2_60 : Finetuned on aim mode expert data.
  • July_remoterun7_g9_4k_n32_recipe_ton96__e14 : Pretrained on full dataset.
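
The 'stateful' conversion mentioned above (implemented in tools_create_stateful.py) addresses a Keras detail: at test time the agent sees one frame at a time, so the LSTM must carry its state between forward passes. A minimal sketch of the idea, with placeholder layer sizes rather than the repo's actual architecture:

  import tensorflow as tf

  # training-time model: processes whole sequences, state resets on each call
  train_model = tf.keras.Sequential([
      tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(None, 32)),
      tf.keras.layers.Dense(10),
  ])

  # test-time clone: fixed batch of 1, stateful=True so state persists across calls
  test_model = tf.keras.Sequential([
      tf.keras.layers.LSTM(64, return_sequences=True, stateful=True,
                           batch_input_shape=(1, 1, 32)),
      tf.keras.layers.Dense(10),
  ])
  test_model.set_weights(train_model.get_weights())  # copy trained weights across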

Datasets

For access to all datasets, please email the first author (tp424 at cam dot ac dot uk) with the title 'csgo dataset access' and a one-sentence summary of how you intend to use the data. The email address you send from will be granted access to the cloud account. A brief description of the datasets and directory structure is given below.

  • dataset_dm_scraped_dust2/hdf5_dm_july2021_*.hdf5

    • total files: 5500
    • approx size: 700 GB
    • map: dust2
    • gamemode: deathmatch
    • source: scraped from online servers
  • dataset_dm_expert_dust2/hdf5_dm_july2021_expert_*.hdf5

    • total files: 190
    • approx size: 24 GB
    • map: dust2
    • gamemode: deathmatch
    • source: manually created, clean actions
  • dataset_aim_expert/hdf5_aim_july2021_expert_*.hdf5

    • total files: 45
    • approx size: 6 GB
    • map: aim map
    • gamemode: aim mode
    • source: manually created, clean actions
  • dataset_dm_expert_othermaps/hdf5_dm_nuke_expert_*.hdf5

    • total files: 10
    • approx size: 1 GB
    • map: nuke
    • gamemode: deathmatch
    • source: manually created, clean actions
  • dataset_dm_expert_othermaps/hdf5_dm_mirage_expert_*.hdf5

    • total files: 10
    • approx size: 1 GB
    • map: mirage
    • gamemode: deathmatch
    • source: manually created, clean actions
  • dataset_dm_expert_othermaps/hdf5_dm_inferno_expert_*.hdf5

    • total files: 10
    • approx size: 1 GB
    • map: inferno
    • gamemode: deathmatch
    • source: manually created, clean actions
  • dataset_metadata/currvarsv2_dm_july2021_*_to_*.npy, currvarsv2_dm_july2021_expert_*_to_*.npy, currvarsv2_dm_mirage_expert_1_to_100.npy, currvarsv2_dm_inferno_expert_1_to_100.npy, currvarsv2_dm_nuke_expert_1_to_100.npy, currvarsv2_aim_july2021_expert_1_to_100.npy

    • total files: 55 + 2 + 1 + 1 + 1 + 1 = 61
    • approx size: 6 GB
    • map: as per filename
    • gamemode: as per filename
    • source: as per filename
  • location_trackings_backup/

    • total files: 305
    • approx size: 0.5 GB
    • map: dust2
    • gamemode: deathmatch
    • source: contains metadata used to compute map coverage analysis
      • currvarsv2_agentj22 is the agent trained over the full online dataset
      • currvarsv2_agentj22_dmexpert20 is the previous model finetuned on the clean expert dust2 dataset
      • currvarsv2_bot_capture is a medium-difficulty built-in bot

Structure of .hdf5 files (image and action labels):

Each file contains an ordered sequence of 1000 frames (~1 minute) of play, comprising screenshots as well as processed action labels. We chose the .hdf5 format for fast data loading, since a subset of frames can be accessed without loading the full file. The lookup keys are as follows (where i is the frame number, 0-999):

  • frame_i_x: is the image
  • frame_i_xaux: contains the actions applied in previous timesteps, as well as health, ammo, and team. See dm_pretrain_process.py for details; note this was not used in the final version of our agent
  • frame_i_y: contains target actions in flattened vector form; [keys_pressed_onehot, Lclicks_onehot, Rclicks_onehot, mouse_x_onehot, mouse_y_onehot]
  • frame_i_helperarr: in format [kill_flag, death_flag], each a binary variable, e.g. [1,0] means the player scored a kill and did not die in that timestep
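
A minimal h5py sketch for reading these keys (the filename follows the dataset listing above; exact array shapes depend on the processing settings):

  import h5py

  i = 0  # frame number, 0-999
  with h5py.File('hdf5_dm_july2021_1.hdf5', 'r') as f:
      img = f['frame_{}_x'.format(i)][:]             # screenshot
      xaux = f['frame_{}_xaux'.format(i)][:]         # previous actions + health/ammo/team
      y = f['frame_{}_y'.format(i)][:]               # flattened onehot action targets
      helper = f['frame_{}_helperarr'.format(i)][:]  # [kill_flag, death_flag]
      print(img.shape, y.shape, helper)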

Structure of .npy files (scraped metadata):

Each .npy file contains metadata corresponding to 100 .hdf5 files (as indicated by the file name). They are dictionaries with keys of the format file_numi_frame_j, for file number i and frame number j in 0-999. The values are of the format [curr_vars, infer_a, frame_i_helperarr], where:

  • curr_vars: contains a dictionary of the metadata originally scraped -- see dm_record_data.py for details
  • infer_a: the inferred actions, [keys_pressed, mouse_x, mouse_y, press_mouse_l, press_mouse_r], with mouse_x and mouse_y being continuous values and keys_pressed a string
  • frame_i_helperarr: a repeat of the corresponding entry in the .hdf5 file
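
A minimal sketch for loading one of these metadata dictionaries (the exact key string is an assumption based on the format described above):

  import numpy as np

  meta = np.load('currvarsv2_dm_july2021_1_to_100.npy', allow_pickle=True).item()
  curr_vars, infer_a, helper = meta['file_num1_frame_0']  # file number 1, frame 0
  keys_pressed, mouse_x, mouse_y, press_mouse_l, press_mouse_r = infer_a
  print(keys_pressed, mouse_x, mouse_y, helper)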

Requirements

Python and OS requirements

Below are the Python package versions used in development. The OS used for interacting with the game (recording and testing) was Windows; model training was done on Linux.

Python: '3.6.9' 
tensorflow: '2.3.0'
tensorflow.keras: '2.4.0'
h5py: '2.10.0'
numpy: '1.18.5'
OpenCV (cv2): '4.4.0'
scipy: '1.4.1'
ctypes: '1.1.0'
json: '2.0.9'
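
A possible way to recreate a similar environment with pip (package names such as opencv-python and pywin32 are assumptions; ctypes and json ship with Python):

  pip install tensorflow==2.3.0 h5py==2.10.0 numpy==1.18.5 opencv-python~=4.4.0 scipy==1.4.1 pywin32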

Hardware

Running the agent requires making forward passes through a large convolutional neural network at 16 fps. If you do not have a CUDA-enabled GPU, the agent's performance may degrade.
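
A rough way to check whether your machine meets the 16 fps budget (~62 ms per frame) is to time forward passes on dummy input; the model path and input shape below are placeholders, not the repo's actual values:

  import time
  import numpy as np
  import tensorflow as tf

  model = tf.keras.models.load_model('model_stateful.h5')  # placeholder path
  dummy = np.zeros((1, 1, 150, 280, 3), dtype=np.float32)  # placeholder input shape

  model.predict(dummy)  # warm-up pass
  n = 50
  t0 = time.time()
  for _ in range(n):
      model.predict(dummy)
  per_frame = (time.time() - t0) / n
  print('%.1f ms per frame (budget ~62.5 ms for 16 fps)' % (per_frame * 1000))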

CSGO requirements

We collected the datasets and conducted testing over game versions 1.37.7.0 to 1.38.0.1. CSGO is continually updated and this may affect performance. The map dust2 received minor updates in 1.38.0.2. To test on the map from version 1.38.0.1, download it here: https://steamcommunity.com/sharedfiles/filedetails/?id=2606435621. Future updates to gameplay may also degrade performance; consider rolling back the CSGO game version in this case.

Game State Integration (GSI) is used to pull out some metadata about the game. The dm_run_agent.py script is written so that it may be run without installing GSI (option IS_GSI). If you'd like to record data or extract metadata while running the agent, you'll need to set up GSI: https://www.reddit.com/r/GlobalOffensive/comments/cjhcpy/game_state_integration_a_very_large_and_indepth/ and update MYTOKENHERE in meta_utils.py.
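
For reference, GSI works by having the game POST JSON game-state packets to a local HTTP endpoint. A minimal standalone listener in that style (the port and JSON fields are illustrative; the repo's own server class lives in meta_utils.py):

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  MYTOKENHERE = 'MYTOKENHERE'  # must match the auth token in your GSI .cfg file

  class GSIHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          length = int(self.headers['Content-Length'])
          payload = json.loads(self.rfile.read(length))
          if payload.get('auth', {}).get('token') != MYTOKENHERE:
              self.send_response(401)
              self.end_headers()
              return
          # e.g. read the player's kill count from the posted state (if enabled in the .cfg)
          kills = payload.get('player', {}).get('match_stats', {}).get('kills')
          print('kills:', kills)
          self.send_response(200)
          self.end_headers()

  HTTPServer(('localhost', 3000), GSIHandler).serve_forever()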

Game settings (resolution, cross hair, mouse sensitivity etc) are documented in the paper, appendix E.

License

This repo can be used for personal projects and open-sourced research. We do not grant a license for its commercial use in any form. If in doubt, please contact us for permission.

Disclaimer

Whilst our code is not intended for cheating/hacking purposes, it is possible that Valve may detect the usage of some of these scripts in game (for example, simulated mouse movements and RAM parsing), which in turn might lead to suspicion of cheating. We accept no liability for these sorts of issues. Use at your own risk!

Maintenance

This repo shares code used for academic research. It's not production ready. It's unlikely to be robust across operating systems, Python versions, Python packages, future CSGO updates, etc. There's no plan to actively maintain this repo for these purposes, nor to fix minor bugs. If you'd like to help out with this, please reach out.

Owner
Tim Pearce
AI researcher \\ Curr: Tsinghua Uni \\ Prev: Cambridge Uni