Used Logistic Regression, Random Forest, and XGBoost to predict the outcome of Search & Destroy games from the Call of Duty World League for the 2018 and 2019 seasons.

Call of Duty World League: Search & Destroy Outcome Predictions

[CWL Image]

Overview

Growing up as an avid Call of Duty player, I was always curious about what factors led to a team winning or losing a match. Was it strictly based on the number of kills each player obtained? Was it who played the objective more? Or was it something different? Finally, after years of waiting, I decided that it was time to find my answers. Coupling my love for Call of Duty and my passion for data science, I began to investigate predicting the outcome of Search & Destroy games from the Call of Duty World League's 2018 and 2019 seasons.

Utilizing Python, I created a Logistic Regression binary classification model that provided insight into the significant factors that lead teams to win Search & Destroy matches. Did you know that every time a player gets exactly two kills in a round, the team's odds of winning increase by 59%? Or that every time a team defuses the bomb, their odds of winning the match increase by 54%? And when someone on the team commits suicide? The team's odds of winning the match decrease by a whopping 43%!

I also built an XGBoost and a Random Forest model to see how accurately I could predict a Search & Destroy match outcome. The XGBoost model was ~89% accurate when predicting Search & Destroy match outcomes on test data! This model found that one of the least important variables for predicting a team's win or loss is whether the team had a sneak defuse at any point during the match. Although sneak defuses can help a team, it is more impactful for players to eliminate every enemy in the round before defusing the bomb.

Project Goals

  1. Learn about essential factors that play into a team's outcome for Search & Destroy matches
  2. See how well I can predict a team's wins and losses for Search & Destroy matches

What did I do?

I used data from 17 different CWL tournaments spanning the two seasons. If you are curious, you can find each dataset within this Activision repository hosted here. I excluded the data from the 2017 CWL Championships tournament because that set lacks some of the Search & Destroy variables the other datasets have. The final dataset had 3,128 observations of 30 variables, covering 1,564 Search & Destroy matches (one win observation and one loss observation per match). All predictor variables are continuous; the only categorical variable in the final modeling data is the binary indicator for the match's outcome.
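For anyone who wants to reproduce the join, here is a minimal sketch of how the tournament files could be stacked with pandas. The file names and folder layout are assumptions, not the repository's exact structure; the 2017 Championships file is excluded simply by leaving it out of the data folder.

```python
# A minimal sketch, assuming the per-tournament CSVs have been downloaded
# into data/ (the 2017 Championships file is simply left out of data/).
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in sorted(glob.glob("data/cwl-*.csv"))]
cwl = pd.concat(frames, ignore_index=True)
print(cwl.shape)
```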

To reach the first goal of this project, I created a Logistic Regression model to learn about the crucial factors that can either push a team toward a win or pull it toward a loss. To reach the second goal, I elected to use both Random Forest and XGBoost classification models to try to find the best possible model for predicting match outcomes.

How did I do it?

Logistic Regression

After joining the data, I first grouped the observations by match and team, then filtered for Search & Destroy games only, so that each match contributes one win observation and one loss observation. I used a set of 14 variables for the model-development process: Deaths, Assists, Headshots, Suicides, Hits, Bomb Plants, Bomb Defuses, Bomb Sneak Defuses, Snd Firstbloods, Snd 2-kill round, Snd 3-kill round, Snd 4-kill round, 2-piece, & 3-piece. If you are curious, you can find an explanation of each variable in the full dataset in the Activision repository linked above.
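A sketch of that grouping step is below. It assumes the lower-case column names used in the Activision data ("mode", "match id", "team", "win?", and the stat columns) and a W/L coding for the outcome; if the headers differ, the names would need adjusting.

```python
# Keep only Search & Destroy rows and encode the outcome as 0/1
snd = cwl[cwl["mode"] == "Search & Destroy"].copy()
snd["win"] = (snd["win?"] == "W").astype(int)  # assumes W/L coding

features = [
    "deaths", "assists", "headshots", "suicides", "hits",
    "bomb plants", "bomb defuses", "bomb sneak defuses",
    "snd firstbloods", "snd 2-kill round", "snd 3-kill round",
    "snd 4-kill round", "2-piece", "3-piece",
]

# Sum the player rows into one observation per team per match
team_match = (
    snd.groupby(["match id", "team"], as_index=False)
       .agg({**{c: "sum" for c in features}, "win": "max"})
)
```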

Since these models are classifying wins and losses, I elected to use the Area Under the Receiver Operating Characteristic (AUROC) curve as the metric for determining the best model, because it balances the True Positive Rate against the False Positive Rate. I found that the Logistic Regression model with the highest AUROC value on training data used the following variables: Assists, Headshots, Suicides, Defuses, Snd 2-kill round, Snd 3-kill round, & Snd 4-kill round. This model was then used to predict the test data and produced the following AUROC curve:

[Logistic Regression AUROC curve]

It is worth noting that this model was 75% accurate when predicting wins and losses on test data. I expected it to perform worse given the small number of variables, but these variables do an excellent job of separating wins from losses in Search & Destroy matches. You can find the actual values in the confusion matrix built from this model here.
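As a rough illustration of the fitting and scoring step, here is how the winning variable subset could be fit and evaluated with scikit-learn. The split proportion and random seed are placeholders, and team_match comes from the sketch above.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

selected = ["assists", "headshots", "suicides", "bomb defuses",
            "snd 2-kill round", "snd 3-kill round", "snd 4-kill round"]

X_train, X_test, y_train, y_test = train_test_split(
    team_match[selected], team_match["win"], test_size=0.3, random_state=42
)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test AUROC:   ", roc_auc_score(y_test, logit.predict_proba(X_test)[:, 1]))
print("Test accuracy:", accuracy_score(y_test, logit.predict(X_test)))
```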

Random Forest & XGBoost

For the second goal of this project, I used both Random Forest and XGBoost classification models to see just how well the outcome of a match could be predicted. Neither algorithm carries the assumptions Logistic Regression does, so I used the complete set of 14 variables for each technique. I first built both models without tuning hyperparameters to establish a baseline for each algorithm, then ran a grid search over each model's hyperparameters to find the best possible tune for the data.
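The tuning loop might look something like this. The grids shown are illustrative, not the values searched in the project, and the data objects come from the earlier sketches.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Split on the full 14-variable set this time
Xf_train, Xf_test, yf_train, yf_test = train_test_split(
    team_match[features], team_match["win"], test_size=0.3, random_state=42
)

rf_search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": [200, 500], "max_depth": [None, 5, 10]},
    scoring="roc_auc", cv=5,
).fit(Xf_train, yf_train)

xgb_search = GridSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=42),
    {"n_estimators": [200, 500], "max_depth": [3, 5], "learning_rate": [0.05, 0.1]},
    scoring="roc_auc", cv=5,
).fit(Xf_train, yf_train)

print("RF  CV AUROC:", rf_search.best_score_)
print("XGB CV AUROC:", xgb_search.best_score_)
```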

I found that the optimized XGBoost model had a higher AUROC value than the optimized Random Forest model on training data, so I used the XGBoost model to predict the test data. This model produced the following AUROC curve:

[XGBoost AUROC curve]

As expected, this model did much better than the Logistic Regression at predicting match outcomes: it is ~89% accurate when predicting wins and losses on test data. You can find the confusion matrix for this model here.
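Scoring the tuned model on the held-out data could then look like this, reusing the split and search objects from the sketch above:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

best_xgb = xgb_search.best_estimator_
preds = best_xgb.predict(Xf_test)
print("Test accuracy:", accuracy_score(yf_test, preds))
print(confusion_matrix(yf_test, preds))
```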

What did I find?

From the Logistic Regression model, I found that a team's odds of winning the entire match increase by ~5% every time someone gets a kill with a headshot and by ~54% every time the bomb gets defused. A team's odds of winning also increase by ~59% every time a player gets exactly two kills in a round, ~115% for exactly three kills in a round, and ~121% for exactly four kills in a round. I also found that a team's odds of winning the entire match decrease by ~43% every time a player commits suicide and (oddly enough) by ~0.34% every time a player records an assist.
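For readers wondering where these percentages come from: in a logistic model, each coefficient β corresponds to a multiplicative change in the odds of exp(β), so the percentage change per additional unit is exp(β) − 1. A quick sketch using the fitted model from earlier:

```python
import numpy as np

# exp(coef) - 1 gives the percent change in winning odds per extra unit
odds_change = np.exp(logit.coef_[0]) - 1
for name, change in zip(selected, odds_change):
    print(f"{name}: {change:+.1%} odds of winning per additional unit")
```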

I recommend that professional COD teams looking to raise their Search & Destroy win percentage find and recruit players with high counts of bomb defuses and headshots in Search & Destroy games. If I were a coach, I would look to grab Arcitys, Zer0, Clayster, Rated, & Silly: five players with high counts of headshots and defuses in Search & Destroy matches.
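A simple way to produce such a shortlist from the same data (a sketch, assuming the player-level frame from the earlier steps and a "player" column):

```python
# Total S&D headshots and defuses per player across both seasons
player_totals = (
    snd.groupby("player")[["headshots", "bomb defuses"]]
       .sum()
       .sort_values(["bomb defuses", "headshots"], ascending=False)
)
print(player_totals.head())
```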

If you are curious to learn about the essential variables in the XGBoost model, head over here!
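One way to pull that variable ranking out of the tuned model (a sketch, using the best_xgb estimator from above):

```python
import pandas as pd

# Relative importances for the 14 predictors, highest first
importances = pd.Series(best_xgb.feature_importances_, index=features)
print(importances.sort_values(ascending=False))
```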

Owner

Brett Vogelsang
M.S. Candidate at the Institute for Advanced Analytics at NC State University.