ALgorithmic_Trading_with_ML

An algorithmic trading bot that learns and adapts to new data and evolving markets using Financial Python Programming and Machine Learning.

Overview

The following steps are followed:

  • Establishing a Baseline Performance
  • Tuning the Baseline Trading Algorithm
  • Evaluating a New Machine Learning Classifier
  • Creating an Evaluation Report

Establishing a Baseline Performance

  1. Importing the OHLCV dataset into a Pandas DataFrame.

  2. Creating trading signals using short- and long-window SMA values.
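The first two steps might look roughly like the sketch below; the CSV filename, the column names, and the baseline 4/100 windows are assumptions rather than values taken from the notebook.

```python
# Sketch of importing the OHLCV data and building SMA trading signals.
import pandas as pd

ohlcv_df = pd.read_csv(
    "emerging_markets_ohlcv.csv",  # assumed filename
    index_col="date",
    parse_dates=True,
)

# Work from the close price and compute the actual daily returns
signals_df = ohlcv_df.loc[:, ["close"]].copy()
signals_df["Actual Returns"] = signals_df["close"].pct_change()

# Short- and long-window SMA values (4 and 100 assumed as the baseline)
short_window, long_window = 4, 100
signals_df["SMA_Fast"] = signals_df["close"].rolling(window=short_window).mean()
signals_df["SMA_Slow"] = signals_df["close"].rolling(window=long_window).mean()
signals_df = signals_df.dropna()

# Signal: go long (1) when the actual return is positive, short (-1) otherwise
signals_df["Signal"] = 1.0
signals_df.loc[signals_df["Actual Returns"] < 0, "Signal"] = -1.0
```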

svm_original_report

  3. Splitting the data into training and testing datasets.

  4. Using the SVC classifier from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions on the testing data, then reviewing the predictions.

  5. Reviewing the classification report associated with the SVC model predictions.
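The split, fit, and report steps could be implemented along these lines; the three-month DateOffset training window and the use of StandardScaler are assumptions.

```python
# Sketch of the train/test split, the SVC fit, and the classification report.
from pandas.tseries.offsets import DateOffset
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Features are the lagged SMA columns; the target is the trading signal
X = signals_df[["SMA_Fast", "SMA_Slow"]].shift().dropna()
y = signals_df["Signal"].loc[X.index]

# Split on a date boundary (assumed three months) to avoid look-ahead bias
training_end = X.index.min() + DateOffset(months=3)
X_train, y_train = X.loc[:training_end], y.loc[:training_end]
X_test, y_test = X.loc[training_end:], y.loc[training_end:]

# Scale the features, fit the SVC model, and predict on the test window
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

svm_model = SVC().fit(X_train_scaled, y_train)
svm_pred = svm_model.predict(X_test_scaled)

# Review the classification report for the SVC predictions
print(classification_report(y_test, svm_pred))
```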

svm_strategy_returns

  6. Creating a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.

  7. Creating a cumulative return plot that shows the actual returns vs. the strategy returns, and saving a PNG image of this plot. This plot serves as the baseline against which to compare the effects of tuning the trading algorithm.
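A sketch of the predictions DataFrame and the baseline plot; the PNG filename is an assumption.

```python
# Sketch of the predictions DataFrame and the baseline cumulative-return plot.
import matplotlib.pyplot as plt

predictions_df = pd.DataFrame(index=X_test.index)
predictions_df["Predicted"] = svm_pred
predictions_df["Actual Returns"] = signals_df["Actual Returns"].loc[X_test.index]
predictions_df["Strategy Returns"] = (
    predictions_df["Actual Returns"] * predictions_df["Predicted"]
)

# Cumulative actual vs. strategy returns; saved as the baseline PNG
(1 + predictions_df[["Actual Returns", "Strategy Returns"]]).cumprod().plot(
    title="Actual Returns vs. SVM Strategy Returns"
)
plt.savefig("actual_returns_vs_svm_original_returns.png")  # assumed filename
```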

Actual_Returns_Vs_SVM_Original_Returns


Tuning the Baseline Trading Algorithm

The model’s input features are tuned to find the parameters that result in the best trading outcomes, and the cumulative products of the strategy returns are compared. The following steps are followed:

  1. The trading algorithm is tuned by adjusting the size of the training dataset. To do so, the data is sliced into different periods (a sketch of this window adjustment appears after the question below).

10_month_svm_report 24_month_sw_4_lw_100_report 48month_sw_4_lw_100_report

Answer the following question: What impact resulted from increasing or decreasing the training window?

Increasing the training dataset size alone did not improve the returns prediction. The precision and recall values for class -1 improved as the training set grew, while the precision and recall values for class 1 decreased compared to the original training dataset size (3 months).
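One way to vary the training window is to re-slice the features with a different DateOffset before refitting. The helper below is illustrative and builds on the earlier sketch; the 3-, 10-, 24-, and 48-month windows mirror the reports referenced above.

```python
# Sketch: refit the SVC model with training windows of different lengths.
def fit_with_training_window(X, y, months):
    training_end = X.index.min() + DateOffset(months=months)
    X_train, y_train = X.loc[:training_end], y.loc[:training_end]
    X_test, y_test = X.loc[training_end:], y.loc[training_end:]

    scaler = StandardScaler().fit(X_train)
    model = SVC().fit(scaler.transform(X_train), y_train)
    pred = model.predict(scaler.transform(X_test))

    print(f"--- {months}-month training window ---")
    print(classification_report(y_test, pred))

for months in (3, 10, 24, 48):
    fit_with_training_window(X, y, months)
```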

  2. The trading algorithm is tuned by adjusting the SMA input features, changing one or both of the windows for the algorithm.

Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?

  • Increasing the short SMA window affected the precision and recall scores: the scores improve up to a certain limit and then decrease.
  • When the short window is increased and the long window is increased along with it, the scores reach their optimal maximum.
  • Another interesting observation is that as the training dataset grows, the short and long windows have to be increased to get the best results.

3_month_sw_8_lw_100_report

The set of parameters that best improved the trading algorithm returns was a 48-month training window with a short window of 10 and a long window of 270. 48_month_sw_10_lw_270_report 48_month_sw_10_lw_270_return_comparison
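A hypothetical sweep over (short window, long window) pairs that compares the final cumulative strategy return of each combination; the helper is an illustration built on the earlier sketches, not the notebook's code.

```python
# Sketch: compare cumulative strategy returns for different SMA window pairs.
def cumulative_strategy_return(short_window, long_window, months=3):
    df = ohlcv_df.loc[:, ["close"]].copy()
    df["Actual Returns"] = df["close"].pct_change()
    df["SMA_Fast"] = df["close"].rolling(window=short_window).mean()
    df["SMA_Slow"] = df["close"].rolling(window=long_window).mean()
    df = df.dropna()
    df["Signal"] = 1.0
    df.loc[df["Actual Returns"] < 0, "Signal"] = -1.0

    X = df[["SMA_Fast", "SMA_Slow"]].shift().dropna()
    y = df["Signal"].loc[X.index]
    training_end = X.index.min() + DateOffset(months=months)

    scaler = StandardScaler().fit(X.loc[:training_end])
    model = SVC().fit(scaler.transform(X.loc[:training_end]), y.loc[:training_end])
    pred = model.predict(scaler.transform(X.loc[training_end:]))

    strategy_returns = df["Actual Returns"].loc[X.index].loc[training_end:] * pred
    return (1 + strategy_returns).cumprod().iloc[-1]

# Baseline, a wider short window, and the best combination found above
for sw, lw, months in [(4, 100, 3), (8, 100, 3), (10, 270, 48)]:
    print(sw, lw, months, cumulative_strategy_return(sw, lw, months))
```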


Evaluating a New Machine Learning Classifier

The original parameters are applied to a second machine learning model to evaluate its performance. The following steps are followed:

  1. Importing a new classifier; LogisticRegression is chosen as the second model.

  2. Fitting the Logistic Regression model using the original training data.

  3. Backtesting the Logistic Regression model to evaluate its performance.
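Under the same assumptions as the baseline sketch, fitting and backtesting the second model could look like this:

```python
# Sketch: fit LogisticRegression on the original (scaled) training data,
# review its classification report, and backtest the predicted signals.
from sklearn.linear_model import LogisticRegression

lr_model = LogisticRegression().fit(X_train_scaled, y_train)
lr_pred = lr_model.predict(X_test_scaled)
print(classification_report(y_test, lr_pred))

# Backtest: apply the predicted signals to the actual returns
lr_predictions_df = predictions_df.copy()
lr_predictions_df["Predicted"] = lr_pred
lr_predictions_df["Strategy Returns"] = (
    lr_predictions_df["Actual Returns"] * lr_predictions_df["Predicted"]
)
(1 + lr_predictions_df[["Actual Returns", "Strategy Returns"]]).cumprod().plot(
    title="Actual Returns vs. Logistic Regression Strategy Returns"
)
```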

Answer the following questions: Did this new model perform better or worse than the provided baseline model? Did this new model perform better or worse than your tuned trading algorithm?

This new model performed well, but not as well as the provided baseline model or the tuned trading algorithm.

lr_report lr_return_comparison
