Regression Metrics Calculation Made Easy for TensorFlow 2 and scikit-learn

Overview

Regression Metrics

Installation

To install the package from the PyPI repository, execute the following command:

pip install regressionmetrics

If you prefer, you can instead clone the repository from GitHub and install it, along with its dependencies, locally:

git clone https://github.com/ashishpatel26/regressionmetrics.git
cd regressionmetrics
pip install .

The package implements the following regression metrics (a reference-formula sketch follows the list):

  • Mean Absolute Error (MAE) - sklearn, keras
  • Mean Squared Error (MSE) - sklearn, keras
  • Root Mean Squared Error (RMSE) - sklearn, keras
  • Root Mean Squared Logarithmic Error (RMSLE) - sklearn, keras
  • Root Mean Squared Logarithmic Error with negative value handling - sklearn
  • R2 Score - sklearn, keras
  • Adjusted R2 Score - sklearn, keras
  • Mean Absolute Percentage Error (MAPE) - sklearn, keras
  • Mean Squared Logarithmic Error (MSLE) - sklearn, keras
  • Symmetric Mean Absolute Percentage Error (SMAPE) - sklearn, keras
  • Normalized Root Mean Squared Error (NRMSE) - sklearn, keras
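
For reference, the snippet below sketches how a few of the less common metrics are typically defined, using plain NumPy. This is not the package's internal implementation; the exact conventions in regressionmetrics (for example, how NRMSE is normalized or how RMSLE handles negative values) may differ.

import numpy as np

def adjusted_r2(y_true, y_pred, n_features):
    # Adjusted R2: penalizes R2 by the number of predictors used.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    n = len(y_true)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

def smape(y_true, y_pred):
    # Symmetric MAPE: bounded percentage error, symmetric in y_true and y_pred.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100 * np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

def nrmse(y_true, y_pred):
    # RMSE normalized by the mean of the observed values (one common convention).
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2)) / np.mean(y_true)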

Usage

Usage with scikit-learn:

from regressionmetrics.metrics import *

# Ground-truth and predicted values; the negative prediction exercises
# the RMSLE variant that handles negative values.
y_true = [3, 0.5, 2, 7]
y_pred = [2.5, 0.0, 2, -8]

print("R2 Score:", r2(y_true, y_pred))
print("Adjusted R2 Score:", adj_r2(y_true, y_pred))
print("RMSE:", rmse(y_true, y_pred))
print("MAE:", mae(y_true, y_pred))
print("RMSLE with neg. value:", rmsle_with_negval(y_true, y_pred))
print("MSE:", mse(y_true, y_pred))
print("MAPE:", mape(y_true, y_pred))

Usage with TensorFlow Keras:

from regressionmetrics.keras import *

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load the Boston Housing regression dataset shipped with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
    path="boston_housing.npz", test_split=0.2, seed=113
)

# Simple fully connected regression model.
model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(x_train.shape[1],)),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])

# Pass the package's metric functions directly to model.compile().
model.compile(optimizer='rmsprop', loss='mse',
              metrics=[r2, mae, mse, rmse, mape, rmsle, nrmse])
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_test, y_test))
Epoch 1/10
13/13 [==============================] - 1s 15ms/step - loss: 270.0653 - r2: 0.9472 - mae: 11.5427 - mse: 270.0653 - rmse: 11.5427 - mape: 57.3519 - rmsle: 0.6445 - nrmse: 0.5735 - val_loss: 88.6351 - val_r2: 0.9727 - val_mae: 6.6028 - val_mse: 88.6351 - val_rmse: 6.6028 - val_mape: 29.6502 - val_rmsle: 0.3161 - val_nrmse: 0.2965
Epoch 2/10
13/13 [==============================] - 0s 3ms/step - loss: 87.1876 - r2: 0.9856 - mae: 6.9466 - mse: 87.1876 - rmse: 6.9466 - mape: 33.4256 - rmsle: 0.3057 - nrmse: 0.3343 - val_loss: 81.7884 - val_r2: 0.9712 - val_mae: 6.6424 - val_mse: 81.7884 - val_rmse: 6.6424 - val_mape: 28.8687 - val_rmsle: 0.3334 - val_nrmse: 0.2887
Epoch 3/10
13/13 [==============================] - 0s 3ms/step - loss: 103.6462 - r2: 0.9825 - mae: 7.1041 - mse: 103.6462 - rmse: 7.1041 - mape: 34.6278 - rmsle: 0.3231 - nrmse: 0.3463 - val_loss: 71.7539 - val_r2: 0.9769 - val_mae: 6.1455 - val_mse: 71.7539 - val_rmse: 6.1455 - val_mape: 27.5078 - val_rmsle: 0.2893 - val_nrmse: 0.2751
Epoch 4/10
13/13 [==============================] - 0s 3ms/step - loss: 88.1601 - r2: 0.9823 - mae: 6.8479 - mse: 88.1601 - rmse: 6.8479 - mape: 32.5867 - rmsle: 0.3080 - nrmse: 0.3259 - val_loss: 63.3707 - val_r2: 0.9829 - val_mae: 6.0845 - val_mse: 63.3707 - val_rmse: 6.0845 - val_mape: 33.1628 - val_rmsle: 0.2747 - val_nrmse: 0.3316
Epoch 5/10
13/13 [==============================] - 0s 3ms/step - loss: 82.3233 - r2: 0.9860 - mae: 6.5795 - mse: 82.3233 - rmse: 6.5795 - mape: 32.5198 - rmsle: 0.3105 - nrmse: 0.3252 - val_loss: 74.4783 - val_r2: 0.9813 - val_mae: 6.8936 - val_mse: 74.4783 - val_rmse: 6.8936 - val_mape: 41.9492 - val_rmsle: 0.3067 - val_nrmse: 0.4195
Epoch 7/10
13/13 [==============================] - 0s 3ms/step - loss: 76.0740 - r2: 0.9856 - mae: 6.4234 - mse: 76.0740 - rmse: 6.4234 - mape: 31.8728 - rmsle: 0.2828 - nrmse: 0.3187 - val_loss: 104.1779 - val_r2: 0.9679 - val_mae: 7.5539 - val_mse: 104.1779 - val_rmse: 7.5539 - val_mape: 30.9401 - val_rmsle: 0.3692 - val_nrmse: 0.3094
Epoch 8/10
13/13 [==============================] - 0s 4ms/step - loss: 68.4268 - r2: 0.9892 - mae: 5.9540 - mse: 68.4268 - rmse: 5.9540 - mape: 29.7586 - rmsle: 0.2623 - nrmse: 0.2976 - val_loss: 171.7968 - val_r2: 0.9412 - val_mae: 10.5855 - val_mse: 171.7968 - val_rmse: 10.5855 - val_mape: 47.9010 - val_rmsle: 0.7561 - val_nrmse: 0.4790
Epoch 9/10
13/13 [==============================] - 0s 3ms/step - loss: 92.3889 - r2: 0.9796 - mae: 6.8932 - mse: 92.3889 - rmse: 6.8932 - mape: 33.2856 - rmsle: 0.3333 - nrmse: 0.3329 - val_loss: 67.2208 - val_r2: 0.9808 - val_mae: 5.8498 - val_mse: 67.2208 - val_rmse: 5.8498 - val_mape: 26.4504 - val_rmsle: 0.2680 - val_nrmse: 0.2645
Epoch 10/10
13/13 [==============================] - 0s 3ms/step - loss: 78.3823 - r2: 0.9856 - mae: 6.5958 - mse: 78.3823 - rmse: 6.5958 - mape: 32.8136 - rmsle: 0.3025 - nrmse: 0.3281 - val_loss: 69.5314 - val_r2: 0.9787 - val_mae: 6.8302 - val_mse: 69.5314 - val_rmse: 6.8302 - val_mape: 37.3933 - val_rmsle: 0.2974 - val_nrmse: 0.3739
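
The metric functions passed to model.compile() above are ordinary callables of (y_true, y_pred) that return a tensor, so custom metrics can be mixed in alongside them. The sketch below is a hypothetical example written with TensorFlow ops, not code from the regressionmetrics package; rmsle_custom and its clipping strategy are illustrative assumptions.

import tensorflow as tf

def rmsle_custom(y_true, y_pred):
    # Hypothetical custom metric: root mean squared logarithmic error,
    # with negative predictions clipped to zero before the log transform.
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.maximum(tf.cast(y_pred, tf.float32), 0.0)
    log_diff = tf.math.log1p(y_pred) - tf.math.log1p(y_true)
    return tf.sqrt(tf.reduce_mean(tf.square(log_diff)))

# Any Keras-compatible metric can sit alongside the package's functions:
# model.compile(optimizer='rmsprop', loss='mse', metrics=[rmse, rmsle_custom])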

😃 Thanks for reading and forking.


Comments
  • Very nice toolkit

    This isn't really an issue. I wanted to thank you for sharing such a nice toolkit for regression tasks with TensorFlow.

    Do you have a similar toolkit for classification?

    opened by happypanda5 0
Releases (v1.4.0)
  • v1.4.0 (Oct 30, 2021)

    • Changelog for v1.4.0 (2022-01-13)
      • Name clashes with Keras metric names resolved

    • Changelog for v1.3.0 (2021-11-18)
      • New regression metrics added, with detailed explanations

    • Changelog for v1.2.0 (2021-10-31)
      • Adjusted R2 score error fixed

    • Changelog for v1.1.0 (2021-10-31)
      • Some errors fixed

    • Changelog for v1.0.0 (2021-10-31)
      • First release (1.0.0) of the regressionmetrics package

    Source code (tar.gz)
    Source code (zip)
Owner
Ashish Patel
AI Researcher & Senior Data Scientist at Softweb Solutions Avnet Solutions (Fortune 500) | Rank 3 Kaggle Kernel Master