A library of scikit-learn-compatible categorical variable encoders

Overview

Categorical Encoding Methods


A set of scikit-learn-style transformers for encoding categorical variables into numeric representations using a variety of techniques.

Important Links

Documentation: http://contrib.scikit-learn.org/category_encoders/

Encoding Methods

Unsupervised:

  • Backward Difference Contrast [2][3]
  • BaseN [6]
  • Binary [5]
  • Count [10]
  • Hashing [1]
  • Helmert Contrast [2][3]
  • Ordinal [2][3]
  • One-Hot [2][3]
  • Polynomial Contrast [2][3]
  • Sum Contrast [2][3]

Supervised:

  • CatBoost [11]
  • Generalized Linear Mixed Model [12]
  • James-Stein Estimator [9]
  • LeaveOneOut [4]
  • M-estimator [7]
  • Target Encoding [7]
  • Weight of Evidence [8]
  • Quantile Encoder [13]
  • Summary Encoder [13]

Installation

The package requires: numpy, statsmodels, and scipy.

To install the package, execute:

$ python setup.py install

or

pip install category_encoders

or

conda install -c conda-forge category_encoders

To install the development version, you may use:

pip install --upgrade git+https://github.com/scikit-learn-contrib/category_encoders

Usage

All of the encoders are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. Supported input formats include numpy arrays and pandas dataframes. If the cols parameter isn't passed, all columns with an object or pandas categorical data type will be encoded. Please see the docs for transformer-specific configuration options.
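
For illustration, a minimal sketch of the default column selection (toy data; OrdinalEncoder is used only as an example):

import pandas as pd
from category_encoders import OrdinalEncoder

# made-up data: one numeric column, one object column, one pandas categorical column
df = pd.DataFrame({
    'age': [25, 32, 47],
    'city': ['Paris', 'Oslo', 'Paris'],
    'tier': pd.Categorical(['gold', 'silver', 'gold']),
})

# cols is not passed, so only the object/categorical columns ('city', 'tier') are encoded
enc = OrdinalEncoder()
print(enc.fit_transform(df))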

Examples

There are two types of encoders: unsupervised and supervised. An unsupervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

# use binary encoding to encode two categorical features
enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X)

# transform the dataset
numeric_dataset = enc.transform(X)

And a supervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y_train = bunch.target[0:250]
y_test = bunch.target[250:506]
X_train = pd.DataFrame(bunch.data[0:250], columns=bunch.feature_names)
X_test = pd.DataFrame(bunch.data[250:506], columns=bunch.feature_names)

# use target encoding to encode two categorical features
enc = TargetEncoder(cols=['CHAS', 'RAD'])

# transform the datasets
training_numeric_dataset = enc.fit_transform(X_train, y_train)
testing_numeric_dataset = enc.transform(X_test)

When transforming the training data with supervised methods, you should use the fit_transform() method rather than fit().transform(), because the two do not have to produce the same result. The difference is easiest to see with the LeaveOneOut encoder, which performs a nested cross-validation on the training data in fit_transform() (to reduce over-fitting of the downstream model) but uses all of the training data for scoring in transform() (to get estimates that are as accurate as possible).
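
A minimal sketch of this difference with toy data:

import pandas as pd
from category_encoders import LeaveOneOutEncoder

# toy data: one categorical feature, numeric target (illustrative values)
X = pd.DataFrame({'cat': ['a', 'a', 'b', 'b']})
y = pd.Series([1.0, 2.0, 3.0, 4.0])

enc = LeaveOneOutEncoder(cols=['cat'])

# fit_transform(): each training row is encoded without using its own target value
print(enc.fit_transform(X, y))

# transform() after fit(): every row is scored with all training targets, so the results differ
print(enc.fit(X, y).transform(X))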

Furthermore, you may benefit from the following wrappers (a short usage sketch follows the list):

  • PolynomialWrapper, which extends supervised encoders to support polynomial targets
  • NestedCVWrapper, which helps to prevent overfitting
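
A minimal usage sketch with toy data; the wrapped encoder and parameters are chosen purely for illustration:

import pandas as pd
from category_encoders import TargetEncoder
from category_encoders.wrapper import PolynomialWrapper, NestedCVWrapper

X = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'a', 'b']})
y_multiclass = pd.Series(['x', 'y', 'z', 'x', 'y', 'z'])   # polynomial (multiclass) target
y_numeric = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# fits a separate target encoder per target class (see the wrapper docstring for the exact output columns)
enc_poly = PolynomialWrapper(TargetEncoder(cols=['cat']))
print(enc_poly.fit_transform(X, y_multiclass))

# encodes the training data out-of-fold to reduce target leakage / overfitting
enc_cv = NestedCVWrapper(TargetEncoder(cols=['cat']), cv=3)
print(enc_cv.fit_transform(X, y_numeric))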

Additional examples and benchmarks can be found in the examples directory.

Contributing

Category encoders is under active development; if you'd like to be involved, we'd love to have you. Check out the CONTRIBUTING.md file or open an issue on the GitHub project to get started.

References

  1. Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.
  2. Contrast Coding Systems for categorical variables. UCLA: Statistical Consulting Group. From https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/.
  3. Gregory Carey (2003). Coding Categorical Variables. From http://psych.colorado.edu/~carey/Courses/PSYC5741/handouts/Coding%20Categorical%20Variables%202006-03-03.pdf
  4. Strategies to encode categorical variables with many categories. From https://www.kaggle.com/c/caterpillar-tube-pricing/discussion/15748#143154.
  5. Beyond One-Hot: an exploration of categorical variables. From http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/
  6. BaseN Encoding and Grid Search in categorical variables. From http://www.willmcginnis.com/2016/12/18/basen-encoding-grid-search-category_encoders/
  7. Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. SIGKDD Explor. Newsl. 3, 1. From http://dx.doi.org/10.1145/507533.507538
  8. Weight of Evidence (WOE) and Information Value Explained. From https://www.listendata.com/2015/03/weight-of-evidence-woe-and-information.html
  9. Empirical Bayes for multiple sample sizes. From http://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/
  10. Simple Count or Frequency Encoding. From https://www.datacamp.com/community/tutorials/encoding-methodologies
  11. Transforming categorical features to numerical features. From https://tech.yandex.com/catboost/doc/dg/concepts/algorithm-main-stages_cat-to-numberic-docpage/
  12. Andrew Gelman and Jennifer Hill (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. From https://faculty.psau.edu.sa/filedownload/doc-12-pdf-a1997d0d31f84d13c1cdc44ac39a8f2c-original.pdf
  13. Carlos Mougan, David Masip, Jordi Nin and Oriol Pujol (2021). Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems. https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
Comments
  • Add Multi-Process Supported HashingEncoder

    Add a multi-process HashingEncoder called NHashingEncoder. By using multiple processes, it is several times faster than HashingEncoder. On an i5-8259U, encoding 1,000,000 samples with HashingEncoder takes 720+ seconds, while NHashingEncoder with the parameter "max_process=4" takes only 230+ seconds. On a Linux machine with a 16x3.2 GHz CPU and 64 GB of memory, encoding 20 million samples with HashingEncoder takes over 4 hours, while NHashingEncoder with the parameter "max_process=8" takes only 20 minutes.

    opened by liushulun 30
  • Ordinal encoder support new handle unknown handle missing

    Here is the first pass at making Ordinal Encoder support the fields handle_unknown and handle_missing as described at https://github.com/scikit-learn-contrib/categorical-encoding/issues/92.

    Let's go through the fields and their logic.

    handle_unknown

    1. value
      • unknown values go to -1 at transform time
    2. error
      • throw a ValueError if new categories are encountered at transform time
    3. return_nan
      • at transform time, return nan

    Ok, now handle_missing has a configuration for each setting, depending on whether NaN is present at fit time (a small sketch follows the list below).

    handle_missing

    1. value
      • NaN present at fit time -> NaN is treated as a category
      • NaN not present at fit time -> transform returns -2
    2. return_nan
      • fit adds a -2 mapping; at transform time, NaN is returned for it
    3. error
      • at fit or transform time, throw an error
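
    A minimal sketch of the proposed behaviour (toy data; assuming handle_unknown='value' and handle_missing='value' behave as described above):

    import numpy as np
    import pandas as pd
    from category_encoders import OrdinalEncoder

    X_train = pd.DataFrame({'cat': ['a', 'b', 'b']})        # no NaN at fit time
    X_test = pd.DataFrame({'cat': ['a', 'c', np.nan]})      # 'c' is unknown, NaN is missing

    enc = OrdinalEncoder(cols=['cat'], handle_unknown='value', handle_missing='value')
    enc.fit(X_train)

    # with these settings the unknown 'c' maps to -1 and the unseen NaN maps to -2
    print(enc.transform(X_test))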

    Ok, for a complete implementation every encoder will have to be changed. How do we want to avoid gigantic pull requests? Have a long-lived feature branch?

    Ok thoughts,

    1. I am going to implement cucumber tests for handle_unknown and handle_missing, because trying to keep it all straight in my head is difficult.
    2. I need to go through inverse transform and check it against every new setting.
    3. My implementation of return_nan makes processing in the downstream encoders more difficult, because we are mapping NaN to -2.
    4. The relationship between value and indicator for the multi-column encoders and the output of the ordinal encoder currently confuses me. I am going to sit down and write it all out so I know what should lead to what.
    5. Check the changes to the test_ordinal_dist test in test_ordinal. Why was None not being treated as a category?

    Tell me what you think and I can get started on the other encoders.

    opened by JohnnyC08 23
  • Fix binary encoder for columntransformer

    I discovered that when using the BinaryEncoder in a sklearn.ColumnTransformer, the passed params are lost.

    This is because the encoder gets instantiated twice in a ColumnTransformer. Currently, params are not registered to self in BinaryEncoder.__init__(), so they are lost when the ColumnTransformer is put to work.
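
    The underlying mechanism, sketched with the current API: scikit-learn rebuilds transformers from get_params() when it clones them inside a ColumnTransformer, so any constructor argument that __init__() does not store on self disappears in the clone:

    from sklearn.base import clone
    from category_encoders import BinaryEncoder

    enc = BinaryEncoder(cols=['a'], drop_invariant=True)
    # the clone is rebuilt from these parameters; anything not registered on self is silently lost
    print(enc.get_params())
    print(clone(enc).get_params())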

    Disclaimer: I was able to correctly binary encode in a local debug session. However, as there are so many tests failing on the upstream master currently, it was hard to find out whether my solution has an undesired impact.

    Also, I am confused by ordinal.py L323-L326. Is this a bug? It seems to correctly encode both with the -2 and np.nan...

    opened by datarian 21
  • Quantile encoder

    opened by cmougan 19
  • 1.4.0 Release Organization

    Hey all, been away from the project for a bit, but I'm going back through all of the issues and PRs worked on recently (looks like a bunch of good progress!). Special thanks to @janmotl for all of the work as primary maintainer over the past months.

    Our last release was 1.3.0 on October 14th. Since then y'all have:

    • Sped up the TargetEncoder and LeaveOneOutEncoder w/ vectorization (significantly)
    • Added support for Categorical types in many encoders
    • Implemented get_feature_names in remaining transformers
    • Improved testing coverage and quality
    • Solved edge cases in repeated column names for some transformers
    • Added support for transforming pandas Series as well as DataFrames and numpy Arrays
    • Fixed inverse transform for many encoders
    • Lots of smaller performance enhancements and code cleanups

    That, I think, is quite a full set of features to constitute a release. I will be opening a separate issue to discuss how we as a community can improve our release cycle, but for now I will be going through open issues and tagging anything that should be included before the v1.4.0 release. Any input on what should or shouldn't be completed prior to release is welcome.

    Thank you all for the work and support this year, and Happy Holidays.

    Release 
    opened by wdm0006 17
  • Behavior of OneHotEncoder handle_unknown option

    I'm trying to understand the behavior (and intent) of the handle_unknown option for OneHotEncoder (and by extension OrdinalEncoder). The docs imply that this should control NaN handling, but the examples below seem to indicate otherwise (category_encoders==1.2.8).

    In [2]: import pandas as pd
       ...: import numpy as np
       ...: from category_encoders import OneHotEncoder
       ...: 
    
    In [3]: X = pd.DataFrame({'a': ['foo', 'bar', 'bar'],
       ...:                   'b': ['qux', np.nan, 'foo']})
       ...: X
       ...: 
    Out[3]: 
         a    b
    0  foo  qux
    1  bar  NaN
    2  bar  foo
    
    In [4]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[4]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [5]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='impute', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[5]: 
       a_foo  a_bar  a_-1  b_qux  b_nan  b_foo  b_-1
    0      1      0     0      1      0      0     0
    1      0      1     0      0      1      0     0
    2      0      1     0      0      0      1     0
    
    In [6]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='error', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[6]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [7]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=False, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[7]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    

    In particular, 'error' and 'ignore' give the same behavior, treating missing observations as another category. 'impute' adds constant zero-valued columns but also treats missing observations as another category. Naively, I would've expected behavior similar to pd.get_dummies(X, dummy_na={True|False}), with handle_unknown='ignore' corresponding to dummy_na=False.
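
    For reference, a small sketch of the pandas behaviour described above (same toy frame as in the transcript):

    import numpy as np
    import pandas as pd

    X = pd.DataFrame({'a': ['foo', 'bar', 'bar'],
                      'b': ['qux', np.nan, 'foo']})

    # NaN only gets its own indicator column when dummy_na=True
    print(pd.get_dummies(X, dummy_na=False))
    print(pd.get_dummies(X, dummy_na=True))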

    bug 
    opened by multiloc 17
  • Get feature names

    Implemented get_feature_names for HashingEncoder, OneHotEncoder and OrdinalEncoder.

    For my purposes, these work now. They are not fully tested; it's more of a proposal for a concept. If you like it, I will gladly implement it for the rest of the encoders, incorporating any feedback.

    opened by datarian 16
  • Question: Difference between TargetEncoder and LeaveOneOutEncoder

    It's not really clear to me what the difference between TargetEncoder and LeaveOneOutEncoder is, as both encode using the target with leave-one-out. Can you maybe clarify this, and also clarify it in the docs? Does either work for multi-class classification?

    question 
    opened by amueller 13
  • Implement Target Encoding with Hierarchical Structure Smoothing

    From section 4 of the paper cited in TargetEncoder.

    Instead of choosing the prior probability of the target as the null hypothesis, it is reasonable to replace it with the estimated probability at the next higher level of aggregation in the attribute hierarchy

    In other words, if we have a single row with zipcode 54321 but 100 rows each with zipcodes 54322 and 54323, we could use the mean of the level-4 zipcode prefix 5432X as the smoothing term for zipcode 54321, instead of the mean over all zipcodes XXXXX.
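
    A rough sketch of the idea with made-up data, using a simple m-estimate to shrink each zipcode's mean towards its 4-digit prefix instead of the global mean:

    import pandas as pd

    df = pd.DataFrame({
        'zip': ['54321', '54322', '54322', '54323', '54323'],
        'y':   [1.0, 0.0, 1.0, 1.0, 1.0],
    })
    m = 10.0  # smoothing strength (illustrative)

    df['zip4'] = df['zip'].str[:4]                     # next higher level of aggregation
    prior = df.groupby('zip4')['y'].transform('mean')  # per-prefix mean as the "null hypothesis"
    stats = df.groupby('zip')['y'].agg(['mean', 'count'])
    counts = df['zip'].map(stats['count'])
    means = df['zip'].map(stats['mean'])

    # m-estimate shrinkage towards the prefix-level mean rather than the global mean
    df['zip_encoded'] = (counts * means + m * prior) / (counts + m)
    print(df)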

    This would be a really nice additional piece of functionality to add as an encoder.

    enhancement 
    opened by JoshuaC3 13
  • In the ordinal encoder go ahead and update the existing column instea…

    To fix https://github.com/scikit-learn-contrib/categorical-encoding/issues/100

    The issue arose because _tmp columns were being appended to the end of the data frame as part of the transform process.

    First, we noticed that the transform process was to append a temporary column, drop the existing column, and rename the temporary column to the existing column name.

    So, we reduced that to a single step in which we update the existing column using our mapping, which preserves the column order. I wasn't sure why the above-mentioned transform method had that many steps, and a single update seems to keep the tests passing.
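
    Roughly, the change amounts to something like the following sketch (illustrative only, not the actual diff):

    import pandas as pd

    df = pd.DataFrame({'city': ['Oslo', 'Paris', 'Oslo']})
    mapping = {'Oslo': 1, 'Paris': 2}

    # old approach (simplified): append a temporary column, drop the original, rename
    df['city_tmp'] = df['city'].map(mapping)
    df = df.drop(columns=['city']).rename(columns={'city_tmp': 'city'})  # column order can change

    # new approach: update the existing column in place, preserving column order
    df2 = pd.DataFrame({'city': ['Oslo', 'Paris', 'Oslo']})
    df2['city'] = df2['city'].map(mapping)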

    @janmotl I also noticed that in Travis the python3 step seems to be running Python 2.7 instead of Python 3. In install.sh I see mentions of a conda create, and in the CI logs I see a virtualenv being set up, which I don't see mentioned in the project. Perhaps the Travis cache needs to be cleared?

    opened by JohnnyC08 13
  • Differing dimensions for training and test

    Hi,

    I would like to fit encodings on my training set and then use this fitted encoding to transform both the training and the test set:

    import category_encoders as ce
    
    train = ['Brunswick East', 'Fitzroy', 'Williamstown', 'Newport', 'Balwyn North', 'Doncaster', 'Melbourne', 'Albert Park', 'Bentleigh', 'Northcote']
    test = ['Fitzroy North', 'Fitzroy', 'Richmond', 'Surrey Hills', 'Blackburn', 'Port Melbourne', 'Footscray', 'Yarraville', 'Carnegie', 'Surrey Hills']
    
    encoder = ce.HelmertEncoder()
    encoder.fit(train)
    
    train_t = encoder.transform(train)
    test_t = encoder.transform(test)
    
    print(train_t.shape)
    >> (10, 10)
    print(test_t.shape)
    >> (10, 2)
    

    The problem is that the dimensions do not match. What am I doing wrong, or how can I fix this issue?

    Best regards, Felix

    opened by FelixNeutatz 12
  • ValueError: `X` and `y` both have indexes, but they do not match.

    Expected Behavior

    When running any of the category encoders, e.g. TargetEncoder(), within a pipeline through permutation_test_score(), it errors out with the above message. The error occurs in the convert_inputs() function, which checks if any(X.index != y.index): before raising the error.

    Actual Behavior

    The error is not correct and shouldn't occur. When I run the same check on my input X (DataFrame) and y (Series), it doesn't trigger.

    In fact, when I load input data, after splitting the data into X and y, and after label encoding y, I explicitly convert it into a pd.Series and assign it the X.index, so they are in fact identical.

    If in contrast, I do not convert the label encoded y into a pd.Series and leave it as an ndarray, then this error doesn't occur!

    Also, note that the same pipeline when fitted with the same X, y df and series works absolutely fine.

    Steps to Reproduce the Problem

    See an example of my pipeline below:

    1. Create an arbitrary pipeline as follows:
    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import SGDClassifier
    from category_encoders import TargetEncoder
    
    test_pipe = Pipeline([('enc', TargetEncoder()), ('clf', SGDClassifier(loss='log_loss'))])
    
    2. Run the permutation test:
    from sklearn.model_selection import permutation_test_score
    score, perm_scores, pvalue = permutation_test_score(test_pipe, X, y)
    

    Specifications

    • Version: 2.5.1.post0
    • Platform: Python 3.10.8
    • Subsystem: Pandas 1.5.1
    opened by RNarayan73 1
  • OneHotEncoder: handle_missing = 'ignore' would be very useful

    Expected Behavior

    It would be nice to be able to ignore missing values instead of creating new columns with a "_nan" suffix, just like it is possible with pandas. What do you think?

    Actual Behavior

    This doesn't exist in the current latest version (according to my knowledge).

    Steps to Reproduce

    import pandas as pd
    import numpy as np
    from category_encoders import OneHotEncoder
    
    encoder = OneHotEncoder(
        cols=None,  # all non-numeric
        return_df=True,
        handle_missing="value",  # would be nice to have the option 'ignore'
        use_cat_names=True,
    )
    df = pd.DataFrame(
        {"this": ["GREEN", "GREEN", "YELLOW", "YELLOW"], "that": ["A", "B", "A", np.nan]}
    )
    
    encoder.fit_transform(df) # unwanted result
    pd.get_dummies(df, dummy_na=False) # wanted result
    

    Specifications

    • Version: 2.5.1.post0
    opened by woodly0 0
  • fix: Broken inverse_transform for OrdinalEncoder when custom mapping …

    This PR fixes issue #202. It allows the inverse transform to be performed with a custom mapping dict.

    Proposed Changes

    The changes are minor and modify line 171 by implementing comment https://github.com/scikit-learn-contrib/category_encoders/issues/202#issuecomment-946159286

    opened by fredmontet 2
  • Intercept in Contrast Coding Schemes

    Expected Behavior

    The constant (all values 1) intercept column should not be added when applying contrast coding schemes (i.e. backward difference, sum, polynomial and helmert coding)

    I don't think this intercept column is needed. If you fit a supervised learning model, it probably helps to remove the intercept column. I think it is there because, when fitting linear models with statsmodels, you have to add the intercept yourself.
    However, I don't like that the output of an encoder then depends on whether the intercept column is already there or not: e.g. if I first apply encoder A on column A and then encoder B on column B, the intercept column of B overwrites A's intercept column instead of adding a new column. Also, if I have (for some reason) a non-constant column called intercept, it would get overwritten.

    Any opinion? Am I missing something? Is the intercept necessary?

    Actual Behavior

    A constant column with all values 1 is added

    Steps to Reproduce the Problem

    Run transform on any fitted contrast coding encoder, e.g.

        import category_encoders as encoders

        train = ['A', 'B', 'C']
        encoder = encoders.BackwardDifferenceEncoder(handle_unknown='value', handle_missing='value')
        encoder.fit_transform(train)
    
    opened by PaulWestenthanner 3
  • No need to check if # of dimensions of testing set align with training set in target_encoder

    https://github.com/scikit-learn-contrib/category_encoders/blob/6a13c14919d56fed8177a173d4b3b82c5ea2fef5/category_encoders/utils.py#L322-L323

    For the function _check_transform_inputs(), I do not want it to report an error when the number of dimensions of the testing set doesn't align with the training set. However, by default it has to align. Considering that the purpose of the target encoder is to transform the designated columns and nothing else, logically we don't have to validate the dimension alignment.

    opened by hongG1997EQ 1
  • Memory increase of WOEEncoder for newer category_encoders version

    Memory increase of WOEEncoder for category_encoders version >=2.0.0

    Hi, I noticed another memory issue with WOEEncoder. I submitted a similar bug before in #335; the difference between the two bugs is the encoder method used and the dataset. In order to distinguish between the two encoder APIs, I submitted a new bug report.

    Expected Behavior

    Similar memory usage

    Actual Behavior

    According to the experiment results, when the category_encoders version is higher than 2.0.0, the memory usage of weight_enc.fit(train[weight_encode], train['target']) increases from 58 MB to 206 MB.

    Memory (MB) | Version
    ----------- | -------
    209         | 2.3.0
    209         | 2.2.2
    209         | 2.1.0
    209         | 2.0.0
    58          | 1.3.0

    Steps to Reproduce the Problem

    Step 1: Download the dataset

    train.zip

    Step 2: install category_encoders

    pip install category_encoders==#version#
    

    Step 3: change category_encoders version and save the memory usage

    import numpy as np 
    import pandas as pd 
    train = pd.read_csv('train.csv')
    test = pd.read_csv('test.csv')
    columns = [x for x in train.columns if x != 'target']
    object_col_label = ['bin_0','bin_1','bin_2','bin_3','bin_4']
    one_hot_encode = ['nom_0', 'nom_1', 'nom_2', 'nom_3', 'nom_4']
    target_encode = ['nom_5', 'nom_6', 'nom_7', 'nom_8', 'nom_9']
    weight_encode = target_encode + ['ord_4', 'ord_5' ,'ord_3'] + one_hot_encode + object_col_label
    import category_encoders as ce
    weight_enc = ce.woe.WOEEncoder(cols=weight_encode)
    import tracemalloc
    tracemalloc.start()
    weight_enc.fit(train[weight_encode], train['target'])
    current3, peak3 = tracemalloc.get_traced_memory()
    print("Get_dummies memory usage is {",current3 /1024/1024,"}MB; Peak memory was :{",peak3 / 1024/1024,"}MB")
    

    Specifications

    • Version: 2.3.0, 2.2.2, 2.1.0, 2.0.0, 1.3.0
    • Platform: Ubuntu 16.4
    • OS: Ubuntu
    • CPU: Intel(R) Core(TM) i9-9900K CPU
    • GPU: TITAN V

    opened by Piecer-plc 1