Research on Tabular Deep Learning (Python package & papers)

Overview

For paper implementations, see the section "Papers and projects".

rtdl is a PyTorch-based package providing a user-friendly API for the main models and concepts from our papers. See the documentation.
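
For a first taste of the API, here is a minimal sketch (exact signatures are best checked against the documentation; the feature counts below are made up):

    import rtdl
    import torch

    # FT-Transformer with default hyperparameters: 8 numerical features,
    # two categorical features with the given cardinalities, one output
    model = rtdl.FTTransformer.make_default(
        n_num_features=8,
        cat_cardinalities=[3, 7],
        d_out=1,
    )
    x_num = torch.randn(4, 8)
    x_cat = torch.stack(
        [torch.randint(0, 3, (4,)), torch.randint(0, 7, (4,))],
        dim=1,
    )
    y_hat = model(x_num, x_cat)  # shape: (4, 1)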

Press "Watch" to stay up to date with new papers and releases!

Feel free to report issues and post questions/feedback/ideas.

Papers and projects

Name                                                            Location  Comment
On Embeddings for Numerical Features in Tabular Deep Learning   link      arXiv 2022
Revisiting Deep Learning Models for Tabular Data                link      NeurIPS 2021
rtdl                                                            link      Python package
Comments
  • Fix MLP.make_baseline() return type

    Return an object of type cls, not MLP, in MLP.make_baseline(). Otherwise, child classes that inherit from MLP and are constructed via .make_baseline() always have type MLP instead of the child class's type.
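
    A minimal sketch of the proposed fix (the make_baseline signature here is abbreviated, not the exact one from the package):

    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, d_in, d_layers, dropout, d_out):
            super().__init__()
            ...  # build the blocks

        @classmethod
        def make_baseline(cls, d_in, d_layers, dropout, d_out):
            # returning cls(...) instead of MLP(...) preserves the subclass type
            return cls(d_in, d_layers, dropout, d_out)

    class MyMLP(MLP):
        pass

    assert type(MyMLP.make_baseline(8, [64], 0.1, 1)) is MyMLP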

    opened by jpgard 6
  • Is it possible to provide a scikit-learn interface?

    This project is interesting and I want to use it as the baseline algorithm for my paper. However, it seems that several steps are needed to make a prediction. Would it be possible to provide a scikit-learn interface, to make comparisons between different algorithms convenient?

    opened by hengzhe-zhang 5
  • Cannot link in the document of zero

    Hi! I am trying to understand the usage of the Python package zero, which is used in the rtdl example. But I found that the link in the code comments is no longer available.

    Here is the invalid link: https://yura52.github.io/zero/0.0.4/reference/api/zero.improve_reproducibility.html

    I am wondering whether there is any other documentation. Thank you!

    Regards.

    opened by WuZheng326 4
  • embedding of categorical variables

    Hi Yury,

    Thank you for your excellent work. I have a question about handling categorical features: do I need to pre-train the embedding layer as part of data preprocessing, or just attach the embedding layer to the model and train it jointly with the model?
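
    For reference, the usual answer is the second option: the embedding layer is an ordinary trainable module (in rtdl, CategoricalFeatureTokenizer plays this role), attached to the model and trained jointly with it; no pre-training is needed. A minimal sketch in plain PyTorch:

    import torch
    import torch.nn as nn

    class ModelWithCatEmbeddings(nn.Module):
        def __init__(self, cardinalities, d_embedding, d_out):
            super().__init__()
            # one embedding table per categorical feature, trained with the model
            self.embeddings = nn.ModuleList(
                [nn.Embedding(c, d_embedding) for c in cardinalities]
            )
            self.head = nn.Linear(d_embedding * len(cardinalities), d_out)

        def forward(self, x_cat):
            x = torch.cat(
                [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)],
                dim=-1,
            )
            return self.head(x)

    model = ModelWithCatEmbeddings(cardinalities=[3, 7], d_embedding=8, d_out=1)
    x_cat = torch.tensor([[0, 5], [2, 1]])
    print(model(x_cat).shape)  # torch.Size([2, 1])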

    opened by lhq12 3
  • Add ⭐️Weights & Biases⭐️ Logging

    This PR adds basic Weights & Biases metric logging with minimal changes to the existing codebase, and supports checkpoint uploads as Weights & Biases Artifacts.

    Wherever needed, I have used the existing Weights & Biases integrations, viz. those for LightGBM and XGBoost.

    I have validated the proposed changes with 150+ runs, which can be viewed on this project page and in detail in an accompanying blog post.
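
    For context, the core of such an integration is only a few calls; a rough sketch (the project name, metric values, and file path below are illustrative):

    import wandb

    run = wandb.init(project='rtdl-experiments')            # illustrative project name
    for epoch, val_score in enumerate([0.71, 0.72, 0.73]):  # stand-in for the real training loop
        wandb.log({'epoch': epoch, 'val_score': val_score})

    # checkpoints can be uploaded as W&B Artifacts
    artifact = wandb.Artifact('model-checkpoint', type='model')
    artifact.add_file('checkpoint.pt')                      # assumes this file exists
    run.log_artifact(artifact)
    run.finish()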

    opened by SauravMaheshkar 3
  • Bugs in piecewise-linear encoding

    1. Here, indices = as_tensor(values) must be changed to:

       indices = as_tensor(indices)

    2. Here, np.array(d_encoding) must be changed to:

       torch.tensor(d_encoding).to(indices)

    3. Here, the argument dtype=X.dtype is missing for np.array.

    4. Here, .to(X) is missing.

    5. Here, it must be:

       is_last_bin = bin_indices + 1 == as_tensor(list(map(len, bin_edges)))
    
    opened by Yura52 2
  • LGBMRegressor on California Housing dataset is 0.68 >> 0.46

    I use the sample code to prepare the dataset:

    import lightgbm as lgb
    import sklearn.datasets
    import sklearn.model_selection
    import sklearn.preprocessing
    import torch

    device = 'cpu'
    dataset = sklearn.datasets.fetch_california_housing()
    task_type = 'regression'

    X_all = dataset['data'].astype('float32')
    y_all = dataset['target'].astype('float32')
    n_classes = None

    X = {}
    y = {}
    X['train'], X['test'], y['train'], y['test'] = sklearn.model_selection.train_test_split(
        X_all, y_all, train_size=0.8
    )
    X['train'], X['val'], y['train'], y['val'] = sklearn.model_selection.train_test_split(
        X['train'], y['train'], train_size=0.8
    )

    # not the best way to preprocess features, but enough for the demonstration
    preprocess = sklearn.preprocessing.StandardScaler().fit(X['train'])
    X = {
        k: torch.tensor(preprocess.transform(v), device=device)  # transform, not fit_transform
        for k, v in X.items()
    }
    y = {k: torch.tensor(v, device=device) for k, v in y.items()}

    # !!! CRUCIAL for neural networks when solving regression problems !!!
    y_mean = y['train'].mean().item()
    y_std = y['train'].std().item()
    y = {k: (v - y_mean) / y_std for k, v in y.items()}

    y = {k: v.float() for k, v in y.items()}
    

    And I train an LGBMRegressor with the default hyperparameters:

    model = lgb.LGBMRegressor()
    model.fit(X['train'], y['train'])
    

    But when I evaluate on the test fold, I find the RMSE is 0.68:

    >>> test_pred = model.predict(X['test'])
    >>> test_pred = torch.from_numpy(test_pred)
    >>> rmse = torch.nn.functional.mse_loss(
    ...     test_pred.view(-1), y['test'].view(-1)) ** 0.5 * y_std
    >>> print(f'Test RMSE: {rmse:.2f}.')
    Test RMSE: 0.68.
    

    Even using the model from rtdl gives me 0.56 RMSE:

    (epoch) 57 (batch) 0 (loss) 0.1885
    (epoch) 57 (batch) 10 (loss) 0.1315
    (epoch) 57 (batch) 20 (loss) 0.1735
    (epoch) 57 (batch) 30 (loss) 0.1197
    (epoch) 57 (batch) 40 (loss) 0.1952
    (epoch) 57 (batch) 50 (loss) 0.1167
    Epoch 057 | Validation score: 0.7334 | Test score: 0.5612 <<< BEST VALIDATION EPOCH
    

    Is there anything I am missing? How can I reproduce the performance reported in your paper? Thanks!

    opened by fingertap 2
  • Regression results about the RTDL models.

    Hi, you did a great implementation of the tab-transformer. However, when I use your example notebook to do a simple regression on sin(x), neither the baseline model nor the FTTransformer gives good results. I have no idea why, and I would like to know the reason.

    Here is the link

    opened by linkedlist771 1
  • typos in CatEmbeddings

    1. link. The variable cardinalities_and_dimensions does not exist
    2. link. The condition looks broken. Solution: simplify it and remove the word "spec" from the error message.
    opened by Yura52 0
  • Running error, prenormalization is not a class variable

    The code crashes at this line, because prenormalization is not an attribute of self:

    https://github.com/Yura52/rtdl/blob/b130dd2e596c17109bef825bc9c8608e1ae617cc/rtdl/nn/_backbones.py#L627

    opened by zahar-chikishev 0
  • Typos?

    Hello,

    I am trying to use PiecewiseLinearEncoder(). I think I found a few typos. Please check my work.

    I first ran into an issue in piecewise_linear_encoding where I got the error in line 618 saying "RuntimeError: The size of tensor a (3688) must match the size of tensor b (32) at non-singleton dimension 1"

    I dug into the code and found that when PiecewiseLinearEncoder calls piecewise_linear_encoding, the positional arguments indices and ratios are passed in the opposite order from what the latter expects.

    Additionally, when inspecting piecewise_linear_encoding, it looks like bin_edges = as_tensor(bin_ratios), whereas as_tensor(bin_edges) would make more sense.

    Can you please check this out? Much appreciated.

    opened by jdefriel 1
  • How to resume training?

    I ran your model in Colab for a few hours before Google terminated the session. I used pickle.dump/load to store the trained model. It works for making predictions, but it doesn't seem to be able to resume training.

    if progress.success:
        print(' <<< BEST VALIDATION EPOCH', end='')
        with open(mydrive + jobname, 'wb') as filehandler:
            dump((model, y_std, y_mean), filehandler)
            # we could see the result was improving

    with open(mydrive + jobname, 'rb') as filehandler:
        model, y_std, y_mean = load(filehandler)
    pred = model(batch, None)  # this seems to work
    for epoch in range(1, n_epochs + 1):
        for iteration, batch_idx in enumerate(train_loader):
            model.train()
            optimizer.zero_grad()
            x_batch = X['train'][batch_idx]
            y_batch = y['train'][batch_idx]
            loss = loss_fn(apply_model(x_batch).squeeze(1), y_batch)
            loss.backward()
            optimizer.step()
            if iteration % report_frequency == 0:
                print(f'(epoch) {epoch} (batch) {iteration} (loss) {loss.item():.4f}')
            # no improvement any more, even though the model was dumped immediately after being created

    What is the right way to store the model so that I can resume training?
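
    For reference, a standard PyTorch pattern is to checkpoint the state_dicts of both the model and the optimizer rather than pickling the model object; without the optimizer state, training cannot resume properly. A sketch reusing the names from the snippet above:

    import torch

    # save: the optimizer state is what lets training resume where it left off
    torch.save(
        {
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'epoch': epoch,
        },
        mydrive + jobname,
    )

    # load and resume
    checkpoint = torch.load(mydrive + jobname)
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    start_epoch = checkpoint['epoch'] + 1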

    opened by jerronl 0
  • A scikit-learn interface for RTDL package.

    Hello! I have written a scikit-learn interface for the RTDL package (https://github.com/hengzhe-zhang/scikit-rtdl). I rely on skorch to avoid coding errors, and I set the default parameters based on those presented in your paper. Hoping you will like it!
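
    A rough sketch of what such a skorch-based wrapper looks like (hyperparameter values here are illustrative, not the package defaults):

    import numpy as np
    import torch
    import rtdl
    from skorch import NeuralNetRegressor

    net = NeuralNetRegressor(
        module=rtdl.MLP.make_baseline(d_in=8, d_layers=[128, 128], dropout=0.1, d_out=1),
        optimizer=torch.optim.AdamW,
        lr=1e-3,
        max_epochs=100,
        batch_size=256,
    )
    # skorch expects float32 arrays and a 2D target for regression
    X = np.random.rand(512, 8).astype('float32')
    y = np.random.rand(512, 1).astype('float32')
    net.fit(X, y)
    pred = net.predict(X)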

    opened by hengzhe-zhang 1
Releases(v0.0.13)
  • v0.0.13(Mar 16, 2022)

  • v0.0.12(Mar 10, 2022)

  • v0.0.10(Feb 28, 2022)

  • v0.0.9(Nov 7, 2021)

    This is a hot-fix release after the big 0.0.8 release (see the release notes for 0.0.8):

    • revert the breaking change in NumericalFeatureTokenizer accidentally introduced in 0.0.8
    • minor documentation refinements
  • v0.0.8(Nov 6, 2021)

    This release focuses on improving the documentation.

    Documentation

    • The following models and classes are now documented:
      • MLP
      • ResNet
      • FTTransformer
      • MultiheadAttention
      • NumericalFeatureTokenizer
      • CategoricalFeatureTokenizer
      • FeatureTokenizer
      • CLSToken
    • Usability has been greatly improved:
      • signatures are now highlighted
      • added the "copy" button to code blocks
      • permalink buttons (signature anchors) are now visible

    Bug fixes

    • MultiheadAttention: fix the crash when bias=False

    Dependencies

    • numpy >= 1.18
    • torch >= 1.7

    Project

    • added spell checking for documentation
    • sphinx was updated to 4.2.0
    • flit was updated to 3.4.0
  • v0.0.7(Oct 10, 2021)

  • v0.0.6(Aug 26, 2021)

    New features

    • CLSToken (old name: "AppendCLSToken"): add expand method for easy construction of batches of [CLS]-tokens
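
    A hedged usage sketch (argument names and shapes assumed from the documented API):

    import torch
    import rtdl

    cls_token = rtdl.CLSToken(d_token=192, initialization='uniform')
    x = torch.randn(4, 10, 192)            # (batch, n_tokens, d_token)
    batch_of_cls = cls_token.expand(4, 1)  # (4, 1, 192): a batch of [CLS]-tokens
    x = cls_token(x)                       # appends the token: (4, 11, 192)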

    Bug fixes

    • FTTransformer: the make_baseline method now properly constructs an instance

    API changes

    • FTTransformer: the ffn_d_intermidiate argument was renamed to a more conventional ffn_d_hidden
    • FTTransformer: the normalization argument was split into three arguments: attention_normalization, ffn_normalization, head_normalization
    • ResNet: the d_intermidiate argument was renamed to a more conventional d_hidden
    • AppendCLSToken: renamed to CLSToken

    Documentation improvements

    • CLSToken
    • MLP.make_baseline

    Project

    • add tests with CUDA
    • remove the .vscode directory from the repository
  • v0.0.5(Jul 20, 2021)

    API Changes:

    • MLP.make_baseline is now more user-friendly and accepts a single d_layers argument instead of four (d_first, d_intermidiate, d_last, n_blocks)
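
    An illustrative call with the new signature (argument values are arbitrary, and keyword names are assumed from later documentation):

    import rtdl

    model = rtdl.MLP.make_baseline(
        d_in=10,
        d_layers=[128, 256, 128],
        dropout=0.25,
        d_out=1,
    )
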
  • v0.0.4(Jul 11, 2021)

  • v0.0.3(Jul 2, 2021)

    API Changes

    • ResNet & ResNet.Block: the d parameter was renamed to d_main

    Fixes

    • minor fix in the comments in examples/rtdl.ipynb

    Project

    • add tests that validate that the models in rtdl are literally the same as in the implementation of the paper