Research on Tabular Deep Learning (Python package & papers)

Overview

For paper implementations, see the section "Papers and projects".

rtdl is a PyTorch-based package providing a user-friendly API for the main models and concepts from our papers. See the documentation.
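
A minimal quick-start sketch (the model and method names come from the rtdl documentation, but the exact constructor arguments shown here should be treated as assumptions):

    import torch
    import rtdl

    # FT-Transformer with default hyperparameters: 8 numerical features,
    # no categorical features, one regression output.
    model = rtdl.FTTransformer.make_default(
        n_num_features=8,
        cat_cardinalities=None,
        d_out=1,
    )
    x_num = torch.randn(4, 8)   # (batch, n_numerical_features)
    y_hat = model(x_num, None)  # (batch, d_out)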

Press "Watch" to stay up to date with new papers and releases!

Feel free to report issues and post questions/feedback/ideas.

Papers and projects

Name                                                          | Location | Comment
On Embeddings for Numerical Features in Tabular Deep Learning | link     | arXiv 2022
Revisiting Deep Learning Models for Tabular Data              | link     | NeurIPS 2021
rtdl                                                          | link     | Python package
Comments
  • Fix MLP.make_baseline() return type

    Return an object of type cls, not MLP, from MLP.make_baseline(). Otherwise, child classes that inherit from MLP and are constructed via .make_baseline() always have type MLP instead of the type of the child class.
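
    A minimal sketch of the proposed fix (the real make_baseline takes concrete arguments, which are elided here):

    import torch.nn as nn

    class MLP(nn.Module):
        @classmethod
        def make_baseline(cls, *args, **kwargs):
            # Constructing via cls rather than the hard-coded MLP means that
            # Child.make_baseline(...) returns an instance of Child.
            return cls(*args, **kwargs)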

    opened by jpgard 6
  • Is it possible to provide a scikit-learn interface?

    This project is interesting, and I want to use it as a baseline algorithm in my paper. However, it seems that I need to take several steps in order to make a prediction. Would it be possible to provide a scikit-learn interface, to make comparisons between different algorithms more convenient?

    opened by hengzhe-zhang 5
  • Cannot link in the document of zero

    Hi! I am trying to understand the usage of the Python package zero, which is used in the rtdl example. However, the link in the code comments is no longer available.

    Here is the invalid link: https://yura52.github.io/zero/0.0.4/reference/api/zero.improve_reproducibility.html

    I am wondering whether there is any other documentation. Thank you!

    Regards.

    opened by WuZheng326 4
  • embedding of categorical variables

    Hi Yury,

    Thank you for your excellent work. I have a question about handling categorical features: do I need to pre-train the embedding layer as a separate preprocessing step, or can I simply attach the embedding layer to the model and train it jointly with the model?
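
    In standard tabular deep learning setups the embedding is trained jointly with the model, with no separate pre-training. A hedged sketch (CategoricalFeatureTokenizer is part of rtdl; the constructor arguments shown are assumptions based on its documentation):

    import torch
    import rtdl

    cardinalities = [4, 7]  # number of categories in each categorical column
    tokenizer = rtdl.CategoricalFeatureTokenizer(
        cardinalities, d_token=8, bias=True, initialization='uniform'
    )
    x_cat = torch.tensor([[0, 3], [2, 6]])  # integer codes, shape (batch, n_features)
    tokens = tokenizer(x_cat)               # shape (batch, n_features, d_token)
    # tokenizer is an nn.Module: its parameters are optimized together with the
    # downstream model, so no pre-training is needed.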

    opened by lhq12 3
  • Add ⭐️Weights & Biases⭐️ Logging

    This PR adds basic Weights & Biases metric logging by extending the existing codebase with minimal changes, and also supports uploading checkpoints as Weights & Biases Artifacts.

    Wherever possible, I have used the existing Weights & Biases integrations, namely those for LightGBM and XGBoost.

    I have validated the performance of all the proposed runs by running 150+ runs, which can be viewed on this project page and in detail in an accompanying blog post.
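
    For reference, a minimal sketch of this style of logging (the project name and checkpoint path are hypothetical, not taken from the PR):

    import wandb

    run = wandb.init(project='rtdl-experiments')  # hypothetical project name
    for epoch in range(3):
        loss = 1.0 / (epoch + 1)  # placeholder for the real training loss
        wandb.log({'epoch': epoch, 'train/loss': loss})

    # upload a checkpoint file as a W&B Artifact
    artifact = wandb.Artifact('model-checkpoint', type='model')
    artifact.add_file('checkpoint.pt')  # hypothetical path
    run.log_artifact(artifact)
    run.finish()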

    opened by SauravMaheshkar 3
  • Bugs in piecewise-linear encoding

    1. Here, indices = as_tensor(values) must be changed to this:

       indices = as_tensor(indices)

    2. Here, np.array(d_encoding) must be changed to this:

       torch.tensor(d_encoding).to(indices)

    3. Here, the argument dtype=X.dtype is missing for np.array

    4. Here, .to(X) is missing

    5. Here, it must be:

       is_last_bin = bin_indices + 1 == as_tensor(list(map(len, bin_edges)))
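
    For context, a simplified re-implementation of piecewise-linear encoding (PLE) as described in "On Embeddings for Numerical Features in Tabular Deep Learning" (an illustrative sketch, not rtdl's actual code):

    import torch

    def piecewise_linear_encode(x, bin_edges):
        # x: (batch,); bin_edges: sorted, shape (n_bins + 1,)
        left, right = bin_edges[:-1], bin_edges[1:]
        ratios = (x[:, None] - left) / (right - left)
        # each component is 0 below its bin, 1 above it, and in (0, 1) inside it
        return ratios.clamp(0.0, 1.0)

    x = torch.tensor([0.1, 0.5, 2.0])
    edges = torch.tensor([0.0, 1.0, 2.0, 3.0])
    print(piecewise_linear_encode(x, edges))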
    
    opened by Yura52 2
  • LGBMRegressor on California Housing dataset is 0.68 >> 0.46

    I use the sample code to prepare the dataset:

    import lightgbm as lgb
    import sklearn.datasets
    import sklearn.model_selection
    import sklearn.preprocessing
    import torch

    device = 'cpu'
    dataset = sklearn.datasets.fetch_california_housing()
    task_type = 'regression'

    X_all = dataset['data'].astype('float32')
    y_all = dataset['target'].astype('float32')
    n_classes = None

    X = {}
    y = {}
    X['train'], X['test'], y['train'], y['test'] = sklearn.model_selection.train_test_split(
        X_all, y_all, train_size=0.8
    )
    X['train'], X['val'], y['train'], y['val'] = sklearn.model_selection.train_test_split(
        X['train'], y['train'], train_size=0.8
    )

    # not the best way to preprocess features, but enough for the demonstration
    preprocess = sklearn.preprocessing.StandardScaler().fit(X['train'])
    X = {
        # note: transform, not fit_transform -- the scaler must be fit
        # on the training split only
        k: torch.tensor(preprocess.transform(v), device=device)
        for k, v in X.items()
    }
    y = {k: torch.tensor(v, device=device) for k, v in y.items()}

    # !!! CRUCIAL for neural networks when solving regression problems !!!
    y_mean = y['train'].mean().item()
    y_std = y['train'].std().item()
    y = {k: (v - y_mean) / y_std for k, v in y.items()}

    y = {k: v.float() for k, v in y.items()}
    

    And I train an LGBMRegressor with the default hyperparameters:

    model = lgb.LGBMRegressor()
    model.fit(X['train'], y['train'])
    

    But when I evaluate on the test fold, I find that the RMSE is 0.68:

    >>> test_pred = model.predict(X['test'])
    >>> test_pred = torch.from_numpy(test_pred)
    >>> rmse = torch.nn.functional.mse_loss(
    ...     test_pred.view(-1), y['test'].view(-1)) ** 0.5 * y_std
    >>> print(f'Test RMSE: {rmse:.2f}.')
    Test RMSE: 0.68.
    

    Even using the model from rtdl gives me 0.56 RMSE:

    (epoch) 57 (batch) 0 (loss) 0.1885
    (epoch) 57 (batch) 10 (loss) 0.1315
    (epoch) 57 (batch) 20 (loss) 0.1735
    (epoch) 57 (batch) 30 (loss) 0.1197
    (epoch) 57 (batch) 40 (loss) 0.1952
    (epoch) 57 (batch) 50 (loss) 0.1167
    Epoch 057 | Validation score: 0.7334 | Test score: 0.5612 <<< BEST VALIDATION EPOCH
    

    Is there anything I missed? How can I reproduce the performance reported in your paper? Thanks!

    opened by fingertap 2
  • Regression results about the RTDL models.

    Hi, you did a great implementation of the tab-transformer. However, when I use your example notebook to do a simple regression on sin(x), neither the baseline model nor the FTTransformer gives good results. I have no idea why this happens and would like to understand it.

    Here is the link

    opened by linkedlist771 1
  • typos in CatEmbeddings

    1. link. The variable cardinalities_and_dimensions does not exist
    2. link. The condition looks broken. Solution: simplify it and remove the word "spec" from the error message.
    opened by Yura52 0
  • Running error, prenormalization is not a class variable

    The code crashes at this line because prenormalization is not an attribute of self:

    https://github.com/Yura52/rtdl/blob/b130dd2e596c17109bef825bc9c8608e1ae617cc/rtdl/nn/_backbones.py#L627

    opened by zahar-chikishev 0
  • Typos?

    Hello,

    I am trying to use PiecewiseLinearEncoder(). I think I found a few typos; please check my work.

    I first ran into an issue in piecewise_linear_encoding, where line 618 raised "RuntimeError: The size of tensor a (3688) must match the size of tensor b (32) at non-singleton dimension 1".

    I dug into the code and found that when PiecewiseLinearEncoder calls piecewise_linear_encoding, the positional arguments indices and ratios are passed in the opposite order from what the callee expects.

    Additionally, when inspecting piecewise_linear_encoding, it looks like bin_edges = as_tensor(bin_ratios), whereas as_tensor(bin_edges) would make more sense.

    Can you please check this out? Much appreciated.

    opened by jdefriel 1
  • How to resume training?

    I ran your model in Colab for a few hours before Google terminated the session. I used pickle.dump/load to store the trained model. The restored model can make predictions, but it doesn't seem to be able to resume training.

    # saving (inside the training loop):
    if progress.success:
        print(' <<< BEST VALIDATION EPOCH', end='')
        with open(mydrive + jobname, 'wb') as filehandler:
            dump((model, y_std, y_mean), filehandler)
            # we could see the result improving at this point

    # restoring in a new session:
    with open(mydrive + jobname, 'rb') as filehandler:
        model, y_std, y_mean = load(filehandler)
    pred = model(batch, None)  # this seems to work

    for epoch in range(1, n_epochs + 1):
        for iteration, batch_idx in enumerate(train_loader):
            model.train()
            optimizer.zero_grad()
            x_batch = X['train'][batch_idx]
            y_batch = y['train'][batch_idx]
            loss = loss_fn(apply_model(x_batch).squeeze(1), y_batch)
            loss.backward()
            optimizer.step()
            if iteration % report_frequency == 0:
                print(f'(epoch) {epoch} (batch) {iteration} (loss) {loss.item():.4f}')
        # no improvement any more, even though the model was dumped
        # immediately after being created

    What is the right way to store the model so that I can resume training?
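
    A common PyTorch checkpointing pattern (not rtdl-specific; the file path and saved keys here are illustrative) is to store the model and optimizer state_dicts, so that the optimizer statistics survive the restart:

    # saving
    torch.save(
        {
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'epoch': epoch,
            'y_mean': y_mean,
            'y_std': y_std,
        },
        'checkpoint.pt',
    )

    # resuming
    checkpoint = torch.load('checkpoint.pt')
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    start_epoch = checkpoint['epoch'] + 1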

    opened by jerronl 0
  • A scikit-learn interface for RTDL package.

    Hello! I have written a scikit-learn interface for the RTDL package (https://github.com/hengzhe-zhang/scikit-rtdl). I rely on skorch to avoid coding errors, and I set the default parameters based on those presented in your paper. I hope you will like it!
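
    A hedged sketch of what a skorch-based wrapper can look like (illustrative only; see the linked scikit-rtdl repository for the actual implementation, and treat the make_baseline arguments other than d_layers as assumptions):

    import torch
    import rtdl
    from skorch import NeuralNetRegressor

    module = rtdl.MLP.make_baseline(d_in=8, d_layers=[128, 128], dropout=0.1, d_out=1)
    net = NeuralNetRegressor(
        module,
        max_epochs=100,
        lr=1e-3,
        optimizer=torch.optim.AdamW,
    )
    # net.fit(X_train, y_train) and net.predict(X_test) then follow the
    # standard scikit-learn estimator protocol.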

    opened by hengzhe-zhang 1
Releases (v0.0.13)
  • v0.0.13(Mar 16, 2022)

  • v0.0.12(Mar 10, 2022)

  • v0.0.10(Feb 28, 2022)

  • v0.0.9(Nov 7, 2021)

    This is a hot-fix release after the big 0.0.8 release (see the release notes for 0.0.8):

    • revert the breaking change in NumericalFeatureTokenizer accidentally introduced in 0.0.8
    • minor documentation refinements
  • v0.0.8(Nov 6, 2021)

    This release focuses on improving the documentation.

    Documentation

    • The following models and classes are now documented:
      • MLP
      • ResNet
      • FTTransformer
      • MultiheadAttention
      • NumericalFeatureTokenizer
      • CategoricalFeatureTokenizer
      • FeatureTokenizer
      • CLSToken
    • Usability has been greatly improved:
      • signatures are now highlighted
      • added the "copy" button to code blocks
      • permalink buttons (signature anchors) are now visible

    Bug fixes

    • MultiheadAttention: fix the crash when bias=False

    Dependencies

    • numpy >= 1.18
    • torch >= 1.7

    Project

    • added spell checking for documentation
    • sphinx was updated to 4.2.0
    • flit was updated to 3.4.0
  • v0.0.7(Oct 10, 2021)

  • v0.0.6(Aug 26, 2021)

    New features

    • CLSToken (old name: "AppendCLSToken"): add expand method for easy construction of batches of [CLS]-tokens
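
    A hedged usage sketch of the new expand method (the constructor arguments are assumptions based on the documentation):

    import torch
    import rtdl

    cls_token = rtdl.CLSToken(d_token=8, initialization='uniform')
    x = torch.randn(4, 2, 8)            # (batch, n_tokens, d_token)
    x = cls_token(x)                    # appends the [CLS] token: (4, 3, 8)
    batch_of_cls = cls_token.expand(4)  # a batch of [CLS] tokens: (4, 8)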

    Bug fixes

    • FTTransformer: the make_baseline method now properly constructs an instance

    API changes

    • FTTransformer: the ffn_d_intermidiate argument was renamed to a more conventional ffn_d_hidden
    • FTTransformer: the normalization argument was split into three arguments: attention_normalization, ffn_normalization, head_normalization
    • ResNet: the d_intermidiate argument was renamed to a more conventional d_hidden
    • AppendCLSToken: renamed to CLSToken

    Documentation improvements

    • CLSToken
    • MLP.make_baseline

    Project

    • add tests with CUDA
    • remove the .vscode directory from the repository
  • v0.0.5(Jul 20, 2021)

    API Changes:

    • MLP.make_baseline is now more user-friendly and accepts a single d_layers argument instead of four (d_first, d_intermidiate, d_last, n_blocks)
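
    A hedged before/after illustration (the old argument names are taken from this release note; d_in, dropout, and d_out in the new call are assumptions):

    # before v0.0.5 (four separate size arguments):
    #   MLP.make_baseline(d_first=..., d_intermidiate=..., d_last=..., n_blocks=...)

    # since v0.0.5 (a single d_layers list):
    model = rtdl.MLP.make_baseline(d_in=8, d_layers=[128, 256, 128], dropout=0.1, d_out=1)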
  • v0.0.4(Jul 11, 2021)

  • v0.0.3(Jul 2, 2021)

    API Changes

    • ResNet & ResNet.Block: the d parameter was renamed to d_main

    Fixes

    • minor fix in the comments in examples/rtdl.ipynb

    Project

    • add tests that validate that the models in rtdl are literally the same as in the implementation of the paper