Library for 8-bit optimizers and quantization routines.

Overview

bitsandbytes

Bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers and quantization functions.

Paper -- Video -- Docs

TL;DR

Installation:

  1. Note down version: conda list | grep cudatoolkit
  2. Replace 111 with the version that you see: pip install bitsandbytes-cuda111

Usage:

  1. Comment out optimizer: #torch.optim.Adam(....)
  2. Add the 8-bit optimizer of your choice: bnb.optim.Adam8bit(....) (arguments stay the same)
  3. Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

Features

  • 8-bit Optimizers: Adam, AdamW, RMSProp, LARS, LAMB (saves 75% memory)
  • Stable Embedding Layer: Improved stability through better initialization and normalization
  • 8-bit quantization: Quantile, Linear, and Dynamic quantization (see the short usage sketch after this list)
  • Fast quantile estimation: Up to 100x faster than other algorithms
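
The quantization routines live in bitsandbytes.functional. The following is a minimal sketch, assuming quantize_blockwise returns the 8-bit tensor together with its quantization state (per-block absmax and the quantization code), as in the 0.26.x functional API; check the exact return values against your installed version:

import torch
import bitsandbytes.functional as F

x = torch.randn(4096, 4096, device='cuda')

x_q, (absmax, code) = F.quantize_blockwise(x)  # block-wise dynamic quantization to 8 bits
x_deq = F.dequantize_blockwise(x_q, absmax=absmax, code=code)  # reconstruct a 32-bit approximation

print((x - x_deq).abs().mean())  # small average quantization error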

Requirements & Installation

Requirements: anaconda, cudatoolkit, pytorch
Hardware requirements: NVIDIA Maxwell GPU or newer (>=GTX 9XX)
Supported CUDA versions: 9.2 - 11.3

The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the "Get Started" instructions on the official website.

bitsandbytes is compatible with all major PyTorch releases and cudatoolkit versions, but for now, you need to select the right version manually. To do this, run:

conda list | grep cudatoolkit

and take note of the CUDA version that you have installed. Then you can install bitsandbytes via:

# choices: {cuda92, cuda100, cuda101, cuda102, cuda110, cuda111, cuda113}
# replace XXX with the respective number
pip install bitsandbytes-cudaXXX

To check if your installation was successful, you can execute the following command, which runs a single bnb Adam update.

wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

Using bitsandbytes

Using the 8-bit Optimizers

With bitsandbytes, 8-bit optimizers can be used by changing a single line of code in your codebase. For NLP models, we also recommend using the StableEmbedding layers (see below), which improve results and help with stable 8-bit optimization. To get started with 8-bit optimizers, it is sufficient to replace your old optimizer with the 8-bit optimizer as follows:

import bitsandbytes as bnb

# adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # comment out old optimizer
adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # add bnb optimizer
adam = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) # equivalent


torch.nn.Embedding(...) ->  bnb.nn.StableEmbedding(...) # recommended for NLP models

Note that by default all parameter tensors with fewer than 4096 elements are kept at 32-bit even if you initialize those parameters with 8-bit optimizers. This is done because such small tensors do not save much memory and often contain highly variable parameters (biases) or parameters that require high precision (batch norm, layer norm). You can change this behavior like so:

# parameter tensors with fewer than 16384 values are optimized in 32-bit
# it is recommended to use multiples of 4096
adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384) 

Change Bits and other Hyperparameters for Individual Parameters

If you want to optimize some unstable parameters with 32-bit Adam and others with 8-bit Adam, you can use the GlobalOptimManager. With it, we can also configure specific hyperparameters for particular layers, such as embedding layers. To do that, we need two things: (1) register the parameters while they are still on the CPU, and (2) override the config with the new desired hyperparameters (anytime, anywhere). See our guide for more details.
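
As a minimal sketch (MyModel, fc1, and special are placeholder names; the override_config call mirrors the documented GlobalOptimManager API, but verify it against your installed version):

import torch
import bitsandbytes as bnb

mng = bnb.optim.GlobalOptimManager.get_instance()

model = MyModel()                             # hypothetical model
mng.register_parameters(model.parameters())   # (1) register parameters while still on CPU

model = model.cuda()
adam = bnb.optim.Adam(model.parameters(), lr=0.001, optim_bits=8)

# (2) override: optimize this particular parameter in 32-bit instead of 8-bit
mng.override_config(model.fc1.weight, 'optim_bits', 32)

# (2) override several hyperparameters for particular parameters at once
mng.override_config([model.special.weight], key_value_dict={'optim_bits': 32, 'lr': 1e-5, 'betas': (0.9, 0.98)})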

Fairseq Users

To use the Stable Embedding Layer, override the respective build_embedding(...) function of your model. Make sure to also use the --no-scale-embedding flag to disable the scaling of the word embedding layer (now replaced with layer norm). You can use the 8-bit optimizers by replacing the optimizer in the respective file (adam.py etc.).
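
A rough sketch of such an override is shown below; treat the build_embedding signature and the StableEmbedding arguments as assumptions to verify against your fairseq and bitsandbytes versions:

import bitsandbytes as bnb
from fairseq.models.transformer import TransformerModel

class StableEmbeddingTransformer(TransformerModel):
    @classmethod
    def build_embedding(cls, args, dictionary, embed_dim, path=None):
        # use bnb's StableEmbedding (xavier init + layer norm) instead of torch.nn.Embedding
        return bnb.nn.StableEmbedding(len(dictionary), embed_dim, padding_idx=dictionary.pad())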

Release and Feature History

For upcoming features and changes, and the full history, see the Patch Notes.

Errors

  1. RuntimeError: CUDA error: no kernel image is available for execution on the device. Solution

License

The majority of bitsandbytes is licensed under MIT; however, portions of the project are available under separate license terms: PyTorch is licensed under the BSD license.

We thank Fabio Cannizzo for his work on FastBinarySearch which we use for CPU quantization.

Citation

If you find this library and the 8-bit optimizers or quantization routines useful, please consider citing our work.

@misc{dettmers2021optim8bit,
      title={8-bit Optimizers via Block-wise Quantization},
      author={Tim Dettmers and Mike Lewis and Sam Shleifer and Luke Zettlemoyer},
      year={2021},
      eprint={2110.02861},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • python setup.py install error


    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ python setup.py install
    Traceback (most recent call last):
      File "setup.py", line 15, in <module>
        name = f"bitsandbytes-cuda{os.environ['CUDA_VERSION']}",
      File "/home/chenxin/disk1/anaconda3/envs/bitsandbytes/lib/python3.8/os.py", line 675, in __getitem__
        raise KeyError(key) from None
    KeyError: 'CUDA_VERSION'

    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ conda list | grep cudatoolkit
    cudatoolkit    11.1.1    h6406543_8    conda-forge

    documentation enhancement 
    opened by mathpopo 10
  • Did you ever try MNMT systems?


    As reported in the paper, for training a bi-directional transformer model on WMT14 or WMT16 the performance of 8-bit Adam stays relatively consistent with the 32-bit counterparts. I was also able to verify this on other data sources for training bi-directional models with my own setup.

    However, I've also tried multiple variations of 8-bit optimizers on multilingual neural machine translation (MNMT) models in fairseq and there it seems that even with --no-scale-embedding as well as the StableEmbedding the performance is roughly 3 BLEU behind the counterparts. The --no-scale-embedding flag amounts to roughly 7 BLEU gain, while the xavier init amounts to roughly 0.4 BLEU gain. Didn't look into the effect of the layer norm of the stable embeddings yet.

    Did you do any testing on that and have practical tips on getting the performance up?

    bug question 
    opened by SirRob1997 9
  • undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37


    (torch1.8-py3.8) [email protected]:/home/share/jiaofangkai$ python check_bnb_install.py
    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/adam.py", line 6, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 459, in LoadLibrary
        return self._dlltype(name)
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 381, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Hi, I have encountered a similar issue to #5. I have tested with a Tesla T4 and an RTX 2080Ti, but both failed.

    The environments are as follows:

    # TeslaT4
    Ubuntu 18.04.6, Tesla T4, cuda-10.1, driver version: 418.197.02, python=3.8, torch=1.8.1+cu101
    
    # RTX 2080Ti
    Ubuntu 20.04.3, RTX 2080Ti, cuda-10.1, driver version: 435.21, python=3.8, torch=1.8.1+cu101
    
    bug 
    opened by SparkJiao 6
  • Support for Tesla Architecture


    First of all, great work!

    Secondly, I can see that you specify that Maxwell Architecture is necessary, and I am wondering if

    1. it's possible to do 8-bit optimization on Tesla Architecture
    2. there are plans to implement it

    I ask because Kaggle and Colab notebooks use Tesla-branded GPUs (P100, K80), and I'm sure those communities, myself included, would be interested in using bitsandbytes.

    enhancement 
    opened by nbroad1881 5
  • import bitsandbytes as bnb error

    Running import bitsandbytes as bnb raises the following error: OSError: /home/anaconda3/envs/ner/lib/python3.6/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    Hello, how can I resolve this?

    opened by zhishui3 4
  • no difference in memory usage


    Hi. I am training my network with bnb.optim.Adam8bit vs torch.optim.Adam but I don't see any difference in memory consumption.

    Running on a GTX 2080Ti (single GPU or DDP), with cudatoolkit 11.1.74 and bitsandbytes-cuda111.

    Looking at nvidia-smi, I see 9.6GB in both cases. Am I missing something here?

    opened by ofrimasad 3
  • errors when training to the third epoch. everytime.


    THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=29 error=1 : invalid argument
    Traceback (most recent call last):
      File "train_pointunet.py", line 211, in <module>
        loss_seg = lossfunc_seg(outputs_seg, labels)+lossfunc_dice(outputs_seg,labels)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: cuda runtime error (1) : invalid argument at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29
    

    I'm very confused because it works fine in the first several epochs.

    opened by Dootmaan 2
  • [Question] Usage of bnb.nn.Embedding with existing classes from other libraries


    Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

    Does this presuppose that the user creates custom classes to replace (for example) huggingface transformers' GPT2DoubleHeadsModel? Or is there something like bnb.optim.GlobalOptimManager which changes the provided model instance to use bitsandbytes embeddings instead of torch ones?

    enhancement question 
    opened by LSinev 2
  • The code uses more GPU memory with Multi-scale Vision Transformers


    Hi,

    Thanks for the great work! I'm currently trying to apply your code to vision transformers, specifically this code base: https://github.com/facebookresearch/SlowFast/tree/main/projects/mvit When using torch.optim.SGD(momentum=0.9), the code consumes 9221MiB of GPU memory during training. After changing it to bnb.optim.SGD8bit() with the same arguments, it consumes even slightly more GPU memory (9235MiB). Do you have any idea why this would happen? Thank you! My CUDA version is 10.2 and torch version is 1.9.1.

    Best, Junwei

    question 
    opened by JunweiLiang 2
  • bnb.optim.AdamW


    Hey @TimDettmers,

    Awesome library! bnb.optim.Adam saved me from having to use model parallelism :heart_eyes:

    Do you think it would be easy to also add a bnb.optim.AdamW version for https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW ?

    Happy to give it a try if you think it's easily feasible :-)

    enhancement 
    opened by patrickvonplaten 2
  • undefined symbol: __fatbinwrap_38


    With some CUDA versions and on some architectures this error occurs:

    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/adam.py", line 5, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
        return self._dlltype(name)
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Confirmed for CUDA 10.1 for compute capability 7.5 (V100).

    bug 
    opened by TimDettmers 2
  • 'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'


    I am trying to train GPT-J with 8-bit weights. It works well on GPU, but when I try to use it on CPU, it gives this error:

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I have used dequantize_blockwise from bitsandbytes.functional. Following is the class in which it's used:

    import torch
    import torch.nn.functional as F
    from bitsandbytes.functional import dequantize_blockwise

    class DequantizeAndLinear(torch.autograd.Function):

        @staticmethod
        def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                    absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
            # dequantize the 8-bit weights block-wise back to fp32 before the matmul
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            ctx.save_for_backward(input, weights_quantized, absmax, code)
            ctx._has_bias = bias is not None
            return F.linear(input, weights_deq, bias)

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
            input, weights_quantized, absmax, code = ctx.saved_tensors
            # grad_output: [*batch, out_features]
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            grad_input = grad_output @ weights_deq
            grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
            return grad_input, None, None, None, grad_bias
    
    

    Is it possible to run it on CPU, or do I have to run it only on GPU?

    opened by HumzaSami00 0
  • Adding Code of Conduct file


    This pull request was created automatically because we noticed your project was missing a Code of Conduct file.

    Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • Adding Contributing file


    This pull request was created automatically because we noticed your project was missing a Contributing file.

    CONTRIBUTING files explain how a developer can contribute to the project - which you should actively encourage.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • 8-bit optimizer crashes when fine-tuning gpt2-large


    Using the bnb.optim.Adam8bit optimizer in place of torch.optim.Adam causes a crash after a handful of batches:

    12it [00:22, 1.82s/it]Error an illegal memory access was encountered at line 198 in file /home/alyssa/gpt_math/bitsandbytes/csrc/ops.cu

    I am fine-tuning Huggingface's version of the gpt2-large model on an Ampere 3090 GPU with CUDA version 11.6 and nVidia driver version 510.73.05. I have tried compiling bitsandbytes on my machine from source, and the set_optim_to_run_embedding_in_fp32 trick from https://github.com/huggingface/transformers/issues/14819; neither of them affected the behavior. Running with the standard pytorch Adam optimizer works fine. nvidia-smi shows 16 GB of memory used on a GPU with 24 GB, so it shouldn't be running out of RAM or anywhere close to that.

    opened by rationalism 0
  • bfloat16 grads are not supported


    Are there any plans to support models/grads with the bfloat16 type? bfloat16 has gained quite a bit of popularity lately, as every Ampere GPU supports the type, and it eliminates the need for loss scaling compared to float16. This is what I get when I try to initialize bnb.AdamW with a bfloat16-cast model: ValueError: Gradient+optimizer bit data type combination not supported: grad torch.bfloat16, optimizer torch.uint8

    opened by kurumuz 0
  • Check dtype of input tensors is correct


    If a 16-bit float tensor on the CPU was passed as the input to quantize_blockwise or the output buffer for dequantize_blockwise, the code was previously passing its address to the c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit float* and resulting in segfaults.

    A similar issue occurs if the absmax/code arguments to dequantize_blockwise are (somehow) 16-bit, resulting in illegal memory accesses on the GPU.

    It took me a little while to track down the causes because of the cryptic errors, so I figured it was worth suggesting these changes. I've only been using the blockwise methods, so it's possible there are similar issues in other parts of the code - might be worth checking :)

    This PR also includes a couple unrelated typo fixes.

    Thanks for your work on this library, it's nice to squeeze the most I can out of my paltry GPU memory :)

    CLA Signed 
    opened by acarapetis 3
Releases (0.26.0)
  • 0.26.0(Nov 29, 2021)

    This release has important bug fixes for the StableEmbedding layer and introduces the new optimizers AdaGrad and AdamW. The 0.26.0 release also features a new, lightweight embedding class, bnb.nn.Embedding, which uses 32-bit optimizers but no layer norm. This layer makes it easy to use pretrained models that do not use an embedding layer norm. Now available on pip.

    Changelog

    Features:

    • Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizer.
    • Added AdamW (copy of Adam with weight decay init 1e-2). #10
    • Introduced ModuleConfig overrides which can seamlessly be used at module initialization time.
    • Added the bnb.nn.Embedding layer, which runs at 32-bit but without the layer norm. This works well if you need to fine-tune pretrained models that do not have an embedding layer norm. #19

    Bug fixes:

    • Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
    • Fixed an unsafe use of eval. #8
    • Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())). #13 #15

    Docs:

    • Added instructions on how to solve "__fatbinwrap_" errors.
  • 0.25.0(Oct 22, 2021)

    This release offers full support for all GPUs of the Kepler generation (GTX 600) or newer. It introduces skip_zeros, which ensures correct optimizer updates for sparse gradients. While some pieces are still missing, initial features for 8-bit optimizer research have been added: AnalysisAdam allows the tracking and analysis of 8-bit vs 32-bit Adam quantization errors.

    Features:

    • Added skip_zeros for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
    • Added support for Kepler GPUs. (#4)
    • Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
    • Make compilation more user-friendly.

    Bug fixes:

    • fixed "undefined symbol: __fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)

    Docs:

    • Added docs with instructions to compile from source.
Owner
Facebook Research