NeuralCompression is a Python repository dedicated to research of neural networks that compress data

Overview

NeuralCompression

About

NeuralCompression is a Python repository dedicated to research of neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.

NeuralCompression is alpha software. The project is under active development. The API will change as we make releases, potentially breaking backwards compatibility.

Installation

NeuralCompression is under active development. You can install it from PyPI or in development mode.

PyPI Installation

First, install PyTorch according to the directions from the PyTorch website. Then, you should be able to run

pip install neuralcompression

to get the latest version from PyPI.

Development Installation

First, clone the repository and navigate to the NeuralCompression root directory. To match your local environment to the test environment, run

pip install -r dev-requirements.txt

Then, you can install the package in development mode by running

pip install -e .

If you are not interested in matching the test environment, then you only need to apply the second step to install.

Repository Structure

We use a 2-tier repository structure. The neuralcompression package contains a core set of tools for doing neural compression research. Code committed to the core package requires stricter linting, high code quality, and rigorous review. The projects folder contains code for reproducing papers and training baselines. Code in this folder is linted less aggressively, type annotations are not enforced, and unit tests may be omitted.

The 2-tier structure enables rapid iteration and reproduction via code in projects that is built on a backbone of high-quality code in neuralcompression.

neuralcompression

  • neuralcompression - base package
    • data - PyTorch data loaders for various data sets
    • entropy_coders - lossless compression algorithms in JAX
      • craystack - an implementation of the rANS algorithm with the craystack API
    • functional - methods for image warping, information cost, etc.
    • layers - building blocks for compression models
    • metrics - torchmetrics classes for assessing model performance
    • models - complete compression models
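
For instance, building blocks and models are imported from the corresponding subpackages. The snippet below is a hedged illustration of the layout; the class names are taken from the release notes further below:

from neuralcompression.layers import GeneralizedDivisiveNormalization
from neuralcompression.models import ScaleHyperpriorAutoencoder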

projects

Getting Started

For an example of how to train an image compression model in PyTorch Lightning, see the Scale Hyperprior project. See DVC for a video compression example.

Contributions

Please read our CONTRIBUTING guide and our CODE_OF_CONDUCT prior to submitting a pull request.

We test all pull requests and rely on these tests during review, so please make sure any new code is tested. Tests for neuralcompression go in the tests folder in the root of the repository. Tests for individual projects go in those projects' own tests folders.

We use black for formatting, isort for import sorting, flake8 for linting, and mypy for type checking. We enforce these on the neuralcompression package, but not in the projects folder.

License

NeuralCompression is MIT licensed, as found in the LICENSE file.

Cite

If you find NeuralCompression useful in your work, feel free to cite

@misc{muckley2021neuralcompression,
    author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},
    title={NeuralCompression},
    howpublished={\url{https://github.com/facebookresearch/NeuralCompression}},
    year={2021}
}
Comments
  • Dependency and Build Fixes

    Dependency and Build Fixes

    This is a potpourri of fixes.

    Dependencies

    Currently, we require very specific package versions for any install. This is too restrictive for a research package, where researchers may have a variety of reasons to carefully tailor their environment. This PR resolves the issue by allowing general installation with flexible package versions. To maintain CI stability and functionality, the fixed version requirements have been moved to the dev requirements.

    Build

    We now build our C++ extension using torch.utils.cpp_extension.load instead of setup.py. This removes PyTorch as a dependency at install time and removes the need for us to distribute binaries. It did require adding ninja as a dependency; hopefully we can drop that eventually, but for now this should fix distribution.
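
    A minimal sketch of this approach, assuming a hypothetical C++ source file csrc/pmf_to_quantized_cdf.cpp (the actual path and extension name in the repository may differ):

    from torch.utils.cpp_extension import load

    # JIT-compile the extension on first import; this needs ninja at runtime,
    # but avoids compiling (or shipping prebuilt binaries) at pip-install time.
    _cpp_extension = load(
        name="pmf_to_quantized_cdf",
        sources=["csrc/pmf_to_quantized_cdf.cpp"],
    )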

    CI testing

    isort was misconfigured and missed a lot of import changes. I fixed the config and ran it on all files to update import sorting.

    Other

    While implementing this I also found a few other minor issues with respect to versioning, tests, and formatting that I resolved.

    This PR makes NeuralCompression require Python 3.8 due to its usage of importlib.
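
    For illustration, a hedged sketch of retrieving the version with importlib.metadata (the exact call used in the package may differ):

    from importlib import metadata

    # Look up the installed version from package metadata instead of
    # hard-coding it in __init__.py (importlib.metadata requires Python >= 3.8).
    __version__ = metadata.version("neuralcompression")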

    Testing

    Testing was done via CI.

    CLA Signed 
    opened by mmuckley 4
  • Documentation builds on ReadTheDocs are failing

    Documentation builds on ReadTheDocs are failing

    Bug

    Documentation builds on ReadTheDocs are failing.

    Steps

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    Expected behavior

    Docs should build without error.

    Environment

    ReadTheDocs

    Context

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    bug 
    opened by mmuckley 4
  • `ContinuousEntropy` layer

    `ContinuousEntropy` layer

    Abstract base class (ABC) for implementing continuous entropy layers.

    The abstract class pre-computes integer probability tables based on a prior distribution, which can be used across different platforms by a range encoder and decoder. The class also provides abstract methods for compression, decompression, quantization, and reconstruction.
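
    An illustrative skeleton of such a base class (names and signatures here are assumptions, not the NeuralCompression API):

    import abc

    from torch import Tensor
    from torch.nn import Module


    class ContinuousEntropySketch(Module, abc.ABC):
        def __init__(self, prior):
            super().__init__()
            # In the real layer, integer probability tables (quantized CDFs)
            # would be precomputed here from the prior, for use by a range
            # encoder and decoder on any platform.
            self.prior = prior

        @abc.abstractmethod
        def quantize(self, bottleneck: Tensor) -> Tensor:
            ...

        @abc.abstractmethod
        def reconstruct(self, quantized: Tensor) -> Tensor:
            ...

        @abc.abstractmethod
        def compress(self, bottleneck: Tensor):
            ...

        @abc.abstractmethod
        def decompress(self, compressed, shape) -> Tensor:
            ...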

    enhancement CLA Signed 
    opened by 0x00b1 4
  • `NonNegativeParameterization` layer

    `NonNegativeParameterization` layer

    closes #86

    Non-negative parameterization as required by generalized divisive normalization (GDN) activations. The parameter is subjected to an invertible transformation that slows down the learning rate for small values.

    A brief usage example:

    import torch
    from torch.nn import Module, Parameter
    import torch.nn.functional
    
    from torch import Tensor
    
    from ._non_negative_parameterization import NonNegativeParameterization
    
    
    class GeneralizedDivisiveNormalization(Module):
        def __init__(
            self,
            in_channels: int,
            inverse: bool = False,
            beta_min: float = 1e-6,
            gamma_init: float = 0.1,
        ):
            super(GeneralizedDivisiveNormalization, self).__init__()
    
            self._inverse = inverse
    
            self._reparameterized_beta = NonNegativeParameterization(
                torch.ones(in_channels),
                minimum=beta_min,
            )
    
            self._beta = Parameter(
                self._reparameterized_beta.initialized,
            )
    
            self._reparameterized_gamma = NonNegativeParameterization(
                gamma_init * torch.eye(in_channels),
            )
    
            self._gamma = Parameter(
                self._reparameterized_gamma.initialized,
            )
    
        def forward(self, x: Tensor) -> Tensor:
            _, channels, _, _ = x.size()
    
            y = torch.nn.functional.conv2d(
                x ** 2,
                torch.reshape(
                    self._reparameterized_gamma(self._gamma),
                    (channels, channels, 1, 1)
                ),
                self._reparameterized_beta(self._beta),
            )
    
            if self._inverse:
                return x * torch.sqrt(y)
    
            return x * torch.rsqrt(y)
    
    import torch
    import torch.testing
    
    from neuralcompression.layers import GeneralizedDivisiveNormalization
    
    
    class TestGeneralizedDivisiveNormalization:
        def test_backward(self):
            x = torch.rand((1, 32, 16, 16), requires_grad=True)
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(32)
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x / torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(
                32,
                inverse=True,
            )
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x * torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
    enhancement CLA Signed 
    opened by 0x00b1 4
  • pad image at inference time, remove resize

    pad image at inference time, remove resize

    Changes

    At inference time, pad the image instead of doing an interpolation-based resize, which gives poor results (e.g. on PSNR) when the input image height and/or width is not exactly divisible by the downsampling factor (= 2^{number of downsampling layers}).
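
    A minimal sketch of the padding step, assuming a 4D image tensor (the function name and return convention are illustrative, not the PR's API):

    import torch.nn.functional as F

    def pad_to_factor(image, factor):
        # Reflect-pad height and width up to the next multiple of `factor`
        # so that strided downsampling in the encoder divides evenly.
        _, _, height, width = image.shape
        pad_h = (factor - height % factor) % factor
        pad_w = (factor - width % factor) % factor
        padded = F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")
        # Keep the original size so the reconstruction can be cropped back.
        return padded, (height, width)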

    CLA Signed 
    opened by desi-ivanova 3
  • Replace license docstrings with comments

    Replace license docstrings with comments

    The intention was three-fold:

    1. consolidate copyright formatting across .py sources
    2. simplify the implementation of #100
    3. remove copyright headers from module documentation

    Changes

    • [x] replaces license docstrings with comments
    CLA Signed 
    opened by 0x00b1 3
  • survival_function op

    survival_function op

    closes #75

    Survival function of x. Generally defined as 1 - distribution.cdf(x).

    The unit test checks whether the returned result matches that of scipy.stats.norm.sf.
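
    A minimal sketch of the op as described (illustrative; the packaged version lives in neuralcompression.functional):

    import torch
    from torch.distributions import Normal

    def survival_function(x, distribution):
        # Probability that a sample from the distribution exceeds x.
        return 1.0 - distribution.cdf(x)

    # Comparable to scipy.stats.norm.sf for a standard normal:
    print(survival_function(torch.tensor([0.0, 1.0]), Normal(0.0, 1.0)))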

    enhancement CLA Signed 
    opened by 0x00b1 3
  • Update triggers for CI

    Update triggers for CI

    This PR alters the triggers for continuous integration. Previously, we triggered all tests on both pushes and pull requests. This meant that within a PR we would have "duplicate" (but not really duplicate) checks. What we really want on a PR is just the PR check, so we'll keep that.

    It is also nice to be able to trigger CI when pushing to a branch. This PR removes that trigger and replaces it with workflow_dispatch, which allows a user to trigger CI from the GitHub UI. We thus remove quite a bit of duplicated testing at the cost of making users click a button if they want to test their code before opening a PR.

    Note that this PR only applies to people who push to branches of the repository.

    CLA Signed 
    opened by mmuckley 3
  • HiFiC modules

    HiFiC modules

    Implements the following modules from Mentzer et al. (2020):

    • HiFiCDiscriminator
    • HiFiCEncoder
    • HiFiCGenerator
    @misc{mentzer2020highfidelity,
          title={High-Fidelity Generative Image Compression}, 
          author={Fabian Mentzer and George Toderici and Michael Tschannen and Eirikur Agustsson},
          year={2020},
          eprint={2006.09965},
          archivePrefix={arXiv},
          primaryClass={eess.IV}
    }
    

    Originally implemented in TensorFlow Compression (TFC) by the author (@relational).

    CLA Signed 
    opened by 0x00b1 3
  • remove metadata from __init__.py

    remove metadata from __init__.py

    Changes

    The metadata special variables from neuralcompression.__init__ were removed.

    This metadata was not being used and is now available from setup.cfg.

    CLA Signed 
    opened by 0x00b1 2
  • Update PyTorch to 1.10.0

    Update PyTorch to 1.10.0

    Closes #127.

    Changes

    • [x] Replaced torch.testing.assert_equal with torch.testing.assert_close
    • [x] Updates torch to 1.10.0
    • [x] Updates torchvision to 0.11.1
    enhancement CLA Signed 
    opened by 0x00b1 2
  • Implement PQ-MIM compression paper

    Implement PQ-MIM compression paper

    opened by mmuckley 0
  • Upstream Google autoencoder models to CompressAI

    Upstream Google autoencoder models to CompressAI

    At the moment we have several model implementations that are already implemented in CompressAI (e.g., Scale Hyperprior, Mean-Scale Hyperprior). At this point CompressAI has pretty good adoption, so we should be able to remove these from our repository and upstream the dependency.

    By default CompressAI doesn't handle reflective image padding for users, so if desired we could include wrappers like the one in PR #185 to handle this for users unfamiliar with the functionality of these models.

    enhancement 
    opened by mmuckley 1
Releases(v0.2.1)
  • v0.2.1(Jan 12, 2022)

    This release covers a few small fixes from PRs #171 and #172.

    Dependencies

    • To retrieve versioning information, we now use importlib. This functionality is only included with Python >= 3.8, so NeuralCompression now requires Python 3.8 or newer (#171).
    • Install requirements are flexible, whereas dev requirements are pinned (#171). This should improve CI stability while giving researchers the flexibility to tune their environment when using NeuralCompression.
    • torch has been removed as a build dependency (#172).
    • Other build dependencies have been modified to be flexible (#172).

    Build System

    • C++ code from _pmf_to_quantized_cdf introduced compilation requirements when running setup.py. Since we didn't configure our build system to handle specific operating systems, this caused a failed release upload to PyPI. The build system has been altered to use torch.utils.cpp_extension.load, which defers compilation to the user after package installation. We would like to improve this further at some point, but the modifications from #171 get the package stable. Note: there is a reasonable chance this could fail on non-Linux operating systems such as Windows. Those users will still be able to use other package features that don't rely on _pmf_to_quantized_cdf.

    Other

    • Fixed a linting issue where isort was not checking in CI whether imports were properly sorted (#171).
    • Fixed a random test issue (#171).
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Dec 13, 2021)

    NeuralCompression is a PyTorch-based Python package intended to simplify neural network-based compression research. It is similar to (and shares some functionality with) fantastic libraries like TensorFlow Compression and CompressAI.

    The major theme of the v0.2.0 release is autoencoders, particularly features useful for implementing existing models from Ballé et al. and for expanding on these models in forthcoming research. In addition, 0.2.0 brings some code reorganization and published documentation. I recommend reading the new “Image Compression” example to see some of these changes.

    API Additions

    Data (neuralcompression.data)

    Distributions (neuralcompression.distributions)

    • NoisyNormal: normal distribution with additive independent and identically distributed (i.i.d.) uniform noise.
    • UniformNoise: adapts a continuous distribution via additive independent and identically distributed (i.i.d.) uniform noise (see the sketch below).
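
    A minimal sketch of the relaxation these distributions model: during training, hard quantization is replaced by additive i.i.d. uniform noise on [-0.5, 0.5) (illustrative only, not the library API):

    import torch

    def noisy_quantize(x: torch.Tensor) -> torch.Tensor:
        # Differentiable stand-in for rounding used at training time.
        return x + torch.empty_like(x).uniform_(-0.5, 0.5)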

    Functional (neuralcompression.functional)

    • estimate_tails: estimates approximate tail quantiles.
    • log_cdf: logarithm of the distribution’s cumulative distribution function (CDF).
    • log_expm1: logarithm of e^{x} - 1.
    • log_ndtr: logarithm of the normal cumulative distribution function (CDF).
    • log_survival_function: logarithm of a distribution’s survival function evaluated at x.
    • lower_bound: torch.maximum with a gradient for x < bound (see the sketch after this list).
    • lower_tail: approximates lower tail quantile for range coding.
    • ndtr: the normal cumulative distribution function (CDF).
    • pmf_to_quantized_cdf: transforms a probability mass function (PMF) into a quantized cumulative distribution function (CDF) for entropy coding.
    • quantization_offset: computes a distribution-dependent quantization offset.
    • soft_round_conditional_mean: conditional mean of x given noisy soft rounded values.
    • soft_round_inverse: inverse of soft_round.
    • soft_round: differentiable approximation of torch.round.
    • survival_function: survival function of x. Generally defined as 1 - distribution.cdf(x).
    • upper_tail: approximates upper tail quantile for range coding.
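
    As referenced above for lower_bound, a common way to get "torch.maximum with a gradient for x < bound" is a custom autograd function that passes gradients through below the bound only when they would push the input upward. This is a hedged sketch of that general technique, not necessarily NeuralCompression's exact implementation:

    import torch

    class _LowerBound(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, bound):
            ctx.save_for_backward(x, bound)
            return torch.max(x, bound)

        @staticmethod
        def backward(ctx, grad_output):
            x, bound = ctx.saved_tensors
            # Pass the gradient when x is above the bound, or when the gradient
            # would increase x (grad_output < 0 under gradient descent).
            pass_through = (x >= bound) | (grad_output < 0)
            return pass_through * grad_output, None

    def lower_bound(x, bound):
        return _LowerBound.apply(x, torch.as_tensor(bound, dtype=x.dtype))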

    Layers (neuralcompression.layers)

    • AnalysisTransformation2D: applies the 2D analysis transformation over an input signal.
    • ContinuousEntropy: base class for continuous entropy layers.
    • GeneralizedDivisiveNormalization: applies generalized divisive normalization for each channel across a batch of data.
    • HyperAnalysisTransformation2D: applies the 2D hyper analysis transformation over an input signal.
    • HyperSynthesisTransformation2D: applies the 2D hyper synthesis transformation over an input signal.
    • NonNegativeParameterization: the parameter is subjected to an invertible transformation that slows down the learning rate for small values.
    • RateMSEDistortionLoss: rate-distortion loss.
    • SynthesisTransformation2D: applies the 2D synthesis transformation over an input signal.

    Models (neuralcompression.models)

    End-to-end Optimized Image Compression

    End-to-end Optimized Image Compression
    Johannes Ballé, Valero Laparra, Eero P. Simoncelli
    https://arxiv.org/abs/1611.01704
    
    • PriorAutoencoder: base class for implementing prior autoencoder architectures.
    • FactorizedPriorAutoencoder

    High-Fidelity Generative Image Compression

    High-Fidelity Generative Image Compression
    Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
    https://arxiv.org/abs/2006.09965
    
    • HiFiCEncoder
    • HiFiCDiscriminator
    • HiFiCGenerator

    Variational Image Compression with a Scale Hyperprior

    Variational Image Compression with a Scale Hyperprior
    Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
    https://arxiv.org/abs/1802.01436
    
    • HyperpriorAutoencoder: base class for implementing hyperprior autoencoder architectures.
    • MeanScaleHyperpriorAutoencoder
    • ScaleHyperpriorAutoencoder

    API Changes

    • neuralcompression.functional.hsv2rgb is now neuralcompression.functional.hsv_to_rgb.
    • neuralcompression.functional.learned_perceptual_image_patch_similarity is now neuralcompression.functional.lpips.

    Acknowledgements

    Thank you to the following people for their advice:

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Jul 19, 2021)

Owner
Facebook Research