ConformalLayers: A non-linear sequential neural network with associative layers

Overview

ConformalLayers is a conformal embedding of sequential layers of Convolutional Neural Networks (CNNs) that allows associativity between operations like convolution, average pooling, dropout, flattening, padding, dilation, and stride. This associativity between layers considerably reduces the number of operations required to perform inference in this type of neural network.

This repository is an implementation of ConformalLayers written in Python, using Minkowski Engine and PyTorch as backends. This implementation is a first step toward the use of activation functions, like ReSPro, that can be represented as tensors, depending on the geometry model.

Please cite our SIBGRAPI'21 paper if you use this code in your research. The paper presents a complete description of the library:

@InProceedings{sousa_et_al-sibgrapi-2021,
  author    = {Sousa, Eduardo V. and Fernandes, Leandro A. F. and Vasconcelos, Cristina N.},
  title     = {{C}onformal{L}ayers: a non-linear sequential neural network with associative layers},
  booktitle = {Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images},
  year      = {2021},
}

Please let Eduardo Vera Sousa (http://www.ic.uff.br/~eduardovera), Leandro A. F. Fernandes (http://www.ic.uff.br/~laffernandes), and Cristina Nader Vasconcelos (http://www2.ic.uff.br/~crisnv/index.php) know if you want to contribute to this project. Also, do not hesitate to contact them if you encounter any problems.

Contents:

  1. Requirements
  2. How to Install ConformalLayers
  3. Running Examples
  4. Running Unit Tests
  5. Documentation
  6. License

1. Requirements

Make sure that you have the following tools before attempting to use ConformalLayers.

Required tools:

  • Python to run ConformalLayers, which is written entirely in Python.

  • PyTorch, used as a backend by ConformalLayers.

  • Minkowski Engine, used as a backend by ConformalLayers.

Optional tools:

  • Virtual environment to create an isolated workspace for a Python application.

  • Docker to create a container to run ConformalLayers.
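
If you opt for a virtual environment, a minimal setup could look like the lines below. This is just a sketch; the environment name venv-cl is an arbitrary, illustrative choice:

python -m venv venv-cl
source venv-cl/bin/activate

With the environment active, proceed to the installation steps in the next section.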

2. How to Install ConformalLayers

No magic needed here. Just run:

python setup.py install

3. Running Examples

The basic steps for running the examples of ConformalLayers look like this:

cd <ConformalLayers-dir>/Experiments/<experiment-name>

For Experiments I and II, each file refers to an experiment described in the main paper. Thus, to run BaseReSProNet with the FashionMNIST dataset, for example, all you have to do is:

python BaseReSProNet.py --dataset=FashionMNIST

The values that can be used for the dataset argument are:

  • MNIST
  • FashionMNIST
  • CIFAR10

The loader of each dataset is defined in the Experiments/utils/datasets.py file.

Other arguments for the script files in Experiments I and II are listed below, with an example call after the list:

  • epochs (int value)
  • batch_size (int value)
  • learning_rate (float value)
  • optimizer (adam or rmsprop)
  • dropout (float value)
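
For instance, a call combining these arguments could look like the line below (the values are illustrative only, not recommended settings):

python BaseReSProNet.py --dataset=CIFAR10 --epochs=30 --batch_size=64 --learning_rate=0.001 --optimizer=adam --dropout=0.5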

For Experiments III and IV, since we measure the amount of memory used, we use an external file to orchestrate the calls and make sure we have a clean environment for each iteration. Such orchestrators are written in the files with the suffix _manager.py.

You can also run the files that correspond to each architecture individually, without the orchestrator. To run the D3ModNetCL architecture, for example, just run:

python D3ModNetCL.py

The arguments for the non-orchestrated scripts in Experiments III and IV are listed below, with an example call after the list:

  • num_inferences (int value)
  • batch_size (int value)
  • depth (int value, Experiment III only)
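
For instance (the values are illustrative only):

python D3ModNetCL.py --num_inferences=100 --batch_size=8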

The files in the networks folder contain the description of each architecture used in our experiments and demonstrate the usage of the classes and methods of our library.

4. Running Unit Tests

The basic steps for running the unit tests of ConformalLayers look like this:

cd <ConformalLayers-dir>/tests

To run all tests, simply run:

python test_all.py

To run the tests for each module, run:

python test_<module_name>.py

5. Documentation

Here you find a brief description of the modules, classes, functions, and parameters available to the user. The detailed documentation is not ready yet.

Contents:

  • Modules
  • Convolution
  • Pooling
  • Activation
  • Regularization

Modules

Here we present the main modules implemented in our framework. Most of the modules are used just like in PyTorch, so users with some background in this framework will benefit from this implementation. For users not familiar with PyTorch, the usage is still quite simple and intuitive.

  • Conv1d, Conv2d, Conv3d, ConvNd – Convolution operation implemented for n-D signals.

  • AvgPool1d, AvgPool2d, AvgPool3d, AvgPoolNd – Average pooling operation implemented for n-D signals.

  • BaseActivation – The abstract class for activation function layers. To extend the library, one shall implement this class.

  • ReSPro – The layer that corresponds to the ReSPro activation function. Such a function is a linear function with non-linear behavior that can be encoded as a tensor. The non-linearity of this function is controlled by a parameter α that can be provided as an argument or inferred from the data.

  • Regularization – In this version, Dropout is the only regularization available. In this approach, during the training phase, we randomly shut down some neurons with a probability p, passed as an argument to this module.

These modules are composed into a ConformalLayers object in a way very similar to the pure PyTorch one. The ConformalLayers class plays an important role in this task, as you can see by comparing the code snippets below:

# This one is built with pure PyTorch
import torch.nn as nn

class D3ModNet(nn.Module):
    def __init__(self):
        super(D3ModNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.ReLU(),  # standard PyTorch activation; ReSPro is not available in pure PyTorch
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x

# This one is built with ConformalLayers
import torch.nn as nn
import ConformalLayers as cl

class D3ModNetCL(nn.Module):
    def __init__(self):
        super(D3ModNetCL, self).__init__()
        self.features = cl.ConformalLayers(
            cl.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x
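
As a quick sanity check, you can feed the network defined above with a random batch. This is a minimal sketch that assumes ConformalLayers is installed and that D3ModNetCL is defined as in the snippet above; the 32x32 input size is the one for which the flattened features match nn.Linear(128, 10):

import torch

model = D3ModNetCL()
model.eval()

# A batch of 4 RGB images with 32x32 pixels each.
x = torch.rand(4, 3, 32, 32)

with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([4, 10])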

They look pretty much like the same code, right? That's because we've designed ConformalLayers to make the transition as smooth as possible for PyTorch users. Most of the modules have almost the same method signatures as the ones provided by PyTorch.

Convolution

The convolution operation implemented in ConformalLayers by the modules ConvNd, Conv1d, Conv2d, and Conv3d is almost the same as the one implemented in PyTorch, but we do not allow bias. This is mostly due to the construction of our logic when building the representation with tensors. Although we have a few ideas on how to include bias in this representation, they are not included in the current version. The parameters are detailed below and are originally described on the PyTorch convolution documentation page; the exception is the padding_mode parameter, which is always set to 'zeros' in our implementation.

  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int, tuple or str, optional) – Padding added to both sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
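
For illustration, a convolution layer could be declared as below. This is a sketch assuming that cl.Conv2d mirrors the PyTorch signature, except for the unsupported bias and padding_mode parameters:

import ConformalLayers as cl

# 3 input channels, 16 output channels, 5x5 kernel,
# stride of 2, padding of 1 on each side.
conv = cl.Conv2d(in_channels=3, out_channels=16, kernel_size=5,
                 stride=2, padding=1, dilation=1, groups=1)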

Pooling

In our current implementation, we only support average pooling, which is implemented by the modules AvgPoolNd, AvgPool1d, AvgPool2d, and AvgPool3d. The parameter list, originally described on the PyTorch average pooling documentation page, is given below:

  • kernel_size – the size of the window

  • stride – the stride of the window. Default value is kernel_size

  • padding – implicit zero padding to be added on both sides

  • ceil_mode – when True, will use ceil instead of floor to compute the output shape

  • count_include_pad – when True, will include the zero-padding in the averaging calculation
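
Similarly, a pooling layer could be declared as below, assuming that cl.AvgPool2d mirrors the PyTorch signature:

import ConformalLayers as cl

# 3x3 window sliding 2 cells at a time, no implicit padding.
pool = cl.AvgPool2d(kernel_size=3, stride=2, padding=0)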

Activation

Our activation module has the ReSPro activation function implemented natively. By using reflections, scalings, and projections on a hypersphere in higher dimensions, we created a non-linear, differentiable, associative activation function that can be represented in tensor form. It has only one parameter, which controls how close to linear or non-linear the curve is. More details are available in the main paper.

  • alpha (float, optional) – controls the non-linearity of the curve. If it is not provided, it is automatically estimated from the data.
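
For instance, the two ways of setting up the layer would look like this (a sketch based on the parameter list above):

import ConformalLayers as cl

act = cl.ReSPro(alpha=1.0)  # fixed non-linearity
act_auto = cl.ReSPro()      # alpha estimated from the data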

Regularization

In the regularization module, Dropout is the only technique implemented in this version. It is based on the idea of randomly shutting down some neurons in order to prevent overfitting. It takes only two parameters, listed below. This list was originally available on the PyTorch documentation page.

  • p – probability of an element to be zeroed. Default: 0.5

  • inplace – If set to True, will do this operation in-place. Default: False
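
A dropout layer could then be declared as below. This is a sketch that assumes the module is exposed as cl.Dropout with the PyTorch-like signature described above:

import ConformalLayers as cl

# Zero each neuron with probability 0.25 during training.
drop = cl.Dropout(p=0.25, inplace=False)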

6. License

This software is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
