Exponential adaptive pooling for PyTorch

Overview

AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling



Abstract

Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs) that reduce computational overhead and increase the receptive fields of succeeding convolutional operations. They aim to produce downsampled volumes that closely resemble the input volume while, ideally, also being computationally and memory efficient. It is a challenge to meet both requirements jointly. To this end, we propose an adaptive and exponentially weighted pooling method named adaPool. Our proposed method uses a parameterized fusion of two sets of pooling kernels that are based on the exponent of the Dice-Sørensen coefficient and the exponential maximum, respectively. A key property of adaPool is its bidirectional nature. In contrast to common pooling methods, weights can be used to upsample a downsampled activation map. We term this method adaUnPool. We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection. We then evaluate adaUnPool on image and video frame super-resolution and frame interpolation tasks. For benchmarking, we introduce Inter4K, a novel high-quality, high frame-rate video dataset. Our combined experiments demonstrate that adaPool systematically achieves better results across tasks and backbone architectures, while introducing a minor additional computational and memory overhead.


[arXiv preprint -- coming soon]

[Comparison figure: original input vs. adaPool downsampling]
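
To make the weighting scheme described in the abstract concrete, here is a rough, unofficial single-window sketch in plain PyTorch. It assumes the kernel mean is the reference vector for the Dice-Sørensen coefficient and that both weight sets are softmax-normalised; the CUDA kernels in this repository remain the authoritative implementation, so treat the snippet only as an illustration of the idea.

import torch

def dsc(a, b, eps=1e-8):
    # Dice-Sørensen coefficient between two channel vectors
    return (2 * (a * b).sum()) / ((a * a).sum() + (b * b).sum() + eps)

def adapool_window(x, beta):
    # x: (C, k, k) activations of a single pooling window, beta: scalar in [0, 1]
    c = x.shape[0]
    flat = x.reshape(c, -1)                    # (C, k*k)
    mean = flat.mean(dim=1)                    # kernel-mean channel vector, (C,)

    # eDSCW: weights from the exponent of the Dice-Sørensen coefficient between
    # each position's channel vector and the kernel mean (assumed reference)
    sims = torch.stack([dsc(flat[:, i], mean) for i in range(flat.shape[1])])
    w_dscw = torch.softmax(sims, dim=0)        # (k*k,)
    y_dscw = (flat * w_dscw).sum(dim=1)        # (C,)

    # eM (SoftPool-style): weights from the exponent of the activations themselves
    w_em = torch.softmax(flat, dim=1)          # (C, k*k), per channel
    y_em = (flat * w_em).sum(dim=1)            # (C,)

    # parameterized fusion of the two pooled outputs
    return beta * y_dscw + (1 - beta) * y_em

# one 3-channel 2x2 window pooled to a single channel vector
out = adapool_window(torch.rand(3, 2, 2), beta=torch.tensor(0.5))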

Dependencies

All parts of the code assume that torch is version 1.4 or higher. There might be instability issues on earlier versions.

This work relies on the earlier repository for exponential maximum pooling (alexandrosstergiou/SoftPool). Before opening an issue, please have a look at that repository first, as common installation and runtime problems have already been addressed there.

! Disclaimer: This repository is heavily influenced structurally by Ziteng Gao's LIP repo https://github.com/sebgao/LIP

Installation

You can build and install the repo with the following commands:

$ git clone https://github.com/alexandrosstergiou/adaPool.git
$ cd adaPool-master/pytorch
$ make install
--- (optional) ---
$ make test

Usage

After installation, you can load any of the 1D, 2D or 3D variants with:

# Ensure that you import `torch` first!
import torch
import adapool_cuda

# For function calls
from adaPool import adapool1d, adapool2d, adapool3d, adaunpool
from adaPool import edscwpool1d, edscwpool2d, edscwpool3d
from adaPool import empool1d, empool2d, empool3d
from adaPool import idwpool1d, idwpool2d, idwpool3d

# For class calls
from adaPool import AdaPool1d, AdaPool2d, AdaPool3d
from adaPool import EDSCWPool1d, EDSCWPool2d, EDSCWPool3d
from adaPool import EMPool1d, EMPool2d, EMPool3d
from adaPool import IDWPool1d, IDWPool2d, IDWPool3d
  • (ada/edscw/em/idw)pool<x>d: the functional interface for each of the respective pooling methods.
  • (Ada/Edscw/Em/Idw)Pool<x>d: the class versions, used to create pooling objects that can be referenced in the code.
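
A minimal usage sketch follows. The keyword names and defaults (kernel_size, stride, and beta tied to the output size (oH, oW)) are assumptions drawn from this README and the issues further down; check the adaPool sources for the authoritative signatures.

import torch
import adapool_cuda   # the CUDA extension built by `make install`
from adaPool import AdaPool2d, adapool2d

x = torch.rand(1, 3, 32, 32).cuda()

# Class interface: `beta` is tied to the output resolution (here 16x16) and can
# be given as an (oH, oW) tuple or a tensor of that shape (see the comments below).
pool = AdaPool2d(kernel_size=2, stride=2, beta=(16, 16)).cuda()
y = pool(x)           # expected shape: (1, 3, 16, 16)

# Functional interface (keyword names here are assumptions)
beta = torch.ones(16, 16).cuda()
y = adapool2d(x, beta=beta, kernel_size=2, stride=2)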

Citation

@article{stergiou2021adapool,
  title={AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling},
  author={Stergiou, Alexandros and Poppe, Ronald},
  journal={arXiv preprint},
  year={2021}}

Licence

MIT

Comments
  • Installation issue on Google Colab


    Hi, thanks for providing a CUDA-optimized implementation. While building the library I encountered an issue with "inf" in limits.cuh.

    CUDA/limits.cuh(119): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(120): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(128): error: identifier "inf" is undefined
    
    CUDA/limits.cuh(129): error: identifier "inf" is undefined
    
    4 errors detected in the compilation of "CUDA/adapool_cuda_kernel.cu".
    error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
    Makefile:2: recipe for target 'install' failed
    make: *** [install] Error 1
    

    The following notebook provides more details and environment information: https://colab.research.google.com/drive/1T6Nxe2qbjKxXzo2IimFMYBn52qbthlZB?usp=sharing

    opened by okbalefthanded 2
  • Solution: Unresolved extern function '_Z3powdi'


    CUDA 11.0

    When I tried to build your project on Windows 10, I encountered the following problem: "ptxas fatal : Unresolved extern function '_Z3powdi'"

    Reason: incorrect use of the pow function in the CUDA code. Solution: for example, pow(x, 2) can be changed to x * x.

    opened by Culturenotes 1
  • Does AdaPool2d's beta require fixed image size?


    I'm currently running AdaPool2d as a replacement for MaxPool2d in ResNet's stem, similar to how you did it in SoftPool. However, I keep getting an AssertionError at line 1325, as shown below:

    assert isinstance(beta, tuple) or torch.is_tensor(beta), 'Agument `beta` can only be initialized with Tuple or Tensor type objects and should correspond to size (oH, oW)'
    

    Does this mean beta requires a fixed image size, e.g. (224, 224)? Or is there a way to make it adaptive across varying image sizes (e.g. for object detection)?

    opened by johnanthonyjose 1
  • The version of pytorch and how to deal with `nan_to_num` function in lower versions


    Thank you for this amazing project, which I found through SoftPool. After installing it and running make test, I got AttributeError: module 'torch' has no attribute 'nan_to_num'. After checking, this function, used in idea.py, was introduced in PyTorch 1.8.0, so the torch version in the README may need to be updated. Or is there an easy way to be compatible with lower versions? (A possible compatibility shim is sketched after this list.)

    opened by MaxChanger 1
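
Regarding the last comment: on PyTorch releases older than 1.8.0 that lack torch.nan_to_num, one possible workaround is a small stand-in such as the hypothetical helper below. It is not part of the adaPool codebase, and integer dtypes are not handled; use it only if upgrading PyTorch is not an option.

import torch

def nan_to_num(x, nan=0.0, posinf=None, neginf=None):
    # Minimal stand-in for torch.nan_to_num (added in PyTorch 1.8.0), floats only.
    if posinf is None:
        posinf = torch.finfo(x.dtype).max
    if neginf is None:
        neginf = torch.finfo(x.dtype).min
    x = torch.where(torch.isnan(x), torch.full_like(x, nan), x)
    x = torch.where(x == float('inf'), torch.full_like(x, posinf), x)
    x = torch.where(x == float('-inf'), torch.full_like(x, neginf), x)
    return x

# Patch it in only when the native function is missing:
if not hasattr(torch, 'nan_to_num'):
    torch.nan_to_num = nan_to_num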