Compact Bilinear Pooling for PyTorch

Overview

This repository contains a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

This version relies on the FFT implementation provided with PyTorch 0.4.0 onward. For older versions of PyTorch, use the tag v0.3.0.

Installation

Run setup.py, for instance:

python setup.py install
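
Equivalently, assuming a standard setuptools layout, the package can also be installed with pip from the repository root:

pip install .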

Usage

class compact_bilinear_pooling.CompactBilinearPooling(input1_size, input2_size, output_size, h1=None, s1=None, h2=None, s2=None)

Basic usage:

import torch
from compact_bilinear_pooling import CountSketch, CompactBilinearPooling

input_size = 2048
output_size = 16000

mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda()
x = torch.rand(4, input_size).cuda()
y = torch.rand(4, input_size).cuda()

z = mcb(x, y)
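
The inputs may also be spatial feature maps. Here is a hedged sketch (assuming, as the multi-GPU modification quoted in the comments below suggests, that the pooling is applied along the last dimension and broadcast over the leading ones):

import torch
from compact_bilinear_pooling import CompactBilinearPooling

mcb = CompactBilinearPooling(512, 512, 8000).cuda()
x = torch.rand(4, 512, 28, 28).cuda()   # NCHW conv features
x = x.permute(0, 2, 3, 1).contiguous()  # NHWC: features on the last dimension
z = mcb(x, x)                           # [4, 28, 28, 8000]
z = z.sum(1).sum(1)                     # sum-pool over spatial positions -> [4, 8000]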

Test

A couple of tests of the implementation of Compact Bilinear Pooling and its gradient can be run using:

python test.py

Comments
  • The value in ComplexMultiply_backward function

    Hi @gdlg, thanks for this nice work. I'm confused about the backward procedure of the complex multiplication, so I hope you can help me figure it out.

    In the forward pass,

    Z = XY = (Rx + i * Ix)(Ry + i * Iy) = (RxRy - IxIy) + i * (IxRy + RxIy) = Rz + i * Iz
    

    In the backward pass, according to the chain rule, we have

    grad_(L/X) = grad_(L/Z) * grad_(Z/X)
               = grad_Z * Y
               = (R_gz + i * I_gz)(Ry + i * Iy)
               = (R_gzRy - I_gzIy) + i * (I_gzRy + R_gzIy)
    

    So, why is this line implemented with value = 1 for the real part and value = -1 for the imaginary part?

    Is there something wrong with my reasoning? Thanks.
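
    For what it's worth, the minus sign is consistent with the gradient of a real-valued loss with respect to a complex input being grad_Z * conj(Y) rather than grad_Z * Y. A minimal PyTorch check (illustrative shapes, not the repository's code) reproduces the sign flip on the imaginary part:

    import torch

    # Z = X * Y written out with explicit real/imaginary parts.
    Rx = torch.randn(3, requires_grad=True)
    Ix = torch.randn(3, requires_grad=True)
    Ry, Iy = torch.randn(3), torch.randn(3)

    Rz = Rx * Ry - Ix * Iy      # real part of X * Y
    Iz = Ix * Ry + Rx * Iy      # imaginary part of X * Y
    (Rz + Iz).sum().backward()  # i.e. grad_Rz = grad_Iz = 1

    # grad_Z * conj(Y) = (1 + i)(Ry - i * Iy) = (Ry + Iy) + i * (Ry - Iy)
    print(torch.allclose(Rx.grad, Ry + Iy))  # True
    print(torch.allclose(Ix.grad, Ry - Iy))  # True: conj(Y) explains the -1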

    opened by KaiyuYue 8
  • The miss of Rfft

    When I run the test module, it reports that pytorch_fft.fft.autograd has no attribute Rfft. Which version of pytorch_fft should I install to fit this code?

    opened by PeiqinZhuang 8
  • Save the model - TypeError: can't pickle Rfft objects

    How do you save and load the model? I'm using torch.save, which causes the following error:

    File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in save
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 117, in _with_file_like
       return body(f)
     File "xanaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in <lambda>
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 198, in _save
       pickler.dump(obj)
    TypeError: can't pickle Rfft objects
    
    
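    A common workaround (an assumption on my part, not a fix documented by this repository) is to save the state_dict instead of pickling the whole module, so the unpicklable Rfft object is never serialized:

    import torch

    # Persist only the parameters and buffers, not the module object that
    # holds the Rfft instance.
    torch.save(model.state_dict(), 'checkpoint.pth')

    # Rebuild the model first (build_model is a hypothetical stand-in for
    # however the network is constructed), then restore the weights.
    model = build_model()
    model.load_state_dict(torch.load('checkpoint.pth'))
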
    opened by idansc 3
  • Multi GPU support

    I modified

    class CompactBilinearPooling(nn.Module):
        def forward(self, x, y):
            return CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y)
    

    to

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)  # NCHW to NHWC
        y = Variable(x.data.clone())
        out = CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y).permute(0, 3, 1, 2)  # back to NCHW
        out = nn.functional.adaptive_avg_pool2d(out, 1)  # N, C, 1, 1
        # element-wise signed square root and instance-wise L2 normalization
        out = (torch.sqrt(nn.functional.relu(out)) - torch.sqrt(nn.functional.relu(-out))) / torch.norm(out, 2, 1, True)
        return out
    

    This makes it easier to plug the compact pooling layer into PyTorch CNNs:

    model.avgpool = CompactBilinearPooling(input_C, input_C, bilinear['dim'])
    model.fc = nn.Linear(int(model.fc.in_features/input_C*bilinear['dim']), num_classes)

    However, when I run this on multiple GPUs, I get the following error:

    Traceback (most recent call last):
      File "train3_bilinear_pooling.py", line 400, in <module>
        run()
      File "train3_bilinear_pooling.py", line 219, in run
        train(train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 326, in train
        return _each_epoch('train', train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 270, in _each_epoch
        output = model(input_var)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 319, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 67, in forward
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 72, in replicate
        return replicate(module, device_ids)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 19, in replicate
        buffer_copies = comm.broadcast_coalesced(buffers, devices)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/cuda/comm.py", line 55, in broadcast_coalesced
        for chunk in _take_tensors(tensors, buffer_size):
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 232, in _take_tensors
        if tensor.is_sparse:
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/autograd/variable.py", line 68, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Variable' object has no attribute 'is_sparse'

    Do you have any ideas?
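
    One direction to investigate (an assumption based on the traceback, not a verified fix): nn.DataParallel replicates registered buffers via comm.broadcast_coalesced, and the failure occurs when those buffers are Variable objects rather than plain tensors. A sketch of registering the sketch tensors as plain tensor buffers (names and shapes are illustrative):

    import torch
    import torch.nn as nn

    class CountSketchBuffers(nn.Module):
        def __init__(self, input_size, output_size):
            super(CountSketchBuffers, self).__init__()
            # Plain tensors registered as buffers are moved and broadcast
            # together with the module by DataParallel.
            self.register_buffer('h', torch.randint(output_size, (input_size,)))
            self.register_buffer('s', torch.randint(2, (input_size,)).float() * 2 - 1)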

    opened by YanWang2014 3
  • AssertionError: False is not true

    Hi, I am back again. When running test.py, I got the following error:

    File "test.py", line 69, in test_gradients
        self.assertTrue(torch.autograd.gradcheck(cbp, (x, y), eps=1))
    AssertionError: False is not true

    What does this mean?
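
    In general, torch.autograd.gradcheck compares analytical gradients against finite differences, so "False is not true" only means the two disagreed beyond the tolerance; with float32 inputs and eps=1 this can happen even when the backward pass is correct. A minimal, self-contained illustration of a passing call (not this repository's classes; float64 inputs and a small eps are what gradcheck normally expects):

    import torch
    from torch.autograd import gradcheck

    # gradcheck wants float64 inputs with requires_grad=True and a small eps.
    f = lambda a, b: (a * b).sum()
    a = torch.randn(4, dtype=torch.float64, requires_grad=True)
    b = torch.randn(4, dtype=torch.float64, requires_grad=True)
    print(gradcheck(f, (a, b), eps=1e-6, atol=1e-4))  # prints True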

    opened by YanWang2014 2
  • Support for Pytorch 1.11?

    Hi, torch.fft() and torch.irfft() are no longer functions: torch.fft is now a module, and there appear to be a lot of modifications to the parameters. I am currently trying to combine two types of features with compact bilinear pooling. Do you know how to port this code to PyTorch 1.11?
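
    For reference, a hedged porting sketch (assuming the original code used torch.rfft(x, 1) and torch.irfft(..., 1) for 1-D real FFTs): the torch.fft module that replaced them in PyTorch 1.8+ returns genuine complex tensors.

    import torch

    x = torch.randn(4, 16)

    # PyTorch <= 1.7 style (since removed):
    #   X = torch.rfft(x, 1)                             # real tensor, last dim of size 2
    #   y = torch.irfft(X, 1, signal_sizes=x.shape[-1:])
    # PyTorch >= 1.8 equivalent:
    X = torch.fft.rfft(x, dim=-1)                 # complex tensor
    y = torch.fft.irfft(X, n=x.size(-1), dim=-1)  # back to real

    print(torch.allclose(x, y, atol=1e-6))  # True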

    opened by bhosalems 1
  • Training does not converge after joining compact bilinear layer

    Source code:

    x = self.features(x)  # [4, 512, 28, 28]
    batch_size = x.size(0)
    x = (torch.bmm(x, torch.transpose(x, 1, 2)) / 28 ** 2).view(batch_size, -1)
    x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
    x = self.classifiers(x)
    return x

    My code:

    x = self.features(x)  # [4, 512, 28, 28]
    x = x.view(x.shape[0], x.shape[1], -1)  # [4, 512, 784]
    x = x.permute(0, 2, 1)  # [4, 784, 512]
    x = self.mcb(x, x)  # [4, 784, 512]
    batch_size = x.size(0)
    x = x.sum(1)  # for 2-D tensors dim=0 sums over columns and dim=1 over rows; this tensor is 3-D, so this sums over the 784 spatial positions
    x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
    x = self.classifiers(x)
    return x

    The training does not converge after this modification. Why? Is it a problem with my code?
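
    One difference worth checking (an observation, not a confirmed fix): the reference code averages the bilinear features over the 28 × 28 positions (the / 28 ** 2 term), whereas the MCB version takes a raw sum, which inflates the activation scale by a factor of 784 before the signed square root. A hedged adjustment:

    x = self.mcb(x, x).sum(1) / (28 * 28)  # match the reference's spatial averaging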

    opened by roseif 3
Releases: v0.4.0

Owner: Grégoire Payen de La Garanderie