PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability

Overview

PCACE is a new algorithm for ranking the neurons of a CNN in order of importance to the final classification. It is a statistical method that combines Alternating Conditional Expectation (ACE) with Principal Component Analysis (PCA) to find the maximal correlation coefficient between a hidden neuron and the final class score. This yields a rigorous and standardized way to quantify the relevance of each neuron to the model's final classification.
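
As a rough illustration of the pipeline, the sketch below computes a PCACE-style value for a single channel: the flattened activation maps of all input images are reduced with PCA and each component is correlated with the pre-softmax class score. This is a minimal sketch, not the released implementation; it uses scikit-learn for PCA and an absolute Pearson correlation (maximized over components) as a stand-in for the ACE maximal correlation, and the exact aggregation over PCA components may differ from the released scripts.

    # Minimal sketch of a PCACE-style score for one channel.
    # Stand-in only: a Pearson correlation replaces the ACE maximal correlation
    # computed by the ACE package in the released scripts.
    import numpy as np
    from sklearn.decomposition import PCA

    def pcace_value(acts, class_scores, pca_comp=10):
        """acts: (NUM_IMAGES, SIZE) flattened activation maps of one channel.
        class_scores: (NUM_IMAGES,) pre-softmax score of the target class (CLASS_IDX).
        Returns a scalar relevance value for the channel."""
        reduced = PCA(n_components=pca_comp).fit_transform(acts)  # (NUM_IMAGES, PCA_COMP)
        corrs = [abs(np.corrcoef(reduced[:, c], class_scores)[0, 1])
                 for c in range(reduced.shape[1])]
        return float(max(corrs))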

Summary of Usage

  1. pcace_resnet_18.py: code for the PCACE algorithm in the ResNet-18 architecture. Uses PyTorch to load the model and requires the ACE package. Caps indicate variables changeable by the user:
     - NUM_IMAGES: number of input images for PCACE.
     - CLASS: class to which the input images belong.
     - LAYER_NAME: name of the convolutional layer to which PCACE is applied; follows the structure layerx[y].convz.
     - NUM_CHANNELS: number of channels in LAYER_NAME.
     - SIZE: number of pixels in the activation maps of LAYER_NAME.
     - SIZE_X, SIZE_Y: height and width of the activation maps; must satisfy SIZE = SIZE_X*SIZE_Y.
     - CLASS_IDX: index of the class score before the softmax (the class of the set of input images).
     - PCA_COMP: number of components to which PCA reduces the data.
     After the algorithm runs, it returns an array results with the PCACE values of all channels, which can then be sorted (see the first sketch after this list).

  2. pcace_vgg_16.py: same code and functionality as pcace_resnet_18.py, but for the VGG-16 architecture instead of ResNet-18. Computes the PCACE values for any layer of VGG-16.

  3. activation_maximization.py: code to visualize the filter activation maximization images with VGG-16, following the code from https://github.com/keisen/tf-keras-vis. Uses Keras to load the model and requires the tf-keras-vis package. Caps indicate variables changeable by the user:
     - LAYER_NAME: layer containing the channel whose feature visualization is computed.
     - FILTER_NUMBER: index of that channel within LAYER_NAME.

  4. visualize_act_maps_resnet_18.py: code to visualize the activation maps of the top PCACE channels with ResNet-18. As in pcace_resnet_18.py, it uses PyTorch to load the model. Caps indicate variables changeable by the user:
     - LAYER_NAME: name of the convolutional layer to which PCACE is applied; follows the structure layerx[y].convz.
     - ORDER: an array containing the PCACE channels sorted from lowest to highest value.
     The list good_urls contains the URLs of the images to visualize.

  5. visualize_act_maps_vgg_16.py: same functionality as visualize_act_maps_resnet_18.py (i.e., visualizing the activation maps of the top PCACE channels), but for the VGG-16 architecture instead of ResNet-18.

  6. visualizing_cam.py: produces CAM visualizations with ResNet-18, following the code from https://github.com/zhoubolei/CAM. Uses PyTorch to load the model and returns the CAM visualization of the input image (given here by a URL). See the CAM sketch after this list.

  7. london_kdd_examples_slevel.csv: a .csv file containing metadata for the 300 street-level images used in our experiments. The images come from Google Street View; more information on these images and how to use them is available at https://developers.google.com/maps/documentation/streetview/overview.
     - gsv_panoid: corresponds to the 'pano' parameter, a specific panorama ID for the image.
     - gsv_lat, gsv_lng: correspond to the location coordinates of the image.
     Both the gsv_panoid and the gsv_lat, gsv_lng parameters can be used to access the images used in our experiments.
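
The activations that feed PCACE (item 1 above) are typically gathered with a forward hook on the chosen layer. The sketch below is illustrative rather than a copy of pcace_resnet_18.py: it assumes a torchvision ResNet-18, uses a placeholder list of input tensors in place of the NUM_IMAGES preprocessed images, and takes layer4[1].conv2 as an example LAYER_NAME with a placeholder CLASS_IDX.

    # Illustrative collection of per-channel activations and class scores for PCACE.
    import torch
    from torchvision import models

    model = models.resnet18(pretrained=True).eval()
    layer = model.layer4[1].conv2      # LAYER_NAME: follows the structure layerx[y].convz
    CLASS_IDX = 0                      # placeholder: index of the class score before the softmax

    # Placeholder inputs; replace with the NUM_IMAGES preprocessed (1, 3, H, W) images.
    images = [torch.randn(1, 3, 224, 224) for _ in range(4)]

    buffer = []
    hook = layer.register_forward_hook(lambda m, i, o: buffer.append(o.detach()))

    channel_acts, class_scores = [], []
    with torch.no_grad():
        for img in images:
            buffer.clear()
            logits = model(img)        # forward pass triggers the hook
            a = buffer[0][0]           # (NUM_CHANNELS, height, width)
            channel_acts.append(a.flatten(1))          # each channel flattened to SIZE pixels
            class_scores.append(logits[0, CLASS_IDX].item())
    hook.remove()

    # Stacking channel_acts[i][c] over images i gives the (NUM_IMAGES, SIZE) matrix
    # for channel c, which is what the pcace_value sketch above takes as input.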

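For the CAM visualizations in item 6, the referenced approach weights the final convolutional feature maps by the fully-connected weights of the predicted class. The sketch below is a compact version of that computation under the same assumptions as above (a torchvision ResNet-18 and a preprocessed input tensor img); it omits the URL loading and the heatmap overlay that visualizing_cam.py performs.

    # Compact class activation mapping (CAM) for ResNet-18.
    import numpy as np
    import torch
    from torchvision import models

    model = models.resnet18(pretrained=True).eval()
    img = torch.randn(1, 3, 224, 224)                  # placeholder: preprocessed input image

    feats = []
    hook = model.layer4.register_forward_hook(lambda m, i, o: feats.append(o.detach()))
    with torch.no_grad():
        logits = model(img)
    hook.remove()

    class_idx = logits.argmax(dim=1).item()            # predicted class
    fc_weights = model.fc.weight[class_idx].detach()   # (512,)
    fmap = feats[0][0]                                 # (512, 7, 7) final conv feature maps

    cam = torch.einsum("c,chw->hw", fc_weights, fmap)  # weighted sum over channels
    cam = cam.clamp(min=0)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    cam_img = np.uint8(255 * cam.numpy())              # resize and overlay on the input to display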