Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers

Overview

This repository contains code to run the experiments in the paper "Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers." There are three subdirectories in this repository, the contents of which are described below. This code was tested with PyTorch 1.7.

Synthetic Pairs Matrix

This part of the repository is for running the synthetic pairs matrix experiments. The commands to run all of the experiments in the paper are given below:

Pairs Matrix 1

python main.py --exp_name pairs_matrix1 --pattern_dir pairs_matrix1 --imgnet_augment

Pairs Matrix 2

python main.py --exp_name pairs_matrix2 --pattern_dir pairs_matrix2 --imgnet_augment

Color Deviation

python main.py --exp_name color_deviation_(your epsilon here) --pattern_dir pairs_matrix1 --hue_perturb blue_circle --hue_perturb_val (your epsilon here) --imgnet_augment
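
For example, with an illustrative epsilon of 0.1 (substitute whatever perturbation value you want to test):

python main.py --exp_name color_deviation_0.1 --pattern_dir pairs_matrix1 --hue_perturb blue_circle --hue_perturb_val 0.1 --imgnet_augment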

Color Overlap (pattern directories are predefined for these experiments. Some overlap values are included; if you would like to use different ones, you must create the corresponding pattern directories yourself.)

python main.py --exp_name color_overlap_(your overlap here) --pattern_dir color_overlap_(your overlap here) --imgnet_augment

Predictivity

python3 main.py --exp_name predictivity_(your predictivity here) --pattern_dir pairs_matrix1 --pred_drop blue --pred_drop_val (your predictivity here)
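
For example, with an illustrative predictivity of 0.9 (substitute the value you want to test):

python3 main.py --exp_name predictivity_0.9 --pattern_dir pairs_matrix1 --pred_drop blue --pred_drop_val 0.9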

When you run one of these experiments, the datasets are created and the models are trained automatically. Datasets are stored in ./data/exp_name, trained models in ./models/exp_name, and results in ./results/exp_name. When the experiment is done, the file ./results/exp_name/master.csv should contain, for each feature, its name, pixel count, and average preference over the course of the experiment. A complete list of commands to generate all data in the paper can be found in the commands.sh file in the pairs_matrix_experiments subdirectory. The training script is adapted from the torchvision training script: https://github.com/pytorch/examples/blob/master/imagenet/main.py.
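
To inspect the results of a finished run, you can load master.csv with pandas. The snippet below is a minimal sketch, not part of the repository: it assumes pandas is installed, reuses the pairs_matrix1 experiment name from above, and prints the header rather than hard-coding column names, since the exact column names are not documented here.

import pandas as pd

exp_name = "pairs_matrix1"  # any --exp_name used in the commands above
df = pd.read_csv(f"./results/{exp_name}/master.csv")

print(df.columns.tolist())  # inspect the actual column names
print(df.head())            # expected to list each feature's name, pixel count, and average preference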

Texture Bias

Stimuli and helper code are taken from the open-sourced code of the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (https://github.com/rgeirhos/texture-vs-shape).

To run the experiments from our paper with an ImageNet-trained ResNet-50, you can do the following:

Normal Texture Bias

python main.py

Varying degrees of background interpolation to white (use 0 for a completely white background, 1 for the original texture background).

python main.py --bg_interp (your interpolation here)
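
For example, to interpolate the background halfway to white (0.5 is just an illustrative value between 0 and 1):

python main.py --bg_interp 0.5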

Resizing

python main.py --bg_interp 0 --size (your fraction of the object size here)
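
For example, to shrink the object to half its original size on a white background (0.5 is an illustrative fraction):

python main.py --bg_interp 0 --size 0.5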

Landscapes

python main.py --bg_interp 0 --landscape

Only full shapes

python main.py --only_complete

Only full shapes with a masked/interpolated background

python main.py --only_complete --bg_interp (your interpolation here)

A complete list of commands to generate all of the texture bias data from our paper can be found in the commands.sh file in the texture_bias subdirectory.

Excessive Invariance

Running these experiments is a bit more involved. A complete list of commands you must run to reproduce all data and graphs found in the paper can be found in the commands.sh file in the excessive_invariance subdirectory. Comments in the file describe what each step represents.
