Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers

Overview

This repository contains code to run experiments in the paper "Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers." There are three subdirectories in this repository, the contents of which are described below. This code was tested using PyTorch 1.7.

Synthetic Pairs Matrix

This part of the repository is for running the synthetic pairs matrix experiments in the paper. Here are the commands to run all of the experiments in the paper:

Pairs Matrix 1

python main.py --exp_name pairs_matrix1 --pattern_dir pairs_matrix1 --imgnet_augment

Pairs Matrix 2

python main.py --exp_name pairs_matrix2 --pattern_dir pairs_matrix2 --imgnet_augment

Color Deviation

python main.py --exp_name color_deviation_(your epsilon here) --pattern_dir pairs_matrix1 --hue_perturb blue_circle --hue_perturb_val (your epsilon here) --imgnet_augment
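
For example, with an epsilon of 0.1 (an illustrative value, not necessarily one used in the paper):

python main.py --exp_name color_deviation_0.1 --pattern_dir pairs_matrix1 --hue_perturb blue_circle --hue_perturb_val 0.1 --imgnet_augment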

Color Overlap (the pattern directories for these experiments are already predefined; a few overlap values are included, but if you would like to use different ones you must create them yourself.)

python main.py --exp_name color_overlap_(your overlap here) --pattern_dir color_overlap_(your overlap here) --imgnet_augment
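
For example, assuming a pattern directory for an overlap of 0.4 is among those included in the repository (the value here is purely illustrative):

python main.py --exp_name color_overlap_0.4 --pattern_dir color_overlap_0.4 --imgnet_augment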

Predictivity

python3 main.py --exp_name predictivity_(your predictivity here) --pattern_dir pairs_matrix1 --pred_drop blue --pred_drop_val (your predictivity here)

When you run one of these experiments, the datasets are created and the models are trained. Datasets are created and stored in ./data/exp_name, trained models are stored in ./models/exp_name, and results appear in ./results/exp_name. When the experiment is done, the file master.csv in ./results/exp_name contains, among other information, each feature's name, pixel count, and average preference over the course of the experiment. A complete list of commands to generate all of the data in the paper can be found in the commands.sh file in the pairs_matrix_experiments subdirectory. The training script is adapted from the torchvision ImageNet training script: https://github.com/pytorch/examples/blob/master/imagenet/main.py.
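
For a quick look at the summary file after a run, you can pretty-print it from the shell. This is only a sketch: the path below assumes an experiment named pairs_matrix1, and the exact columns depend on the experiment:

column -s, -t < ./results/pairs_matrix1/master.csv | head -n 20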

Texture Bias

Stimuli and helper code are taken from the open-sourced code of the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (https://github.com/rgeirhos/texture-vs-shape).

To run the experiments from our paper with an ImageNet-trained ResNet-50, you can do the following:

Normal Texture Bias

python main.py

Varying degrees of background interpolation toward white (use 0 for a completely white background, 1 for the original texture background).

python main.py --bg_interp (your interpolation here)
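
For example, a background interpolated halfway toward white (0.5 is an illustrative value):

python main.py --bg_interp 0.5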

Resizing

python main.py --bg_interp 0 --size (your fraction of the object size here)
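
For example, shrinking each object to half its original size on a white background (0.5 is an illustrative fraction):

python main.py --bg_interp 0 --size 0.5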

Landscapes

python main.py --bg_interp 0 --landscape

Only full shapes

python main.py --only_complete

Only full shapes with a masked/interpolated background

python main.py --only_complete --bg_interp (your interpolation here)

A complete list of commands to generate all of the texture bias data from our paper can be found in the commands.sh file in the texture_bias subdirectory.
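
As a rough sketch, the background-interpolation sweep can also be scripted in a loop; the values below are illustrative, and the exact settings used in the paper are listed in commands.sh:

for interp in 0 0.25 0.5 0.75 1; do
    python main.py --bg_interp "$interp"
done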

Excessive Invariance

Running these experiments is a bit more involved. A complete list of the commands needed to reproduce all of the data and graphs in the paper can be found in the commands.sh file in the excessive_invariance subdirectory. Comments in that file describe what each step represents.
