Analyzing basic network responses to novel classes

Overview

novelty-detection

Analyzing how AlexNet responds to novel classes with varying degrees of similarity to pretrained classes from ImageNet.

If you find this work helpful in your research, please cite:

Eshed, N. (2020). Novelty detection and analysis in convolutional neural networks (Accession No. 27994027) [Master's thesis, Cornell University]. ProQuest Dissertations & Theses Global.

@mastersthesis{eshed_novelty_detection,
  author={Noam Eshed},
  title={Novelty detection and analysis in convolutional neural networks},
  school={Cornell University},
  year={2020},
  publisher={ProQuest Dissertations & Theses Global}
}

Data

in_out_class.csv

This is hand-annotated data from iNaturalist; the most up-to-date version can be found here. The data taken directly from iNaturalist includes the biological group and scientific name of each organism. Annotators added the common English name(s) for each creature, its relation to ImageNet, any relevant notes, and their initials. For details regarding the annotation guidelines, see this link.
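As a quick illustration, the annotations can be loaded with pandas to see how the species break down by their relation to ImageNet. This is only a sketch: the column name used below (imagenet_relation) is an assumption, so check the CSV header for the actual names.

import pandas as pd

# Load the hand-annotated iNaturalist data.
annotations = pd.read_csv("in_out_class.csv")

# Count how many annotated species fall into each ImageNet-relationship category.
# NOTE: "imagenet_relation" is an assumed column name; adjust it to match the file.
print(annotations["imagenet_relation"].value_counts())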

alexnet_inat_results/

inat_results_top_choice.json

This JSON file contains the results of testing a pre-trained AlexNet (trained on ImageNet) on images from iNaturalist. It includes only the top-1 result (i.e., the label chosen by the network) for each image, and so is most efficient when looking into the distribution of labels chosen for a certain type of creature.
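A minimal sketch of inspecting these results is below. The exact JSON layout is an assumption here (a mapping from image path to the chosen label); adjust the parsing to the actual structure of the file.

import json
from collections import Counter

with open("alexnet_inat_results/inat_results_top_choice.json") as f:
    top_choice = json.load(f)

# Count how often each ImageNet label was chosen across all images,
# assuming the file maps each image path to its top-1 label.
label_counts = Counter(top_choice.values())
print(label_counts.most_common(10))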

Biological group files

Each of these folders contains all of the results of testing a pre-trained AlexNet (trained on ImageNet) on iNaturalist images from the given biological group. This includes all possible labels, their scores, and their confidence values for each image. Since ImageNet has 1000 classes, each iNaturalist image has three vectors of length 1000 storing the label, score, and confidence information. Each file within these folders contains the data for a single species within the given biological group.
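The sketch below shows how the three per-image vectors can be used together, assuming each species file yields, for every image, parallel length-1000 arrays of labels, scores, and confidences. The loading step is omitted because the on-disk format depends on how alexnet_novelty.py saved the results.

import numpy as np

def top_n_for_image(labels, scores, confidences, n=5):
    # Return the n highest-scoring labels for one image, paired with their confidences.
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    confidences = np.asarray(confidences)
    order = np.argsort(scores)[::-1][:n]  # indices of the n largest scores
    return list(zip(labels[order], confidences[order]))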

Code

class_in_or_out.py

This script plots the distribution of the top n CNN labels for all (or part) of the image data. Looking at all species of interest, it averages the frequency of the top n labels. Note that the top n labels are not necessarily the same (or in the same order) from one species to the next, so the label identities themselves are ignored.

The species each fall under one of four annotated ImageNet relationship categories: in ImageNet, not in ImageNet, parent in ImageNet, and relative in ImageNet. These annotations are taken from in_out_class.csv. The plots may be stratified by these relationship categories.

As an example, this code can plot the frequency of the top 10 labels over all bird images, split by each species' relationship to ImageNet. The resulting plot shows the average distribution of label frequencies. The top label frequency, for example, is the frequency of the most common label across all images of a given species, averaged over species, regardless of what that top label actually was.

This plot shows the frequency of the top 20 labels over all bird species in iNaturalist:

Bird Label Frequencies
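The averaging idea can be summarized in a short sketch (a simplified stand-in, not the script itself): for each species, rank the label frequencies over its images, keep the top n, and then average those ranked frequencies across the species in a category.

import numpy as np
from collections import Counter

def top_n_frequencies(labels_per_image, n=10):
    # labels_per_image: the top-1 label for each image of one species.
    counts = Counter(labels_per_image)
    freqs = np.array(sorted(counts.values(), reverse=True)) / len(labels_per_image)
    freqs = np.pad(freqs, (0, max(0, n - len(freqs))))[:n]
    return freqs  # frequency of the 1st, 2nd, ..., nth most common label

def average_top_n(species_to_labels, n=10):
    # Average the ranked frequencies over all species in one relationship category.
    return np.mean([top_n_frequencies(v, n) for v in species_to_labels.values()], axis=0)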

plot_result_distribution.py

This script plots the distribution of CNN labels over each species. It does so by counting the number of occurrences of each label over many images of that species and normalizing the result to get a frequency distribution rather than an occurrence count distribution. There is an option to color and label each point according to the average confidence of the label. This can help us understand what common mistakes the network makes when classifying images of a given species.

In this example plot, we can see the distribution of all labels guessed by the network over the set of African Penguin images. Approximately 19% of the images are classified as magpie, 19% as goose, etc. Interestingly, the king_penguin label is awarded to only 5% of the images and is tied for the 5th most common label.

African Penguin Distribution
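The normalization step behind these distributions is small enough to sketch, assuming the top-1 label for every image of a species is already available (for example, from inat_results_top_choice.json):

from collections import Counter
import matplotlib.pyplot as plt

def label_distribution(labels_per_image):
    counts = Counter(labels_per_image)
    total = len(labels_per_image)
    # Normalize occurrence counts into frequencies that sum to 1.
    return {label: count / total for label, count in counts.items()}

# Toy example with made-up labels; the real script reads the saved AlexNet results.
dist = label_distribution(["magpie", "goose", "magpie", "king_penguin"])
plt.bar(list(dist.keys()), list(dist.values()))
plt.ylabel("Frequency")
plt.show()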

alexnet_novelty.py

This script tests AlexNet (pretrained on ImageNet) on all of the data from iNaturalist and saves the results into the alexnet_inat_results/ folder.
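A hedged sketch of that evaluation loop is below: run ImageNet-pretrained AlexNet on a single iNaturalist image and keep the full 1000-way output. The image path and preprocessing details here are assumptions; the actual pipeline lives in alexnet_novelty.py.

import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example_inat_image.jpg").convert("RGB")  # hypothetical path
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))    # shape: (1, 1000)
    confidences = torch.softmax(logits, dim=1)      # per-class confidence values
top_conf, top_idx = confidences.topk(5)             # top-5 labels for this image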
