The Official Implementation of the ICCV-2021 Paper: Semantically Coherent Out-of-Distribution Detection.

Overview

SCOOD-UDG (ICCV 2021)


This repository is the official implementation of the paper:

Semantically Coherent Out-of-Distribution Detection
Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu
Proceedings of the IEEE International Conference on Computer Vision (ICCV 2021)

[Figure: overview of the UDG framework]

Dependencies

We use conda to manage our dependencies, and CUDA 10.1 to run our experiments.

You can specify the appropriate cudatoolkit version to install on your machine in the environment.yml file, and then run the following to create the conda environment:

conda env create -f environment.yml
conda activate scood
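
After activating the environment, you can quickly confirm that PyTorch sees your GPU. This is a minimal sanity check, assuming PyTorch is installed via environment.yml as described above:

```python
# Quick sanity check after activating the environment (assumes PyTorch is
# installed via environment.yml and a CUDA-capable GPU is present).
import torch

print(torch.__version__)           # PyTorch version from the environment
print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # should print True on a GPU machine
```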

SC-OOD Dataset

[Figure: the SC-OOD benchmark]

The SC-OOD dataset introduced in the paper can be downloaded from Google Drive or OneDrive.

Our codebase expects the dataset in a folder named data/ under the repository root by default, i.e.:

├── ...
├── data
│   ├── images
│   └── imglist
├── scood
├── test.py
├── train.py
├── ...
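
Before training, you can verify that this layout is in place. The check below is a minimal sketch based on the folder names in the tree above; adjust the path if you pass a different --data_dir:

```python
# Minimal check that the SC-OOD data folders from the tree above exist.
from pathlib import Path

data_dir = Path("data")  # or whatever you pass as --data_dir
for sub in ("images", "imglist"):
    path = data_dir / sub
    status = "found" if path.is_dir() else "MISSING"
    print(f"{path}: {status}")
```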

Training

The entry point for training is the train.py script. The hyperparameters for each experiment are specified by a .yml configuration file (examples given in configs/train/).

All experiment artifacts are saved in the directory specified by --output_dir.

python train.py \
    --config configs/train/cifar10_udg.yml \
    --data_dir data \
    --output_dir output/cifar10_udg
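
To inspect or tweak the hyperparameters before launching, the .yml configuration files can be read with PyYAML. The exact keys depend on the particular config, so treat this as a sketch:

```python
# Sketch: print the hyperparameters of a training config.
# The available keys depend on the specific .yml file.
import yaml

with open("configs/train/cifar10_udg.yml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```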

Testing

Evaluation for a trained model is performed by the test.py script, with its hyperparameters also specified by a .yml configuration file (examples given in configs/test/).

Within the configuration file, you can also specify which post-processing OOD method to use (e.g. ODIN or Energy-based OOD detector (EBO)).
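
For intuition, the energy-based detector scores each sample with a log-sum-exp over the classifier logits. The snippet below is a generic sketch of that score following the EBO formulation, not the exact post-processing code in this repository:

```python
# Generic sketch of the energy-based OOD score (EBO), not this repo's code:
# score(x) = T * logsumexp(f(x) / T); higher score => more in-distribution.
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    return temperature * torch.logsumexp(logits / temperature, dim=1)

logits = torch.randn(4, 10)      # e.g. a batch of CIFAR-10 logits
print(energy_score(logits))      # one score per sample
```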

The evaluation results are saved to the .csv file specified by --csv_path.

python test.py \
    --config configs/test/cifar10.yml \
    --checkpoint output/cifar10_udg/best.ckpt \
    --data_dir data \
    --csv_path output/cifar10_udg/results.csv
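
Once testing finishes, the saved metrics can be inspected directly from the CSV, for example with pandas (any CSV reader works; pandas is only an example here):

```python
# Load and display the saved evaluation metrics.
import pandas as pd

results = pd.read_csv("output/cifar10_udg/results.csv")
print(results.to_string(index=False))
```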

Results

CIFAR-10 (+ Tiny-ImageNet) Results on ResNet18

You can run the following script (specifying the data and output directories), which performs training and testing for our main experimental results:

CIFAR-10, UDG

bash scripts/cifar10_udg.sh data_dir output_dir

We report the mean ± std results from the current codebase as follows, which match the performance reported in our original paper.

| Metrics | ODIN | EBO | OE | UDG (ours) |
|:--|:--|:--|:--|:--|
| FPR95 ↓ | 50.76 ± 3.39 | 50.70 ± 2.86 | 54.99 ± 4.06 | 39.94 ± 3.77 |
| AUROC ↑ | 82.11 ± 0.24 | 83.99 ± 1.05 | 87.48 ± 0.61 | 93.27 ± 0.64 |
| AUPR In ↑ | 73.07 ± 0.40 | 76.84 ± 1.56 | 85.75 ± 1.70 | 93.36 ± 0.56 |
| AUPR Out ↑ | 85.06 ± 0.29 | 85.44 ± 0.73 | 86.95 ± 0.28 | 91.21 ± 1.23 |
| CCR@FPRe-4 ↑ | 0.30 ± 0.04 | 0.26 ± 0.09 | 7.09 ± 0.48 | 16.36 ± 4.33 |
| CCR@FPRe-3 ↑ | 1.22 ± 0.28 | 1.46 ± 0.18 | 13.69 ± 0.78 | 32.99 ± 4.16 |
| CCR@FPRe-2 ↑ | 6.13 ± 0.72 | 8.17 ± 0.96 | 29.60 ± 5.31 | 59.14 ± 2.60 |
| CCR@FPRe-1 ↑ | 39.61 ± 0.72 | 47.57 ± 3.33 | 64.33 ± 3.44 | 81.04 ± 1.46 |
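
Each mean ± std entry above is aggregated over repeated runs. As a minimal illustration of that aggregation (the per-run numbers below are made up, not our actual results):

```python
# Illustration only: combine per-run AUROC values into a mean ± std entry.
# The three numbers are placeholders, not results from this codebase.
import numpy as np

auroc_runs = np.array([92.5, 93.8, 93.1])
print(f"{auroc_runs.mean():.2f} ± {auroc_runs.std(ddof=1):.2f}")
```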

License and Acknowledgements

This project is open-sourced under the MIT license.

The codebase was refactored by Ang Yi Zhe and is maintained by Jingkang Yang and Ang Yi Zhe.

Citation

If you find our repository useful for your research, please consider citing our paper:

@InProceedings{yang2021scood,
    author = {Yang, Jingkang and Wang, Haoqi and Feng, Litong and Yan, Xiaopeng and Zheng, Huabin and Zhang, Wayne and Liu, Ziwei},
    title = {Semantically Coherent Out-of-Distribution Detection},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
    year = {2021}
}