Anomaly Toolbox

Description

Anomaly Toolbox Powered by GANs.

This is the accompanying toolbox for the paper "A Survey on GANs for Anomaly Detection" (https://arxiv.org/pdf/1906.11632.pdf).

The toolbox is meant to let the user explore the performance of different GAN-based architectures (referred to as "experiments" in our work). It also provides some ready-to-use datasets to run experiments on:

  • MNIST
  • Corrupted MNIST
  • Surface Cracks
  • MVTec AD

We provide the MNIST dataset because the original works use it extensively. We have also added the other datasets listed above, both because they are used by particular architectures and because they form a good benchmark for the models we have implemented.

All the architectures were tested on commonly used datasets such as MNIST, FashionMNIST, CIFAR-10, and KDD99. Some of them were also tested on more specific datasets, such as an X-ray dataset that we could not include because the data is not publicly available (privacy reasons).

The user can create their own dataset and use it to test the models.

Quick Start

  • First things first, install the toolbox:
pip install anomaly-toolbox

Then you can choose what experiment to run. For example:

  • Run the GANomaly experiment (i.e., the GANomaly architecture) with the pre-defined hyperparameters file hparams.json and the MNIST dataset (pass --tuning True to enable hyperparameter tuning):
anomaly-box.py --experiment GANomalyExperiment --hps-path path/to/config/hparams.json --dataset MNIST
  • Otherwise, you can run all the experiments using the pre-defined hyperparameters file hparams.json and the MNIST dataset:
anomaly-box.py --run-all --hps-path path/to/config/hparams.json --dataset MNIST

For any other information, feel free to check the help:

anomaly-box.py --help

Contribution

This work is completely open source, and we would appreciate any contribution to the code. Any merge request to enhance, correct or expand the work is welcome.

Notes

The structures of the models inside the toolbox come from their respective papers, and we have tried to respect them as much as possible. However, due to implementation issues, we sometimes had to make minor changes. For this reason, in some cases, features such as the number of layers, the kernel sizes, or other such details may differ from the originals.

However, you don't have to worry: the heart and purpose of the architectures remain intact.

Installation

pip install anomaly-toolbox

Usage

Options:
  --experiment [AnoGANExperiment|DeScarGANExperiment|EGBADExperiment|GANomalyExperiment]
                                  Experiment to run.
  --hps-path PATH                 When running an experiment, the path of the
                                  JSON file where all the hyperparameters are
                                  located.  [required]
  --tuning BOOLEAN                If you want to use hyperparameters tuning,
                                  use 'True' here. Default is False.
  --dataset TEXT                  The dataset to use. Can be a ready to use
                                  dataset, or a .py file that implements the
                                  AnomalyDetectionDataset interface
                                  [required]
  --run-all BOOLEAN               Run all the available experiments
  --help                          Show this message and exit.
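
For example, to run the EGBAD experiment with hyperparameter tuning enabled (the hyperparameters file path below is a placeholder):

anomaly-box.py --experiment EGBADExperiment --hps-path path/to/config/hparams.json --tuning True --dataset MNIST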

Datasets and Custom Datasets

The provided datasets are:

  • MNIST
  • Corrupted MNIST
  • Surface Cracks
  • MVTec AD

They are automatically downloaded when the user makes the corresponding choice: ["MNIST", "CorruptedMNIST", "SurfaceCracks", "MVTecAD"].

The user can also add their own specific dataset. To do this, the new dataset should inherit from the AnomalyDetectionDataset abstract class and implement its own configure method. For a more detailed guide, the user can refer to the README.md file inside the src/anomaly_toolbox/datasets folder. Moreover, in the examples folder, the user can find a dummy.py module with the basic skeleton code needed to implement a dataset.
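
As a rough illustration, a custom dataset might look like the sketch below. This is a minimal, hypothetical example: the import path, the configure signature, and the attribute names are assumptions made for illustration, so check examples/dummy.py and the datasets README for the real interface.

# Hypothetical sketch of a custom dataset. The import path, the configure
# signature, and the attribute names are assumptions for illustration only;
# see examples/dummy.py and src/anomaly_toolbox/datasets/README.md.
import tensorflow as tf

from anomaly_toolbox.datasets.dataset import AnomalyDetectionDataset  # assumed import path


class MyCustomDataset(AnomalyDetectionDataset):
    """Dataset where one class is treated as anomalous."""

    def configure(self, batch_size=32, new_size=(32, 32), anomalous_label=1,
                  shuffle_buffer_size=10000, drop_remainder=True):
        # Placeholder loading step: replace with your own data source
        # (self._load_raw_data is a hypothetical helper, not part of the toolbox).
        images, labels = self._load_raw_data()

        full = tf.data.Dataset.from_tensor_slices((images, labels))

        def preprocess(image, label):
            # Resize and scale images to [0, 1].
            image = tf.image.resize(image, new_size)
            return tf.cast(image, tf.float32) / 255.0, label

        # Train only on normal samples; keep all samples for testing.
        train = full.filter(lambda _, label: label != anomalous_label).map(preprocess)
        test = full.map(preprocess)

        # Attribute names here are assumptions; the abstract class may expect others.
        self._train = train.shuffle(shuffle_buffer_size).batch(
            batch_size, drop_remainder=drop_remainder)
        self._test = test.batch(batch_size, drop_remainder=drop_remainder)

Once implemented, the module can be passed to the CLI through the --dataset option as a .py file path, as described in the Usage section above.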
