Official PyTorch implementation for "Mixed supervision for surface-defect detection: from weakly to fully supervised learning"

Overview

Mixed supervision for surface-defect detection: from weakly to fully supervised learning [Computers in Industry 2021]

This is the official PyTorch implementation of the paper, published in Computers in Industry, 2021.

The same code is also the official implementation of the method from "End-to-end training of a two-stage neural network for defect detection", presented at the International Conference on Pattern Recognition (ICPR) 2020.

Citation

Please cite our Computers in Industry 2021 paper when using this code:

@article{Bozic2021COMIND,
  author = {Bo{\v{z}}i{\v{c}}, Jakob and Tabernik, Domen and Sko{\v{c}}aj, Danijel},
  journal = {Computers in Industry},
  title = {{Mixed supervision for surface-defect detection: from weakly to fully supervised learning}},
  year = {2021}
}

How to run

Requirements

The code has been tested to work with:

  • Python 3.8
  • PyTorch 1.6, 1.8
  • CUDA 10.0, 10.1
  • additional packages as listed in requirements.txt (see the install command below)
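
The packages from requirements.txt can typically be installed with pip into an existing Python 3.8 environment (a standard command, not specific to this repository); PyTorch itself is best installed following the official instructions for your CUDA version:

pip install -r requirements.txt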

Datasets

You will need to download the datasets yourself. For DAGM and the Severstal Steel Defect Dataset you will also need a Kaggle account.

  • DAGM available here.
  • KolektorSDD available here.
  • KolektorSDD2 available here.
  • Severstal Steel Defect Dataset available here.

For details about the data structure, refer to the README.md in the datasets folder.

Cross-validation splits, train/test splits and weakly/fully labeled splits for all datasets are located in the splits directory of this repository, alongside instructions on how to use them.

Using on other data

Refer to the README.md in the datasets folder for instructions on how to use the method on other datasets.

Demo - fully supervised learning

To run fully supervised learning and evaluation on all four datasets, run:

./DEMO.sh
# or by specifying multiple GPU ids 
./DEMO.sh 0 1 2

Results will be written to the ./results folder.

Replicating paper results

To replicate the results published in the paper, run:

./EXPERIMENTS_COMIND.sh
# or by specifying multiple GPU ids 
./EXPERIMENTS_COMIND.sh 0 1 2

To replicate the results from the ICPR 2020 paper:

@misc{Bozic2020ICPR,
    title={End-to-end training of a two-stage neural network for defect detection},
    author={Jakob Božič and Domen Tabernik and Danijel Skočaj},
    year={2020},
    eprint={2007.07676},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

run:

./EXPERIMENTS_ICPR.sh
# or by specifying multiple GPU ids 
./EXPERIMENTS_ICPR.sh 0 1 2

Results will be written to the ./results-comind and ./results-icpr folders.

Usage of training/evaluation code

The following Python files are used to train and evaluate the model:

  • train_net.py: main entry point for training and evaluation
  • models.py: model definition of the network
  • data/dataset_catalog.py: contains the currently supported datasets

To train and evaluate a network you can also use EXPERIMENTS_ROOT.sh, which contains several helper functions that make training and evaluation easier; see that file for more details.
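
A minimal sketch of how those helpers might be reused in your own script is shown below; the helper name run_training_and_eval is a hypothetical placeholder, so check EXPERIMENTS_ICPR.sh and EXPERIMENTS_COMIND.sh for the actual function names and their arguments.

#!/usr/bin/env bash
# Hedged sketch: load the helper functions defined in EXPERIMENTS_ROOT.sh.
source EXPERIMENTS_ROOT.sh
# run_training_and_eval KSDD 0   # hypothetical helper call: dataset name and GPU id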

Running code

The simplest way to train and evaluate a network is to use EXPERIMENTS_ROOT.sh; examples of its use can be found in EXPERIMENTS_ICPR.sh and EXPERIMENTS_COMIND.sh.

Alternatively, you can run train_net.py directly and pass the parameters as command-line arguments. Below is an example of how to train a model on a single fold of the KSDD dataset.

python -u train_net.py  \
    --GPU=0 \
    --DATASET=KSDD \
    --RUN_NAME=RUN_NAME \
    --DATASET_PATH=/path/to/dataset \
    --RESULTS_PATH=/path/to/save/results \
    --SAVE_IMAGES=True \
    --DILATE=7 \
    --EPOCHS=50 \
    --LEARNING_RATE=1.0 \
    --DELTA_CLS_LOSS=0.01 \
    --BATCH_SIZE=1 \
    --WEIGHTED_SEG_LOSS=True \
    --WEIGHTED_SEG_LOSS_P=2 \
    --WEIGHTED_SEG_LOSS_MAX=1 \
    --DYN_BALANCED_LOSS=True \
    --GRADIENT_ADJUSTMENT=True \
    --FREQUENCY_SAMPLING=True \
    --TRAIN_NUM=33 \
    --NUM_SEGMENTED=33 \
    --FOLD=0

Some of the datasets do not require you to specify --TRAIN_NUM or --FOLD. After training, each model is also evaluated.

For KSDD you need to combine the evaluation results from all three folds; you can do this with join_folds_results.py:

python -u join_folds_results.py \
    --RUN_NAME=SAMPLE_RUN \
    --RESULTS_PATH=/path/to/save/results \
    --DATASET=KSDD 

You can use read_results.py to generate a table of results for all runs on the selected dataset.
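
A possible invocation is sketched below; it assumes read_results.py accepts the same --RESULTS_PATH and --DATASET arguments as join_folds_results.py, which is an assumption, so verify it against the script itself.

# Hedged sketch only: the argument names are assumed to mirror join_folds_results.py.
python -u read_results.py \
    --RESULTS_PATH=/path/to/save/results \
    --DATASET=KSDD
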
Note: the model is sensitive to random initialization and data shuffling during training, so different runs will yield somewhat different performance unless --REPRODUCIBLE_RUN is set.
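
As a minimal sketch (the --REPRODUCIBLE_RUN flag is named in the note above; the =True value is an assumption that mirrors the other boolean switches in the example, so verify it against train_net.py), the reproducibility switch can simply be appended to the training command:

# Hedged sketch: the argument list is taken from the KSDD example above;
# --REPRODUCIBLE_RUN=True uses an assumed boolean format.
COMMON_ARGS="--GPU=0 --DATASET=KSDD --RUN_NAME=RUN_NAME \
    --DATASET_PATH=/path/to/dataset --RESULTS_PATH=/path/to/save/results \
    --SAVE_IMAGES=True --DILATE=7 --EPOCHS=50 --LEARNING_RATE=1.0 \
    --DELTA_CLS_LOSS=0.01 --BATCH_SIZE=1 --WEIGHTED_SEG_LOSS=True \
    --WEIGHTED_SEG_LOSS_P=2 --WEIGHTED_SEG_LOSS_MAX=1 --DYN_BALANCED_LOSS=True \
    --GRADIENT_ADJUSTMENT=True --FREQUENCY_SAMPLING=True \
    --TRAIN_NUM=33 --NUM_SEGMENTED=33 --FOLD=0"

python -u train_net.py $COMMON_ARGS --REPRODUCIBLE_RUN=True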

Owner
ViCoS Lab