Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements

This repository contains our implementation for the MICCAI 2021 FLARE Challenge, described in our paper Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements.

The scripts require the MedicalDataAugmentationTool framework by Christian Payer to be downloaded and added to your PYTHONPATH.

If you have questions about the code, please send me an email.

Dependencies

The following frameworks/libraries were used in the versions stated below. If you run into problems with the libraries, please verify that you have the same versions installed.

  • Python 3.9
  • TensorFlow 2.6
  • SimpleITK 2.0
  • Numpy 1.20
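
For convenience, these constraints can be captured in a requirements file. The repository may not ship one, so the following pinning is only a suggestion matching the versions above:

tensorflow==2.6.*
SimpleITK==2.0.*
numpy==1.20.*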

Dataset and Preprocessing

The dataset as well as a detailed description of it can be found on the challenge website. Follow the steps described there to download the data.

In the script preprocessing/preprocessing.py, define the base_dataset_folder containing the downloaded TrainingImg, TrainingMask and ValidationImg, then execute the script to generate TrainingImg_small and TrainingMask_small.
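
A minimal sketch of the kind of downsampling such a preprocessing step performs, using SimpleITK from the dependency list; the factor and interpolator below are placeholders, and the actual parameters in preprocessing/preprocessing.py may differ:

import SimpleITK as sitk

def downsample(image, factor=4, interpolator=sitk.sitkLinear):
    # shrink the voxel grid while enlarging the spacing accordingly,
    # so the image still covers the same physical extent
    new_size = [max(1, s // factor) for s in image.GetSize()]
    new_spacing = [sp * s / ns for sp, s, ns in
                   zip(image.GetSpacing(), image.GetSize(), new_size)]
    return sitk.Resample(image, new_size, sitk.Transform(), interpolator,
                         image.GetOrigin(), new_spacing, image.GetDirection(),
                         0, image.GetPixelID())

image = sitk.ReadImage('TrainingImg/train_000_0000.nii.gz')
sitk.WriteImage(downsample(image), 'TrainingImg_small/train_000_0000.nii.gz')
# masks would use interpolator=sitk.sitkNearestNeighbor to keep labels intact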

Also, download the setup folder provided in this repository and place it in the base_dataset_folder. The following structure is expected:

.                                       # The `base_dataset_folder` of the dataset
├── TrainingImg                         # Image folder containing all training images
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz            
├── TrainingMask                        # Image folder containing all training masks
│   ├── train_000.nii.gz            
│   ├── ...                   
│   └── train_360.nii.gz  
├── ValidationImg                       # Image folder containing all validation images
│   ├── validation_000_0000.nii.gz            
│   ├── ...                   
│   └── validation_360_0000.nii.gz  
├── TrainingImg_small                   # Image folder containing all downsampled training images generated by `preprocessing/preprocessing.py`
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz  
├── TrainingMask_small                  # Image folder containing all downsampled training masks generated by `preprocessing/preprocessing.py`
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz  
└── setup                               # Setup folder as provided in this repository

Train Models

To train a localization model, run localization/main.py after defining the base_dataset_folder as well as the base_output_folder.

To train a segmentation model, run scn/main.py. Again, base_dataset_folder and base_output_folder need to be set accordingly beforehand.

In both cases, the variable cv in the function run() can be set to 0, 1, 2, 3 or 4. The values 1-4 select the respective cross-validation fold; when choosing 0, all training data is used to train the model, which also deactivates the generation of test outputs.

Further parameters like the number of training iterations (max_iter) and the number of iterations after which to perform testing (test_iter) can be modified in __init__() of the MainLoop class, as sketched below.
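
A hedged sketch of where these settings live; the attribute names follow the text above, but the surrounding code and the concrete values are assumed:

# hypothetical excerpt in the spirit of localization/main.py and scn/main.py
class MainLoop:
    def __init__(self, cv):
        self.cv = cv               # 1-4: cross-validation fold, 0: train on all data
        self.max_iter = 100000     # total number of training iterations (placeholder)
        self.test_iter = 10000     # run testing every test_iter iterations (placeholder)

def run(cv):
    loop = MainLoop(cv)
    # in the actual scripts, training would start here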

Generate a SavedModel

To convert a trained network to a SavedModel, use the script localization/main_create_model.py or scn/main_create_model.py, respectively, after a model has been trained.

Before running the respective script, the variable load_model_base needs to be set to the trained model's output folder, e.g., .../localization/cv1/2021-09-27_13-18-59.

Furthermore, load_model_iter should be set to the same value as the max_iter used during training; in any case, it must correspond to an iteration for which network weights have been saved.
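
Illustratively (the path placeholder follows the example above; the iteration value is a placeholder):

# in localization/main_create_model.py or scn/main_create_model.py
load_model_base = '/PATH/TO/OUTPUT/localization/cv1/2021-09-27_13-18-59'
load_model_iter = 100000  # an iteration for which weights were saved, e.g. max_iter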

Generate tf_utils_module

The script inference/inference_tf_utils_module.py traces the tf.functions used for preprocessing during inference, saves them into a SavedModel, and thereby generates saved_models/tf_utils_module.

To do so, the input_path and output_path need to be defined in the script. The input_path is expected to contain valid images; we suggest using the folder ValidationImg.
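
A minimal sketch of how tf.functions can be traced and exported as a SavedModel; the actual preprocessing in inference/inference_tf_utils_module.py differs, and the intensity window below is purely illustrative:

import tensorflow as tf

class TfUtilsModule(tf.Module):
    # providing an input_signature lets the tf.function be traced once and saved
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, None, None], dtype=tf.float32)])
    def preprocess(self, image):
        # illustrative intensity windowing and rescaling to [-1, 1]
        image = tf.clip_by_value(image, -1024.0, 1024.0)
        return image / 1024.0

tf.saved_model.save(TfUtilsModule(), 'saved_models/tf_utils_module')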

Inference

The provided inference script can be used to evaluate the performance of our method on unseen data efficiently.

The script inference/inference.py requires that all SavedModels are present in the saved_models folder, i.e., saved_models/localization, saved_models/segmentation and saved_models/tf_utils_module need to contain the respective SavedModel. Either use the provided SavedModels for inference by copying them from submitted_saved_models to saved_models, or use your own models generated as described above.

Additionally, the input_path and output_path need to be defined in the script. The input_path is expected to contain valid images; we suggest using the folder ValidationImg.
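
A minimal sketch of how the exported models can be loaded, assuming default serving signatures; see inference/inference.py for the actual pipeline:

import tensorflow as tf

# all three SavedModels must be present in saved_models/
tf_utils = tf.saved_model.load('saved_models/tf_utils_module')
localization = tf.saved_model.load('saved_models/localization')
segmentation = tf.saved_model.load('saved_models/segmentation')

# conceptual flow only; the exact call signatures depend on how the
# SavedModels were generated:
#   image = tf_utils.preprocess(image)     # preprocessing tf.functions
#   position = localization(image)         # coarse localization
#   labels = segmentation(cropped_image)   # SCN segmentation of the cropped region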

.                                       # The base folder of this repository.
├── saved_models                        # Required by `inference.py`.
│   ├── localization                    # SavedModel of the localization model.
│   │   ├── assets
│   │   ├── variables
│   │   └── saved_model.pb
│   ├── segmentation                    # SavedModel of the segmentation (scn) model.
│   │   ├── assets
│   │   ├── variables
│   │   └── saved_model.pb
│   └── tf_utils_module                 # SavedModel of the tf.functions used for preprocessing during inference.
│       ├── assets
│       ├── variables
│       └── saved_model.pb
...

Docker

The provided Dockerfile can be used to generate a Docker image that can readily be used for inference. The SavedModels are expected in the folder saved_models; either copy the provided SavedModels from submitted_saved_models to saved_models or generate your own. If you have problems setting up Docker, please refer to the Docker documentation.

To build the Docker image, run the following command in the folder containing the Dockerfile.

docker build -t icg .

To run the built Docker image, use the command below after defining the input and output directories within the command. We recommend using ValidationImg as the input folder.

If you have multiple GPUs and want to select a specific one to run the Docker image, modify /dev/nvidia0 to the respective GPU's identifier, e.g., /dev/nvidia1.

docker container run --gpus all --device /dev/nvidia0 --device /dev/nvidia-uvm --device /dev/nvidia-uvm-tools --device /dev/nvidiactl --name icg --rm -v /PATH/TO/DATASET/ValidationImg/:/workspace/inputs/ -v /PATH/TO/OUTPUT/FOLDER/:/workspace/outputs/ icg:latest /bin/bash -c "sh predict.sh" 

Citation

If you use this code for your research, please cite our paper.

Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements

@article{Thaler2021Efficient,
  title={Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements},
  author={Thaler, Franz and Payer, Christian and Bischof, Horst and {\v{S}}tern, Darko},
  year={2021}
}