Official and maintained implementation of the paper "OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data" [BMVC 2021].

Overview

OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data

arXiv | License: MIT

Christoph Reich, Tim Prangemeier, Özdemir Cetin & Heinz Koeppl

| Project Page | Paper | Poster | Slides | Video |


This repository includes the official and maintained PyTorch implementation of the paper OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data.

Abstract

Convolutional neural networks (CNNs) are the current state-of-the-art meta-algorithm for volumetric segmentation of medical data, for example, to localize COVID-19 infected tissue on computer tomography scans or the detection of tumour volumes in magnetic resonance imaging. A key limitation of 3D CNNs on voxelised data is that the memory consumption grows cubically with the training data resolution. Occupancy networks (O-Nets) are an alternative for which the data is represented continuously in a function space and 3D shapes are learned as a continuous decision boundary. While O-Nets are significantly more memory efficient than 3D CNNs, they are limited to simple shapes, are relatively slow at inference, and have not yet been adapted for 3D semantic segmentation of medical data. Here, we propose Occupancy Networks for Semantic Segmentation (OSS-Nets) to accurately and memory-efficiently segment 3D medical data. We build upon the original O-Net with modifications for increased expressiveness leading to improved segmentation performance comparable to 3D CNNs, as well as modifications for faster inference. We leverage local observations to represent complex shapes and prior encoder predictions to expedite inference. We showcase OSS-Net's performance on 3D brain tumour and liver segmentation against a function space baseline (O-Net), a performance baseline (3D residual U-Net), and an efficiency baseline (2D residual U-Net). OSS-Net yields segmentation results similar to the performance baseline and superior to the function space and efficiency baselines. In terms of memory efficiency, OSS-Net consumes comparable amounts of memory as the function space baseline, somewhat more memory than the efficiency baseline and significantly less than the performance baseline. As such, OSS-Net enables memory-efficient and accurate 3D semantic segmentation that can scale to high resolutions.

If you find this research useful in your work, please cite our paper:

@inproceedings{Reich2021,
        title={{OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data}},
        author={Reich, Christoph and Prangemeier, Tim and Cetin, {\"O}zdemir and Koeppl, Heinz},
        booktitle={British Machine Vision Conference},
        year={2021},
        organization={British Machine Vision Association},
}

Dependencies

All required Python packages can be installed by:

pip install -r requirements.txt

To install the official implementation of the Padé Activation Unit [1] (taken from the official repository) run:

cd pade_activation_unit/cuda
python setup.py build install

The code is tested with PyTorch 1.8.1 and CUDA 11.1 on Linux with Python 3.8.5. Other PyTorch and CUDA versions newer than PyTorch 1.7.0 and CUDA 10.1 should also work.
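
To quickly check which PyTorch and CUDA versions are active in your environment, the following one-liner can be used (a simple sanity check, not part of the repository):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"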

Data

The BraTS 2020 dataset can be downloaded here and the LiTS dataset can be downloaded here. Please note that accounts are required to log in and download the data on both websites.

The used training and validation split of the BraTS 2020 dataset is available here.

For generating the border maps, which are necessary if border-based sampling is utilized, please use the generate_borders_bra_ts_2020.py and generate_borders_lits.py scripts.
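
A possible invocation could look like the following (a sketch only; the exact arguments, such as the dataset path, should be checked in the scripts' argument parsers before running):

python generate_borders_bra_ts_2020.py
python generate_borders_lits.py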

Trained Models

Table 1. Segmentation results of trained networks. Weights are generally available here and specific models are linked below.

| Model | Dice (↑) BraTS 2020 | IoU (↑) BraTS 2020 | Dice (↑) LiTS | IoU (↑) LiTS | | |
| --- | --- | --- | --- | --- | --- | --- |
| O-Net [2] | 0.7016 | 0.5615 | 0.6506 | 0.4842 | - | - |
| OSS-Net A | 0.8592 | 0.7644 | 0.7127 | 0.5579 | weights BraTS | weights LiTS |
| OSS-Net B | 0.8541 | 0.7572 | 0.7585 | 0.6154 | weights BraTS | weights LiTS |
| OSS-Net C | 0.8842 | 0.7991 | 0.7616 | 0.6201 | weights BraTS | weights LiTS |
| OSS-Net D | 0.8774 | 0.7876 | 0.7566 | 0.6150 | weights BraTS | weights LiTS |

Usage

Training

To reproduce the results presented in Table 1, we provide multiple sh scripts, which can be found in the scripts folder. Please change the dataset path and CUDA devices according to your system.

To perform training runs with different settings, use the command line arguments of the train_oss_net.py script. The script takes the following command line arguments (an example call is shown below the table):

| Argument | Default value | Info |
| --- | --- | --- |
| --train | False | Binary flag. If set, training is performed. |
| --test | False | Binary flag. If set, testing is performed. |
| --cuda_devices | "0, 1" | String of CUDA device indexes to be used. Indexes must be separated by a comma. |
| --cpu | False | Binary flag. If set, all operations are performed on the CPU. (not recommended) |
| --epochs | 50 | Number of epochs to perform while training. |
| --batch_size | 8 | Batch size to be utilized while training. |
| --training_samples | 2 ** 14 | Number of coordinates to be sampled during training. |
| --load_model | "" | Path to model to be loaded. |
| --segmentation_loss_factor | 0.1 | Auxiliary segmentation loss factor to be utilized. |
| --network_config | "" | Type of network configuration to be utilized. |
| --dataset | "BraTS" | Dataset to be utilized. ("BraTS" or "LITS") |
| --dataset_path | "BraTS2020" | Path to dataset. |
| --uniform_sampling | False | Binary flag. If set, locations are sampled uniformly during training. |
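
For example, a BraTS 2020 training run with the defaults from the table could be started as follows (a sketch using only the flags documented above and assuming the binary flags are plain switches; adapt the dataset path and CUDA devices to your system):

python train_oss_net.py --train --cuda_devices "0, 1" --epochs 50 --batch_size 8 --dataset "BraTS" --dataset_path "BraTS2020"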

Please note that the naming of the different OSS-Net variants in the code differs from the naming used in the paper and in Table 1.

Inference

To perform inference, use the inference_oss_net.py script. The script takes the following command line arguments (an example call is shown below the table):

| Argument | Default value | Info |
| --- | --- | --- |
| --cuda_devices | "0, 1" | String of CUDA device indexes to be used. Indexes must be separated by a comma. |
| --cpu | False | Binary flag. If set, all operations are performed on the CPU. (not recommended) |
| --load_model | "" | Path to model to be loaded. |
| --network_config | "" | Type of network configuration to be utilized. |
| --dataset | "BraTS" | Dataset to be utilized. ("BraTS" or "LITS") |
| --dataset_path | "BraTS2020" | Path to dataset. |
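
For example, inference with a downloaded checkpoint could be started as follows (a sketch using only the flags documented above; the checkpoint path is a placeholder and must point to actual weights, e.g. from Table 1, and --network_config may also need to be set to match the chosen model):

python inference_oss_net.py --cuda_devices "0" --load_model path/to/oss_net_weights.pt --dataset "BraTS" --dataset_path "BraTS2020"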

During inference the predicted occupancy voxel grid, the mesh prediction, and the label as a mesh are saved. The meshes are saved as PyTorch (.pt) files and also as .obj files. The occupancy grid is only saved as a PyTorch file.

Acknowledgements

We thank Marius Memmel and Nicolas Wagner for the insightful discussions, Alexander Christ and Tim Kircher for giving feedback on the first draft, and Markus Baier as well as Bastian Alt for aid with the computational setup.

This work was supported by the Landesoffensive für wissenschaftliche Exzellenz as part of the LOEWE Schwerpunkt CompuGene. H.K. acknowledges support from the European Research Council (ERC) with the consolidator grant CONSYN (nr. 773196). O.C. is supported by the Alexander von Humboldt Foundation Philipp Schwartz Initiative.

References

[1] @inproceedings{Molina2020Padé,
        title={{Pad\'{e} Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks}},
        author={Alejandro Molina and Patrick Schramowski and Kristian Kersting},
        booktitle={International Conference on Learning Representations},
        year={2020}
}
[2] @inproceedings{Mescheder2019,
        title={{Occupancy Networks: Learning 3D Reconstruction in Function Space}},
        author={Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
        booktitle={CVPR},
        pages={4460--4470},
        year={2019}
}