
OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data

arXiv · License: MIT

Christoph Reich, Tim Prangemeier, Özdemir Cetin & Heinz Koeppl

| Project Page | Paper | Poster | Slides | Video |


This repository includes the official and maintained PyTorch implementation of the paper OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data.

Abstract

Convolutional neural networks (CNNs) are the current state-of-the-art meta-algorithm for volumetric segmentation of medical data, for example, to localize COVID-19-infected tissue on computed tomography scans or to detect tumour volumes in magnetic resonance imaging. A key limitation of 3D CNNs on voxelised data is that the memory consumption grows cubically with the training data resolution. Occupancy networks (O-Nets) are an alternative for which the data is represented continuously in a function space and 3D shapes are learned as a continuous decision boundary. While O-Nets are significantly more memory efficient than 3D CNNs, they are limited to simple shapes, are relatively slow at inference, and have not yet been adapted for 3D semantic segmentation of medical data. Here, we propose Occupancy Networks for Semantic Segmentation (OSS-Nets) to accurately and memory-efficiently segment 3D medical data. We build upon the original O-Net with modifications for increased expressiveness leading to improved segmentation performance comparable to 3D CNNs, as well as modifications for faster inference. We leverage local observations to represent complex shapes and prior encoder predictions to expedite inference. We showcase OSS-Net's performance on 3D brain tumour and liver segmentation against a function space baseline (O-Net), a performance baseline (3D residual U-Net), and an efficiency baseline (2D residual U-Net). OSS-Net yields segmentation results similar to the performance baseline and superior to the function space and efficiency baselines. In terms of memory efficiency, OSS-Net consumes comparable amounts of memory as the function space baseline, somewhat more memory than the efficiency baseline, and significantly less than the performance baseline. As such, OSS-Net enables memory-efficient and accurate 3D semantic segmentation that can scale to high resolutions.

If you find this research useful in your work, please cite our paper:

@inproceedings{Reich2021,
        title={{OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data}},
        author={Reich, Christoph and Prangemeier, Tim and Cetin, {\"O}zdemir and Koeppl, Heinz},
        booktitle={British Machine Vision Conference},
        year={2021},
        organization={British Machine Vision Association},
}

Dependencies

All required Python packages can be installed by:

pip install -r requirements.txt

To install the official implementation of the Padé Activation Unit [1] (taken from the official repository) run:

cd pade_activation_unit/cuda
python setup.py build install

The code was tested with PyTorch 1.8.1 and CUDA 11.1 on Linux with Python 3.8.5. Other PyTorch (≥ 1.7.0) and CUDA (≥ 10.1) versions should also work.
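To quickly verify that your environment meets these requirements before building the extension, you can run the following check (a minimal sketch, not part of the repository):

```python
import torch

# Report the installed PyTorch version, the CUDA version PyTorch was built
# against, and whether a CUDA-capable GPU is visible.
print(torch.__version__)          # should be >= 1.7.0, e.g. 1.8.1
print(torch.version.cuda)         # should be >= 10.1, e.g. 11.1
print(torch.cuda.is_available())  # True if a usable GPU is detected
```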

Data

The BraTS 2020 dataset can be downloaded here and the LiTS dataset can be downloaded here. Please note that accounts are required to log in and download the data on both websites.

The used training and validation split of the BraTS 2020 dataset is available here.

To generate the border maps, which are required if border-based sampling is utilized, please use the generate_borders_bra_ts_2020.py and generate_borders_lits.py scripts.
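For example (a sketch; the scripts may expose further arguments such as the dataset path, so check their argument parsers and adjust accordingly):

python generate_borders_bra_ts_2020.py
python generate_borders_lits.py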

Trained Models

Table 1. Segmentation results of trained networks. Weights are generally available here and specific models are linked below.

| Model | Dice (↑) BraTS 2020 | IoU (↑) BraTS 2020 | Dice (↑) LiTS | IoU (↑) LiTS | Weights BraTS | Weights LiTS |
| --- | --- | --- | --- | --- | --- | --- |
| O-Net [2] | 0.7016 | 0.5615 | 0.6506 | 0.4842 | - | - |
| OSS-Net A | 0.8592 | 0.7644 | 0.7127 | 0.5579 | weights BraTS | weights LiTS |
| OSS-Net B | 0.8541 | 0.7572 | 0.7585 | 0.6154 | weights BraTS | weights LiTS |
| OSS-Net C | 0.8842 | 0.7991 | 0.7616 | 0.6201 | weights BraTS | weights LiTS |
| OSS-Net D | 0.8774 | 0.7876 | 0.7566 | 0.6150 | weights BraTS | weights LiTS |
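Assuming the linked weights are standard PyTorch checkpoint files, they can be inspected before use (a minimal sketch; the file name is a placeholder and the exact checkpoint layout is an assumption):

```python
import torch

# Load a downloaded checkpoint on the CPU; the file name is a placeholder.
checkpoint = torch.load("oss_net_c_brats.pt", map_location="cpu")

# A checkpoint is typically a (possibly nested) dict of parameter tensors.
if isinstance(checkpoint, dict):
    for key in checkpoint.keys():
        print(key)
```

To actually run a downloaded model, pass the file to train_oss_net.py or inference_oss_net.py via the --load_model argument described below.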

Usage

Training

To reproduce the results presented in Table 1, we provide multiple shell scripts, which can be found in the scripts folder. Please change the dataset path and the CUDA devices according to your system.

To perform training runs with different settings, use the command line arguments of train_oss_net.py. The script takes the following command line arguments (an example invocation follows the table):

| Argument | Default value | Info |
| --- | --- | --- |
| --train | False | Binary flag. If set, training is performed. |
| --test | False | Binary flag. If set, testing is performed. |
| --cuda_devices | "0, 1" | String of CUDA device indexes to be used. Indexes must be separated by a comma. |
| --cpu | False | Binary flag. If set, all operations are performed on the CPU. (not recommended) |
| --epochs | 50 | Number of epochs to perform while training. |
| --batch_size | 8 | Batch size to be utilized while training. |
| --training_samples | 2 ** 14 | Number of coordinates to be sampled during training. |
| --load_model | "" | Path to model to be loaded. |
| --segmentation_loss_factor | 0.1 | Auxiliary segmentation loss factor to be utilized. |
| --network_config | "" | Type of network configuration to be utilized. |
| --dataset | "BraTS" | Dataset to be utilized. ("BraTS" or "LITS") |
| --dataset_path | "BraTS2020" | Path to dataset. |
| --uniform_sampling | False | Binary flag. If set, locations are sampled uniformly during training. |
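For example, the following call trains a model on BraTS 2020 with the default hyperparameters (a sketch based on the arguments above; adjust the CUDA device indexes and the dataset path to your system, and set --network_config to the desired OSS-Net variant):

python train_oss_net.py --train --cuda_devices "0, 1" --dataset BraTS --dataset_path BraTS2020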

Please note that the naming of the OSS-Net variants in the code differs from the naming used in the paper and in Table 1.

Inference

To perform inference, use the inference_oss_net.py script. The script takes the following command line arguments (an example invocation follows the table):

| Argument | Default value | Info |
| --- | --- | --- |
| --cuda_devices | "0, 1" | String of CUDA device indexes to be used. Indexes must be separated by a comma. |
| --cpu | False | Binary flag. If set, all operations are performed on the CPU. (not recommended) |
| --load_model | "" | Path to model to be loaded. |
| --network_config | "" | Type of network configuration to be utilized. |
| --dataset | "BraTS" | Dataset to be utilized. ("BraTS" or "LITS") |
| --dataset_path | "BraTS2020" | Path to dataset. |
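For example (a sketch based on the arguments above; path/to/checkpoint.pt is a placeholder for a downloaded or self-trained checkpoint, and --network_config must match the loaded model):

python inference_oss_net.py --cuda_devices "0" --load_model path/to/checkpoint.pt --dataset BraTS --dataset_path BraTS2020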

During inference, the predicted occupancy voxel grid, the mesh prediction, and the label (as a mesh) are saved. The meshes are saved both as PyTorch (.pt) files and as .obj files; the occupancy grid is only saved as a PyTorch file.
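The saved PyTorch files can be loaded directly for further analysis (a minimal sketch; the file names are placeholders, as the actual output naming is determined by the script):

```python
import torch

# Load the saved predictions. The file names below are placeholders; check
# the script's output directory for the actual names.
occupancy_grid = torch.load("prediction_occupancy_grid.pt", map_location="cpu")
mesh = torch.load("prediction_mesh.pt", map_location="cpu")

# Assuming the occupancy grid is stored as a tensor, inspect its resolution.
print(occupancy_grid.shape)
```

The .obj mesh files can be opened with standard 3D tools such as MeshLab or Blender.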

Acknowledgements

We thank Marius Memmel and Nicolas Wagner for the insightful discussions, Alexander Christ and Tim Kircher for giving feedback on the first draft, and Markus Baier as well as Bastian Alt for aid with the computational setup.

This work was supported by the Landesoffensive für wissenschaftliche Exzellenz as part of the LOEWE Schwerpunkt CompuGene. H.K. acknowledges support from the European Research Council (ERC) with the consolidator grant CONSYN (nr. 773196). O.C. is supported by the Alexander von Humboldt Foundation Philipp Schwartz Initiative.

References

[1] @inproceedings{Molina2020Padé,
        title={{Pad\'{e} Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks}},
        author={Alejandro Molina and Patrick Schramowski and Kristian Kersting},
        booktitle={International Conference on Learning Representations},
        year={2020}
}
[2] @inproceedings{Mescheder2019,
        title={{Occupancy Networks: Learning 3D Reconstruction in Function Space}},
        author={Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
        booktitle={CVPR},
        pages={4460--4470},
        year={2019}
}