pytorch-3dunet - 3D U-Net model for volumetric semantic segmentation written in PyTorch

Overview



PyTorch implementation of the 3D U-Net and its variants.

The code allows for training the U-Net for both semantic segmentation (binary and multi-class) and regression problems (e.g. de-noising, learning deconvolutions).

2D U-Net

Training the standard 2D U-Net is also possible; see 2DUnet_dsb2018 for an example configuration. Just make sure to keep the singleton z-dimension in your H5 dataset (i.e. (1, Y, X) instead of (Y, X)), because data loading / data augmentation always requires tensors of rank 3 (see the sketch below).
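If your arrays are currently stored as 2D, a minimal sketch of adding the singleton z-dimension with h5py/numpy (the file and dataset names here are assumptions; adjust them to your own data):

import h5py
import numpy as np

with h5py.File('dsb2018_sample.h5', 'r+') as f:  # hypothetical file name
    raw = f['raw'][...]  # 2D array of shape (Y, X)
    if raw.ndim == 2:
        del f['raw']
        # prepend a singleton z-dimension so the shape becomes (1, Y, X)
        f.create_dataset('raw', data=np.expand_dims(raw, axis=0))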

Prerequisites

  • Linux
  • NVIDIA GPU
  • CUDA + cuDNN

Running on Windows

The package has not been tested on Windows, however some users have reported running it on Windows. One thing to keep in mind: when training with CrossEntropyLoss, the label type in the config file should be changed from long to int64, otherwise you will get the following error: RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #2 'target'.
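For illustration, a hedged sketch of the relevant label transform entry in the training config (this assumes the ToTensor transform accepts a dtype parameter; the exact keys may differ between releases):

transformer:
  label:
    - name: ToTensor
      dtype: int64  # use int64 instead of long on Windows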

Supported Loss Functions

Semantic Segmentation

  • BCEWithLogitsLoss (binary cross-entropy)
  • DiceLoss (standard DiceLoss defined as 1 - DiceCoefficient used for binary semantic segmentation; when more than 2 classes are present in the ground truth, it computes the DiceLoss per channel and averages the values).
  • BCEDiceLoss (Linear combination of BCE and Dice losses, i.e. alpha * BCE + beta * Dice; alpha and beta can be specified in the loss section of the config, see the example after this list)
  • CrossEntropyLoss (one can specify class weights via weight: [w_1, ..., w_k] in the loss section of the config)
  • PixelWiseCrossEntropyLoss (one can specify not only class weights but also per pixel weights in order to give more gradient to important (or under-represented) regions in the ground truth)
  • WeightedCrossEntropyLoss (see 'Weighted cross-entropy (WCE)' in the below paper for a detailed explanation; one can specify class weights via weight: [w_1, ..., w_k] in the loss section of the config)
  • GeneralizedDiceLoss (see 'Generalized Dice Loss (GDL)' in the below paper for a detailed explanation; one can specify class weights via weight: [w_1, ..., w_k] in the loss section of the config). Note: use this loss function only if the labels in the training dataset are very imbalanced e.g. one class having at least 3 orders of magnitude more voxels than the others. Otherwise use standard DiceLoss.
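As an illustration, a hedged sketch of the loss section of a training config (the parameter names follow the bullet points above; treat the exact nesting as an assumption and check the example configs):

loss:
  name: BCEDiceLoss
  alpha: 1.0
  beta: 1.0

or, for a cross-entropy variant with class weights:

loss:
  name: CrossEntropyLoss
  weight: [0.2, 0.8]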

For a detailed explanation of some of the supported loss functions see: Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations by Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, M. Jorge Cardoso

IMPORTANT: if one wants to use their own loss function, bear in mind that the current model implementation always outputs logits and it is up to the loss implementation to normalize them correctly, e.g. by applying Sigmoid or Softmax.
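For instance, a minimal sketch of a custom loss that performs its own normalization (a generic illustration, not part of the package API):

import torch
import torch.nn as nn

class SigmoidDiceLoss(nn.Module):
    # Example custom loss: applies Sigmoid to the raw logits before
    # computing a Dice-style overlap term.
    def __init__(self, epsilon=1e-6):
        super().__init__()
        self.epsilon = epsilon

    def forward(self, logits, target):
        probs = torch.sigmoid(logits)  # the model outputs logits, so normalize here
        intersection = (probs * target).sum()
        denominator = probs.sum() + target.sum()
        return 1 - (2 * intersection + self.epsilon) / (denominator + self.epsilon)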

Regression

  • MSELoss
  • L1Loss
  • SmoothL1Loss
  • WeightedSmoothL1Loss - extension of the SmoothL1Loss which allows weighting voxel values above (or below) a given threshold differently (see the sketch below)
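A hedged sketch of a regression loss entry using it (the parameter names here are assumptions; check the loss implementation for the exact signature):

loss:
  name: WeightedSmoothL1Loss
  threshold: 0.5       # voxel values above this threshold get a different weight
  initial_weight: 0.1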

Supported Evaluation Metrics

Semantic Segmentation

  • MeanIoU - Mean intersection over union
  • DiceCoefficient - Dice Coefficient (computes the per-channel Dice Coefficient and returns the average)

If a 3D U-Net was trained to predict cell boundaries, one can use the following semantic instance segmentation metrics (the metrics below are computed by running connected components on the thresholded boundary map and comparing the resulting instances to the ground truth instance segmentation):
  • BoundaryAveragePrecision - Average Precision applied to the boundary probability maps: thresholds the boundary maps given by the network, runs connected components to get the segmentation and computes AP between the resulting segmentation and the ground truth
  • AdaptedRandError - Adapted Rand Error (see http://brainiac2.mit.edu/SNEMI3D/evaluation for a detailed explanation)
  • AveragePrecision - see https://www.kaggle.com/stkbailey/step-by-step-explanation-of-scoring-metric

If no evaluation metric is specified, MeanIoU will be used by default.
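A hedged sketch of the corresponding config entry (the key name is an assumption based on the example configs):

eval_metric:
  name: DiceCoefficient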

Regression

  • PSNR - peak signal to noise ratio

Installation

  • The easiest way to install the pytorch-3dunet package is via conda:
conda create -n 3dunet -c conda-forge -c awolny pytorch-3dunet
conda activate 3dunet

After installation the following commands are accessible within the conda environment: train3dunet for training the network and predict3dunet for prediction (see below).

  • One can also install directly from source:
python setup.py install

Installation tips

Make sure that the installed pytorch is compatible with your CUDA version, otherwise training/prediction will fail to run on the GPU. You can re-install a pytorch build compatible with your CUDA version in the 3dunet env by:

conda install -c pytorch torchvision cudatoolkit=<YOUR_CUDA_VERSION> pytorch

Train

Given that the pytorch-3dunet package was installed via conda as described above, one can train the network by simply invoking:

train3dunet --config <CONFIG>

where CONFIG is the path to a YAML configuration file, which specifies all aspects of the training procedure.

In order to train on your own data just provide the paths to your HDF5 training and validation datasets in the config.

The HDF5 files should contain the raw/label datasets in the following axis order: DHW (in case of 3D) and CDHW (in case of 4D).
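A minimal sketch of writing such a file with h5py (the raw/label dataset names match the example configs; the shapes are purely illustrative):

import h5py
import numpy as np

raw = np.random.rand(64, 128, 128).astype('float32')  # DHW
label = np.zeros((64, 128, 128), dtype='int64')       # DHW, one class id per voxel

with h5py.File('train_sample.h5', 'w') as f:  # hypothetical file name
    f.create_dataset('raw', data=raw)
    f.create_dataset('label', data=label)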

One can monitor the training progress with Tensorboard (you need tensorflow installed in your conda env):

tensorboard --logdir <checkpoint_dir>/logs/

where checkpoint_dir is the path to the checkpoint directory specified in the config.

Training tips

  1. When training with binary-based losses, i.e. BCEWithLogitsLoss, DiceLoss, BCEDiceLoss, GeneralizedDiceLoss, the target data has to be 4D (one target binary mask per channel). If you have 3D binary data (foreground/background), you can just change the ToTensor transform for the label to contain expand_dims: true, see e.g. train_config_dice.yaml. When training with WeightedCrossEntropyLoss, CrossEntropyLoss or PixelWiseCrossEntropyLoss the target dataset has to be 3D; see also the pytorch documentation for the CE loss: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
  2. final_sigmoid in the model config section applies only at inference time: when training with cross entropy based losses (WeightedCrossEntropyLoss, CrossEntropyLoss, PixelWiseCrossEntropyLoss) set final_sigmoid=False so that Softmax normalization is applied to the output; when training with BCEWithLogitsLoss, DiceLoss, BCEDiceLoss or GeneralizedDiceLoss set final_sigmoid=True (see the sketch after this list).
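A hedged sketch combining both tips for a Dice-based setup (the nesting is simplified here; see train_config_dice.yaml for the full layout):

model:
  final_sigmoid: true  # Sigmoid applied at inference time for BCE/Dice-based losses
loss:
  name: DiceLoss
transformer:
  label:
    - name: ToTensor
      expand_dims: true  # turn 3D binary labels into 4D (singleton channel dim)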

Prediction

Given that the pytorch-3dunet package was installed via conda as described above, one can run the prediction via:

predict3dunet --config <CONFIG>

In order to predict on your own data, just provide the path to your model as well as paths to HDF5 test files (see test_config_dice.yaml).

Prediction tips

In order to avoid checkerboard artifacts in the output prediction masks the patch predictions are averaged, so make sure that the patch/stride params lead to overlapping blocks, e.g. patch: [64, 128, 128] and stride: [32, 96, 96] will give you a 'halo' of 32 voxels in each direction (the overlap is patch - stride; see the sketch below).
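A hedged sketch of the matching slice_builder section of a test config (the key names are assumptions based on the example configs):

slice_builder:
  name: SliceBuilder
  patch_shape: [64, 128, 128]
  stride_shape: [32, 96, 96]  # patch - stride = 32-voxel overlap per axis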

Data Parallelism

By default, if multiple GPUs are available, training/prediction will be run on all of them using DataParallel. If training/prediction on all available GPUs is not desirable, restrict the number of visible GPUs using CUDA_VISIBLE_DEVICES, e.g.

CUDA_VISIBLE_DEVICES=0,1 train3dunet --config <CONFIG>

or

CUDA_VISIBLE_DEVICES=0,1 predict3dunet --config <CONFIG>

Examples

Cell boundary predictions for lightsheet images of Arabidopsis thaliana lateral root

The data can be downloaded from the following OSF project:

Training and inference configs can be found in 3DUnet_lightsheet_boundary.

Sample z-slice predictions on the test set (top: raw input, bottom: boundary predictions):

Cell boundary predictions for confocal images of Arabidopsis thaliana ovules

The data can be downloaded from the following OSF project:

Training and inference configs can be found in 3DUnet_confocal_boundary.

Sample z-slice predictions on the test set (top: raw input, bottom: boundary predictions):

Nuclei predictions for lightsheet images of Arabidopsis thaliana lateral root

The training and validation sets can be downloaded from the following OSF project: https://osf.io/thxzn/

Training and inference configs can be found in 3DUnet_lightsheet_nuclei.

Sample z-slice predictions on the test set (top: raw input, bottom: nuclei predictions):

2D nuclei predictions for Kaggle DSB2018

The data can be downloaded from: https://www.kaggle.com/c/data-science-bowl-2018/data

Training and inference configs can be found in 2DUnet_dsb2018.

Sample predictions on the test image (top: raw input, bottom: nuclei predictions):

Contribute

If you want to contribute back, please make a pull request.

Cite

If you use this code for your research, please cite as:

@article{10.7554/eLife.57613,
  article_type = {journal},
  title = {Accurate and versatile 3D segmentation of plant tissues at cellular resolution},
  author = {Wolny, Adrian and Cerrone, Lorenzo and Vijayan, Athul and Tofanelli, Rachele and Barro, Amaya Vilches and Louveaux, Marion and Wenzl, Christian and Strauss, Sören and Wilson-Sánchez, David and Lymbouridou, Rena and Steigleder, Susanne S and Pape, Constantin and Bailoni, Alberto and Duran-Nebreda, Salva and Bassel, George W and Lohmann, Jan U and Tsiantis, Miltos and Hamprecht, Fred A and Schneitz, Kay and Maizel, Alexis and Kreshuk, Anna},
  editor = {Hardtke, Christian S and Bergmann, Dominique C and Bergmann, Dominique C and Graeff, Moritz},
  volume = {9},
  year = {2020},
  month = {jul},
  pub_date = {2020-07-29},
  pages = {e57613},
  citation = {eLife 2020;9:e57613},
  doi = {10.7554/eLife.57613},
  url = {https://doi.org/10.7554/eLife.57613},
  abstract = {Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In the last years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, acquisition settings even on non plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.},
  keywords = {instance segmentation, cell segmentation, deep learning, image analysis},
  journal = {eLife},
  issn = {2050-084X},
  publisher = {eLife Sciences Publications, Ltd},
}
Comments
  • fix weights unsqueeze in PixelWiseCrossEntropy

    First off, thanks for the great library, @wolny ! It has really accelerated my work being able to start with a nice implementation of 3D unets.

    I think there might be a small bug in the PixelWiseCrossEntropy loss. It seems that the weights get passed in as an NxDxHxW tensor, and in the "expand weights" code block they should be expanded to an NxCxDxHxW tensor to match the target (which has been converted to a one-hot encoding). Thus, I think the unsqueeze should be applied to axis 1, not axis 0. In this case the weights would become Nx1xDxHxW, then NxCxDxHxW in the subsequent weights.expand_as(input).
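    A minimal sketch of the proposed change (illustrative, following the shapes described above):

    # weights arrive as (N, D, H, W); input is (N, C, D, H, W)
    weights = weights.unsqueeze(1)      # -> (N, 1, D, H, W), not unsqueeze(0)
    weights = weights.expand_as(input)  # -> (N, C, D, H, W)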

    Without this change, I get the following error when I train with batch size > 1.

    2021-03-12 17:05:50,156 [MainThread] INFO UNet3DTrainer - Training iteration [1/100000]. Epoch [0/99]
    Traceback (most recent call last):
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/train.py", line 33, in <module>
        main()
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/train.py", line 29, in main
        trainer.fit()
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/unet3d/trainer.py", line 246, in fit
        should_terminate = self.train()
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/unet3d/trainer.py", line 273, in train
        output, loss = self._forward_pass(input, target, weight)
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/unet3d/trainer.py", line 408, in _forward_pass
        loss = self.loss_criterion(output, target, weight)
      File "/cluster/apps/nss/gcc-6.3.0/python_gpu/3.8.5/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/cluster/home/kyamauch/.local/lib/python3.8/site-packages/pytorch3dunet/unet3d/losses.py", line 220, in forward
        weights = weights.expand_as(input)
    RuntimeError: The expanded size of the tensor (3) must match the existing size (12) at non-singleton dimension 1.  Target sizes: [12, 3, 70, 70, 70].  Tensor sizes: [1, 12, 70, 70, 70]
    

    Does this change seem right?

    opened by kevinyamauchi 2
  • Update environment.yaml

    The pytorch channel should have higher priority than conda-forge, otherwise the pytorch installation from conda-forge will be used (and this causes issues with GPU installations).

    opened by constantinpape 1
  • Create command-lines (i.e. console_scripts) when installing from source

    Hi,

    I know that the command lines are installed into the conda environment.

    This code adds commands when installing from source (i.e. python setup.py install). I needed to do this as I ultimately want to call pytorch-3dunet within mpi2/LAMA and don't want to use the conda env due to install issues etc.

    Feel free to merge if it doesn't cause conflicts.

    Kind Regards, Kyle Drover

    opened by dorkylever 1
  • Read data path config as a directory

    There may be many HDF5 data files, and it is common to put all of them in a single directory. Specifying every path in the config file is somewhat inconvenient and makes the config unreadable.

    opened by songxiaocheng 1
  • Add Squeeze and Excitation and UNETR as an option

    Squeeze and Excitation UNet and UNETR can be selected as options to train in config.yml.

    Example (UNETR):

    # use a fixed random seed to guarantee that when you run the code twice you will get the same outcome
    manual_seed: 0
    model:
      name: UNETR
      # number of input channels to the model
      in_channels: 1
      ...
    

    Example (SE UNet):

    # use a fixed random seed to guarantee that when you run the code twice you will get the same outcome
    manual_seed: 0
    model:
      name: ResidualUNetSE3D
      # number of input channels to the model
      in_channels: 1
      ...
    

    Credits for UNETR code.

    opened by imadtoubal 0