Overview

Detection-aided liver lesion segmentation

Here we present the TensorFlow implementation of our work on liver lesion segmentation, accepted at the Machine Learning 4 Health Workshop of NIPS 2017. Check our project page for more information.

In order to develop this code, we started from OSVOS and adapted it to the liver lesion segmentation task.

Architecture of the network

In this work we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans using Convolutional Neural Networks (CNNs), which have shown good results in a variety of computer vision tasks, including medical imaging. The network that segments the lesions has a cascaded architecture: it first focuses on the liver region and then segments the lesions within it. Moreover, we train a detector to localize the lesions and mask the output of the segmentation network with the positive detections. The segmentation architecture is based on DRIU (Maninis et al., 2016), a Fully Convolutional Network (FCN) with side outputs that work on feature maps of different resolutions, in order to benefit from the multi-scale information learned by different stages of the network. The main contribution of this work is the use of a detector to localize the lesions, which we show to be beneficial for removing false positives triggered by the segmentation network.
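
For intuition, the side-output idea can be sketched in a few lines of TensorFlow 1.x. This is only an illustrative sketch of multi-scale side-output fusion, not the exact DRIU-based architecture used in this repository; the function name and layer sizes are assumptions:

import tensorflow as tf

def side_output_fusion(feature_maps, image_size):
    """Illustrative sketch: fuse multi-scale feature maps into one logit map.

    feature_maps: list of 4-D tensors taken from different stages of a base CNN.
    image_size: (height, width) of the input image.
    """
    side_outputs = []
    for i, fmap in enumerate(feature_maps):
        # Collapse each feature map to a single channel with a 1x1 convolution.
        side = tf.layers.conv2d(fmap, filters=1, kernel_size=1,
                                name='side_output_%d' % i)
        # Upsample every side output back to the input resolution.
        side_outputs.append(tf.image.resize_images(side, image_size))
    # Learn a weighted fusion of all side outputs with one more 1x1 convolution.
    return tf.layers.conv2d(tf.concat(side_outputs, axis=-1),
                            filters=1, kernel_size=1, name='fused_output')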

Our workshop paper is available on arXiv, and the related slides here.

If you find this code useful, please cite with the following Bibtex code:

@misc{1711.11069,
Author = {Miriam Bellver and Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Xavier Giro-i-Nieto and Jordi Torres and Luc Van Gool},
Title = {Detection-aided liver lesion segmentation using deep learning},
Year = {2017},
Eprint = {arXiv:1711.11069},
}

Code Instructions

Installation

  1. Clone this repository:
git clone https://github.com/imatge-upc/liverseg-2017-nipsws.git
  2. Install the required dependencies if necessary:
  • Python 2.7
  • TensorFlow r1.0 or higher
  • Python dependencies: PIL, numpy, scipy

If you want to test our models, download the different weights. Extract the contents of this folder into the root of the repository, so that there is a train_files folder with the following checkpoints:

  • Liver segmentation checkpoint
  • Lesion segmentation checkpoint
  • Lesion detection checkpoint

If you want to train the models yourself, we also provide the following pretrained models:

  • VGG-16 weights
  • ResNet-50 weights

Data

This code was developed to participate in the Liver Tumor Segmentation Challenge (LiTS), but it can also be used for other segmentation tasks. The LiTS database consists of 130 CT scans for training and 70 CT scans for testing, provided in NIfTI format. We made our own partition of the training set: volumes 0-104 for training and volumes 105-130 for testing. This code is prepared to run experiments with our partition.

The code expects that the database is inside the LiTS_database folder. Inside there should be the following folders:

  • images_volumes: contains a folder for each CT volume. Inside each of these folders, there should be a .mat file for each CT slice of the volume. The required preprocessing consists of clipping the values outside the range (-150, 250) and applying min-max normalization (see the Python sketch after the conversion commands below).
  • liver_seg: same structure as the previous folder, but with a .png file per CT slice containing the liver mask.
  • item_seg: same structure as the previous folder, but with a .png file per CT slice containing the lesion mask.

An example of the structure for a single slice of a CT volume is the following:

LiTS_database/images_volumes/31/100.mat
LiTS_database/liver_seg/31/100.png
LiTS_database/item_seg/31/100.png

We provide a MATLAB script to convert the NIfTI files into this structure. In our case we used this MATLAB library. You can use whichever library you prefer, as long as the resulting file structure and preprocessing are the same.

cd utils/matlab_utils
matlab process_database_liver.m
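
If you prefer Python over MATLAB, a minimal conversion sketch could look like the following. It assumes nibabel is installed; the .mat variable name ('section'), the slice axis and the 0/255 encoding of the masks are assumptions of this sketch, so check that they match what the training scripts expect:

import os
import numpy as np
import nibabel as nib
import scipy.io
from PIL import Image

def convert_volume(nifti_image, nifti_labels, out_root, volume_id):
    """Convert one LiTS volume into the per-slice structure described above.

    Clipping to (-150, 250) and min-max normalization follow the text; the
    .mat variable name and slice axis are assumptions of this sketch.
    """
    img = nib.load(nifti_image).get_fdata()
    lbl = nib.load(nifti_labels).get_fdata()  # 0 = background, 1 = liver, 2 = lesion

    img = np.clip(img, -150.0, 250.0)
    img = (img - img.min()) / (img.max() - img.min())  # min-max normalization

    img_dir = os.path.join(out_root, 'images_volumes', str(volume_id))
    liver_dir = os.path.join(out_root, 'liver_seg', str(volume_id))
    lesion_dir = os.path.join(out_root, 'item_seg', str(volume_id))
    for d in (img_dir, liver_dir, lesion_dir):
        if not os.path.exists(d):
            os.makedirs(d)

    for z in range(img.shape[2]):
        # One .mat file per slice for the image, one .png per slice for each mask.
        scipy.io.savemat(os.path.join(img_dir, '%d.mat' % (z + 1)),
                         {'section': img[:, :, z]})
        liver = (lbl[:, :, z] >= 1).astype(np.uint8) * 255
        lesion = (lbl[:, :, z] == 2).astype(np.uint8) * 255
        Image.fromarray(liver).save(os.path.join(liver_dir, '%d.png' % (z + 1)))
        Image.fromarray(lesion).save(os.path.join(lesion_dir, '%d.png' % (z + 1)))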

Liver segmentation

1. Train the liver model

In seg_liver_train.py you should indicate a dataset list file. An example is inside seg_DatasetList, training_volume_3.txt. Each line has the following format:

img1 seg_lesion1 seg_liver1 img2 seg_lesion2 seg_liver2 img3 seg_lesion3 seg_liver3

If you only have liver segmentations, simply repeat the liver path (seg_lesionX = seg_liverX). If you use the folder structure explained in the previous section, you can use the provided training_volume_3.txt and testing_volume_3.txt files.

python seg_liver_train.py
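
If you need to build such a list file for a different partition, a rough sketch is shown below. The grouping of three consecutive slices per line follows the format above; the helper name and slice ordering are assumptions of this sketch:

import os

def write_list_file(root, volumes, out_txt):
    """Write a dataset list with three consecutive slices per line, mirroring
    the format of training_volume_3.txt. Paths are relative to LiTS_database."""
    with open(out_txt, 'w') as f:
        for vol in volumes:
            vol_dir = os.path.join(root, 'images_volumes', str(vol))
            slices = sorted(int(name.split('.')[0]) for name in os.listdir(vol_dir))
            for i in range(len(slices) - 2):
                fields = []
                for z in slices[i:i + 3]:
                    fields += ['images_volumes/%d/%d.mat' % (vol, z),
                               'item_seg/%d/%d.png' % (vol, z),
                               'liver_seg/%d/%d.png' % (vol, z)]
                f.write(' '.join(fields) + '\n')

# Example: build a training list for volumes 0-104, as in our partition.
# write_list_file('LiTS_database', range(0, 105), 'seg_DatasetList/my_training_volume_3.txt')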

2. Test the liver model

A dataset list with the same format but with the test images is required here. If you don't have annotations, simply put a dummy annotation X.png. There is also an example in seg_DatasetList/testing_volume_3.txt.

python seg_liver_test.py

Lesion detection

This network samples locations around the liver and detects whether or not they contain a lesion.

1. Crop slices around the liver

In order to train the lesion detector and the lesion segmentation network, we need to crop the CT scans around the liver region. First, we need to obtain liver predictions for the whole dataset and move them to the LiTS_database folder:

cp -rf ./results/seg_liver_ck ./LiTS_database/seg_liver_ck

Then the following lines will crop the database images, the ground truth and the liver predictions:

cd utils/crops_methods
python compute_3D_bbs_from_gt_liver.py

This will generate the following folders:

LiTS_database/bb_liver_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_liver_lesion_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_images_volumes_alldatabase3_gt_nozoom_common_bb
LiTS_database/liver_results

and also a ./utils/crops_list/crops_LiTS_gt.txt file with the coordinates of the crop.

By default, the script crops the images, ground truth and liver predictions using the liver ground truth masks (rather than the predictions) to define the crops. You can change this option in the same script.
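
Conceptually, the bounding box is simply the tightest box that contains all liver pixels of a volume. A minimal sketch of this step (a hypothetical helper, not the actual code of compute_3D_bbs_from_gt_liver.py):

import numpy as np

def liver_bounding_box(mask_volume):
    """Return (x_min, x_max, y_min, y_max) of the tightest box containing the
    liver in a binary mask volume of shape (height, width, n_slices)."""
    ys, xs, _ = np.nonzero(mask_volume > 0)
    return xs.min(), xs.max(), ys.min(), ys.max()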

2. Sample locations around liver

Now we need to sample locations around the liver region, in order to train and test the lesion detector. We need a .txt with the following format:

img1 x1 x2 id

Example:

images_volumes/97/444 385.0 277.0 1

where x1 and x2 are the coordinates of the upper-left vertex of the bounding box and id is the data augmentation option. The script has two modes: sampling locations for slices with ground truth, or for slices without it. In the first case, two separate lists are generated, one for positive locations (with lesion) and another for negative locations (without lesion), in order to train the detector with balanced batches. These lists are already generated, so you can use them directly; they are inside det_DatasetList (for instance, training_positive_det_patches_data_aug.txt for the positive patches of the training set).

In case you want to generate other lists, use the following script:

cd utils/sampling
python sample_bbs.py
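
As a rough illustration of what the sampling boils down to (the patch size, stride and labeling criterion below are assumptions; see sample_bbs.py for the actual criteria), a patch is positive when the lesion mask has any pixel inside it:

import numpy as np

def sample_patches(lesion_mask, liver_mask, patch_size=80, stride=40):
    """Sample upper-left patch coordinates inside the liver region and split
    them into positive (contain lesion pixels) and negative (healthy) lists."""
    positives, negatives = [], []
    height, width = liver_mask.shape
    for y in range(0, height - patch_size, stride):
        for x in range(0, width - patch_size, stride):
            if liver_mask[y:y + patch_size, x:x + patch_size].any():
                if lesion_mask[y:y + patch_size, x:x + patch_size].any():
                    positives.append((x, y))
                else:
                    negatives.append((x, y))
    return positives, negatives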

3. Train lesion detector

Once you have sampled the positive and negative locations, or decided to use the default lists, you can train the detector with the following command:

python det_lesion_train.py

4. Test lesion detector

In order to test the detector, you can use the following command:

python det_lesion_test.py

This will create a folder inside detection_results with the task_name given to the experiment. Inside it there are two .txt files: one with the hard results (using a threshold of 0.5) and another with the soft results, i.e. the probability predicted by the detector that a location is unhealthy.
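
For reference, turning the soft scores into hard decisions is just a thresholding step. The 'location score' line format assumed below is an illustration; check the generated files for the exact format:

def positive_locations(soft_results_file, threshold=0.5):
    """Return the locations whose predicted probability of containing a lesion
    is above the threshold (0.5 reproduces the hard results)."""
    positives = []
    with open(soft_results_file) as f:
        for line in f:
            fields = line.split()
            location, score = ' '.join(fields[:-1]), float(fields[-1])
            if score > threshold:
                positives.append(location)
    return positives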

Lesion segmentation

This is the network that segments the lesions. It is trained by backpropagating gradients only through the liver region.
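
Restricting backpropagation to the liver can be seen as a masked loss: pixels outside the liver get zero weight, so they contribute no gradient. A minimal TensorFlow 1.x-style sketch of this idea (not the exact loss used in seg_lesion_train.py):

import tensorflow as tf

def masked_sigmoid_loss(logits, labels, liver_mask):
    """Per-pixel cross-entropy on the lesion map, averaged only over liver pixels.

    logits, labels and liver_mask are float tensors of the same shape; the mask
    is 1 inside the liver and 0 outside, so outside pixels produce no gradient.
    """
    per_pixel = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    masked = per_pixel * liver_mask
    return tf.reduce_sum(masked) / (tf.reduce_sum(liver_mask) + 1e-8)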

1. Train the lesion model

In order to train the algorithm without backpropagating through pixels outside the liver, each line of the .txt list file should in this case have the following format:

img1 seg_lesion1 seg_liver1 result_liver1 img2 seg_lesion2 seg_liver2 result_liver2 img3 seg_lesion3 seg_liver3 result_liver3

An example list file is seg_DatasetList/training_lesion_commonbb_nobackprop_3.txt. If you used the folder structure proposed in the Data section, and you named the folders of the cropped slices as proposed in compute_3D_bbs_from_gt_liver.py, you can use these files to train and test the algorithm with the following command:

python seg_lesion_train.py

2. Test the lesion model

The command to test the network is the following:

python seg_lesion_test.py

In this case, observe that the script performs four different steps (a conceptual sketch of the last two follows the list):

  1. Does inference with the lesion segmentation network
  2. Returns results to the original size (from cropped slices to 512x512)
  3. Masks the results with the liver segmentation masks
  4. Checks for positive lesion detections in the liver and removes the false positives of the segmentation network using the detection results.
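
A conceptual sketch of steps 3 and 4 (the threshold and the per-slice use of the detector output are assumptions of this sketch, not the exact logic of seg_lesion_test.py):

import numpy as np

def postprocess_slice(lesion_prob, liver_mask, has_positive_detection, threshold=0.5):
    """Keep lesion predictions only inside the liver, and discard the prediction
    entirely when the detector found no lesion for this region."""
    lesion = (lesion_prob > threshold) & (liver_mask > 0)
    if not has_positive_detection:
        lesion = np.zeros_like(lesion)
    return lesion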

Contact

If you have any general question about our work or code which may be of interest to other researchers, please use the public issues section of this GitHub repository. Alternatively, drop us an e-mail at [email protected].

Owner
Image Processing Group - BarcelonaTECH - UPC