Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers

Overview

Official TensorFlow implementation of unsupervised MRI reconstruction using zero-Shot Learned Adversarial TransformERs (SLATER): https://arxiv.org/abs/2105.08059

Korkmaz, Y., Dar, S. U., Yurt, M., Ozbey, M., & Cukur, T. (2021). Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. arXiv preprint arXiv:2105.08059.


Demo

The following commands are used to train and test SLATER to reconstruct undersampled MR acquisitions from single- and multi-coil datasets. You can download pretrained network snapshots and sample datasets from the links given below.

The MRI prior is trained on fully-sampled images; during testing, undersampling is performed based on the selected acceleration rate. We used AdamOptimizer for training and RMSPropOptimizer with a momentum parameter of 0.9 for testing/inference. In the current settings AdamOptimizer is used; you can change the underlying optimizer class in the dnnlib/tflib/optimizer.py file and insert additional parameters such as momentum at line 87 of that file.
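
For reference, here is a minimal TensorFlow 1.x sketch of the two optimizer settings described above, using the plain tf.train API; the learning rates are illustrative, and the actual configuration lives in dnnlib/tflib/optimizer.py:

import tensorflow as tf  # TensorFlow 1.14 / 1.15

# Optimizer used while training the MRI prior (learning rate is illustrative).
train_opt = tf.train.AdamOptimizer(learning_rate=1e-3)

# Optimizer used during zero-shot testing/inference: RMSProp with momentum 0.9.
infer_opt = tf.train.RMSPropOptimizer(learning_rate=1e-3, momentum=0.9)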

Sample training command for multi-coil (fastMRI) dataset:

python run_network.py --train --gpus=0 --expname=fastmri_t1_train --dataset=fastmri-t1 --data-dir=datasets/multi-coil-datasets/train

Sample reconstruction/test command for fastMRI dataset:

python run_recon_multi_coil.py reconstruct-complex-images --network=pretrained_snapshots/fastmri-t1/network-snapshot-001282.pkl --dataset=fastmri-t1 --acc-rate=4 --contrast=t1 --data-dir=datasets/multi-coil-datasets/test

Sample training command for single-coil (IXI) dataset:

python run_network.py --train --gpus=0 --expname=ixi_t1_train --dataset=ixi_t1 --data-dir=datasets/single-coil-datasets/train

Sample reconstruction/test command for IXI dataset:

python run_recon_single_coil.py reconstruct-magnitude-images --network=pretrained_snapshots/ixi-t1/network-snapshot-001282.pkl --dataset=ixi_t1_test --acc-rate=4 --contrast=t1 --data-dir=datasets/single-coil-datasets/test

Datasets

For the IXI dataset, image dimensions are 256x256. For the fastMRI dataset, image dimensions vary with contrast (T1: 256x320, T2: 288x384, FLAIR: 256x320).

SLATER requires datasets in the tfrecords format. To create tfrecords files from new datasets, you can use dataset_tool.py:

To create single-coil datasets, pass magnitude images to dataset_tool.py with the create_from_images function by providing a directory containing images in .png format. We include undersampling masks under datasets/single-coil-datasets/test.
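
Magnitude images therefore only need to be exported as .png files before running dataset_tool.py. A minimal sketch (the array contents, normalization, and file naming below are illustrative, not part of the repo):

import os
import numpy as np
from PIL import Image

# Placeholder for your own data: float magnitude images of shape [num_of_images, 256, 256].
magnitude_images = np.random.rand(10, 256, 256).astype(np.float32)

out_dir = 'my_png_images'
os.makedirs(out_dir, exist_ok=True)
for i, img in enumerate(magnitude_images):
    # Scale each image to 8-bit grayscale before saving as .png.
    img8 = np.uint8(255 * (img - img.min()) / (img.max() - img.min() + 1e-8))
    Image.fromarray(img8).save(os.path.join(out_dir, f'{i:05d}.png'))

# The resulting directory can then be passed to create_from_images in dataset_tool.py.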

To create multi-coil datasets, provide hdf5 files containing fully-sampled, coil-combined complex images in a variable named 'images_fs' with shape [num_of_images, x, y] (this can be modified as needed). To do this, use the create_from_hdf5 function in dataset_tool.py.
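
The expected hdf5 layout can be produced with h5py; a minimal sketch with placeholder data (the file name and image sizes are illustrative):

import h5py
import numpy as np

# Placeholder: 10 fully-sampled, coil-combined complex images of size 256x320.
images_fs = (np.random.rand(10, 256, 320)
             + 1j * np.random.rand(10, 256, 320)).astype(np.complex64)

with h5py.File('fastmri_t1_train.h5', 'w') as f:
    f.create_dataset('images_fs', data=images_fs)  # shape [num_of_images, x, y]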

The MRI priors are trained on coil-combined datasets saved in tfrecords files with a 3-channel order of [real, imaginary, dummy]. For test purposes, we include sample coil-sensitivity maps (a 4-dimensional complex variable named 'coil_maps' with shape [x, y, num_of_image, num_of_coils]) and undersampling masks (a 3-dimensional variable named 'map' with shape [x, y, num_of_image]) in the datasets/multi-coil-datasets/test folder in hdf5 format.
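
A corresponding sketch of how such test files could be written with h5py (all sizes and file names below are illustrative, and real coil maps would come from ESPIRiT rather than random numbers):

import h5py
import numpy as np

x, y, num_of_image, num_of_coils = 256, 320, 10, 5  # illustrative dimensions

# Placeholder coil sensitivities and undersampling masks.
coil_maps = (np.random.rand(x, y, num_of_image, num_of_coils)
             + 1j * np.random.rand(x, y, num_of_image, num_of_coils)).astype(np.complex64)
us_masks = (np.random.rand(x, y, num_of_image) > 0.75).astype(np.float32)

with h5py.File('coil_maps.h5', 'w') as f:
    f.create_dataset('coil_maps', data=coil_maps)  # [x, y, num_of_image, num_of_coils]
with h5py.File('us_masks.h5', 'w') as f:
    f.create_dataset('map', data=us_masks)  # [x, y, num_of_image]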

Coil-sensitivity maps are estimated using ESPIRiT (http://people.eecs.berkeley.edu/~mlustig/Software.html). Network implementations use libraries from the GANsformer (https://github.com/dorarad/gansformer) and StyleGAN-2 (https://github.com/NVlabs/stylegan2) repositories.


Pretrained networks

You can download pretrained network snapshots and sample datasets from the links below. Place the downloaded folders (datasets and pretrained_snapshots) under the main repo directory to run the sample test commands given above.

Pretrained network snapshots for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1_69T1KUeSZCpKD3G37qgDyAilWynKhEc?usp=sharing

Sample training and test datasets for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1hLC8Pv7EzAH03tpHquDUuP-lLBasQ23Z?usp=sharing


Notice for training with multi-coil datasets

To train on multi-coil (complex) datasets, you need to change a few lines in training_loop.py:

  • Comment out line 8.
  • Uncomment line 9.
  • Comment out line 23.

Citation

You are encouraged to modify/distribute this code. However, please acknowledge this code and cite the paper appropriately.

@article{korkmaz2021unsupervised,
  title={Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers},
  author={Korkmaz, Yilmaz and Dar, Salman UH and Yurt, Mahmut and {\"O}zbey, Muzaffer and {\c{C}}ukur, Tolga},
  journal={arXiv preprint arXiv:2105.08059},
  year={2021}
}

(c) ICON Lab 2021


Prerequisites

  • Python 3.6
  • CuDNN 10.1
  • TensorFlow 1.14 or 1.15

Acknowledgements

This code uses libraries from the StyleGAN-2 (https://github.com/NVlabs/stylegan2) and GANsformer (https://github.com/dorarad/gansformer) repositories.

For questions/comments please send me an email: [email protected]

