MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research

Overview

๐ŸฆŒ About MOOSE

MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research. The pipeline is based on nn-UNet and can segment 120 unique tissue classes from a whole-body 18F-FDG PET/CT image.

๐Ÿ—‚ Required folder structure

MOOSE inherently performs batch-wise analysis. Once all the patients to be analysed are placed in a main directory, MOOSE processes them sequentially. The output folders that will be created by the script itself are highlighted in CAPS. Organising the folder structure is the sole responsibility of the user.

โ”œโ”€โ”€ main_folder                     # The mother folder that holds all the patient folders (folder name can be anything)
โ”‚   โ”œโ”€โ”€ patient_folder_1            # Individual patient folder (folder name can be anything)
โ”‚       โ”œโ”€โ”€ fdgpet                  # The PET folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ ct                      # The CT folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ INFERENCE               # Auto-generated 
โ”‚       โ”œโ”€โ”€ MOOSE-TEMP              # Auto-generated
โ”‚       โ”œโ”€โ”€ LABELS                  # Auto-generated: contains all the generated labels.
โ”‚       โ”œโ”€โ”€ CT-NIFTI                # Auto-generated 
โ”‚       โ”œโ”€โ”€ PT-NIFTI                # Auto-generated
โ”‚       โ”œโ”€โ”€ RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
โ”œโ”€โ”€ patient_folder_2    
โ”‚       โ”œโ”€โ”€ fdgpet                  # The PET folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ ct                      # The CT folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ INFERENCE               # Auto-generated 
โ”‚       โ”œโ”€โ”€ MOOSE-TEMP              # Auto-generated
โ”‚       โ”œโ”€โ”€ LABELS                  # Auto-generated: contains all the generated labels.
โ”‚       โ”œโ”€โ”€ CT-NIFTI                # Auto-generated 
โ”‚       โ”œโ”€โ”€ PT-NIFTI                # Auto-generated
โ”‚       โ”œโ”€โ”€ RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
โ”‚   .
โ”‚   .
โ”‚   .
โ”œโ”€โ”€ patient_folder_n
โ”‚       โ”œโ”€โ”€ fdgpet                  # The PET folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ ct                      # The CT folder name can be named anything as long as the files inside this folder is DICOM and has a modality tag.
โ”‚       โ”œโ”€โ”€ INFERENCE               # Auto-generated 
โ”‚       โ”œโ”€โ”€ MOOSE-TEMP              # Auto-generated
โ”‚       โ”œโ”€โ”€ LABELS                  # Auto-generated: contains all the generated labels.
โ”‚       โ”œโ”€โ”€ CT-NIFTI                # Auto-generated 
โ”‚       โ”œโ”€โ”€ PT-NIFTI                # Auto-generated
โ”‚       โ”œโ”€โ”€ RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.

โ›”๏ธ Hard requirements

The entire script has ONLY been tested on Ubuntu Linux, with the following hardware capabilities:

  • Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
  • 256 GB of RAM (Very important for total-body datasets)
  • 1 x Nvidia GeForce RTX 3090 Ti

We are testing different configurations now, but the RAM (256 GB) seems to be a hard requirement.

โš™๏ธ Installation

Kindly copy the code below and paste it into your Ubuntu terminal; the installer should take care of the rest. Pay attention during the installation process, as the FSL installation requires you to answer some questions. A fresh install takes approximately 30 minutes.

git clone https://github.com/LalithShiyam/MOOSE.git
cd MOOSE
source ./moose_installer.sh

NOTE: Do not forget to source the .bashrc file again

source ~/.bashrc

๐Ÿ–ฅ Usage

  • To run MOOSE directly from the command-line terminal with the default options, use the following command. By default, MOOSE performs the error analysis (refer to the paper) in similarity space and assumes that the given PET image (if any) is static.
#syntax:
moose -f path_to_main_folder 

#example: 
moose -f '/home/kyloren/Documents/main_folder'
  • To notify the program whether the given 18F-FDG PET is static (-dp False) or dynamic (-dp True), and to switch the error analysis in 'similarity space' on (-ea True) or off (-ea False), use the following command with the appropriate syntax.
#syntax:
moose -f path_to_main_folder -ea False -dp True 

#example for performing error analysis for a static PET/CT image: 
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp False

#example for performing error analysis for a dynamic PET/CT image:
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp True

#example for not performing error analysis:
moose -f '/home/kyloren/Documents/main_folder' -ea False -dp False

For interactive execution, we have created a notebook version of the script, which can be found inside the 'notebooks' folder: ~/MOOSE/MOOSE/notebooks.

๐Ÿ“ˆ Results

  • The multi-label atlas for each subject will be stored in the auto-generated labels folder under the subject's respective directory (see the folder structure above). The label-index to region correspondence is stored in the Excel sheet MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx, which can be found inside the ~/MOOSE/MOOSE/similarity-space folder.
  • In addition, an auto-generated Segmentation-Risk-of-error-analysis-XXXX.xlsx file will be created in the individual subject directory ('XXXX'). The Excel file highlights segmentations that might be erroneous and is meant to serve as a quality-control measure.
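
For downstream use of the labels, a minimal Python sketch along the following lines may help. It is illustrative only and not part of MOOSE; the placeholder atlas file name and the Excel column names ('label_index', 'region') are assumptions that should be checked against the generated files.

# Illustrative sketch only (not part of MOOSE): read a generated multi-label atlas
# and report the volume of each labelled region using the label-index sheet.
import numpy as np
import pandas as pd
import SimpleITK as sitk

atlas_img = sitk.ReadImage("LABELS/atlas.nii.gz")            # placeholder file name
atlas = sitk.GetArrayFromImage(atlas_img)                    # (z, y, x) integer labels
voxel_volume_ml = np.prod(atlas_img.GetSpacing()) / 1000.0   # mm^3 -> ml

lut = pd.read_excel("MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx")

for idx in np.unique(atlas):
    if idx == 0:                                             # 0 is background
        continue
    row = lut.loc[lut["label_index"] == idx, "region"]
    region = row.iloc[0] if not row.empty else "unknown"
    volume = (atlas == idx).sum() * voxel_volume_ml
    print(f"label {int(idx)}: {region} ({volume:.1f} ml)")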

๐Ÿ“– Citations

๐Ÿ™ Acknowledgement

This research is supported through an IBM University Cloud Award (https://www.research.ibm.com/university/).

๐Ÿ™‹ FAQ

[1] Will MOOSE only work on whole-body 18F-FDG PET/CT datasets?

MOOSE ideally works on whole-body (head-to-toe) PET/CT datasets, but it also works on semi whole-body PET/CT datasets (head to pelvis). Unfortunately, we haven't tested other fields of view. We will post the evaluations soon.

[2] Will MOOSE only work on multimodal 18F-FDG PET/CT datasets or can it also be applied to CT only? or PET only?

MOOSE automatically infers the modality type using the DICOM header tags. If the user provides multimodal 18F-FDG PET/CT datasets, MOOSE builds the entire atlas with 120 tissues. The user can also provide a CT-only DICOM folder; MOOSE will infer the modality type and segment only the non-cerebral tissues (37/120 tissues), without segmenting the 83 subregions of the brain. MOOSE will definitely not work if provided only with 18F-FDG PET images.
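
For illustration only (this is not MOOSE's actual implementation), detecting the modality from the DICOM header could look roughly like the following sketch, using pydicom.

# Illustrative sketch (not MOOSE's actual code): read the DICOM Modality tag
# (0008,0060) to decide whether a folder holds CT or PET data.
import pathlib
import pydicom

def detect_modality(dicom_dir):
    """Return the Modality of the first readable DICOM file, e.g. 'CT' or 'PT'."""
    for path in sorted(pathlib.Path(dicom_dir).rglob("*")):
        if not path.is_file():
            continue
        try:
            ds = pydicom.dcmread(path, stop_before_pixels=True)
        except Exception:                 # skip files that are not DICOM
            continue
        return getattr(ds, "Modality", None)
    return None

# Example: detect_modality('/home/kyloren/Documents/main_folder/patient_folder_1/ct') -> 'CT'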

[3] Will MOOSE work on non-DICOM formats?

Unfortunately, the current version accepts only the DICOM format. In the future, we will try to enable processing of non-DICOM formats as well.

Comments
  • BUG:IndexError: list index out of range

    I am running MOOSE on a patient folder with two subfolders for CT and PET in DICOM format. However, I am getting this error message:

        moose_ct_atlas = ie.segment_ct(ct_file[0], out_dir)
      File "/export/moose/moose-0.1.0/src/inferenceEngine.py", line 78, in segment_ct
        out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]
    IndexError: list index out of range

    Any suggestion, please?

    Thanks,

    opened by Ompsda 14
  • Let users know if environment variables are not loaded

    Is your feature request related to a problem? Please describe. If the environment variables are not loaded, MOOSE fails silently like so:

    โœ” Converted DICOM images in /home/user/Data/... to NIFTI
    - Only CT data found in folder /home/user/Data/..., MOOSE will construct noncerebral tissue atlas (n=37) based on CT 
    - Initiating CT segmentation protocols
    - CT image to be segmented: /home/user/Data/...._0000.nii.gz
    โœ” Segmented abdominal organs from /home/user/Data/..._0000.nii.gz                                     
    Traceback (most recent call last):                                                                                                                                                                                 
        File "/usr/local/bin/moose", line 131, in <module>
            ct_atlas = ie.segment_ct(ct_file[0], out_dir)                                                                                                                                                             
        File "/home/user/Code/MOOSE/src/inferenceEngine.py", line 78, in segment_ct                                                                                                                                        
            out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]                
    IndexError: list index out of range
    

    Describe the solution you'd like: It would be nice to let the user know that the problem is that the nnUNet_raw_data_base, nnUNet_preprocessed, etc. env variables are not set.
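
    A minimal sketch of such an early check might look like this (illustrative only; the first two variable names come from this issue, and 'RESULTS_FOLDER' is added as an assumption):

    # Illustrative sketch (not MOOSE's actual code): fail early with a clear message
    # if the nnUNet environment variables are missing.
    import os
    import sys

    REQUIRED_ENV_VARS = ("nnUNet_raw_data_base", "nnUNet_preprocessed", "RESULTS_FOLDER")

    missing = [name for name in REQUIRED_ENV_VARS if not os.environ.get(name)]
    if missing:
        sys.exit("MOOSE cannot run because these environment variables are not set: "
                 + ", ".join(missing) + ". Re-source ~/.bashrc or your environment file.")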

    enhancement 
    opened by chris-clem 8
  • BUG: sitk::ERROR: The file MOOSE-Split-unified-PET-CT-atlas.nii.gz does not exist.

    Hi,

    I am trying to run MOOSE on a bunch of patients with whole-body CTs. For two of the patients, MOOSE fails with the following error:

    โœ” Segmented psoas from /home/user/Data/....IMA_0000.nii.gz                                              
    - Conducting automatic error analysis in similarity space for: /home/user/Data/.../labels/MOOSE-Non-cerebral-tissues-CT-....nii.gz
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 139, in <module>                                                                                                                                                        
        ea.similarity_space(ct_atlas, sim_space_dir, segmentation_error_stats)                                                                                                                                         
      File "/home/user/Code/MOOSE/src/errorAnalysis.py", line 147, in similarity_space
        shape_parameters = iop.get_shape_parameters(split_atlas)
      File "/home/user/Code/MOOSE/src/imageOp.py", line 86, in get_shape_parameters
        label_img = SimpleITK.Cast(SimpleITK.ReadImage(label_image), SimpleITK.sitkInt32)
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
        return reader.Execute()
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
        return _SimpleITK.ImageFileReader_Execute(self)
    RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
    sitk::ERROR: The file "/home/user/Data/.../labels/sim_space/similarity-space/MOOSE-Split-unified-PET-CT-atlas.nii.gz" does not exist.
    

    Do you know what could cause the file to not exist? It works for the other patients.

    opened by chris-clem 6
  • BUG: Brain label error still persists

    Need to manually start again:

    Calculated SUV image for SUV extraction!

    • Brain found in field-of-view of PET/CT data...
    • Cropping brain from PET image using the aligned CT brain mask

    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 214, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE-V.1.0/src/imageOp.py", line 228, in crop_image_using_mask
        bbox = np.asarray(label_shape_filter.GetBoundingBox(1))
      File "/usr/local/lib/python3.8/dist-packages/SimpleITK/SimpleITK.py", line 36183, in GetBoundingBox
        return _SimpleITK.LabelShapeStatisticsImageFilter_GetBoundingBox(self, label)
    RuntimeError: Exception thrown in SimpleITK LabelShapeStatisticsImageFilter_GetBoundingBox: /tmp/SimpleITK-build/ITK-prefix/include/ITK-5.2/itkLabelMap.hxx:151:
    ITK ERROR: LabelMap(0x9547bd0): No label object with label 1.
    bug 
    opened by josefyu 3
  • Feat: Multimoose

    Currently MOOSE runs on a server configuration, so there is a good chance that the user is using a DGX or similar. In that case, it would make sense to fully utilise the capabilities of the hardware. Similar to FALCON, MOOSE should run in parallel based on the hardware capabilities.

    enhancement 
    opened by LalithShiyam 3
  • Brain cropping fails with dynamic datasets

    The following error occurred after using Moose with dynamic datasets of Vision lung cancer patients. All other segmentations and SUV extraction properly worked. No error occurred after re-running Moose with the corresponding static dataset.

    Brain found in field-of-view of PET/CT data...                         
    - Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 215, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE/src/imageOp.py", line 237, in crop_image_using_mask
        out_of_bounds = upper_bounds >= img_dim
    ValueError: operands could not be broadcast together with shapes (3,) (4,)
    
    opened by DariaFerrara 2
  • BUG: WSL does not have unzip installed and MOOSE fails silently due to a broken installation.

    MOOSE fails with an index error when trying to run on WSL, due to a broken installation. There is no moose-files folder created when the algorithm is installed.

    Steps to reproduce the behavior: Install through WSL as described on GitHub.

    Expected behavior: The moose-files folder should be created during installation, and MOOSE should run as required.

    Screenshots of the errors were attached to the issue.

    Windows 11 22H2

    opened by paula-m 1
  • Feat: Batch remove temporary files of faulty processed data folders

    When MOOSE fails to infer the dataset, the command stops and the folders are left with temporary files in the following structure:

    Newly created folders: CT, PT, labels, stats, temp and 2 .JSON files.

    In order to clean these datasets and make them executable again, it would be nice to have a command to revert them to their original state. The commands which can be used manually (run from the directory that holds the faulty patient folders) are listed here:

    find -maxdepth 2 -name CT -exec rm -rf {} \;
    find -maxdepth 2 -name PT -exec rm -rf {} \;
    find -maxdepth 2 -name labels -exec rm -rf {} \;
    find -maxdepth 2 -name temp -exec rm -rf {} \;
    find -maxdepth 2 -name stats -exec rm -rf {} \;
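
    A hedged Python equivalent (illustrative only; the folder names are taken from this comment, and deleting '*.json' assumes the JSON files were created by MOOSE):

    # Illustrative batch clean-up sketch (not part of MOOSE): remove the auto-generated
    # folders from every patient folder under a main directory.
    import pathlib
    import shutil

    AUTO_GENERATED = ("CT", "PT", "labels", "stats", "temp")

    def clean_patient_folders(main_folder):
        for patient in pathlib.Path(main_folder).iterdir():
            if not patient.is_dir():
                continue
            for name in AUTO_GENERATED:
                target = patient / name
                if target.is_dir():
                    shutil.rmtree(target)
            for json_file in patient.glob("*.json"):   # assumed to be MOOSE-generated
                json_file.unlink()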

    opened by josefyu 1
  • Feat: Find presence of brain using a CNN

    Right now MOOSE breaks when there is no brain in the PET image. The elegant way would be to figure out whether there is a brain in the FOV of the PET and initiate the segmentation protocols accordingly. It seems quite hard to determine whether a given image has a brain in the field of view using hand-engineered features. The smartest way would be to generate a MIP or the middle slice of the PET image (if given) and use a 2D CNN-based binary classifier to decide whether the brain is in the FOV or not (a rough preprocessing sketch follows the checklist below).

    The game plan is the following:

    • [x] Extract the middle slice (coronal plane)

    • [x] Convert it from DICOM to .png and transform the PET intensities between 0-255 (Graylevels)

    • [x] Curate 80 slices (50 PET with no brain, 50 PET with a brain) and perform the training.

    • [x] Implement a 2D CNN binary-classifier (PyTorch <3 fastai)

    • [x] Make sure the data augmentations of the 2D CNN have random cropping

    • [x] Then use the trained model to infer whether a given volume has a brain or not.
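
    A rough sketch of the preprocessing step, under the assumption that the PET has already been converted to NIfTI rather than read directly from DICOM (illustrative only, not the MOOSE implementation):

    # Illustrative sketch: take the middle coronal slice of a PET volume, rescale it
    # to 0-255 grey levels and save it as a .png for a 2D brain/no-brain classifier.
    import numpy as np
    import SimpleITK as sitk
    from PIL import Image

    def middle_coronal_slice_to_png(pet_image_path, out_png):
        arr = sitk.GetArrayFromImage(sitk.ReadImage(pet_image_path))   # (z, y, x)
        coronal = arr[:, arr.shape[1] // 2, :].astype(np.float32)      # middle coronal slice
        coronal -= coronal.min()
        if coronal.max() > 0:
            coronal /= coronal.max()
        Image.fromarray((coronal * 255).astype(np.uint8)).save(out_png)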

    bug enhancement 
    opened by LalithShiyam 1
  • Feat: Create docker image for MOOSEv0.1.0

    Problem: Since MOOSE is mostly used on servers, it might be worthwhile to have a Docker image for MOOSEv0.1.0.

    Solution: Build one, with the Docker image hosted on the IBM cloud.

    enhancement 
    opened by LalithShiyam 0
  • BUG: MOOSE fails with dynamic PET

    MOOSE fails when presented with a dynamic PET in the latest version. It works as expected with static 3D images.

    MOOSE probably doesn't need to do anything special with the 4D dynamic images, but it should probably still produce the segmented CT output. Additionally, it would be great to have a registration between the CT and the final frame of the PET. Motion correction of the PET could then be performed with FALCON, and mapped back to the CT.
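
    As a hedged sketch of what the dynamic-PET handling could build on (illustrative only, not a proposed MOOSE change), the final frame can be pulled out of a 4D NIfTI like this:

    # Illustrative sketch: extract the final frame of a 4D dynamic PET so that it
    # could be registered to the CT (e.g. with FALCON or another registration tool).
    import nibabel as nib

    def extract_last_frame(dynamic_pet_path, out_path):
        img = nib.load(dynamic_pet_path)
        data = img.get_fdata()
        if data.ndim == 4:                  # (x, y, z, t) -> keep the last time frame
            data = data[..., -1]
        nib.save(nib.Nifti1Image(data, img.affine), out_path)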

    enhancement 
    opened by aaron-rohn 0
  • Skip patient instead of terminate in case of an error

    Hello,

    would it be possible to skip a patient and process the next one in case of an error (e.g. empty CT dir) and not stop the process?

    And then maybe in the end you get a list of the patient IDs that failed.

    opened by chris-clem 3
  • Manage MOOSE env vars

    Dear MOOSE team,

    I mentioned the following issue in another issue and wanted to create a new one for it:

    I don't know if adding the env variables to `.bashrc` is the best place to do it. Some users might use zsh and others might use nnUNet separately.
    

    Originally posted by @chris-clem in https://github.com/QIMP-Team/MOOSE/issues/42#issuecomment-1286930959.

    As a quick solution, I added an env_vars.sh file in the MOOSE repo dir that I source instead of .bashrc. In the meantime, I have searched for how people handle this problem in general and found the following possibilities:

    1. Create a .env file in the repo dir and load it with python-dotenv as explained here.
    2. Create a .env file in the repo dir and recommend users to use direnv, which then automatically loads the env variables when changing in the MOOSE dir.
    3. Recommend users to create a MOOSE conda environment and enable loading and unloading the env vars when activating/ deactivating the conda environment as described here.

    The downside of 1. is that it requires a new dependency, the downside of 2. that it requires a new program, and the downside of 3. is that it requires conda for managing the environment.
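
    For illustration, option 1 could look roughly like the sketch below (the variable names and paths in the example .env are assumptions):

    # Illustrative sketch of option 1 (adds python-dotenv as a dependency): load the
    # nnUNet/MOOSE variables from a .env file placed in the repository directory.
    # Example .env contents (paths are placeholders):
    #   nnUNet_raw_data_base=/path/to/moose-files/nnUNet_raw
    #   nnUNet_preprocessed=/path/to/moose-files/nnUNet_preprocessed
    from pathlib import Path
    from dotenv import load_dotenv

    load_dotenv(dotenv_path=Path(__file__).resolve().parent / ".env")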

    What do you think is the best option?

    opened by chris-clem 5
  • Feat: Prune/Compress the nnUNet models for performance gains.

    Problem

    Inference is a tad bit slow when it comes to large datasets.

    Solution: Performance gains can be achieved by using Intel's Neural Compressor: https://github.com/intel/neural-compressor/tree/master/examples/pytorch/image_recognition/3d-unet/quantization/ptq/eager. Intel has already provided an example of how to do this, so we just need to implement it to get a lean model (the actual performance gains still need to be checked).

    Alternative solution: Bring in a fast resampling function (torch or others...).

    enhancement 
    opened by LalithShiyam 4
  • Feat: Reduce memory requirement for MOOSE during inference

    Problem: MOOSE is based on nnUNet, and the current inference takes a lot of memory on total-body datasets (uEXPLORER/QUADRA; upper limit: 256 GB). This is not a normal amount of memory for most users. The memory bottleneck is explained here: https://github.com/MIC-DKFZ/nnUNet/issues/896

    Solution: The solution seems to be to find a faster/more memory-efficient resampling scheme than the skimage one. People have already suggested speed-oriented solutions based on https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html, and an elaborate description can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1093.

    But the memory consumption is still a problem. @dhaberl @Keyn34: Consider the alternative of Nvidia's cuCIM (cucim.skimage.transform.resize) in combination with Dask for block processing (chunks consume far less memory, and I have used this for kinetic modelling).
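
    As a rough illustration of the torch-based direction mentioned above (a sketch, not the nnUNet implementation; chunked/Dask processing is left out for brevity):

    # Illustrative sketch: resample a 3D volume with torch.nn.functional.interpolate
    # instead of skimage.transform.resize.
    import numpy as np
    import torch
    import torch.nn.functional as F

    def resample_volume(volume, target_shape, mode="trilinear"):
        """Resample a 3D numpy array to target_shape using torch interpolation."""
        t = torch.from_numpy(np.ascontiguousarray(volume)).float()[None, None]  # (1, 1, D, H, W)
        out = F.interpolate(t, size=target_shape, mode=mode, align_corners=False)
        return out[0, 0].numpy()

    # Example: resample_volume(np.random.rand(200, 256, 256), (400, 512, 512)).shape -> (400, 512, 512)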

    Impact: This would result in a faster inference time and hopefully also remove the memory bottleneck for MOOSE and for any model inference via nnUNet.

    enhancement 
    opened by LalithShiyam 2
  • Analysis request: MOOSE + PET-Parameter extraction of PCA cohort

    Analysis request for prostate cancer cohort as follows:

    • [x] MOOSE cohort -> Validation of Segmentations by me
      • [ ] Extract PET-Parameters from MOOSEd Segments
    • [x] Delete all hand-drawn PET-Segmentations starting with cubic*
    • [ ] Merge all the remaining Segmentations (pb*, sv*, pln*...) on a patient level by the following convention:
      • [ ] all Segmentations to a Master_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: pb* + sv* -> Prostate_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: dln* + pln* + rln* -> Lymph_node_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: bone* -> Bone_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: adrenal* + liver* + pleura* + lung* + rectum* + skin* + peritoneal* + org* + organ* + psoas* + testis* + lung* + cavern* -> Organ_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
    Analysis request 
    opened by KCHK1234 8
  • Bug: Nasal mucosa as skeletal muscle

    In case of mucosal congestion in the nasal cavity and paranasal sinuses, the mucosa is misclassified as skeletal muscle. This appears often, but I think the effects are minor, hence a MINOR bug. All instances are recorded.

    bug 
    opened by KCHK1234 2
Releases (moose-v0.1.4)
  • moose-v0.1.4 (Oct 22, 2022)

    What's Changed

    • Feature: Adding checks for environment variables by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/43
    • Bug: nnUNet broke suddenly due to version issues; the MOOSE installation file will now always build the latest version of nnUNet from the git repo (https://github.com/MIC-DKFZ/nnUNet/issues/1132). Please re-install MOOSE if it doesn't work because of this bug.

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.3...moose-v0.1.4

  • moose-v0.1.3 (Jul 16, 2022)

    What's Changed

    • Created CODE_OF_CONDUCT.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/32
    • Updated README.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/35
    • Created a docker image for MOOSEv0.1.0 by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/37

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.2...moose-v0.1.3

  • moose-v0.1.2 (Jul 7, 2022)

  • moose-v0.1.1-rc (Jun 27, 2022)

    What's Changed

    • BUG: Fixed moose_uninstaller to remove env variables. by @LalithShiyam in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/28

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/compare/moose-v0.1.0-rc...moose-v0.1.1-rc

  • moose-v0.1.0-rc (Jun 27, 2022)

    What's Changed

    • The source code has been made modular to ensure maintainability.
    • MOOSE now generates log files for each run, which makes it easier to debug.
    • The output messages are much cleaner and organised, with clean progress bars.
    • FSL dependency is completely removed. We use nibabel now.
    • MOOSE now creates a stats folder which contains the following metrics in a '.csv' file:
      • SUV (mean, max, std, min) values, if PET images are provided
      • HU units (mean, max, std, min)
      • Volume metrics from CT
    • MOOSE now has a binary classifier (fastai-based) which figures out whether a given PET volume has a brain in the field of view; it works most of the time.
    • Automated affine alignment between PET/CT, if both images are present, just to ensure spatial alignment.

    New Contributors

    • @LalithShiyam made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/4
    • @Keyn34 made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/11

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/commits/moose-v0.1.0-rc

    To-do:

    • [ ] Docker image for the current version
Owner
QIMP team
Our vision is to enable a wider adoption of fully-quantitative molecular image information in the context of personalized medicine.