7th place solution of Human Protein Atlas - Single Cell Classification on Kaggle

Overview

kaggle-hpa-2021-7th-place-solution

Code for the 7th place solution of the Human Protein Atlas - Single Cell Classification competition on Kaggle.

A description of the method can be found in this post in the Kaggle discussion forum.

Dataset Preparation

Resize Images

# Resize train images to 768x768
python scripts/hpa_segmenter/create_cell_mask.py resize_image \
    --input_directory data/input/hpa-single-cell-image-classification.zip/train \
    --output_directory data/input/hpa-768768.zip \
    --image_size 768
# Resize train images to 1536x1536
python scripts/hpa_segmenter/create_cell_mask.py resize_image \
    --input_directory data/input/hpa-single-cell-image-classification.zip/train \
    --output_directory data/input/hpa-1536.zip \
    --image_size 1536

# Resize test images to 768x768
python scripts/hpa_segmenter/create_cell_mask.py resize_image \
    --input_directory /kaggle/input/hpa-single-cell-image-classification/test \
    --output_directory data/input/hpa-768-test.zip \
    --image_size 768
# Resize test images to 1536x1536
python scripts/hpa_segmenter/create_cell_mask.py resize_image \
    --input_directory /kaggle/input/hpa-single-cell-image-classification/test \
    --output_directory data/input/hpa-1536-test.zip \
    --image_size 1536

You can specify a directory inside a zip archive in the same way as a normal directory (e.g., data/input/hpa-single-cell-image-classification.zip/train above).

Download Public HPA

Download all images listed in kaggle_2021.tsv in the public HPA dataset, resize them to 768x768 and 1536x1536, and archive them as data/input/hpa-public-768.zip and data/input/hpa-public-1536.zip, respectively.
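
No command is provided above for this step, so here is a minimal sketch of one way to do it. It assumes that kaggle_2021.tsv has an Image column holding each image's URL prefix and that the four channels can be fetched as <prefix>_red.jpg, _green.jpg, _blue.jpg and _yellow.jpg; the column name, file extensions, and output naming are assumptions to verify against your copy of the data.

# download_public_hpa.py -- a hedged sketch, not part of this repository
import io
import os

import pandas as pd
import requests
from PIL import Image

df = pd.read_csv("kaggle_2021.tsv", sep="\t")
for prefix in df["Image"]:
    # e.g. ".../10005/920_C5_1" -> "10005_920_C5_1" (naming convention is an assumption)
    image_id = "_".join(prefix.rstrip("/").split("/")[-2:])
    for color in ("red", "green", "blue", "yellow"):
        resp = requests.get(f"{prefix}_{color}.jpg", timeout=60)
        resp.raise_for_status()
        img = Image.open(io.BytesIO(resp.content))
        for size in (768, 1536):
            out_dir = f"data/input/hpa-public-{size}"
            os.makedirs(out_dir, exist_ok=True)
            img.resize((size, size), Image.BILINEAR).save(
                os.path.join(out_dir, f"{image_id}_{color}.png"))
# Finally, archive the two directories, e.g.
#   cd data/input && zip -r hpa-public-768.zip hpa-public-768 && zip -r hpa-public-1536.zip hpa-public-1536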

Create Cell Mask

# Create cell masks for the Kaggle train set with 1536x1536
python scripts/hpa_segmenter/create_cell_mask.py create_cell_mask \
    --input_directory data/input/hpa-1536.zip \
    --output_directory data/input/hpa-1536-mask-v2.zip \
    --label_cell_scale_factor 1.0

# Resize the masks to 768x768
python scripts/hpa_segmenter/create_cell_mask.py resize_cell_mask \
    --input_directory data/input/hpa-1536-mask-v2.zip \
    --output_directory data/input/hpa-768-mask-v2-from-1536.zip \
    --image_size 768

# Create cell masks for the Public HPA dataset with 1536x1536
python scripts/hpa_segmenter/create_cell_mask.py create_cell_mask \
    --input_directory data/input/hpa-public-1536.zip/hpa-public-1536 \
    --output_directory data/input/hpa-public-1536-mask-v2.zip \
    --label_cell_scale_factor 1.0

# Resize the masks to 768x768
python scripts/hpa_segmenter/create_cell_mask.py resize_cell_mask \
    --input_directory data/input/hpa-public-1536-mask-v2.zip \
    --output_directory data/input/hpa-public-768-mask-v2-from-1536.zip \
    --image_size 768

# Create cell masks for the test set with the original resolution
# Run with `--label_cell_scale_factor 0.5` to save inference time
python scripts/hpa_segmenter/create_cell_mask.py create_cell_mask \
    --input_directory /kaggle/input/hpa-single-cell-image-classification/test \
    --output_directory data/input/hpa-test-mask-v2.zip \
    --label_cell_scale_factor 0.5

# Resize the masks to 1536x1536
python scripts/hpa_segmenter/create_cell_mask.py resize_cell_mask \
    --input_directory data/input/hpa-test-mask-v2.zip \
    --output_directory data/input/hpa-test-mask-v2-1536.zip \
    --image_size 1536

# Resize the masks to 768x768
python scripts/hpa_segmenter/create_cell_mask.py resize_cell_mask \
    --input_directory data/input/hpa-test-mask-v2.zip \
    --output_directory data/input/hpa-test-mask-v2-768.zip \
    --image_size 768

Create Input for Cell-level Classifier

# Create cell-level inputs for the Kaggle train set using the 768x768 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-768768.zip \
    --cell_mask_directory data/input/hpa-768-mask-v2-from-1536.zip \
    --output_directory data/input/hpa-cell-crop-v2-192-from-768.zip \
    --image_size 192

# Create cell-level inputs for the Public HPA dataset using the 768x768 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-public-768.zip \
    --cell_mask_directory data/input/hpa-public-768-mask-v2-from-1536.zip \
    --output_directory data/input/hpa-public-cell-crop-v2-192-from-768.zip \
    --image_size 192

# Create cell-level inputs for the Kaggle train set using the 1536x1536 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-1536.zip \
    --cell_mask_directory data/input/hpa-1536-mask-v2.zip \
    --output_directory data/input/hpa-cell-crop-v2-192-from-1536.zip \
    --image_size 192

# Create cell-level inputs for the Public HPA dataset using the 1536x1536 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-public-1536.zip \
    --cell_mask_directory data/input/hpa-public-1536-mask-v2.zip \
    --output_directory data/input/hpa-public-cell-crop-v2-192-from-1536.zip \
    --image_size 192

# Create cell-level inputs for the test set using the 768x768 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-768-test.zip \
    --cell_mask_directory data/input/hpa-test-mask-v2-768.zip \
    --output_directory data/input/hpa-test-cell-crop-v2-192-from-768.zip \
    --image_size 192

# Create cell-level inputs for the test set using the 1536x1536 images as the fixed-scale input.
python scripts/hpa_segmenter/create_cell_mask.py crop_and_resize_cell \
    --image_directory data/input/hpa-1536-test.zip \
    --cell_mask_directory data/input/hpa-test-mask-v2-1536.zip \
    --output_directory data/input/hpa-test-cell-crop-v2-192-from-1536.zip \
    --image_size 192

Training

# Train image-level classifier
python scripts/cam_consistency_training/run.py train \
    --config_path scripts/cam_consistency_training/configs/${CONFIG_NAME}.yaml

# Train cell-level classifier
python scripts/cell_crop/run.py train \
    --config_path scripts/cell_crop/configs/${CONFIG_NAME}.yaml

If you want to train on multiple GPUs, use a launcher such as torch.distributed.launch so that the --local_rank option is passed to the script (a launch example is shown after the list below). You can override fields in the config by appending arguments of the form field_name=${value} (e.g. fold_index=1). We trained 5 folds for every model used in the final submission pipeline. The config files are located in scripts/cam_consistency_training/configs and scripts/cell_crop/configs. We trained the models in the following order.

  1. scripts/cam_consistency_training/configs/eff-b2-focal-alpha1-cutmix-pubhpa-maskv2.yaml
  2. scripts/cam_consistency_training/configs/eff-b5-focal-alpha1-cutmix-pubhpa-maskv2.yaml
  3. scripts/cam_consistency_training/configs/eff-b7-focal-alpha1-cutmix-pubhpa-maskv2.yaml
  4. scripts/cam_consistency_training/configs/eff-b2-cutmix-pubhpa-768-to-1536.yaml
  5. Run predict_valid and concat_valid_predictions (described in the Inference section below) for each model and save the average of the output files under data/working/consistency_training/b2-1536-b2-b5-b7-768-avg/ (see the averaging sketch at the end of the Validation Set subsection).
  6. scripts/cam_consistency_training/configs/eff-b2-focal-stage2-b2b2b5b7avg.yaml
  7. scripts/cell_crop/configs/resnest50-bce-from768-cutmix-softpl.yaml
  8. Run predict_valid and concat_valid_predictions for each model, as in step 5, and save the average of the output files under data/working/image-level-and-cell-crop-both-5folds/.
  9. scripts/cam_consistency_training/configs/eff-b2-focal-stage3.yaml
  10. scripts/cam_consistency_training/configs/eff-b2-focal-stage3-cos.yaml
  11. scripts/cell_crop/configs/resnest50-bce-from768-stage3.yaml
  12. scripts/cell_crop/configs/resnest50-bce-from1536-stage3-cos.yaml
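
As a concrete illustration of the multi-GPU launch and the config-override syntax described above, here is a hedged sketch; the GPU count and the fold index are placeholders, and how run.py consumes the --local_rank argument injected by torch.distributed.launch should be checked against the script itself.

# Sketch: train one config on 4 GPUs and override the fold index (both values are placeholders)
# torch.distributed.launch injects --local_rank=<rank> into the script arguments
python -m torch.distributed.launch --nproc_per_node=4 \
    scripts/cam_consistency_training/run.py train \
    --config_path scripts/cam_consistency_training/configs/eff-b2-focal-alpha1-cutmix-pubhpa-maskv2.yaml \
    fold_index=1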

Inference

Validation Set

# Image-level classifier inference
python scripts/cam_consistency_training/run.py predict_valid \
    --config_path scripts/cam_consistency_training/configs/${CONFIG_NAME}.yaml

# Cell-level classifier inference
python scripts/cell_crop/run.py predict_valid \
    --config_path scripts/cell_crop/configs/${CONFIG_NAME}.yaml

# Concatenate the predictions for each fold to obtain the OOF prediction for the entire training data
python scripts/cam_consistency_training/run.py concat_valid_predictions \
    --config_path scripts/cam_consistency_training/configs/${CONFIG_NAME}.yaml
python scripts/cell_crop/run.py concat_valid_predictions \
    --config_path scripts/cell_crop/configs/${CONFIG_NAME}.yaml
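
Steps 5 and 8 of the training order above average the out-of-fold prediction files produced by the commands in this section. Below is a minimal sketch of that averaging; the per-model directories and the file name predictions.npy are assumptions, so check what concat_valid_predictions actually writes (file name, format, and row order) before using it.

# Sketch for steps 5 and 8 of the training order (paths and file names are assumptions)
import os
import numpy as np

model_dirs = [
    "data/working/consistency_training/eff-b2-cutmix-pubhpa-768-to-1536",
    "data/working/consistency_training/eff-b2-focal-alpha1-cutmix-pubhpa-maskv2",
    "data/working/consistency_training/eff-b5-focal-alpha1-cutmix-pubhpa-maskv2",
    "data/working/consistency_training/eff-b7-focal-alpha1-cutmix-pubhpa-maskv2",
]
out_dir = "data/working/consistency_training/b2-1536-b2-b5-b7-768-avg"
os.makedirs(out_dir, exist_ok=True)

# All arrays are assumed to share the same shape and row order, so an element-wise
# mean gives the averaged OOF prediction to store under the directory above.
preds = np.stack([np.load(os.path.join(d, "predictions.npy")) for d in model_dirs])
np.save(os.path.join(out_dir, "predictions.npy"), preds.mean(axis=0))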

Test Set

# Image-level classifier inference
python scripts/cam_consistency_training/run.py predict_test \
    --config_path scripts/cam_consistency_training/configs/${CONFIG_NAME}.yaml

# Cell-level classifier inference
python scripts/cell_crop/run.py predict_test \
    --config_path scripts/cell_crop/configs/${CONFIG_NAME}.yaml

# Make our final submission with post-processing
python scripts/average_predictions.py \
    --orig_size_cell_mask_directory data/input/hpa-test-mask-v2.zip \
    "data/working/consistency_training/eff-b2-focal-stage3/0" \
    "data/working/consistency_training/eff-b2-focal-stage3/1" \
    "data/working/consistency_training/eff-b2-focal-stage3/2" \
    "data/working/consistency_training/eff-b2-focal-stage3/3" \
    "data/working/consistency_training/eff-b2-focal-stage3/4" \
    "data/working/consistency_training/eff-b2-focal-stage3-cos/0" \
    "data/working/consistency_training/eff-b2-focal-stage3-cos/1" \
    "data/working/consistency_training/eff-b2-focal-stage3-cos/2" \
    "data/working/consistency_training/eff-b2-focal-stage3-cos/3" \
    "data/working/consistency_training/eff-b2-focal-stage3-cos/4" \
    "data/working/cell_crop/resnest50-bce-from768-stage3/0" \
    "data/working/cell_crop/resnest50-bce-from768-stage3/1" \
    "data/working/cell_crop/resnest50-bce-from768-stage3/2" \
    "data/working/cell_crop/resnest50-bce-from768-stage3/3" \
    "data/working/cell_crop/resnest50-bce-from768-stage3/4" \
    "data/working/cell_crop/resnest50-bce-from1536-stage3-cos/0" \
    "data/working/cell_crop/resnest50-bce-from1536-stage3-cos/1" \
    "data/working/cell_crop/resnest50-bce-from1536-stage3-cos/2" \
    "data/working/cell_crop/resnest50-bce-from1536-stage3-cos/3" \
    "data/working/cell_crop/resnest50-bce-from1536-stage3-cos/4" \
    --edge_area_threshold 80000 --center_area_threshold 32000

Use the Code in a Kaggle Notebook

Use Docker to package the source code and the wheels of the dependencies into a zip archive, then upload it to Kaggle as a dataset.

docker run --rm -it -v /path/to/this/repo:/tmp/workspace -w /tmp/workspace/ gcr.io/kaggle-images/python bash ./build_zip.sh

In a Kaggle Notebook, copy the code from the uploaded dataset as shown below; you can then run it the same way as in your local environment.

# Make a working directory
!mkdir -p /kaggle/tmp

# Change the current directory
cd /kaggle/tmp

# Copy source code from the uploaded dataset
!cp -r /kaggle/input/<your-dataset-name>/* .

# Now you can run the scripts the same way as in your local environment
!python scripts/hpa_segmenter/create_cell_mask.py create_cell_mask ...