A PyTorch implementation of SIN: Superpixel Interpolation Network

Overview

SIN: Superpixel Interpolation Network

This is a PyTorch implementation of the superpixel segmentation network introduced in our PRICAI-2021 paper:

SIN: Superpixel Interpolation Network

Prerequisites

The training code was mainly developed and tested with Python 3.6, PyTorch 1.4, CUDA 10, and Ubuntu 18.04.
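
A quick way to confirm that your environment roughly matches these versions (this check is not part of the repository):

# Environment sanity check; not part of the SIN repository.
import torch

print("PyTorch version:", torch.__version__)         # expected around 1.4
print("CUDA available:", torch.cuda.is_available())  # should be True for GPU training
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)       # expected around 10.x
    print("GPU:", torch.cuda.get_device_name(0))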

Demo

The demo script run_demo.py generates superpixels with a grid size of 16 x 16 using our pre-trained model (in /pretrained_ckpt). Feel free to provide your own images by copying them into /demo/inputs, then run

python run_demo.py --data_dir=./demo/inputs --data_suffix=jpg --output=./demo 

The results will be generated in a new folder under /demo called spixel_viz.
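
If you want to inspect superpixels yourself rather than rely on the pre-rendered images in spixel_viz, a generic way to overlay a superpixel label map on an image is sketched below. It assumes you already have the label map as a 2-D integer array and that scikit-image is installed; it is not part of the repository.

# Minimal sketch: overlay superpixel boundaries on an image.
# `label_map` is assumed to be an (H, W) integer array of superpixel IDs
# obtained elsewhere; this helper is not part of the SIN repository.
import numpy as np
from skimage import io
from skimage.segmentation import mark_boundaries

def save_boundary_overlay(image_path, label_map, out_path):
    image = io.imread(image_path)                                  # (H, W, 3) uint8
    overlay = mark_boundaries(image, label_map, color=(1, 1, 0))   # yellow boundaries
    io.imsave(out_path, (overlay * 255).astype(np.uint8))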

Data preparation

To generate the training and test datasets, first download the data from the original BSDS500 dataset and extract it to <BSDS_DIR>. Then run

cd data_preprocessing
python pre_process_bsd500.py --dataset=<BSDS_DIR> --dump_root=<DUMP_DIR>
python pre_process_bsd500_ori_sz.py --dataset=<BSDS_DIR> --dump_root=<DUMP_DIR>
cd ..

The code will generate three folders under <DUMP_DIR>, named /train, /val, and /test, and three .txt files that record the absolute paths of the images, named train.txt, val.txt, and test.txt.
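
An optional sanity check on the generated split files is shown below; it is not part of the preprocessing scripts, and it assumes the .txt files sit directly in <DUMP_DIR> with one absolute image path per line.

# Optional sanity check on the generated split files (not part of the repository).
from pathlib import Path

dump_root = Path("<DUMP_DIR>")  # the --dump_root you used above
for split in ("train", "val", "test"):
    lines = (dump_root / f"{split}.txt").read_text().splitlines()
    paths = [Path(p) for p in lines if p.strip()]
    missing = [p for p in paths if not p.exists()]
    print(f"{split}: {len(paths)} images listed, {len(missing)} missing")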

Training

Once the data is prepared, we should be able to train the model by running the following command

python main.py --data=<DUMP_DIR> --savepath=<CKPT_DIR>

If we wish to continue a training run or fine-tune from a pre-trained model, we can run

python main.py --data=<DUMP_DIR> --savepath=<CKPT_DIR> --pretrained=<PATH_TO_THE_CKPT>

The code will start from the recorded status, which includes the optimizer status and epoch number.
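
For reference, resuming from a checkpoint in PyTorch typically follows the pattern below; this is a generic sketch, and the checkpoint key names ('state_dict', 'optimizer', 'epoch') are assumptions rather than the repository's exact format.

# Generic PyTorch resume pattern; the checkpoint key names here are assumptions,
# not necessarily the exact format this repository saves.
import torch

def resume_from_checkpoint(model, optimizer, ckpt_path):
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(checkpoint["state_dict"])      # restore network weights
    optimizer.load_state_dict(checkpoint["optimizer"])   # restore optimizer state (momentum, etc.)
    return checkpoint["epoch"] + 1                       # epoch to continue from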

The training log can be viewed in a TensorBoard session by running

tensorboard --logdir=<CKPT_DIR> --port=8888

Testing

We provide test code to generate: 1) superpixel visualizations and 2) .csv files for evaluation.

To test on BSDS500, run

python run_infer_bsds.py --data_dir=<BSDS_DIR> --output=<TEST_OUTPUT_DIR> --pretrained=<PATH_TO_THE_CKPT>

To test on NYUv2, please follow the instructions in the superpixel benchmark to generate the test dataset, and then run

python run_infer_nyu.py --data_dir=<NYU_TEST_DIR> --output=<TEST_OUTPUT_DIR> --pretrained=<PATH_TO_THE_CKPT>

To test on other datasets, first collect all the images into one folder <CUSTOM_DIR>, convert them to the same format (e.g. .png or .jpg) if necessary, and run

python run_demo.py --data_dir=<CUSTOM_DIR> --data_suffix=<IMG_SUFFIX> --output=<TEST_OUTPUT_DIR> --pretrained=<PATH_TO_THE_CKPT>

Superpixels with grid size 16 x 16 are generated by default. To generate superpixels with a different grid size, simply resize the images to the appropriate resolution before passing them to the code. Please refer to run_infer_nyu.py for details.
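
Since the model assigns roughly one superpixel per 16 x 16 cell, an image of size H x W yields about (H/16) x (W/16) superpixels. The helper below is an illustrative sketch of the resizing logic, not the repository's code; run_infer_nyu.py remains the authoritative reference.

# Sketch: resize an image so that a 16x16 grid yields roughly `n_spixels` superpixels.
# Illustrative helper only; see run_infer_nyu.py for the exact resizing the repo performs.
import math
from PIL import Image

def resize_for_spixel_count(image_path, out_path, n_spixels, grid=16):
    img = Image.open(image_path)
    w, h = img.size
    # We want (h'/grid) * (w'/grid) ~= n_spixels while keeping the aspect ratio.
    scale = math.sqrt(n_spixels * grid * grid / (w * h))
    new_w = max(grid, int(round(w * scale / grid)) * grid)  # snap to multiples of the grid
    new_h = max(grid, int(round(h * scale / grid)) * grid)
    img.resize((new_w, new_h), Image.BILINEAR).save(out_path)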

Evaluation

We use the code from the superpixel benchmark for superpixel evaluation. Detailed instructions are available in that repository; please

(1) download the code and build it accordingly;

(2) edit the variables $SUPERPIXELS, IMG_PATH, and GT_PATH in /eval_spixel/my_eval.sh;

(3) run

cp /eval_spixel/my_eval.sh <BENCHMARK_DIR>/examples/bash/
cd <BENCHMARK_DIR>/examples/
bash my_eval.sh

Several files should be generated in the map_csv folders of the corresponding test outputs;

(4) run

cd eval_spixel
python copy_resCSV.py --src=<TEST_OUTPUT_DIR> --dst=<PLOT_DIR>

(5) open /eval_spixel/plot_benchmark_curve.m, set our1l_res_path to <PLOT_DIR> (the --dst folder from the previous step), and modify num_list according to the test setting. The default setting is for our BSDS500 test set;

(6) run plot_benchmark_curve.m; the ASA score, CO score, and BR-BP curve of our method should be shown on the screen. If you wish to compare our method with others, first run those methods and organize their data as described above, then uncomment the corresponding code in plot_benchmark_curve.m to generate a figure similar to the ones shown in our paper.
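
For intuition, ASA (achievable segmentation accuracy) measures the fraction of pixels that would be correctly labeled if every superpixel were assigned to the ground-truth segment it overlaps most. Below is a small illustrative reimplementation; the official scores come from the benchmark's C++ code, not from this snippet.

# Illustrative ASA computation; official numbers come from the superpixel benchmark.
import numpy as np

def achievable_segmentation_accuracy(superpixels, ground_truth):
    """superpixels, ground_truth: (H, W) integer label maps."""
    total = 0
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        # Credit this superpixel with its best possible ground-truth assignment.
        labels, counts = np.unique(ground_truth[mask], return_counts=True)
        total += counts.max()
    return total / superpixels.size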

Acknowledgement

The code is implemented based on superpixel_fcn. We would like to express our sincere thanks to the contributors.

Cite

If you use SIN in your work, please cite our paper:

@inproceedings{yuan2021sin,
  title={SIN: Superpixel Interpolation Network},
  author={Yuan, Qing and Lu, Songfeng and Huang, Yan and Sha, Wuxin},
  booktitle={PRICAI},
  year={2021}
}
