ICCV2021 - Mining Contextual Information Beyond Image for Semantic Segmentation


Introduction

This is the official repository for "Mining Contextual Information Beyond Image for Semantic Segmentation" (ICCV 2021). The full code has been merged into sssegmentation.

Abstract

This paper studies the context aggregation problem in semantic image segmentation. Existing research focuses on improving pixel representations by aggregating contextual information within individual images. Though impressive, these methods neglect the significance of representations of pixels of the corresponding class beyond the input image. To address this, this paper proposes to mine contextual information beyond individual images to further augment the pixel representations. We first set up a feature memory module, updated dynamically during training, to store the dataset-level representations of the various categories. Then, we learn the class probability distribution of each pixel representation under the supervision of the ground-truth segmentation. Finally, the representation of each pixel is augmented by aggregating the dataset-level representations according to its class probability distribution. Furthermore, by utilizing the stored dataset-level representations, we also propose a representation consistent learning strategy to help the classification head better address intra-class compactness and inter-class dispersion. The proposed method can be effortlessly incorporated into existing segmentation frameworks (e.g., FCN, PSPNet, OCRNet and DeepLabV3) and brings consistent performance improvements. Mining contextual information beyond the image allows us to report state-of-the-art performance on various benchmarks: ADE20K, LIP, Cityscapes and COCO-Stuff.
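To make the aggregation step concrete, below is a minimal PyTorch sketch of the idea described above: a per-class feature memory updated during training, and pixel features augmented with dataset-level context weighted by each pixel's predicted class distribution. The class name `FeatureMemory`, the momentum update, and all hyperparameters are illustrative assumptions, not the implementation merged into sssegmentation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMemory(nn.Module):
    """Per-class, dataset-level feature memory (illustrative sketch)."""

    def __init__(self, num_classes, feat_dim, momentum=0.9):
        super().__init__()
        # one dataset-level representation per category, updated during training
        self.register_buffer("memory", torch.zeros(num_classes, feat_dim))
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        # feats: (N, C) pixel features; labels: (N,) ground-truth class indices
        for cls in labels.unique():
            cls_mean = feats[labels == cls].mean(dim=0)
            self.memory[cls] = self.momentum * self.memory[cls] + (1 - self.momentum) * cls_mean

    def forward(self, feats, class_logits):
        # feats: (B, C, H, W) pixel features; class_logits: (B, K, H, W) per-pixel class scores
        b, c, h, w = feats.shape
        probs = F.softmax(class_logits, dim=1)                        # per-pixel class distribution
        probs = probs.permute(0, 2, 3, 1).reshape(-1, probs.size(1))  # (B*H*W, K)
        context = probs @ self.memory                                 # (B*H*W, C) dataset-level context
        context = context.reshape(b, h, w, c).permute(0, 3, 1, 2)     # back to (B, C, H, W)
        # augment each pixel representation with the mined dataset-level context
        return torch.cat([feats, context], dim=1)
```

In practice the augmented features would be fed to the decode head in place of the original ones; see sssegmentation for the actual implementation.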

Framework

[Framework overview figure]
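The abstract also mentions a representation consistent learning strategy that reuses the stored dataset-level representations to supervise the classification head. The sketch below shows one plausible way to use them (classifying each memory vector against its own category); it is a hedged illustration of the idea, not the paper's exact formulation, and `memory_consistency_loss` and `classifier` are assumed names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def memory_consistency_loss(memory: torch.Tensor, classifier: nn.Module) -> torch.Tensor:
    """Push the classifier to map each dataset-level representation to its own class.

    memory: (K, C) dataset-level representations, one per category.
    classifier: a head mapping C-dim features to K class logits (e.g., nn.Linear(C, K)).
    """
    logits = classifier(memory)                                    # (K, K)
    targets = torch.arange(memory.size(0), device=memory.device)   # class k should be predicted as k
    return F.cross_entropy(logits, targets)
```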

Performance

COCOStuff-10k

| Model | Backbone | Crop Size | Schedule (LR/policy/batch size/epochs) | Train/Eval Set | mIoU / mIoU (ms+flip) | Download |
|---|---|---|---|---|---|---|
| DeepLabV3 | R-50-D8 | 512x512 | 0.001/poly/16/110 | train/test | 38.84% / 39.68% | model \| log |
| DeepLabV3 | R-101-D8 | 512x512 | 0.001/poly/16/110 | train/test | 39.84% / 41.49% | model \| log |
| DeepLabV3 | S-101-D8 | 512x512 | 0.001/poly/32/150 | train/test | 41.18% / 42.15% | model \| log |
| DeepLabV3 | HRNetV2p-W48 | 512x512 | 0.001/poly/16/110 | train/test | 39.77% / 41.35% | model \| log |
| DeepLabV3 | ViT-Large | 512x512 | 0.001/poly/16/110 | train/test | 44.01% / 45.23% | model \| log |
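The "Schedule" column lists the base learning rate, LR policy, batch size, and number of training epochs for each run. For reference, below is a minimal sketch of the commonly used "poly" learning-rate policy referenced there; the decay power of 0.9 is a typical default and an assumption here, not a value stated in this README.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial ("poly") LR decay: starts at base_lr and decays to 0 at max_iter."""
    return base_lr * (1 - cur_iter / max_iter) ** power

# Example: LR at the halfway point of training with base LR 0.001 (COCO-Stuff schedule above)
print(poly_lr(0.001, cur_iter=5000, max_iter=10000))  # ~0.000536
```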

ADE20k

| Model | Backbone | Crop Size | Schedule (LR/policy/batch size/epochs) | Train/Eval Set | mIoU / mIoU (ms+flip) | Download |
|---|---|---|---|---|---|---|
| DeepLabV3 | R-50-D8 | 512x512 | 0.01/poly/16/130 | train/val | 44.39% / 45.95% | model \| log |
| DeepLabV3 | R-101-D8 | 512x512 | 0.01/poly/16/130 | train/val | 45.66% / 47.22% | model \| log |
| DeepLabV3 | S-101-D8 | 512x512 | 0.004/poly/16/180 | train/val | 46.63% / 47.36% | model \| log |
| DeepLabV3 | HRNetV2p-W48 | 512x512 | 0.004/poly/16/180 | train/val | 45.79% / 47.34% | model \| log |
| DeepLabV3 | ViT-Large | 512x512 | 0.01/poly/16/130 | train/val | 49.73% / 50.99% | model \| log |

Cityscapes

| Model | Backbone | Crop Size | Schedule (LR/policy/batch size/epochs) | Train/Eval Set | mIoU (ms+flip) | Download |
|---|---|---|---|---|---|---|
| DeepLabV3 | R-50-D8 | 512x1024 | 0.01/poly/16/440 | trainval/test | 79.90% | model \| log |
| DeepLabV3 | R-101-D8 | 512x1024 | 0.01/poly/16/440 | trainval/test | 82.03% | model \| log |
| DeepLabV3 | S-101-D8 | 512x1024 | 0.01/poly/16/500 | trainval/test | 81.59% | model \| log |
| DeepLabV3 | HRNetV2p-W48 | 512x1024 | 0.01/poly/16/500 | trainval/test | 82.55% | model \| log |

LIP

| Model | Backbone | Crop Size | Schedule (LR/policy/batch size/epochs) | Train/Eval Set | mIoU / mIoU (flip) | Download |
|---|---|---|---|---|---|---|
| DeepLabV3 | R-50-D8 | 473x473 | 0.01/poly/32/150 | train/val | 53.73% / 54.08% | model \| log |
| DeepLabV3 | R-101-D8 | 473x473 | 0.01/poly/32/150 | train/val | 55.02% / 55.42% | model \| log |
| DeepLabV3 | S-101-D8 | 473x473 | 0.007/poly/40/150 | train/val | 56.21% / 56.34% | model \| log |
| DeepLabV3 | HRNetV2p-W48 | 473x473 | 0.007/poly/40/150 | train/val | 56.40% / 56.99% | model \| log |

Citation

If this code is useful for your research, please consider citing:

@article{jin2021mining,
  title={Mining Contextual Information Beyond Image for Semantic Segmentation},
  author={Jin, Zhenchao and Gong, Tao and Yu, Dongdong and Chu, Qi and Wang, Jian and Wang, Changhu and Shao, Jie},
  journal={arXiv preprint arXiv:2108.11819},
  year={2021}
}