FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.


[Project] [Paper] [arXiv] [Home]


Official implementation of FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.
A Faster, Stronger and Lighter framework for semantic segmentation, achieving state-of-the-art performance and more than 3x acceleration.

@inproceedings{wu2019fastfcn,
  title     = {FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation},
  author    = {Wu, Huikai and Zhang, Junge and Huang, Kaiqi and Liang, Kongming and Yu, Yizhou},
  booktitle = {arXiv preprint arXiv:1903.11816},
  year = {2019}
}

Contact: Hui-Kai Wu ([email protected])

Update

2020-04-15: Inference on a single image is now supported!

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m experiments.segmentation.test_single_image --dataset [pcontext|ade20k] \
    --model [encnet|deeplab|psp] --jpu [JPU|JPU_X] \
    --backbone [resnet50|resnet101] [--ms] --resume {MODEL} --input-path {INPUT} --save-path {OUTPUT}
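
For example, a single-GPU run on PContext with an EncNet+JPU (ResNet-50) checkpoint might look like the following (the checkpoint, input and output paths below are placeholders, not files shipped with the repo):

CUDA_VISIBLE_DEVICES=0 python -m experiments.segmentation.test_single_image --dataset pcontext \
    --model encnet --jpu JPU --backbone resnet50 \
    --resume encnet_jpu_res50_pcontext.pth.tar --input-path input.jpg --save-path output.png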

2020-04-15: A new joint upsampling module is now available!

  • --jpu [JPU|JPU_X]: JPU is the original module in the arXiv paper; JPU_X is a pyramid version of JPU.

2020-02-20: FastFCN can now run on every OS with PyTorch>=1.1.0 and Python==3.*.*

  • Replace all C/C++ extensions with pure python extensions.

Version

  1. Original code, producing the results reported in the arXiv paper. [branch:v1.0.0]
  2. Pure PyTorch code, with torch.nn.parallel.DistributedDataParallel and torch.nn.SyncBatchNorm (see the sketch after this list). [branch:latest]
  3. Pure Python code. [branch:master]
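
A minimal sketch of how SyncBatchNorm and DistributedDataParallel are typically combined in such a setup (not the repository's exact code; it assumes the process group has already been initialized, e.g. via torch.distributed.launch, and the helper name is hypothetical):

    import torch.nn as nn

    def wrap_for_distributed(model: nn.Module, local_rank: int) -> nn.Module:
        # Replace BatchNorm layers with SyncBatchNorm so statistics are
        # synchronized across GPUs, then wrap the model in DistributedDataParallel.
        model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
        model = model.cuda(local_rank)
        return nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])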

Overview

Framework

Joint Pyramid Upsampling (JPU)
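
As a rough illustration of the idea (not the official implementation), JPU projects the last three backbone feature maps (conv3, conv4, conv5), upsamples them to the conv3 resolution, concatenates them, and then applies several parallel dilated convolutions whose outputs are concatenated. A minimal PyTorch sketch, assuming ResNet-style channel counts and using plain 3x3 convolutions where the paper uses separable ones:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleJPU(nn.Module):
        def __init__(self, in_channels=(512, 1024, 2048), width=512, dilations=(1, 2, 4, 8)):
            super().__init__()
            # Project each backbone feature map (conv3, conv4, conv5) to `width` channels.
            self.reduce = nn.ModuleList([
                nn.Sequential(nn.Conv2d(c, width, 3, padding=1, bias=False),
                              nn.BatchNorm2d(width), nn.ReLU(inplace=True))
                for c in in_channels])
            # Parallel dilated convolutions over the fused, upsampled feature map.
            self.dilated = nn.ModuleList([
                nn.Sequential(nn.Conv2d(3 * width, width, 3, padding=d, dilation=d, bias=False),
                              nn.BatchNorm2d(width), nn.ReLU(inplace=True))
                for d in dilations])

        def forward(self, c3, c4, c5):
            feats = [conv(f) for conv, f in zip(self.reduce, (c3, c4, c5))]
            h, w = feats[0].shape[2:]
            # Upsample the conv4/conv5 features to the conv3 resolution and fuse.
            feats = [feats[0]] + [F.interpolate(f, size=(h, w), mode='bilinear',
                                                align_corners=True) for f in feats[1:]]
            fused = torch.cat(feats, dim=1)
            # The concatenated multi-dilation responses serve as the high-resolution output.
            return torch.cat([conv(fused) for conv in self.dilated], dim=1)

The high-resolution feature map produced this way is fed to the segmentation head in place of the output of a dilated backbone, which is where the speedup comes from.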

Install

  1. PyTorch >= 1.1.0 (Note: the code is tested in an environment with python=3.6, cuda=9.0)
  2. Download FastFCN
    git clone https://github.com/wuhuikai/FastFCN.git
    cd FastFCN
    
  3. Install Requirements
    nose
    tqdm
    scipy
    cython
    requests
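
    These can be installed with pip in one step (assuming a standard Python environment):

    pip install nose tqdm scipy cython requests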
    

Train and Test

PContext

python -m scripts.prepare_pcontext
Method                Backbone    mIoU        FPS     Model        Scripts
EncNet                ResNet-50   49.91       18.77
EncNet+JPU (ours)     ResNet-50   51.05       37.56   GoogleDrive  bash
PSP                   ResNet-50   50.58       18.08
PSP+JPU (ours)        ResNet-50   50.89       28.48   GoogleDrive  bash
DeepLabV3             ResNet-50   49.19       15.99
DeepLabV3+JPU (ours)  ResNet-50   50.07       20.67   GoogleDrive  bash
EncNet                ResNet-101  52.60 (MS)  10.51
EncNet+JPU (ours)     ResNet-101  54.03 (MS)  32.02   GoogleDrive  bash
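
A typical EncNet+JPU training command on PContext, adapted from the issue threads in the Comments section below (these flags follow the older train.py interface, where --jpu is a plain switch; the checkname is just a label for the run):

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss \
    --backbone resnet50 --checkname encnet_jpu_res50_pcontext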

ADE20K

python -m scripts.prepare_ade20k

Training Set

Method             Backbone    mIoU (MS)  Model        Scripts
EncNet             ResNet-50   41.11
EncNet+JPU (ours)  ResNet-50   42.75      GoogleDrive  bash
EncNet             ResNet-101  44.65
EncNet+JPU (ours)  ResNet-101  44.34      GoogleDrive  bash

Training Set + Val Set

Method             Backbone    FinalScore (MS)  Model        Scripts
EncNet+JPU (ours)  ResNet-50                    GoogleDrive  bash
EncNet             ResNet-101  55.67
EncNet+JPU (ours)  ResNet-101  55.84            GoogleDrive  bash

Note: EncNet (ResNet-101) is trained with crop_size=576, while EncNet+JPU (ResNet-101) is trained with crop_size=480 so that 4 images fit on a 12G GPU.

Visual Results

Qualitative comparisons (Input / GT / EncNet / Ours) on PContext and ADE20K.

More Visual Results

Acknowledgement

Code borrows heavily from PyTorch-Encoding.

Comments
  • Some problems when running test.py and train.py

    Hi, I am a beginner in deep learning, and some problems occurred when I was running the code. First, I used the command `tar -xvf encnet_jpu_res50_pcontext.pth.tar` to extract the tar file, but it failed. Second, if I successfully extract the file and get the checkpoint, which folder should I put the checkpoint in? Where should I extract the checkpoint file to? Thank you!

    opened by pp00704831 18
  • Why can I still train the model after removing JPU?

    Why does the code still execute without error when I delete the JPU module (/FastFCN/encoding/nn/customize.py)? I can still train the model. This is my command (I did load the JPU module): CUDA_VISIBLE_DEVICES=4,5,6,7 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    opened by E18301194 17
  • Segmentation fault

    I think this problem is caused by my earlier PyTorch problem, so maybe I have to fix PyTorch first. Could you give me some help? gcc: 4.8, PyTorch: 1.1.0, Python: 3.5. Also, how can I change the PyTorch version to 1.0.0? pip install torch==1.0?

    opened by Anikily 12
  • Performance issue

    Thanks for your work. I have tried this script: https://github.com/wuhuikai/FastFCN/blob/master/experiments/segmentation/scripts/encnet_res50_pcontext.sh with the following hardware and software: 4x Titan Xp, Ubuntu 16.04, CUDA 9.0, PyTorch 1.0.

    But I can't reproduce the performance reported in your paper. I got pixAcc: 0.7747, mIoU: 0.4785 for single-scale, and pixAcc: 0.7833, mIoU: 0.4898 for multi-scale.

    I would appreciate your help. Thanks for your consideration.

    bug 
    opened by tonysy 12
  • FastFCN has been supported by MMSegmentation

    Hi, FastFCN is now supported by MMSegmentation. We do find that using JPU with smaller feature maps from the backbone can reach similar or higher performance than the original models with larger feature maps.

    There is still something left for us to do; for example, we do not see an obvious improvement in FPS in our implementation, so we will try to figure that out in the future.

    Anyway, thanks for your work, and we hope more people from the community will use FastFCN.

    Best,

    opened by MengzhangLI 9
  • RuntimeError: Failed downloading

    Hi, thanks for your work. I tried to run your code to train a model on the PASCAL Context dataset, but I got the following error: RuntimeError: Failed downloading url https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip. The problem is that I cannot download the pretrained model, and the author no longer provides the pretrained ResNet models: https://github.com/zhanghang1989/PyTorch-Encoding/issues/273

    How can I solve this problem? Thanks for your consideration.

    opened by bufferXia 9
  • How should I set "resume" when running test_single_image?

    Hello!

    When I ran test_single_image.py, I tried to set resume to the path of resnet101-2a57e44d.pth and encountered an error:

    File "G:/gitfolder/FastFCN/experiments/segmentation/test_single_image.py", line 43, in test model.load_state_dict(checkpoint['state_dict'], strict=False) KeyError: 'state_dict'

    I suspect there is a problem with "resume". Waiting for your reply.

    Thank you!

    opened by CN-HaoJiang 8
  • Questions about the SE-loss and Aux-loss

    Hi, first of all, thank you for the great work. I checked the code and ran some of the scripts. I am confused by the final loss, which is a composite of three individual losses. Could you tell me what the se-loss and the aux-loss are used for?

    opened by meanmee 7
  • Backbone weights download links not working anymore

    The download links for the backbones do not seem to work anymore.

    I've tested Resnet50 (https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip) and Resnet101 (https://hangzh.s3.amazonaws.com/encoding/models/resnet101-2a57e44d.zip).

    I also tried to use torchvision weights instead, but I got mismatch errors when trying to load them.

    Could you consider re-uploading the weights? That would be very helpful!

    opened by Khroto 6
  • Segmentation Fault

    I ran the following command to start training, but a segmentation fault occurred. Does anyone else have this problem? Thanks for the help!

    run: CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    crashed: Using poly LR Scheduler! Starting Epoch: 0 Total Epoches: 80 0%| | 0/312 [00:00<?, ?it/s] =>Epoches 0, learning rate = 0.0010, previous best = 0.0000 Segmentation fault

    Hardware: Nvidia Tesla P100-PCIE 16G x 4, GenuineIntel CPU x 18, 140G memory in total

    opened by SimonTsungHanKuo 6
  • Need your suggestions

    Hi, I have designed this SPP module for my network, but I am also interested in your work and would like to replace my module with JPU. Would you like to give me any suggestions? Here is my implementation:

    import torch.nn as nn
    import torch.nn.functional as F

    class SPP(nn.Module):
        def __init__(self, pool_sizes):
            super(SPP, self).__init__()
            self.pool_sizes = pool_sizes

        def forward(self, x):
            h, w = x.shape[2:]
            k_sizes = []
            strides = []
            for pool_size in self.pool_sizes:
                # Kernel size and stride chosen so each pooling level tiles the input.
                k_sizes.append((int(h / pool_size), int(w / pool_size)))
                strides.append((int(h / pool_size), int(w / pool_size)))

            spp_sum = x

            for i in range(len(self.pool_sizes)):
                out = F.avg_pool2d(x, k_sizes[i], stride=strides[i], padding=0)
                # F.upsample is deprecated; F.interpolate is the current equivalent.
                out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
                spp_sum = spp_sum + out

            return spp_sum
    
    opened by haideralimughal 5
  • Add ResNeSt and Xception65

    This copies ResNeSt and Xception65 from PyTorch-Encoding; Xception65 can only be used without pretrained models.

    Please be careful, as there are many changes!

    I tested it on my own server and everything seems OK. As a precaution, maybe you could test it yourself first. My FastFCN

    I didn't change the Readme.md and the *.sh scripts; maybe you can update them if you accept this request.

    If server resources are not tight, I will run encnet+jpu+resnest101+pcontext and encnet+jpu_x+resnest101+pcontext, and I will share the results in the issues or open another pull request for Readme.md with my pth.tar.

    Thanks for your work again.

    opened by tjj1998 1
Releases (v1.0.0)