Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation

Overview

This is the inference code for Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation, implemented in TensorFlow (paper link). Given an image and its trimap, it estimates the alpha matte and foreground color.
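
A trimap marks each pixel as known foreground, known background, or unknown; the matte is solved inside the unknown band. If you need to build a trimap for your own image, a minimal sketch is shown below. It assumes OpenCV is available and uses the common 0 / 128 / 255 encoding, which is an assumption rather than something taken from this repository.

# Sketch only: derive a trimap from a binary foreground mask.
# The 0 (background) / 128 (unknown) / 255 (foreground) encoding is a common
# convention and an assumption here, not taken from this repository.
import cv2
import numpy as np

def make_trimap(mask, band=20):
    # mask: uint8 binary mask (0 or 255); band: width of the unknown region in pixels
    kernel = np.ones((band, band), np.uint8)
    sure_fg = cv2.erode(mask, kernel)             # pixels that stay foreground after erosion
    unknown = cv2.dilate(mask, kernel) - sure_fg  # uncertain band around the boundary
    trimap = np.where(sure_fg > 0, 255, 0).astype(np.uint8)
    trimap[unknown > 0] = 128
    return trimap

mask = (cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE) > 127).astype(np.uint8) * 255
cv2.imwrite('my_trimap.png', make_trimap(mask))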

Paper

Setup

Requirements

System: Ubuntu

TensorFlow version: tf1.8, tf1.12, or tf1.13 (other versions might also work)

GPU memory: >= 12 GB

System RAM: >= 64 GB
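
As a quick sanity check of the environment (a minimal sketch, not part of the repository), you can confirm the TensorFlow version and that a GPU is visible before running the demo:

# Minimal environment check: confirm the TensorFlow 1.x version and GPU visibility.
import tensorflow as tf

print('TensorFlow version:', tf.__version__)         # expect 1.8 / 1.12 / 1.13
print('GPU available:', tf.test.is_gpu_available())  # TF 1.x API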

Download codes and models

1. Clone the Context-Aware Matting repository:

git clone https://github.com/hqqxyy/Context-Aware-Matting.git

2. Download our models here. Extract them and move them to the root of this repository:

tar -xvf model.tgz

After moving, the directory structure should look like this:

.
├── conmat
│   ├── common.py
│   ├── core
│   ├── demo.py
│   ├── model.py
│   └── utils
├── examples
│   ├── img
│   └── trimap
├── model
│   ├── lap
│   ├── lap_fea_da
│   └── lap_fea_da_color
└── README.md

Run

First, set the image and trimap paths:

export IMAGEPATH=./examples/img/2848300_93d0d3a063_o.png
export TRIMAPPATH=./examples/trimap/2848300_93d0d3a063_o.png

For model (3) ME+CE+lap in the paper, run:

python conmat/demo.py \
--checkpoint=./model/lap/model.ckpt \
--vis_logdir=./log/lap/ \
--fgpath=$IMAGEPATH \
--trimappath=$TRIMAPPATH \
--model_parallelism=True

You can find the result at ./log/

For model (5) ME+CE+lap+fea+DA in the paper (please use this model for real-world images), run:

python conmat/demo.py \
--checkpoint=./model/lap_fea_da/model.ckpt \
--vis_logdir=./log/lap_fea_da/ \
--fgpath=$IMAGEPATH \
--trimappath=$TRIMAPPATH \
--model_parallelism=True

You can find the result at ./log/

For model (7) ME+CE+lap+fea+color+DA in the paper, run:

python conmat/demo.py \
--checkpoint=./model/lap_fea_da_color/model.ckpt \
--vis_logdir=./log/lap_fea_da_color/ \
--fgpath=$IMAGEPATH \
--trimappath=$TRIMAPPATH \
--branch_vis=1 \
--branch_vis=1 \
--model_parallelism=True

You can find the result at ./log/
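
Once the demo has finished, the estimated alpha matte and foreground can be composited onto a new background with the standard matting equation I = alpha * F + (1 - alpha) * B. The snippet below is a sketch only; the output filenames under ./log/ are assumptions, so adjust them to whatever files the demo actually writes.

# Sketch only: composite the estimated foreground onto a new background.
# The filenames 'alpha.png' and 'fg.png' under ./log/ are assumptions; adjust
# them to the files the demo actually writes.
import numpy as np
from PIL import Image

alpha_img = Image.open('./log/lap_fea_da/alpha.png').convert('L')
fg_img    = Image.open('./log/lap_fea_da/fg.png').convert('RGB')
bg_img    = Image.open('new_background.jpg').convert('RGB').resize(fg_img.size)

alpha = np.asarray(alpha_img, dtype=np.float32)[..., None] / 255.0
fg    = np.asarray(fg_img, dtype=np.float32)
bg    = np.asarray(bg_img, dtype=np.float32)

comp = alpha * fg + (1.0 - alpha) * bg       # I = alpha * F + (1 - alpha) * B
Image.fromarray(comp.clip(0, 255).astype(np.uint8)).save('composite.png')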

Note

Please note that since the input images are high resolution, you might need a GPU with at least 12 GB of memory. You can set --model_parallelism=True to further reduce GPU memory usage.

If you still run into memory problems, you can run the code on the CPU by disabling the GPU:

export CUDA_VISIBLE_DEVICES=''

and set --model_parallelism=False. Alternatively, you can resize the image and trimap to a smaller size and change --vis_comp_crop_size and --vis_patch_crop_size accordingly, as sketched below.
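
If you take the resizing route, a minimal sketch (assuming PIL; not part of the repository) is shown below. Resize the trimap with nearest-neighbor interpolation so its discrete labels are not blended.

# Sketch only: downscale the input image and trimap before running the demo.
# Use nearest-neighbor for the trimap so its labels stay discrete.
from PIL import Image

scale = 0.5  # example factor; pick whatever fits your GPU memory

img = Image.open('./examples/img/2848300_93d0d3a063_o.png')
tri = Image.open('./examples/trimap/2848300_93d0d3a063_o.png')

new_size = (int(img.width * scale), int(img.height * scale))
img.resize(new_size, Image.BILINEAR).save('./examples/img/resized.png')
tri.resize(new_size, Image.NEAREST).save('./examples/trimap/resized.png')

Then point IMAGEPATH and TRIMAPPATH at the resized files and pass matching --vis_comp_crop_size and --vis_patch_crop_size values.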

You can download our results on the Composition-1k dataset and the real-world image dataset here.

License

The provided implementation is strictly for academic purposes. If you are interested in using our technology for any commercial purpose, please feel free to contact us.

If you find this code helpful, please consider citing our paper:

@inproceedings{hou2019context,
  title={Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation},
  author={Hou, Qiqi and Liu, Feng},
  booktitle = {IEEE International Conference on Computer Vision},
  year = {2019}
}

If you find any bugs in the code, feel free to send me an email: qiqi2 AT pdx DOT edu. You can find more information on my homepage.

Acknowledgments

This project employs functions from DeepLab V3+ to implement our network. The source images in the demo figure are used under a Creative Commons license from Flickr users Robbie Sproule, MEGA PISTOLO, and Jeff Latimer. The background images are from the MS-COCO dataset. The images in the examples are from the Composition-1k dataset and the real-world image dataset. We thank them for their help.
