PyTorch code for 'Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning'

Overview

Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning

This repository is for EMSRDPN introduced in the following paper

Bin-Cheng Yang and Gangshan Wu, "Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning", [arxiv]

It is an extension of the conference paper

Bin-Cheng Yang. 2019. Super Resolution Using Dual Path Connections. In Proceedings of the 27th ACM International Conference on Multimedia (MM ’19), October 21–25, 2019, Nice, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3343031.3350878

The code is built on EDSR (PyTorch) and tested on Ubuntu 16.04 environment (Python3.7, PyTorch_1.1.0, CUDA9.0) with Titan X/Xp/V100 GPUs.

Contents

  1. Introduction
  2. Train
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements

Introduction

Deep convolutional neural networks have been demonstrated to be effective for SISR in recent years. On the one hand, residual connections and dense connections have been used widely to ease forward information and backward gradient flows and to boost performance. However, current methods use residual connections and dense connections separately in most network layers, which is sub-optimal. On the other hand, although various networks and methods have been designed to improve computation efficiency, save parameters, or utilize training data of multiple scale factors to boost performance, they either perform super-resolution in HR space, which incurs a high computation cost, or cannot share parameters between models of different scale factors to save parameters and inference time. To tackle these challenges, we propose an efficient single image super-resolution network using dual path connections with multiple scale learning, named EMSRDPN. By introducing dual path connections inspired by Dual Path Networks into EMSRDPN, it uses residual connections and dense connections in an integrated way in most network layers. Dual path connections have the benefits of both reusing common features, as residual connections do, and exploring new features, as dense connections do, to learn a good representation for SISR. To utilize the feature correlation of multiple scale factors, EMSRDPN shares all network units in LR space between different scale factors to learn shared features and only uses a separate reconstruction unit for each scale factor. This allows the training data of multiple scale factors to help each other boost performance, while saving parameters and supporting shared inference for multiple scale factors to improve efficiency. Experiments show that EMSRDPN achieves better performance and comparable or even better parameter and inference efficiency than SOTA methods.
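As a rough illustration of these two ideas (dual path connections, and a shared LR-space trunk with one reconstruction unit per scale factor), below is a minimal PyTorch sketch. It is not the official EMSRDPN architecture: the module names (DualPathBlock, TinyDPNSR), channel widths, growth rate, and block count are illustrative assumptions.

# Minimal sketch (not the official EMSRDPN code): dual path blocks combine a
# residual path (features reused by addition) with a dense path (new features
# explored by concatenation); only the reconstruction heads are per-scale.
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, in_channels, res_channels, dense_growth):
        super().__init__()
        self.res_channels = res_channels
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, res_channels + dense_growth, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(res_channels + dense_growth, res_channels + dense_growth, 3, padding=1),
        )

    def forward(self, res, dense):
        out = self.body(torch.cat([res, dense], dim=1))
        res_inc = out[:, :self.res_channels]
        dense_inc = out[:, self.res_channels:]
        # residual connection: reuse common features by addition;
        # dense connection: explore new features by concatenation
        return res + res_inc, torch.cat([dense, dense_inc], dim=1)

class TinyDPNSR(nn.Module):
    """Shared trunk in LR space; only the reconstruction heads differ per scale."""

    def __init__(self, scales=(2, 3, 4, 8), res_ch=64, dense_ch=16, growth=16, n_blocks=4):
        super().__init__()
        self.res_ch = res_ch
        self.head = nn.Conv2d(3, res_ch + dense_ch, 3, padding=1)
        blocks, in_ch = [], res_ch + dense_ch
        for _ in range(n_blocks):
            blocks.append(DualPathBlock(in_ch, res_ch, growth))
            in_ch += growth  # the dense path widens after every block
        self.blocks = nn.ModuleList(blocks)
        # one upsampling/reconstruction head per scale factor (the only unshared part)
        self.tails = nn.ModuleDict({
            str(s): nn.Sequential(
                nn.Conv2d(in_ch, 3 * s * s, 3, padding=1),
                nn.PixelShuffle(s),
            ) for s in scales
        })

    def forward(self, x, scale):
        feat = self.head(x)
        res, dense = feat[:, :self.res_ch], feat[:, self.res_ch:]
        for block in self.blocks:
            res, dense = block(res, dense)
        return self.tails[str(scale)](torch.cat([res, dense], dim=1))

if __name__ == "__main__":
    model = TinyDPNSR()
    lr = torch.randn(1, 3, 48, 48)
    print(model(lr, scale=4).shape)  # torch.Size([1, 3, 192, 192])

In this sketch everything up to the final concatenation is shared across scale factors, which mirrors the parameter-sharing idea described above; only the entries of self.tails are scale-specific.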

Train

Prepare training data

  1. Download the DIV2K training data (800 training images for x2, x3, x4, and x8) from the DIV2K dataset and the Flickr2K training data (2650 training images) from the Flickr2K dataset.

  2. Untar the downloaded files.

  3. Use src/generate_LR_x8.m to generate x8 LR data for the Flickr2K dataset; you need to modify 'folder' in src/generate_LR_x8.m to point to the directory where you placed the Flickr2K dataset. An approximate Python alternative is sketched below this list.

  4. Set '--dir_data' in src/option.py to the directory where you placed the DIV2K and Flickr2K datasets.

For more information, please refer to EDSR (PyTorch).
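If MATLAB is not available for step 3, the x8 LR images can be approximated in Python with bicubic downsampling. Note this is only an approximation: Pillow's bicubic resampling does not exactly match MATLAB's imresize, which the original script uses, and the directory names and file-name pattern below are assumptions to adapt to your layout.

# Approximate Python alternative to src/generate_LR_x8.m (assumption: Pillow
# bicubic, which differs slightly from MATLAB imresize). Paths are illustrative.
import os
from PIL import Image

hr_dir = "Flickr2K/Flickr2K_HR"             # assumed HR directory
lr_dir = "Flickr2K/Flickr2K_LR_bicubic/X8"  # assumed output directory
scale = 8

os.makedirs(lr_dir, exist_ok=True)
for name in sorted(os.listdir(hr_dir)):
    base, ext = os.path.splitext(name)
    if ext.lower() not in (".png", ".jpg"):
        continue
    hr = Image.open(os.path.join(hr_dir, name))
    # crop so the HR size is divisible by the scale factor before downsampling
    hr = hr.crop((0, 0, hr.width - hr.width % scale, hr.height - hr.height % scale))
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr.save(os.path.join(lr_dir, f"{base}x{scale}{ext}"))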

Begin to train

  1. Cd to 'src' and run the following scripts to train models.

    You can use the scripts in 'demo.sh' to train the models for our paper.

    To train a fresh model using the DIV2K dataset

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K

    To train a fresh model using the Flickr2K dataset

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train Flickr2K

    To train a fresh model using both the DIV2K and Flickr2K datasets to reproduce the results in the paper, you need to copy all the files in DIV2K_HR/ to Flickr2K_HR/ and copy all the directories in DIV2K_LR_bicubic/ to Flickr2K_LR_bicubic/, then use the following script

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train Flickr2K

    To continue training an unfinished model using the DIV2K dataset (the process for other datasets is similar)

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --resume -1 --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --load EMSRDPN_BIx2348

Test

Quick start

  1. Download the benchmark datasets from BaiduYun (access code: 20v5), place them in the directory specified by '--dir_data' in src/option.py, and untar them.

  2. Download the EMSRDPN model for our paper from BaiduYun (access code: d2ov) and place it in 'experiment/'. Other multiple-scale models can be downloaded from BaiduYun (access code: z5ey).

  3. Cd to 'src' and run the following scripts to test the downloaded EMSRDPN model.

    You can use the scripts in 'demo.sh' to reproduce the results in our paper.

    To test a trained model

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5+Set14+B100+Urban100+Manga109 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results

    To test a trained model using self-ensemble

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test+ --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5+Set14+B100+Urban100+Manga109 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results --self_ensemble
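    For reference, self-ensemble is typically implemented as a geometric ensemble: the LR input is flipped and rotated, each copy is super-resolved, the outputs are transformed back, and the results are averaged. The sketch below illustrates that idea, assuming a generic model callable that maps an LR tensor (N, C, H, W) and a scale factor to an SR tensor; it is not the repository's exact implementation of '--self_ensemble'.

    # Minimal sketch of geometric self-ensemble over 8 transforms (4 rotations x 2 flips).
    import torch

    def self_ensemble(model, lr, scale):
        outputs = []
        for rot in range(4):               # 0, 90, 180, 270 degree rotations
            for flip in (False, True):     # with and without horizontal flip
                x = torch.rot90(lr, rot, dims=(-2, -1))
                if flip:
                    x = torch.flip(x, dims=(-1,))
                y = model(x, scale)
                # undo the transforms on the SR output, in reverse order
                if flip:
                    y = torch.flip(y, dims=(-1,))
                y = torch.rot90(y, -rot, dims=(-2, -1))
                outputs.append(y)
        return torch.stack(outputs).mean(dim=0)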

    To test a trained model using multi-scale inference

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test_multi_scale_infer --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results --multi_scale_infer
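    As described in the Introduction, all network units in LR space are shared between scale factors, so multi-scale inference can reuse a single pass through the shared trunk for every reconstruction unit. The sketch below illustrates this with the hypothetical TinyDPNSR module defined earlier; it is not the code behind '--multi_scale_infer'.

    # Run the shared LR-space trunk once, then apply every scale-specific
    # reconstruction head to the same features (hypothetical TinyDPNSR module).
    import torch

    @torch.no_grad()
    def multi_scale_infer(model, lr):
        feat = model.head(lr)
        res, dense = feat[:, :model.res_ch], feat[:, model.res_ch:]
        for block in model.blocks:
            res, dense = block(res, dense)
        fused = torch.cat([res, dense], dim=1)
        # one SR output per scale factor from a single trunk pass
        return {int(s): tail(fused) for s, tail in model.tails.items()}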

Results

All the test results can be downloaded from BaiduYun (access code: oawz).

Citation

If you find the code helpful in your research or work, please cite the following papers.

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

@inproceedings{2019Super,
  title={Super Resolution Using Dual Path Connections},
  author={Yang, Bin-Cheng},
  booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
  year={2019},
  doi={10.1145/3343031.3350878}
}

@misc{yang2021efficient,
      title={Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning}, 
      author={Bin-Cheng Yang and Gangshan Wu},
      year={2021},
      eprint={2112.15386},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

Acknowledgements

This code is built on EDSR (PyTorch). We thank the authors for sharing their code.
