Open-source single-image super-resolution toolbox providing functionality for training a variety of state-of-the-art super-resolution models. Also serves as the companion code for the IEEE Signal Processing Letters paper 'Improving Super-Resolution Performance using Meta-Attention Layers'.

Overview

Deep-FIR Codebase - Super Resolution Meta Attention Networks

About

This repository contains the main coding framework accompanying our work on meta-attention in Single Image Super-Resolution (SISR), published in IEEE Signal Processing Letters (SPL) here. A sample of the results obtained by our metadata-enhanced models is provided below:

[Figure: training_system]

Installation

Python and Virtual Environments

If installing from scratch, it is recommended to first set up a new Python virtual environment. With Conda, this can be achieved as follows:

conda create -n *environment_name* python=3.7 (Python 3.7 recommended but not essential).

conda activate *environment_name*

Code testing was conducted in Python 3.7, but the code should work fine with Python 3.6+.

Local Installation

Run the following commands from the repo base directory to fully install the package and all requirements:

cd Code

If using CPU only: conda install --file requirements.txt --channel pytorch --channel conda-forge

If using CPU + GPU: First install PyTorch and the CUDA toolkit for your specific configuration using the instructions here. Then, install the requirements as above.
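As an illustrative sketch only (the cudatoolkit version below is an assumption; match it to your own drivers and hardware):

conda install pytorch torchvision cudatoolkit=11.3 --channel pytorch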

If using Aim for metrics logging, install via pip install aim. The Aim GUI does not work on Windows, but metrics should still be logged in the .aim folder.

Finally:

pip install -e .

This installs all the command-line functions from Code/setup.py.

All functionality has been tested on Linux (CPU & GPU), macOS (CPU) and Windows (CPU & GPU).

The requirements installation above is only meant as a guide; all requirements can also be installed using alternative means (e.g. pip).

Guidelines for Generating SR Data

Setting up CelebA Dataset

Create a folder 'celeba' in the Data directory. Into this folder, download all files from the CelebA source. Unpack all archives in this location. Run image_manipulate to generate LR images and corresponding metadata (see Documentation/data_prep.md for details on how to do this).
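For example, to generate 4x blurred-and-downsampled LR images (paths are placeholders; see data_prep.md for the exact pipeline used in our experiments):

image_manipulate --source_dir *path_to_celeba_images* --output_dir *path_to_new_folder* --pipeline blur-downscale --scale 4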

Setting up CelebA-HQ Dataset

CelebA-HQ files can be easily downloaded from here. To generate LR images, check Documentation/data_prep.md as with CelebA. For our IEEE SPL paper (super-resolving by 4x), we generated images using the following two commands:

To generate 512x512 HR images: image_manipulate --source_dir *path_to_original_images* --output_dir *path_to_new_folder* --pipeline downscale --scale 2

To generate 128x128 LR images: image_manipulate --source_dir *path_to_512x512_images* --output_dir *path_to_new_folder* --pipeline blur-downscale --scale 4

To generate pre-upscaled 512x512 LR images for SPARNet: image_manipulate --source_dir *path_to_128x128_images* --output_dir *path_to_new_folder* --pipeline upscale --scale 4

Setting up DIV2K/Flickr2K Datasets

DIV2K training/validation downloadable from here.
Flickr2K dataset downloadable from here.

Similar to CelebA-HQ, for our IEEE SPL paper (super-resolving by 4x), we generated LR images using the following command:

image_manipulate --source_dir *path_to_original_HR_images* --output_dir *path_to_new_folder* --pipeline blur-downscale --scale 4

For blurred & compressed images, we used the following command (make sure to first install JM to be able to compress the images, as detailed here):

image_manipulate --source_dir *path_to_original_HR_images* --output_dir *path_to_new_folder* --pipeline blur-downscale-jm_compress --scale 4 --random_compression

Setting up SR Testing Datasets

All SR testing datasets are available for download from the LapSRN main page here. Generate LR versions of each image using the same commands as used for the DIV2K/Flickr2K datasets.
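That is, for 4x blurred-and-downsampled LR images:

image_manipulate --source_dir *path_to_test_HR_images* --output_dir *path_to_new_folder* --pipeline blur-downscale --scale 4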

Additional Options

Further detail on generating LR data is provided in Documentation/data_prep.md.

Training/Evaluating Models

Training

To train models, prepare a configuration file (details in Documentation/model_training.md) and run:

train_sisr --parameters *path_to_config_file*

Evaluation

Similarly, for evaluation, prepare an eval config file (details in Documentation/model_eval.md) and run:

eval_sisr --config *path_to_config_file*

Standard SISR models available (code for each model adapted from its official repository, linked within the source code):

  1. SRCNN
  2. VDSR
  3. EDSR
  4. RCAN
  5. SPARNet
  6. SFTMD
  7. SRMD
  8. SAN
  9. HAN

Custom models available (all meta-models are marked with a Q-):

  1. Q-RCAN (meta-RCAN)
  2. Q-EDSR
  3. Q-SAN
  4. Q-HAN
  5. Q-SPARNet
  6. Various SFTMD variants (check SFTMD architectures file for options)

IEEE SPL Pre-Trained Model Weights

All weights for the models presented in our paper are available for download here. The models are split into three folders:

  • Models trained on blurry general images: These models were all trained on DIV2K/Flickr2K blurred/downsampled images. These include:
    • SRMD
    • SFTMD
    • RCAN
    • EDSR
    • SAN
    • HAN
    • Meta-RCAN
    • Meta-EDSR
    • Meta-SAN
    • Meta-HAN
  • Models trained on blurry and compressed general images: These models were all trained on DIV2K/Flickr2K blurred/downsampled/compressed images. These include:
    • RCAN
    • Meta-RCAN (accepting blur kernel data only)
    • Meta-RCAN (accepting compression QPI data only)
    • Meta-RCAN (accepting both blur kernels and compression QPI)
  • Models trained on blurry face images: These models were all trained on CelebA-HQ blurred/downsampled images. These include:
    • RCAN
    • SPARNet (note that SPARNet only accepts pre-upsampled images)
    • Meta-RCAN
    • Meta-SPARNet
  • Testing config files for all of these models are available in Documentation/SPL_testing_files. To use these, you need to first download and prepare the relevant datasets as shown here. Place the downloaded model folders in ./Results to use the config files as is, or adjust the model_loc parameter to point towards the directory containing the models.

Once downloaded, these models can be used directly with the eval command (eval_sisr) on any other input dataset, as discussed in the evaluation documentation (Documentation/model_eval.md).
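For example (the config file name is a placeholder for one of the provided testing files):

eval_sisr --config Documentation/SPL_testing_files/*chosen_config_file*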

Replicating SPL Results from Scratch

All training config files for models presented in our SPL paper are provided in Documentation/sample_config_files. These configurations assume that your training/eval data is stored in the relevant directory within ./Data, so please check that you have downloaded and prepared your datasets (as detailed above) before training.
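Training can then be launched directly with one of these files, e.g. (the file name is a placeholder):

train_sisr --parameters Documentation/sample_config_files/*chosen_config_file*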

Additional/Advanced Setup

Setting up JM (for compressing images)

Download the reference software from here. Place the software in the directory ./JM. cd into this directory and compile the software using the commands . unixprep.sh and make. Some changes might be required for different OS versions.
To compress images, simply add the jm_compress argument when specifying image_manipulate's pipeline.
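For reference, the full build sequence (assuming a Unix-like shell) is:

cd JM

. unixprep.sh

make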

Setting up VGGFace (Pytorch)

Download pre-trained weights for the VGGFace model from here (scroll to VGGFace). Place the weights file in the directory ./external_packages/VGGFace/. The weights file should be called vgg_face_dag.pth.

Setting up LightCNN

Download pre-trained weights for the LightCNN model from here (LightCNN-29 v1). Place the weights file in the directory ./external_packages/LightCNN/. The weights file should be called LightCNN_29Layers_checkpoint.pth.tar.
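After both downloads, the expected layout is:

./external_packages/VGGFace/vgg_face_dag.pth

./external_packages/LightCNN/LightCNN_29Layers_checkpoint.pth.tar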

Creating Custom Models

Information on how to develop and train your own models is available in Documentation/framework_development.md.

Full List of Commands Available

The entire list of commands available with this repository is:

  • train_sisr - main model training function.
  • eval_sisr - main model evaluation function.
  • image_manipulate - main bulk image converter.
  • images_to_video - helper function to convert a folder of images into a video.
  • extract_best_model - helper function to extract the model config and best model checkpoint from a folder to a target location.
  • clean_models - helper function to remove unnecessary model checkpoints.
  • model_report - helper function to report on the models available in a specified directory.

Each command can be run with the --help parameter, which will print out the available options and docstrings.
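For example:

train_sisr --help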

Uninstall

Simply run:

pip uninstall Deep-FIR-SR

from any directory, with the relevant virtual environment activated.

Citation

The paper is currently in early access; this citation will be updated once the paper is fully published.

@ARTICLE{Meta-Attention,
  author={Aquilina, Matthew and Galea, Christian and Abela, John and Camilleri, Kenneth P. and Farrugia, Reuben},
  journal={IEEE Signal Processing Letters},
  title={Improving Super-Resolution Performance using Meta-Attention Layers},
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/LSP.2021.3116518}}

License/Further Development

This code has been released under the GNU GPLv3 open-source license. However, the code can also be made available via an alternative closed, permissive license. Third parties interested in this form of licensing should contact us separately.

Usage of code from other repositories is properly referenced within the code itself.

We are working on a number of different research tasks in super-resolution and will be updating this repo as we make further advancements!

Short-term upgrades planned:

  • CI automated testing (alongside Pytest)
  • Release of packaged version
  • Other upgrades TBA