edge-SR: Super-Resolution For The Masses


Citation

Pablo Navarrete Michelini, Yunhua Lu, and Xingqun Jiang. "edge-SR: Super-Resolution For The Masses", in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022.

BibTeX

@inproceedings{eSR,
    title     = {edge--{SR}: Super--Resolution For The Masses},
    author    = {Navarrete~Michelini, Pablo and Lu, Yunhua and Jiang, Xingqun},
    booktitle = {Proceedings of the {IEEE/CVF} Winter Conference on Applications of Computer Vision ({WACV})},
    month     = {January},
    year      = {2022},
    pages     = {1078--1087},
    url       = {https://arxiv.org/abs/2108.10335}
}

Instructions:

  • Place input images in the input directory (provided as an empty directory). Color images will be converted to grayscale.

  • To upscale the images, run: python run.py

    Output images will be written to the output directory. A rough sketch of what this loop looks like is given after this list.

  • The GPU number and the model file can be changed in run.py (look for the comment "CHANGE HERE").
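
For reference, here is a minimal, hypothetical sketch of such an upscaling loop. It assumes the model file is a fully serialized PyTorch module that maps a 1-channel grayscale tensor to its upscaled counterpart; the file name model.pt and the device selection are placeholders for this sketch, not the repository's actual values (those are set in run.py at the "CHANGE HERE" comment).

import os
import numpy as np
import torch
from PIL import Image

# --- CHANGE HERE (placeholder values for this sketch only) ---
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model_file = 'model.pt'  # hypothetical path to a trained eSR model

# Assumes the file stores a full serialized module (not just a state dict).
model = torch.load(model_file, map_location=device)
model.eval()

os.makedirs('output', exist_ok=True)
for name in sorted(os.listdir('input')):
    # Color images are converted to grayscale ('L' mode), as noted above.
    img = Image.open(os.path.join('input', name)).convert('L')
    x = torch.from_numpy(np.array(img)).float().div(255.)
    x = x.unsqueeze(0).unsqueeze(0).to(device)  # shape (1, 1, H, W)
    with torch.no_grad():
        y = model(x).clamp(0., 1.)
    out = Image.fromarray((y[0, 0].cpu().numpy() * 255.).round().astype(np.uint8))
    out.save(os.path.join('output', name))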

Requirements:

  • Python 3, PyTorch, NumPy, Pillow, OpenCV
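
With a pip-based setup, these can typically be installed as follows (the repository does not pin exact versions, so treat this as a reasonable default rather than an official command):

pip install torch numpy pillow opencv-python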

Experiment results

  • The data directory contains the file tests.pkl, which holds a Python dictionary with all our test results on the different devices. The following sample code shows how to read the file:
>>> import pickle
>>> test = pickle.load(open('tests.pkl', 'rb'))
>>> test['Bicubic_s2']
    {'psnr_Set5': 33.72849620514912,
     'ssim_Set5': 0.9283912810369976,
     'lpips_Set5': 0.14221979230642318,
     'psnr_Set14': 30.286027790636204,
     'ssim_Set14': 0.8694934108301432,
     'lpips_Set14': 0.19383049915943826,
     'psnr_BSDS100': 29.571233006609656,
     'ssim_BSDS100': 0.8418117904964167,
     'lpips_BSDS100': 0.26246454380452633,
     'psnr_Urban100': 26.89378248655882,
     'ssim_Urban100': 0.8407461069831571,
     'lpips_Urban100': 0.21186692919582129,
     'psnr_Manga109': 30.850672809780587,
     'ssim_Manga109': 0.9340133711400112,
     'lpips_Manga109': 0.102985977955641,
     'parameters': 104,
     'speed_AGX': 18.72132628065749,
     'power_AGX': 1550,
     'speed_MaxQ': 632.5429857814075,
     'power_MaxQ': 50,
     'temperature_MaxQ': 76,
     'memory_MaxQ': 2961,
     'speed_RPI': 11.361346064182795,
     'usage_RPI': 372.8714285714285}

The keys of the dictionary identify the name of each model and its hyper-parameters using the following format:

  • Bicubic_s#,
  • eSR-MAX_s#_K#_C#,
  • eSR-TM_s#_K#_C#,
  • eSR-TR_s#_K#_C#,
  • eSR-CNN_s#_C#_D#_S#,
  • ESPCN_s#_D#_S#, or
  • FSRCNN_s#_D#_S#_M#,

where # represents an integer with the value of the corresponding hyper-parameter. For each model, the dictionary contains a second dictionary with the information shown above. This includes the number of model parameters; the image quality metrics PSNR, SSIM and LPIPS measured on five different datasets; and the power, speed, CPU usage, temperature and memory usage measured on the devices AGX (Jetson AGX Xavier), MaxQ (GTX 1080 MaxQ) and RPI (Raspberry Pi 400).
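
For example, following the key format above, the scale-2 eSR-MAX configurations can be ranked by their Set5 PSNR directly from this dictionary. This is only a sketch: the exact set of keys present in tests.pkl is not listed here, so the filter below relies on the naming convention rather than on verified entries.

>>> import pickle
>>> test = pickle.load(open('tests.pkl', 'rb'))
>>> x2_max = {k: v for k, v in test.items() if k.startswith('eSR-MAX_s2_')}
>>> for name, result in sorted(x2_max.items(), key=lambda kv: kv[1]['psnr_Set5'], reverse=True):
...     print(name, round(result['psnr_Set5'], 2), result['parameters'])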
