Bayesian Integration of a Generative Prior for Image Restoration

Overview

BIGPrior: Towards Decoupling Learned Prior Hallucination and Data Fidelity in Image Restoration

Authors: Majed El Helou and Sabine Süsstrunk

Requirements: Python 3.7, PyTorch 1.1.0, CUDA 10.1

Note: paper under submission.

BIGPrior pipeline

The figure below illustrates the BIGPrior pipeline, with a generative-network inversion for the learned prior.

[Paper]

Abstract: Image restoration tasks such as denoising, inpainting, and colorization encompass fundamental image processing problems that have been addressed with different algorithms and deep learning methods. Classical image restoration algorithms leverage a variety of priors, either implicitly or explicitly. Their priors are hand-designed and their corresponding weights are heuristically assigned. Thus, deep learning methods often produce superior image restoration quality. Deep networks are, however, capable of strong and hardly predictable hallucinations of the data to be restored. Networks jointly and implicitly learn to be faithful to the observed data while learning an image prior, and the separation of original data and hallucinated data downstream is then not possible. This limits their widespread adoption in image restoration applications. Furthermore, it is often the hallucinated part that is victim to degradation-model overfitting.

We present an approach with decoupled network-prior-based hallucination and data fidelity terms. We refer to our framework as the Bayesian Integration of a Generative Prior (BIGPrior). Our BIGPrior method is rooted in a Bayesian restoration framework and tightly connected to classical restoration methods. In fact, our approach can be viewed as a generalization of a large family of classical restoration algorithms. We leverage a recent network inversion method to extract image prior information from a generative network. We show on image colorization, inpainting, and denoising that our framework consistently improves the prior results through good integration of data fidelity. Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods. It also provides an additional metric that sets forth the degree of prior reliance per pixel. Indeed, the per-pixel contributions of the decoupled data fidelity and prior terms are readily available in our proposed framework.
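For context, the standard Bayesian (MAP) decomposition that such restoration frameworks build on separates a data fidelity term from a prior term. The formulation below is the generic textbook version, not the paper's exact objective:

\hat{x} = \arg\max_{x} \, p(x \mid y) = \arg\max_{x} \, p(y \mid x)\,p(x) = \arg\min_{x} \Big[ \underbrace{-\log p(y \mid x)}_{\text{data fidelity}} \; \underbrace{-\,\log p(x)}_{\text{prior}} \Big]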

Key take-aways: our paper presents a learning-based restoration framework that generalizes various families of classical methods. It is tightly connected both to Bayesian estimation, upon which it builds, and to classical dictionary methods. Our BIGPrior makes the explicit integration of learned-network priors possible, notably a generative-network prior. Its biggest advantage is that, by decoupling data fidelity and prior hallucination, it structurally provides a per-pixel fusion metric that determines the contribution of each. This can be important both for end users and for various downstream applications. We hope this work will foster future learning methods with clearly decoupled network hallucinations, both for interpretability and reliability, and to safeguard against the hazards of black-box restoration.
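To illustrate the decoupling, here is a minimal sketch of a per-pixel convex fusion between a prior estimate and the observed data. The mixing map phi is a stand-in for the learned fusion; the exact parametrization in the paper may differ:

import torch

def bigprior_style_fusion(prior_img, data_img, phi):
    """Per-pixel convex combination of a generative-prior estimate and the
    observed data. phi in [0, 1] is the per-pixel prior-reliance map, so it
    doubles as the fusion metric described above."""
    assert prior_img.shape == data_img.shape == phi.shape
    return phi * prior_img + (1.0 - phi) * data_img

# Hypothetical usage with random tensors standing in for real images:
prior = torch.rand(3, 256, 256)  # inverted-GAN prior estimate
data = torch.rand(3, 256, 256)   # degraded observation (e.g. grayscale input)
phi = torch.rand(3, 256, 256)    # learned per-pixel mixing map
restored = bigprior_style_fusion(prior, data, phi)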

Structure overview

All code is in the code directory, and input data are in the data folder. The net_data directory stores the network weights per epoch (along with many other trackers and all experiment parameters); it uses an automated index-incrementation strategy on top of the experiment name to avoid overwriting. We generate a lot of intermediate data for the different experiments; along with the final outputs, these are written to inter_data.

Data setup

The needed data are already stored under data. If you want to repeat our experiments with different datasets, we added a help README under data/lsun/ explaining how to pre-process the LSUN data.

Generative inversion

The generative inversion we use is based on mGAN, but we made some modifications to their code, which is why we have our own version in this repository.
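For intuition, generative inversion optimizes latent codes so that the generator's output, after applying the task's degradation, matches the observation. The simplified single-code sketch below is only illustrative (mGAN actually composes multiple latent codes with adaptive channel importance); generator, degrade, and the latent size are assumptions here:

import torch
import torch.nn.functional as F

def invert(generator, degrade, y, steps=500, lr=0.1):
    """Recover a prior image by optimizing a latent code so that the degraded
    generator output matches the observation y."""
    z = torch.randn(1, 512, requires_grad=True)  # assumed PGGAN latent size
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # The real code also uses a perceptual (VGG-based) loss; MSE alone here.
        loss = F.mse_loss(degrade(generator(z)), y)
        loss.backward()
        optimizer.step()
    return generator(z).detach()  # prior estimate used downstream by BIGPrior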

(1) You need to download the pre-trained generative networks (we use PGGAN) and put the pretrain folder inside code/mganprior/models/. You can download them from the original repo, from mGAN's, or from our link right here.

(2) (recommended) You might face some bugs with the perceptual VGG-based loss due to caching, if you run parallel experiments or run on remote servers. We recommend caching the pretrained model. To do this, first download the VGG model vgg16-397923af.pth and paste it inside cache/torch/checkpoints/, then before starting an experiment run:

export XDG_CACHE_HOME=cache/
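As an optional sanity check (an assumed helper, not part of the repo), you can confirm the cached weights sit at the path torch derives from XDG_CACHE_HOME:

import os

# torch's model zoo resolves $XDG_CACHE_HOME/torch/checkpoints, matching the export above
ckpt = os.path.join(os.environ.get("XDG_CACHE_HOME", "cache/"),
                    "torch", "checkpoints", "vgg16-397923af.pth")
print("found cached VGG weights" if os.path.isfile(ckpt) else "missing: " + ckpt)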

(3) We compiled the commands for all experiments in the bash file runall_mGAN.sh; you can find the templates inside to rerun each experiment.

Training

The train_cnn.sh bash script compiles the commands to retrain all our experiments, for instance for colorization:

python code/train.py --experiment col_bedroom --lr 0.01 --batch_size 8 --backbone D --phi_weight 1e-5

The experiment name is parsed in two parts to determine the task and the dataset; the remaining arguments control the network and training parameters. All are detailed in code/train.py.
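In other words, the name splits at the underscore; a minimal illustration (the actual parsing lives in code/train.py):

experiment = "col_bedroom"
task, dataset = experiment.split("_", 1)  # -> ("col", "bedroom"): colorization on LSUN bedrooms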

If you retrain multiple times for a given experiment, every run is saved with an incremented ID starting from 0, and the corresponding parameters are also saved as OURargs.txt next to the network checkpoints.
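The incrementation behaves roughly like the sketch below; the directory naming is an assumption for illustration, not the repo's exact scheme:

import os

def next_run_dir(root, experiment):
    """Create and return the first non-existing <root>/<experiment>/<id> run
    directory, so earlier runs are never overwritten."""
    run_id = 0
    while os.path.exists(os.path.join(root, experiment, str(run_id))):
        run_id += 1
    path = os.path.join(root, experiment, str(run_id))
    os.makedirs(path)
    return path

# e.g. next_run_dir("net_data", "col_bedroom") -> "net_data/col_bedroom/0", then ".../1", ...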

Testing

The test_cnn.sh bash script compiles the commands to test all our experiments, for instance for colorization:

python code/train.py --experiment col_bedroom --test_model 1 --test True --test_epoch 24

where the test_model argument selects the ID of the already-trained experiment. The arguments of the chosen experiment are also saved under inter_data/{experiment}/OURoutput/OURargs.txt because, unlike network weights, the image outputs get overwritten with every new run. This is because their computation is fast but they take a lot of storage.

Note: our pretrained models are already available within this repo under net_data (epoch 25 only, i.e. ID 24), so you can test directly without retraining.

Results visualization

We group all results processing, visualization, and quantitative assessment, including our correlation-analysis figure, in one comprehensive notebook. It contains a large number of control parameters to obtain all the different table results, and more.

Citation

@article{elhelou2020bigprior,
    title   = {{BIGPrior}: Towards Decoupling Learned Prior Hallucination and Data Fidelity in Image Restoration},
    author  = {El Helou, Majed and S\"usstrunk, Sabine},
    journal = {arXiv preprint arXiv:2011.01406},
    year    = {2020}
}