Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)

Overview


Jiaxi Jiang, Kai Zhang, Radu Timofte

Computer Vision Lab, ETH Zurich, Switzerland


🔥 🔥 This repository is the official PyTorch implementation of the paper "Towards Flexible Blind JPEG Artifacts Removal". FBCNN achieves state-of-the-art performance in blind JPEG artifacts removal on

  • Single JPEG images (color/grayscale)
  • Double JPEG images (aligned/non-aligned)
  • Real-world JPEG images

Training a single deep blind model to handle different quality factors for JPEG image artifacts removal has been attracting considerable attention due to its convenience for practical usage. However, existing deep blind methods usually directly reconstruct the image without predicting the quality factor, thus lacking the flexibility to control the output that non-blind methods have. To remedy this problem, in this paper, we propose a flexible blind convolutional neural network, namely FBCNN, that predicts an adjustable quality factor to control the trade-off between artifacts removal and details preservation. Specifically, FBCNN decouples the quality factor from the JPEG image via a decoupler module and then embeds the predicted quality factor into the subsequent reconstructor module through a quality factor attention block for flexible control. Besides, we find that existing methods are prone to fail on non-aligned double JPEG images, even with only a one-pixel shift, and we thus propose a double JPEG degradation model to augment the training data. Extensive experiments on single JPEG images, more general double JPEG images, and real-world JPEG images demonstrate that our proposed FBCNN achieves favorable performance against state-of-the-art methods in terms of both quantitative metrics and visual quality.

🚀 🚀 Some Visual Examples (Click for full images)


Training

We will release the training code at KAIR.

Testing

  • Grayscale JPEG images
python main_test_fbcnn_gray.py
  • Grayscale JPEG images, trained with double JPEG degradation model
python main_test_fbcnn_gray_doublejpeg.py
  • Color JPEG images
python main_test_fbcnn_color.py
  • Real-World JPEG images
python main_test_fbcnn_color_real.py
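
All test scripts follow the same pattern: build the network, load a checkpoint, and run a forward pass that returns both the restored image and the predicted quality factor. Below is a minimal inference sketch; the module path, constructor arguments, and forward signature are inferred from the released test scripts, so treat them as assumptions rather than a stable API.

    import torch
    from models.network_fbcnn import FBCNN  # assumed module path within this repo

    # Constructor arguments mirror the released test scripts (assumption).
    model = FBCNN(in_nc=3, out_nc=3, nc=[64, 128, 256, 512], nb=4, act_mode='R')
    model.load_state_dict(torch.load('model_zoo/fbcnn_color.pth'), strict=True)
    model.eval()

    with torch.no_grad():
        img_L = torch.rand(1, 3, 256, 256)      # stand-in for a JPEG image in [0, 1], NCHW
        img_E, qf_pred = model(img_L)           # blind mode: also returns the predicted QF
        # Flexible mode: pass a manual QF in [0, 1]; check the test scripts for the
        # exact normalization (qf/100 vs. 1 - qf/100).
        img_E_flex, _ = model(img_L, torch.tensor([[0.5]]))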

Motivations

JPEG is one of the most widely-used image compression algorithms and formats due to its simplicity and fast encoding/decoding speeds. However, it is a lossy compression algorithm and can introduce annoying artifacts. Existing methods for JPEG artifacts removal generally have four limitations in real applications:

  • Most existing learning-based methods [e.g. ARCNN, MWCNN, SwinIR] train a specific model for each quality factor, lacking the flexibility to handle different JPEG quality factors with a single model.

  • DCT-based methods [e.g. DMCNN, QGAC] need the DCT coefficients or quantization table as input, which are only stored in the JPEG format. Besides, when an image is compressed multiple times, only the most recent compression information is stored.

  • Existing blind methods [e.g. DnCNN, DCSC, QGAC] can only provide a deterministic reconstruction result for each input, ignoring the need for user preferences.

  • Existing methods are all trained with synthetic images, assuming that the low-quality images are compressed only once. However, most images from the Internet are compressed multiple times. Despite some progress on real recompressed images, e.g. from Twitter [ARCNN, DCSC], a detailed and complete study on double JPEG artifacts removal is still missing.

Network Architecture

We propose a flexible blind convolutional neural network (FBCNN) that predicts the quality factor of a JPEG image and embeds it into the decoder to guide image restoration. The quality factor can be manually adjusted for flexible JPEG restoration according to the user's preference.
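
One way to picture the quality factor attention (QFA) block is as feature modulation conditioned on the scalar QF: a small MLP embeds the quality factor, and the embedding produces a channel-wise scale and shift applied to decoder features. The sketch below only illustrates this idea; the layer sizes and activations are our assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class QFAttention(nn.Module):
        """Illustrative QF attention: embed the scalar quality factor with an MLP,
        then modulate features with a channel-wise scale (gamma) and shift (beta)."""
        def __init__(self, channels, embed_dim=512):
            super().__init__()
            self.embed = nn.Sequential(
                nn.Linear(1, embed_dim), nn.ReLU(inplace=True),
                nn.Linear(embed_dim, embed_dim), nn.ReLU(inplace=True),
            )
            self.to_gamma = nn.Sequential(nn.Linear(embed_dim, channels), nn.Sigmoid())
            self.to_beta = nn.Sequential(nn.Linear(embed_dim, channels), nn.Tanh())

        def forward(self, feat, qf):
            # feat: (N, C, H, W); qf: (N, 1), normalized quality factor in [0, 1]
            e = self.embed(qf)
            gamma = self.to_gamma(e)[:, :, None, None]  # channel-wise scale
            beta = self.to_beta(e)[:, :, None, None]    # channel-wise shift
            return gamma * feat + beta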

Analysis of Double JPEG Restoration

1. What is non-aligned double JPEG compression?

Non-aligned double JPEG compression means that the 8x8 blocks of the two JPEG compressions are not aligned. For example, when we crop a JPEG image and save it again as JPEG, we very likely get a non-aligned double JPEG image. There are many other common scenarios, including but not limited to:

  • Take a picture with a smartphone and upload it online. Most social media platforms, e.g. WeChat, Twitter, Facebook, downsample the uploaded image and then apply JPEG compression to save storage space.
  • Edit a JPEG image with cropping, rotation, or resizing, and save it as JPEG.
  • Zoom in/out of a JPEG image, take a screenshot, and save it as JPEG.
  • Combine several JPEG images into one and save the result as a single JPEG image.
  • Most memes have been compressed many times, usually in non-aligned ways.

2. Limitation of existing blind methods on restoration of non-aligned double JPEG images

We find that existing blind methods consistently fail when the 8x8 blocks of the two JPEG compressions are not aligned and QF1 <= QF2, even with just a one-pixel shift. Other cases, such as non-aligned double JPEG with QF1 > QF2 or aligned double JPEG compression, are actually equivalent to single JPEG compression.

Here is an example of the restoration results of DnCNN and QGAC on a JPEG image with different degradation settings. '*' means there is a one-pixel shift between the blocks of the two JPEG compressions.

3. Our solutions

We find that for non-aligned double JPEG images with QF1 < QF2, FBCNN always predicts the quality factor as QF2. However, it is the smaller QF1 that dominates the compression artifacts. By manually setting the predicted quality factor to QF1, we largely improve the result.

Besides, to obtain a fully blind model, we propose two solutions to this problem:

(1) FBCNN-D: Train a model with a single JPEG degradation model + automatic dominant QF correction. By exploiting a property of JPEG images, we find the quality factor of a single JPEG image can be predicted by applying another JPEG compression: when QF1 = QF2, the MSE between the two JPEG images is minimal. In our paper, we also extend this method to non-aligned double JPEG cases to obtain a fully blind model.
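
The recompression trick is easy to reproduce on its own: compress the input again at every candidate quality factor and keep the one with the smallest MSE against the input, since recompressing a JPEG at its original QF changes it the least. A minimal OpenCV sketch (the helper name is ours):

    import cv2
    import numpy as np

    def estimate_qf(jpeg_img, candidates=range(5, 101)):
        """Estimate the QF of an already-compressed image: the MSE between the
        image and its recompression is minimal when QF2 == QF1."""
        best_qf, best_mse = None, float('inf')
        for qf in candidates:
            _, enc = cv2.imencode('.jpg', jpeg_img, [int(cv2.IMWRITE_JPEG_QUALITY), qf])
            recompressed = cv2.imdecode(enc, cv2.IMREAD_UNCHANGED)
            mse = np.mean((jpeg_img.astype(np.float64) - recompressed.astype(np.float64)) ** 2)
            if mse < best_mse:
                best_qf, best_mse = qf, mse
        return best_qf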

(2) FBCNN-A: Augment the training data with a double JPEG degradation model, given by:

y = JPEG(shift(JPEG(x, QF1)), QF2)

By reducing the mismatch between the training data and real-world JPEG images, FBCNN-A further improves the results on complex double JPEG restoration. The proposed double JPEG degradation model can also be easily integrated into other image restoration tasks, such as single image super-resolution (e.g. BSRGAN), for better real-world image restoration.
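
For reference, a minimal OpenCV sketch of this degradation is below; the helper name and the fixed (dy, dx) crop are ours, and during training the quality factors and shift would presumably be sampled randomly.

    import cv2

    def double_jpeg(x, qf1, qf2, shift=(4, 4)):
        """y = JPEG(shift(JPEG(x, QF1)), QF2): cropping by (dy, dx) pixels between
        the two compressions misaligns their 8x8 block grids."""
        _, enc = cv2.imencode('.jpg', x, [int(cv2.IMWRITE_JPEG_QUALITY), qf1])
        y = cv2.imdecode(enc, cv2.IMREAD_UNCHANGED)
        dy, dx = shift
        y = y[dy:, dx:]  # pixel shift -> non-aligned double JPEG
        _, enc = cv2.imencode('.jpg', y, [int(cv2.IMWRITE_JPEG_QUALITY), qf2])
        return cv2.imdecode(enc, cv2.IMREAD_UNCHANGED)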

Experiments

1. Single JPEG restoration

(Quantitative comparison table and visual examples omitted.) *: trains a specific model for each quality factor.

2. Non-aligned double JPEG restoration

There is a pixel shift of (4,4) between the blocks of the two JPEG compressions. (Quantitative comparison table and visual examples omitted.)

3. Real-world JPEG restoration

(Visual comparisons on real-world JPEG images omitted.)

4. Flexibility of FBCNN

By setting different quality factors, we can control the trade-off between artifacts removal and details preservation according to the user's preference.
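
Concretely, this amounts to sweeping the manual QF input at test time, e.g. building on the inference sketch above (same assumptions about the forward signature and QF normalization):

    import torch

    with torch.no_grad():
        for qf in (10, 30, 50, 70, 90):
            qf_input = torch.tensor([[qf / 100.0]])   # normalization is an assumption
            img_E, _ = model(img_L, qf_input)         # lower QF -> stronger artifact removal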

Citation

@inproceedings{jiang2021towards,
title={Towards Flexible Blind {JPEG} Artifacts Removal},
author={Jiang, Jiaxi and Zhang, Kai and Timofte, Radu},
booktitle={IEEE International Conference on Computer Vision},
year={2021}
}

License and Acknowledgement

This project is released under the Apache 2.0 license. This work was partly supported by the ETH Zürich Fund (OK) and a Huawei Technologies Oy (Finland) project.

Comments
  • About the results of fbcnn_color.pth

    Hi, great work! I tested your model (fbcnn_color.pth) on the 'testset/Real' dataset, but the results are not as remarkable as the pictures in this repository. The outputs of fbcnn_color.pth without qf_input (left is input, right is output; result images omitted) do not look right to me. The outputs with qf_input are also not good: when zooming, I can see obvious artifacts. I don't know if there is something wrong with my results. Hoping for your reply.

    opened by YangGangZhiQi 6
  • Inference is causing Out of Memory error (even for V100)

    Hello, I tried running 'main_test_fbcnn_color.py' on a real JPEG image using a 16 GB V100, but the code threw an 'Out of Memory' error. Any idea how to use this code with large images, say 12 MPix or more?

    opened by wind-surfer 4
  • Real data testing

    Hello, it seems the results on the real dataset were not produced by fbcnn_color.pth? Would you mind providing the corresponding FBCNN model for the real dataset? Thank you!

    opened by yiyunchen 3
  • Problems reproducing the training results in the QF-estimation loss

    Hi, nice job enabling flexible QF embedding in JPEG deblocking! I want to reproduce the results following the directions in the paper; however, I have some questions.

    1. qf_loss: Following the paper, I calculate the loss as:

            # batch tensors: degraded input, clean target, ground-truth QF
            input = batch["degree"].type(Tensor)
            label = batch["label"].type(Tensor)
            qf_label = batch["qf"].type(Tensor)
            qf_label = 1 - qf_label / 100                 # map QF to [0, 1]
            out, qf_pred = model(input)

            mse = mse_pixelwise(out, label)               # pixel-wise MSE term
            cls_loss = l1_pixelwise2(qf_pred, qf_label)   # QF-estimation (L1) loss
            loss = l1_pixelwise(out, label) * 0.5 + mse * 0.5 + cls_loss * 0.1

    However, during the training phase, the cls_loss does not decrease at all:

    2021-10-09 22:38:47,481 - __main__ - INFO - 
    [QF 10 lr 0.000100 cls_loss:0.23448  loss:0.04852 Epoch 0/120] [Batch 19/203] [psnr: 19.300194 ]  ETA: 6:18:37.362892   [mse: 0.005858 ] [mse_original: 0.000583 ]
    2021-10-09 22:39:08,319 - __main__ - INFO - 
    [QF 10 lr 0.000100 cls_loss:0.20090  loss:0.03645 Epoch 0/120] [Batch 39/203] [psnr: 21.449274 ]  ETA: 7:02:01.486693   [mse: 0.003679 ] [mse_original: 0.000384 ]
    2021-10-09 22:39:29,365 - __main__ - INFO - 
    
    2021-10-10 05:11:26,158 - __main__ - INFO - 
    [QF 10 lr 0.000020 cls_loss:0.20545  loss:0.02607 Epoch 119/120] [Batch 159/203] [psnr: 34.804854 ]  ETA: 0:00:41.660994   [mse: 0.000345 ] [mse_original: 0.000475 ]
    2021-10-10 05:11:45,303 - __main__ - INFO - 
    [QF 10 lr 0.000020 cls_loss:0.22516  loss:0.02851 Epoch 119/120] [Batch 179/203] [psnr: 34.770472 ]  ETA: 0:00:22.570621   [mse: 0.000385 ] [mse_original: 0.000513 ]
    2021-10-10 05:12:04,498 - __main__ - INFO - 
    [QF 10 lr 0.000020 cls_loss:0.18704  loss:0.02385 Epoch 119/120] [Batch 199/203] [psnr: 34.775089 ]  ETA: 0:00:03.771435   [mse: 0.000276 ] [mse_original: 0.000377 ]
    2021-10-10 05:12:12,970 - __main__ - INFO - QF: 10 [PSNR: live1: 29.005289    classic: 29.159697]  
     [SSIM: live1: 0.806543    classic: 0.795164] 
     [max PSNR in live1: 29.005289, max epoch: 119]
    

    The cls_loss stays around 0.2, and when I test QF estimation with input images at QF = 10, 20, ..., 90, the estimated QF is always about 53, which shows that the cls_loss does not work well. A constant estimate near 50 is expected in that case, since it minimizes the L1 loss when the estimator has not learned anything.

    Due to GPU limitations, I set the batch size to 96 and trained on DIV2K patches for about 25k iterations.

    Is there anything wrong with my implementation? Thanks!

    opened by JuZiSYJ 3
  • About the results under the OpenCV version of rgb2ycbcr

    Hi, I tested the grayscale results on Classic5 and LIVE1 with the pre-trained weights.

    However, I found some discrepancies caused by the rgb2ycbcr function.

    Here are my offline test results (table image omitted).

    Classic5 is fine because the input is grayscale, but the LIVE1 result is strange because of the rgb2ycbcr function.

    The ground truth I compare against is produced by the Matlab version of YCbCr, which differs from the OpenCV version used in your training and evaluation code:

        if n_channels == 3:
            img_L = cv2.cvtColor(img_L, cv2.COLOR_RGB2BGR)
        _, encimg = cv2.imencode('.jpg', img_L, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
        img_L = cv2.imdecode(encimg, 0) if n_channels == 1 else cv2.imdecode(encimg, 3)

    The logged result on live1_qf10 is:

        21-12-23 15:43:59.031 : Average PSNR/SSIM/PSNRB - live1_fbcnn_gray_10 -: 28.96 | 0.8254 | 28.64

    This is a little higher than 26.97 but much lower than 29.75, because the model works on the OpenCV version of Y, not the Matlab version. However, the common setting in deblocking is the Matlab version of Y.

    I also tested 'Learning Dual Priors for JPEG Compression Artifacts Removal' (ICCV 2021); its offline results (image omitted) are similar to those reported in its paper.

    I hope it may be helpful.

    opened by JuZiSYJ 2
  • run color real error

    Running the default code gives me an error at model.load_state_dict(torch.load(model_path), strict=True): error in loading state_dict for FBCNN: Missing keys in state_dict: ......

    But when I run fbcnn_color.py, it is OK. Why?

    opened by gao123qiang 1
  • Slight color (chroma) brightness change

    Hello @jiaxi-jiang, Sorry to bother you,

    I tested some non-photo-content JPEGs (2D drawings). A high FBCNN QF can preserve detail and noise, but I notice some areas have a slight color (chroma) brightness change.

    In this case, FBCNN at QF 70 shows a slight brightness change in the dark red area (red circle). Could you teach me how to improve color accuracy for non-photo content?

    Original image (JPEG q75, 4:2:0): [image omitted]

    PNG 8-bit depth output (FBCNN QF 70): [image omitted]

    Other samples (QF 30, 50, 70):

    sample.zip

    opened by Lee-lithium 2
  • Image tiling and 16-bit PNG output

    Hello @jiaxi-jiang, Sorry to bother you,

    I plan to apply FBCNN to my non-photo-content JPEGs (2D drawings), but I have some questions about the tool's implementation. Could you help me with them?

    1. I have some large JPEGs (4441x6213) I want to apply FBCNN to, but I don't have enough RAM to process them. Could a tiling function be implemented for FBCNN? I found a split_imageset function in FBCNN's utils_image.py and tried to invoke it, but I haven't found a good way to implement the split and merge.

    2. I notice the FBCNN output is an 8-bit PNG. Could a 16-bit PNG output (and input) option give better results?

    Thank you for producing this amazing tool. :)

    opened by Lee-lithium 2
  • Colab

    I'm excited to watch the demo video. Please release the code for Google Colab so that we can try it easily. I am looking forward to your next update. Thank you.

    opened by osushilover 0