The source code of the CVPR 2019 paper "Deep Exemplar-based Video Colorization".

Overview

Deep Exemplar-based Video Colorization (PyTorch Implementation)

Paper | Pretrained Model | YouTube video 🔥 | Colab demo

Deep Exemplar-based Video Colorization, CVPR 2019

Bo Zhang1,3, Mingming He1,5, Jing Liao2, Pedro V. Sander1, Lu Yuan4, Amine Bermak1, Dong Chen3
1 Hong Kong University of Science and Technology, 2 City University of Hong Kong, 3 Microsoft Research Asia, 4 Microsoft Cloud & AI, 5 USC Institute for Creative Technologies

Prerequisites

  • Python 3.6+
  • NVIDIA GPU + CUDA, cuDNN

Installation

First, use the following commands to prepare the environment:

conda create -n ColorVid python=3.6
source activate ColorVid
pip install -r requirements.txt

Then, download the pretrained models from this link, unzip the file, and place the files into the corresponding folders (a quick layout check is sketched after the list):

  • video_moredata_l1 under the checkpoints folder
  • vgg19_conv.pth and vgg19_gray.pth under the data folder
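
As a quick layout check (this script is not part of the repository; the paths are simply those listed above, relative to the repository root), you can verify that the files ended up in the expected places before running the test script:

import os

# Expected locations after unzipping the pretrained models
# (paths taken from the list above).
expected = [
    "checkpoints/video_moredata_l1",
    "data/vgg19_conv.pth",
    "data/vgg19_gray.pth",
]

for path in expected:
    status = "OK" if os.path.exists(path) else "MISSING"
    print(f"{status:7s} {path}")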

Data Preparation

To colorize your own video, you need to extract the video frames and provide a reference image as the exemplar (a minimal frame-extraction sketch follows the list below).

  • Place your video frames into one folder, e.g., ./sample_videos/v32_180
  • Place your reference images into another folder, e.g., ./sample_videos/v32
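
As a minimal sketch of the frame-extraction step (assuming OpenCV is installed; the input video file name below is only illustrative and not part of the repository):

import os
import cv2  # opencv-python

video_path = "my_video.mp4"            # hypothetical input video
frame_dir = "./sample_videos/v32_180"  # folder that will hold the extracted frames

os.makedirs(frame_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Zero-padded file names keep the frames in playback order.
    cv2.imwrite(os.path.join(frame_dir, f"{index:05d}.jpg"), frame)
    index += 1
cap.release()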

If you want to retrieve color reference images automatically, you can try the retrieval algorithm from this link, which retrieves semantically similar images from the ImageNet dataset. Alternatively, you can try this link on your own image database.

Test

python test.py --image-size [image-size] \
               --clip_path [path-to-video-frames] \
               --ref_path [path-to-reference] \
               --output_path [path-to-output]

We provide several sample video clips with corresponding references. For example, you can colorize one sample legacy video using:

python test.py --clip_path ./sample_videos/clips/v32 \
               --ref_path ./sample_videos/ref/v32 \
               --output_path ./sample_videos/output

Note that we use 216×384 frames for training, which corresponds to a 16:9 aspect ratio. During inference, we scale the input to this size and then rescale the output back to the original resolution.
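
To illustrate that resizing behaviour, here is a minimal sketch (not the repository's actual code; colorize_frame is a hypothetical stand-in for the network call, OpenCV is assumed, and the (width, height) ordering below assumes 384-wide by 216-high frames):

import cv2

def colorize_at_training_size(frame_bgr, colorize_frame):
    """Scale to the training resolution, colorize, then restore the original size."""
    orig_h, orig_w = frame_bgr.shape[:2]
    small = cv2.resize(frame_bgr, (384, 216))       # cv2.resize takes (width, height)
    colorized = colorize_frame(small)               # hypothetical network call
    return cv2.resize(colorized, (orig_w, orig_h))  # rescale back to the input size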

Train

We also provide training code for reference. The training can be started by running:

python train.py --data_root [root of video samples] \
                --data_root_imagenet [root of image samples] \
                --gpu_ids [gpu ids]

We do not provide the full video dataset due to copyright issues. For image samples, we retrieve semantically similar images from ImageNet using this repository. Still, you can refer to our code to understand the detailed procedure of augmenting the image dataset to mimic video frames (an illustrative sketch of such augmentation follows).
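
The repository's actual augmentation lives in its data-loading code; the sketch below only illustrates the general idea of deriving a pseudo pair of "video frames" from a single still image via small random warps (all parameter values are illustrative assumptions):

import random
import cv2

def pseudo_frame_pair(image_bgr, max_shift=8, max_angle=2.0):
    """Illustrative only: derive two slightly different 'frames' from one still image."""
    h, w = image_bgr.shape[:2]

    def jitter(img):
        angle = random.uniform(-max_angle, max_angle)
        dx = random.uniform(-max_shift, max_shift)
        dy = random.uniform(-max_shift, max_shift)
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)  # small rotation
        m[0, 2] += dx  # small horizontal translation
        m[1, 2] += dy  # small vertical translation
        return cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REFLECT)

    return jitter(image_bgr), jitter(image_bgr)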

Comparison with State-of-the-Arts

More results

Please check our YouTube demo for results of video colorization.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zhang2019deep,
title={Deep exemplar-based video colorization},
author={Zhang, Bo and He, Mingming and Liao, Jing and Sander, Pedro V and Yuan, Lu and Bermak, Amine and Chen, Dong},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={8052--8061},
year={2019}
}

Old Photo Restoration 🔥

If you are also interested in restoring artifacts in legacy photos, please check out our recent work, Bringing Old Photos Back to Life.

@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}

License

This project is licensed under the MIT license.

Comments
  • I met a "CUDA error: an illegal memory access was encountered" problem

    When I tested your pre-trained model, I got "CUDA error: an illegal memory access was encountered". Can you provide the versions of CUDA, cuDNN, and PyTorch you used?

    opened by Zonobia-A 3
  • Could I apply this method to image colorization?

    Hi, could I apply this method to image colorization and remove the temporal consistency loss? BTW, how do I get the pairs.txt/pairs_mid.txt/pairs_bad.txt files used in videoloader_imagenet.py?

    opened by buptlj 2
  • Video size very low

    The video colorization is very good and very impressive, but the rendered images are low resolution (768x432) and the output video is the same size. How can I increase the image and video size? Thank you.

    opened by srinivas68 2
  • Training command is wrong

    The original training command is python --data_root [root of video samples] \ --data_root_imagenet [root of image samples] \ --gpu_ids [gpu ids] \. Maybe it should be python train.py --data_root [root of video samples] \ --data_root_imagenet [root of image samples] \ --gpu_ids [gpu ids]?

    opened by Horizon2333 1
  • There seems to be a bug in feature centering: x_features - y_features.mean

    There seems to be a bug in the feature centering x_features - y_features.mean, which I think should be x_features - x_features.mean: https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization/blob/37639748f12dfecbb0a3fe265b533887b5fe46ce/models/ContextualLoss.py#L100

    opened by JerryLeolfl 1
  • The test code

    Thanks for your great work! I have a question when I run test.py: why don't you extract the features of the inference image outside the for loop? I haven't found any difference.

    opened by buptlj 1
  • CUDA OOM

    Hello, I am running a 4 GB NVIDIA GPU. Is that enough for inference? I tried running on Ubuntu 18.04 as well as Windows but always get an out-of-memory error eventually, sometimes after the 2nd image and sometimes after the 5th. This is 1080p video.

    opened by quocthaitang 1
  • Illustrative training data

    Could you please release a tiny illustrative training dataset, so that preparing custom training data can easily be followed? Currently it is not easy to prepare custom training data by reading train.py. Or could you please give a further explanation of the following fields? (image1_name, image2_name, reference_video_name, reference_video_name1, reference_name1, reference_name2, reference_name3, reference_name4, reference_name5, reference_gt1, reference_gt2, reference_gt3) Thank you very much.

    opened by davyfeng 1
  • Runtime error

    I get a runtime error when running the test cells at the 'first we visualize the input video' step. I'm not good with code, but this is the first time I've experienced this issue with this wonderful program. No cells execute after this error. I've attached a screenshot. IMG_7258

    opened by StevieMaccc 0
  • Test result problem

    After training, the generated test images do work, but the color saturation is very low. Is this because of the colorization model or for some other reason? I'm looking forward to your reply!

    opened by songyn95 0
  • Training has little effect

    Hello, I read in the paper that "we train the network for 10 epochs with a batch size of 40 pairs of video frames." Is it effective after only 10 epochs? Is your data 768 videos with 25 frames per video? I currently train on only one video with epoch=40, but it has little effect. What might be the reason?

    opened by songyn95 0
  • Error 404 - Important files missing

    I was working with the Colab program and there appear to be important models/files missing. As a result, the program has ceased to function. I've brought this to the designers' attention, so hopefully it will be resolved.

    opened by StevieMaccc 1
  • CUDA device error "module 'torch._C' has no attribute '_cuda_setDevice'" when running test.py

    Hi!

    Trying out test.py results in the following error:

    Traceback (most recent call last):
      File "test.py", line 26, in <module>
        torch.cuda.set_device(0)
      File "C:\Users\natha\anaconda3\envs\ColorVid\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        torch._C._cuda_setDevice(device)
    AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

    I tried installing PyTorch manually using their tool https://pytorch.org/get-started/locally/ (with CUDA 11.6), but that doesn't resolve the issue.

    Can someone help me understand what is going on? Thanks!

    opened by FoxTrotte 4
  • Questions about the test phase

    Thanks for your outstanding work! I have some questions after reading it.

    1. What settings did you use when testing this video model on image colorization for comparison with other image colorization methods?
    2. Could you please give me a URL for your video test set (116 video clips collected from Videvo)? Thanks again for your attention.
    opened by JerryLeolfl 0
  • The code at TestTransforms.py line 341 seems incorrect

    https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization/blob/37639748f12dfecbb0a3fe265b533887b5fe46ce/lib/TestTransforms.py#L341 seems to be a repeated definition of __call__.

    opened by JerryLeolfl 0
  • Wrong output resolution

    Processing a 4:3 video (912x720) outputs a cropped, downscaled 16:9 result (768x432). Playing around with "python test.py --image-size [image-size]" doesn't help. Maybe I'm not specifying the arguments properly? What is the proper use of --image-size [image-size] in order to get 912x720 output? Any suggestions are greatly appreciated.

    opened by semel1 5