Depthstillation

Demo code for "Learning optical flow from still images", CVPR 2021.

[Project page] - [Paper] - [Supplementary]

This code is provided to replicate the qualitative results shown in the supplementary material, Sections 2-4. The code has been tested on Ubuntu 20.04 LTS with Python 3.8 and gcc 9.3.0.


Reference

If you find this code useful, please cite our work:

@inproceedings{Aleotti_CVPR_2021,
  title     = {Learning optical flow from still images},
  author    = {Aleotti, Filippo and
               Poggi, Matteo and
               Mattoccia, Stefano},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

Contents

  1. Introduction
  2. Usage
  3. Supplementary
  4. Weights
  5. Contacts
  6. Acknowledgments

Introduction

This paper deals with the scarcity of data for training optical flow networks, highlighting the limitations of existing sources such as labeled synthetic datasets or unlabeled real videos. Specifically, we introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture. Given an image, we use an off-the-shelf monocular depth estimation network to build a plausible point cloud for the observed scene. Then, we virtually move the camera in the reconstructed environment with known motion vectors and rotation angles, allowing us to synthesize both a novel view and the corresponding optical flow field connecting each pixel in the input image to the one in the new frame. When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data compared to the same models trained either on annotated synthetic datasets or unlabeled videos, and better specialization if combined with synthetic images.
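
To make the geometry concrete, here is a minimal NumPy sketch of the reprojection at the core of this idea; the function name, the simplified pinhole model, and the omission of collision handling and inpainting are our own simplifications, not the actual depthstillation.py interface:

import numpy as np

def depthstill_flow(depth, K, R, t):
    # Illustrative sketch only: back-project each pixel through its estimated
    # depth, move a virtual camera by rotation R and translation t, and
    # reproject to obtain the optical flow field. The real pipeline also
    # resolves collisions (external/forward_warping) and inpaints holes.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)  # 3D point cloud
    proj = K @ (R @ pts + t.reshape(3, 1))               # project into the new view
    proj = proj[:2] / proj[2:]                           # perspective division
    return (proj - pix[:2]).reshape(2, h, w)             # flow: per-pixel (du, dv)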

Usage

Install the project requirements in a new Python 3 environment:

virtualenv -p python3 learning_flow_env
source learning_flow_env/bin/activate
pip install -r requirements.txt

Compile the forward_warping module, written in C (required to handle warping collisions):

cd external/forward_warping
bash compile.sh
cd ../..

You are now ready to run the depthstillation.py script:

python depthstillation.py 

By switching some parameters, you can generate all the qualitative results provided in the supplementary material.

These parameters are:

  • num_motions: changes the number of virtual motions
  • segment: enables instance segmentation (for independently moving objects)
  • mask_type: mask selection. Options are H' and H
  • num_objects: sets the number of independently moving objects (one, in this example)
  • no_depth: disables monocular depth and forces depth to a constant value
  • no_sharp: disables depth sharpening
  • change_k: uses different intrinsics K
  • change_motion: samples a different motion (ignored if num_motions is greater than 1)

For instance, to simulate a different K setting, just run:

python depthstillation.py --change_k
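
Flags can also be combined. For instance, assuming the usual argparse-style syntax for valued options, depthstilling three virtual motions with a single segmented moving object would look like:

python depthstillation.py --num_motions 3 --segment --num_objects 1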

The results are saved in the dCOCO folder, organized as follows:

  • depth_color: colored depth map
  • flow: generated flow labels (in 16bit KITTI format)
  • flow_color: colored flow labels
  • H: H mask
  • H': H' mask
  • im0: real input image
  • im1: generated virtual image
  • im1_raw: generated virtual image (pre-inpainting)
  • instances_color: colored instance map (if --segment is enabled)
  • M: M mask
  • M': M' mask
  • P: P mask

We report the list of files used to depthstill dCOCO in samples/dCOCO_file_list.txt.
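
To consume the generated labels in your own code, here is a minimal sketch for decoding them, assuming the standard 16-bit KITTI flow PNG encoding (u and v stored as value * 64 + 2^15 in the first two channels, validity in the third); the function name and example path are hypothetical:

import cv2
import numpy as np

def read_kitti_flow(path):
    # Read a 16-bit, 3-channel PNG; cv2 returns BGR, so reverse to (u, v, valid)
    raw = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
    raw = raw[:, :, ::-1].astype(np.float32)
    flow = (raw[:, :, :2] - 2 ** 15) / 64.0  # decode to pixel displacements
    valid = raw[:, :, 2] > 0                 # True where the label is valid
    return flow, valid

flow, valid = read_kitti_flow("dCOCO/flow/00000_00.png")  # hypothetical file name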

Supplementary

We report here the commands to obtain, in the same order, the figures shown in Sections 2-4 of the supplementary material:

  • Section 2 -- the first figure is obtained with default parameters, then we use --no_depth and --no_depth --segment respectively (the full sequence of commands is shown after this list).
  • Section 3 -- the first figure is obtained with --no_sharp, the remaining figures with default parameters or by setting --mask_type "H".
  • Section 4 -- we show three times the results obtained by default parameters, followed respectively by figures generated using --change_k, --change_motion and --segment individually.
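
For instance, the three figures of Section 2 correspond to the following sequence of commands:

python depthstillation.py
python depthstillation.py --no_depth
python depthstillation.py --no_depth --segment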

Weights

We provide RAFT models trained in our experiments. To run them and reproduce our results, please refer to the RAFT repository.

Contacts

m [dot] poggi [at] unibo [dot] it

Acknowledgments

Thanks to Clément Godard and Niantic for sharing monodepth2 code, used to simulate camera motion.

Our work is inspired by Jamie Watson et al., Learning Stereo from Single Images.
