A 2D Visual Localization Framework based on Essential Matrices [ICRA2020]

Overview


This repository provides the implementation of our paper accepted at ICRA 2020: To Learn or Not to Learn: Visual Localization from Essential Matrices

Pipeline

To use our code, first download the repository:

git clone git@github.com:GrumpyZhou/visloc-relapose.git

Setup Running Environment

We have tested the code on Ubuntu 16.04.6 under the following environments:

Python 3.6 / 3.7
PyTorch 0.4.0 / 1.0 / 1.1
CUDA 8.0 + cuDNN 5.1
CUDA 10.0 + cuDNN 7.5.1.10

The setting used in the paper is:
Python 3.7 + PyTorch 1.1 + CUDA 10.0 + cuDNN 7.5.1.10

We recommend using Anaconda to manage packages. Run the following lines to automatically set up a ready-to-use environment for our code.

conda env create -f environment.yml  # Note: this installs the latest PyTorch version.
conda activate relapose

Otherwise, one can install the required packages separately according to their official documentation.
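
After installation, a quick sanity check (a minimal sketch, not part of the repository) is to confirm that PyTorch is importable and sees the GPU:

import torch
# Print the installed PyTorch version and whether CUDA is usable.
print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())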

Prepare Datasets

Our code can be evaluated on various localization datasets. We use the Cambridge Landmarks dataset as an example to show how to prepare a dataset:

  1. Create data/ folder
  2. Download the original Cambridge Landmarks dataset and extract it to $CAMBRIDGE_DIR$.
  3. Construct the following folder structure in order to conveniently run all scripts in this repo:
    cd visloc-relapose/
    mkdir data
    mkdir data/datasets_original
    cd data/datasets_original
    ln -s $CAMBRIDGE_DIR$ CambridgeLandmarks
    
  4. Download our pairs for training, validation, and testing. For the format of the pairs, check the readme.
  5. Place the pairs in the corresponding folders under data/datasets_original/CambridgeLandmarks.
  6. Pre-save images resized to 480 px to speed up data loading for the regression models (optional, but recommended):
    cd visloc-relapose/
    python -m utils.datasets.resize_dataset \
        --base_dir data/datasets_original/CambridgeLandmarks \
    	--save_dir=data/datasets_480/CambridgeLandmarks \
    	--resize 480  --copy_txt True 
    
  7. Test your setup by visualizing the data using notebooks/data_loading.ipynb (a quick command-line check is also sketched below).
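
Before opening the notebook, the following minimal sketch (not part of the repository) checks that the symlinked dataset and the downloaded pair files sit where the scripts expect them:

import glob, os

base = 'data/datasets_original/CambridgeLandmarks'
print('dataset link exists:', os.path.isdir(base))
print('entries:', sorted(os.listdir(base)) if os.path.isdir(base) else [])
# Pair files downloaded in step 4 should appear somewhere under the dataset folder.
pair_files = glob.glob(os.path.join(base, '**', '*pairs*.txt'), recursive=True)
print('pair files found:', len(pair_files))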

7Scenes Datasets

We follow the camera pose label convention of the Cambridge Landmarks dataset. Similarly, you can download our pairs for 7Scenes. For other datasets, contact me for information about preprocessing and pair generation.
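
For reference, the Cambridge Landmarks convention labels each camera with a position and an orientation quaternion. A minimal conversion sketch is given below; the (w, x, y, z) ordering is an assumption, so adapt it to your label files:

import numpy as np

def quat_to_rotmat(q):
    # q = (w, x, y, z), assumed ordering; normalize to be safe.
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])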

Feature-based: SIFT + 5-Point Solver

We use the SIFT feature extractor and feature matcher from COLMAP. One can follow the installation guide to install COLMAP. We save the COLMAP outputs in its database format; see the explanation.

Preparing SIFT features

Execute the following commands to run SIFT extraction and matching on CambridgeLandmarks:

cd visloc-relapose/
bash prepare_colmap_data.sh  CambridgeLandmarks

Here, CambridgeLandmarks is the folder name, which must be consistent with the dataset folder. You can also use other dataset names, such as 7Scenes, if you have prepared that dataset properly in advance.
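
The resulting COLMAP database is a standard SQLite file; a minimal inspection sketch (the database path is a placeholder for illustration, point it at the file written by prepare_colmap_data.sh):

import sqlite3
import numpy as np

db = sqlite3.connect('path/to/CambridgeLandmarks/database.db')  # hypothetical path
names = dict(db.execute('SELECT image_id, name FROM images'))
# Keypoints are stored as float32 blobs of shape (rows, cols).
for image_id, rows, cols, data in db.execute('SELECT image_id, rows, cols, data FROM keypoints'):
    kpts = np.frombuffer(data, dtype=np.float32).reshape(rows, cols)
    print(names[image_id], 'keypoints:', rows)
db.close()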

Evaluate SIFT within our pipeline

Example to run sift+5pt on Cambridge Landmarks:

python -m pipeline.sift_5pt \
        --data_root 'data/datasets_original/' \
        --dataset 'CambridgeLandmarks' \
        --pair_txt 'test_pairs.5nn.300cm50m.vlad.minmax.txt' \
        --cv_ransac_thres 0.5\
        --loc_ransac_thres 5\
        -odir 'output/sift_5pt'\
        -log 'results.dvlad.minmax.txt'

For more evaluation examples, see sift_5pt.sh and check the example outputs. Visualize SIFT correspondences using notebooks/visualize_sift_matches.ipynb.
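
For intuition about the sift+5pt stage, the relative pose of a pair can be recovered from matched keypoints with a RANSAC 5-point solver. A minimal OpenCV sketch, not the repository's exact code (pts1, pts2, and the intrinsics K are placeholders):

import cv2
import numpy as np

def relative_pose_5pt(pts1, pts2, K, ransac_thres=0.5):
    # pts1, pts2: Nx2 arrays of matched keypoint coordinates; K: 3x3 intrinsics.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=ransac_thres)
    # Decompose the essential matrix into a rotation and a unit translation.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers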

Learning-based: Direct Regression via EssNet

The pipeline.relapose_regressor module can be used for both training and testing our regression networks defined under networks/, e.g., EssNet, NCEssNet, RelaPoseNet. We provide training and testing examples in regression.sh. The module allows flexible variations of the setting; for more details about the module options, run python -m pipeline.relapose_regressor -h.
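
As background for these networks, the essential matrix that EssNet targets is determined by the relative pose between the two views, E = [t]_x R (up to scale). A minimal NumPy sketch of this relation:

import numpy as np

def essential_from_relative_pose(R, t):
    # E = [t]_x R, defined up to scale; normalize t to unit length.
    t = np.asarray(t, dtype=float)
    t = t / np.linalg.norm(t)
    t_cross = np.array([[0.0, -t[2], t[1]],
                        [t[2], 0.0, -t[0]],
                        [-t[1], t[0], 0.0]])
    return t_cross @ R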

Training

Here we show an example of how to train an EssNet model on the ShopFacade scene.

python -m pipeline.relapose_regressor \
        --gpu 0 -b 16 --train -val 20 --epoch 200 \
        --data_root 'data/datasets_480' -ds 'CambridgeLandmarks' \
        --incl_sces 'ShopFacade' \
        -rs 480 --crop 448 --normalize \
        --ess_proj --network 'EssNet' --with_ess\
        --pair 'train_pairs.30nn.medium.txt' -vpair 'val_pairs.5nn.medium.txt' \
        -lr 0.0001 -wd 0.000001 \
        --odir  'output/regression_models/example' \
        -vp 9333 -vh 'localhost' -venv 'main' -vwin 'example.shopfacade' 

The outputs produced by this command are available online here.
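
As we understand it, the --ess_proj flag in the command above projects the raw 3x3 network output onto the manifold of valid essential matrices (singular values proportional to (1, 1, 0)). A minimal PyTorch sketch of such a projection, not necessarily the repository's exact implementation:

import torch

def project_to_essential(E_raw):
    # A valid essential matrix has singular values proportional to (1, 1, 0).
    U, S, V = torch.svd(E_raw)                      # E_raw: 3x3 raw prediction
    S_ess = torch.tensor([1.0, 1.0, 0.0], dtype=E_raw.dtype)
    return U @ torch.diag(S_ess) @ V.t()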

Visdom (optional)

As shown in the example above, we use a Visdom server to visualize the training process. One can adapt the meters to plot in utils/common/visdom.py. If you don't want to use Visdom, simply remove the last line: -vp 9333 -vh 'localhost' -venv 'main' -vwin 'example.shopfacade'.
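
A minimal sketch (not part of the repository) for checking that the Visdom server referenced by those flags is reachable; start the server first, e.g. with python -m visdom.server -port 9333:

import visdom

viz = visdom.Visdom(server='http://localhost', port=9333, env='main')
assert viz.check_connection(), 'Visdom server is not reachable'
viz.text('connection ok', win='example.shopfacade')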

Trained models and weights

We release all trained models used in our paper. One can download them from the pretrained regression models. We also provide some pretrained weights on MegaDepth/ScanNet.
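
A minimal sketch for inspecting a downloaded checkpoint before plugging it into the pipeline; the path is taken from the testing example below, and since we make no assumption about the stored format, the sketch only prints the top-level keys:

import torch

ckpt_path = 'output/regression_models/example/ckpt/checkpoint_140_0.36m_1.97deg.pth'
ckpt = torch.load(ckpt_path, map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))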

Testing

Here is an example command to test the model trained above.

python -m pipeline.relapose_regressor \
        --gpu 2 -b 16  --test \
        --data_root 'data/datasets_480' -ds 'CambridgeLandmarks' \
        --incl_sces 'ShopFacade' \
        -rs 480 --crop 448 --normalize\
        --ess_proj --network 'EssNet'\
        --pair 'test_pairs.5nn.300cm50m.vlad.minmax.txt'\
        --resume 'output/regression_models/example/ckpt/checkpoint_140_0.36m_1.97deg.pth' \
        --odir 'output/regression_models/example'

The outputs of this testing command are shown in test_results.txt. For convenience, we also provide notebooks/eval_regression_models.ipynb to perform the evaluation.
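
The numbers in the checkpoint name above (0.36m, 1.97deg) appear to encode validation pose errors in meters and degrees. A minimal sketch of the standard pose error metrics, not necessarily the repository's exact evaluation code:

import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Angle of the residual rotation between estimate and ground truth.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def translation_error_m(t_est, t_gt):
    # Euclidean distance between estimated and ground-truth camera positions.
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))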

Hybrid: Learnable Matching + 5-Point Solver

In this method, the NCNet code is taken from the original implementation: https://github.com/ignacio-rocco/ncnet. We use their pre-trained model, but only the weights for neighbourhood consensus (NC-Matching), i.e., the 4D-conv layer weights. For convenience, you can download our parsed version, nc_ivd_5ep.pth. If you want to test them, the models for feature extractor initialization need to be downloaded from the pretrained regression models in advance.

Testing example for NC-EssNet(7S)+NCM+5Pt (Paper.Tab2)

In this example, we use an NCEssNet trained on 7Scenes for 60 epochs to extract features and the pre-trained NC matching layer to obtain point matches. Finally, the 5-point solver computes the essential matrix. The model is evaluated on CambridgeLandmarks.

python -m pipeline.ncmatch_5pt \
    --data_root 'data/datasets_original' \
    --dataset 'CambridgeLandmarks' \
    --pair_txt 'test_pairs.5nn.300cm50m.vlad.minmax.txt' \
    --cv_ransac_thres 4.0\
    --loc_ransac_thres 15\
    --feat 'output/regression_models/448_normalize/nc-essnet/7scenes/checkpoint_60_0.04m_1.62deg.pth'\
    --ncn 'output/pretrained_weights/nc_ivd_5ep.pth' \
    --posfix 'essncn_7sc_60ep+ncn'\
    --match_save_root 'output/ncmatch_5pt/saved_matches'\
    --ncn_thres 0.9 \
    --gpu 2\
    -o 'output/ncmatch_5pt/loc_results/Cambridge/essncn_7sc_60ep+ncn.txt' 

Example outputs are available in essncn_7sc_60ep+ncn.txt. If you don't want to save the extracted intermediate matches, remove the option --match_save_root.

Owner
Qunjie Zhou
PhD Candidate at the Dynamic Vision and Learning Group.