[ICCV' 21] "Unsupervised Point Cloud Pre-training via Occlusion Completion"

Overview

OcCo: Unsupervised Point Cloud Pre-training via Occlusion Completion

This repository is the official implementation of paper: "Unsupervised Point Cloud Pre-training via Occlusion Completion"

[Paper] [Project Page]

Intro

image

In this work, we train a completion model that learns how to reconstruct the occluded points, given the partial observations. In this way, our method learns a pre-trained encoder that can identify the visual constraints inherently embedded in real-world point clouds.

We call our method Occlusion Completion (OcCo). We demonstrate that OcCo learns representations that: improve generalization on downstream tasks over prior pre-training methods, transfer to different datasets, reduce training time, and improve labeled sample efficiency.
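
Conceptually, the pre-training stage works as in the minimal sketch below: an encoder maps the occluded (partial) cloud to a global feature, a completion decoder reconstructs the full cloud, and both are trained with a Chamfer distance loss. The tiny encoder/decoder and the naive Chamfer implementation are illustrative stand-ins only, not the actual PointNet/PCN/DGCNN completion models in this repository.

# a heavily simplified sketch of the OcCo pre-training objective
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):                       # stand-in for the PointNet/PCN/DGCNN encoders
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv1d(3, 128, 1), nn.ReLU(), nn.Conv1d(128, feat_dim, 1))

    def forward(self, x):                           # x: (B, 3, N_partial)
        return self.mlp(x).max(dim=-1)[0]           # global feature: (B, feat_dim)

class TinyDecoder(nn.Module):                       # stand-in for the completion decoder
    def __init__(self, feat_dim=1024, n_out=1024):
        super().__init__()
        self.n_out = n_out
        self.fc = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Linear(1024, n_out * 3))

    def forward(self, feat):
        return self.fc(feat).view(-1, self.n_out, 3)

def chamfer(a, b):                                  # naive O(N*M) Chamfer distance
    d = torch.cdist(a, b)                           # (B, Na, Nb) pairwise distances
    return d.min(dim=2)[0].mean() + d.min(dim=1)[0].mean()

encoder, decoder = TinyEncoder(), TinyDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

partial = torch.randn(4, 3, 512)                    # occluded (partial) input clouds
complete = torch.randn(4, 1024, 3)                  # ground-truth complete clouds
loss = chamfer(decoder(encoder(partial)), complete)
opt.zero_grad()
loss.backward()
opt.step()                                          # after pre-training, only the encoder is kept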

Citation

Our paper is available on arXiv:

@inproceedings{OcCo,
	title = {Unsupervised Point Cloud Pre-Training via Occlusion Completion},
	author = {Hanchen Wang and Qi Liu and Xiangyu Yue and Joan Lasenby and Matthew J. Kusner},
	year = 2021,
	booktitle = {International Conference on Computer Vision, ICCV}
}

Usage

We provide code in both PyTorch (1.3): OcCo_Torch and TensorFlow (1.13-1.15): OcCo_TF. We also provide a docker configuration in docker. Our recommended development environment is PyTorch + docker. The following descriptions are based on OcCo_Torch; we refer to the readme in OcCo_TF for details of the TensorFlow implementation.

1) Prerequisite

Docker

In the docker folder, we provide the build, configuration and launch scripts:

docker
| - Dockerfile_Torch  # configuration
| - build_docker_torch.sh  # script for building the docker image
| - launch_docker_torch.sh  # launch a container from the built image
| - .dockerignore  # ignore the log and data folders during the build

which can be set up as follows:

# build up from docker images
cd OcCo_Torch/docker
sh build_docker_torch.sh

# launch the docker image, conduct completion/classification/segmentation experiments
cd OcCo_Torch/docker
sh launch_docker_torch.sh
Non-Docker Setup

Simply run pip install -r Requirements_Torch.txt with PyTorch 1.3.0, CUDA 10.1 and cuDNN 7 (otherwise you may encounter errors while building the C++ extension chamfer_distance, which computes the Chamfer Distance). Our non-docker development environment is Ubuntu 16.04.6 LTS, gcc/g++ 5.4.0, CUDA 10.1 and cuDNN 7.
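
Before building the extension, a quick version check can save some debugging (a minimal sketch; the expected values simply reflect the recommended setup above):

import torch

print(torch.__version__)                  # expect 1.3.x
print(torch.version.cuda)                 # expect 10.1
print(torch.backends.cudnn.version())     # expect a 7xxx build
print(torch.cuda.is_available())          # should be True to run the CUDA extension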

2) Pre-Training via Occlusion Completion (OcCo)

Data Usage:

For details on the data setup, please see data/readme.md.

Training Scripts:

We unify the training of all three models (PointNet, PCN and DGCNN) in train_completion.py and provide bash templates; see bash_template/train_completion_template.sh for details:

#!/usr/bin/env bash

cd ../

# train pointnet-occo model on ModelNet, from scratch
python train_completion.py \
	--gpu 0,1 \
	--dataset modelnet \
	--model pointnet_occo \
	--log_dir modelnet_pointnet_vanilla ;

# train dgcnn-occo model on ShapeNet, from scratch
python train_completion.py \
	--gpu 0,1 \
	--batch_size 16 \
	--dataset shapenet \
	--model dgcnn_occo \
	--log_dir shapenet_dgcnn_vanilla ;
Pre-Trained Weights

We will provide the OcCo pre-trained weights for all three models here. You can use them to visualize the completion of self-occluded point clouds, or to fine tune on classification, scene semantic segmentation and object part segmentation tasks.
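
For reference, the sketch below shows how such a completion checkpoint is typically reused for fine tuning: load the checkpoint, keep only the encoder weights, and load them non-strictly into the downstream model. The key layout assumed here ('model_state_dict', a 'decoder' prefix) is for illustration only; train_cls.py, train_semseg.py and train_partseg.py handle this via --restore and --restore_path.

import torch

ckpt = torch.load('log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth',
                  map_location='cpu')
state = ckpt['model_state_dict'] if 'model_state_dict' in ckpt else ckpt  # assumed layout

# keep the encoder and drop the completion decoder: only the encoder transfers downstream
encoder_state = {k: v for k, v in state.items() if 'decoder' not in k}
print('%d/%d tensors kept for fine tuning' % (len(encoder_state), len(state)))

# downstream_model.load_state_dict(encoder_state, strict=False)  # strict=False tolerates the new task head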

3) Sanity Check on Pre-Training

We use single-channel values as well as t-SNE for dimensionality reduction to visualize the learned embeddings of objects from ShapeNet10, while the encoders are pre-trained on ModelNet40; see utils/TSNE_Visu.py for details.
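
A minimal scikit-learn/matplotlib sketch of the t-SNE step (the random arrays below are placeholders for the encoder features and ShapeNet10 labels; utils/TSNE_Visu.py is the script we actually use):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embeddings = np.random.randn(500, 1024)     # placeholder for pre-trained encoder features
labels = np.random.randint(0, 10, 500)      # placeholder for ShapeNet10 class ids

xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=5, cmap='tab10')
plt.axis('off')
plt.savefig('tsne_shapenet10.png', dpi=300)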

We also train a Support Vector Machine (SVM) on the learned embeddings for object recognition, implemented in train_svm.py. We also provide a bash template; see bash_template/train_svm_template.sh for details (a minimal scikit-learn sketch follows the template):

#!/usr/bin/env bash

cd ../

# fit a simple linear SVM on ModelNet40 with OcCo PCN
python train_svm.py \
	--gpu 0 \
	--model pcn_util \
	--dataset modelnet40 \
	--restore_path log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth ;

# grid search the best svm parameters with rbf kernel on ScanObjectNN(OBJ_BG) with OcCo DGCNN
python train_svm.py \
	--gpu 0 \
	--grid_search \
	--batch_size 8 \
	--model dgcnn_util \
	--dataset scanobjectnn \
	--bn \
	--restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
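
A minimal scikit-learn sketch of the same sanity check (the random arrays below are placeholders for the global features extracted by the frozen OcCo encoder):

import numpy as np
from sklearn.svm import SVC, LinearSVC
from sklearn.model_selection import GridSearchCV

train_feats, train_y = np.random.randn(800, 1024), np.random.randint(0, 40, 800)
test_feats, test_y = np.random.randn(200, 1024), np.random.randint(0, 40, 200)

# simple linear SVM
linear = LinearSVC(C=1.0, max_iter=10000).fit(train_feats, train_y)
print('linear SVM acc:', linear.score(test_feats, test_y))

# grid search over an RBF kernel, as with the --grid_search option
grid = GridSearchCV(SVC(kernel='rbf'),
                    {'C': [1, 10, 100], 'gamma': ['scale', 1e-3, 1e-4]},
                    cv=3, n_jobs=-1).fit(train_feats, train_y)
print('best params:', grid.best_params_, 'rbf SVM acc:', grid.score(test_feats, test_y))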

4) Fine Tuning Task - Classification

Data Usage:

For details on the data setup, please see data/readme.md.

Training/Testing Scripts:

We unify the training and testing of all three models (PointNet, PCN and DGCNN) in train_cls.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_cls_template.sh for details:

#!/usr/bin/env bash

cd ../

# training pointnet on ModelNet40, from scratch
python train_cls.py \
	--gpu 0 \
	--model pointnet_cls \
	--dataset modelnet40 \
	--log_dir modelnet40_pointnet_scratch ;

# fine tuning pcn on ScanNet10, using jigsaw pre-trained checkpoints
python train_cls.py \
	--gpu 0 \
	--model pcn_cls \
	--dataset scannet10 \
	--log_dir scannet10_pcn_jigsaw \
	--restore \
	--restore_path log/completion/modelnet_pcn_vanilla/checkpoints/best_model.pth ;

# fine tuning dgcnn on ScanObjectNN(OBJ_BG), using jigsaw pre-trained checkpoints
python train_cls.py \
	--gpu 0,1 \
	--epoch 250 \
	--use_sgd \
	--scheduler cos \
	--model dgcnn_cls \
	--dataset scanobjectnn \
	--bn \
	--log_dir scanobjectnn_dgcnn_occo \
	--restore \
	--restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;

# test pointnet on ModelNet40 from pre-trained checkpoints
python train_cls.py \
	--gpu 1 \
	--mode test \
	--model pointnet_cls \
	--dataset modelnet40 \
	--log_dir modelnet40_pointnet_scratch \
	--restore \
	--restore_path log/cls/modelnet40_pointnet_scratch/checkpoints/best_model.pth ;

5) Fine Tuning Task - Semantic Segmentation

Data Usage:

For details on the data setup, please see data/readme.md.

Training/Testing Scripts:

We unify the training and testing of all three models (PointNet, PCN and DGCNN) in train_semseg.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_semseg_template.sh for details:

#!/usr/bin/env bash

cd ../

# train pointnet_semseg on 6-fold cv of S3DIS, from scratch
for area in $(seq 1 1 6)
do
python train_semseg.py \
	--gpu 0,1 \
	--model pointnet_semseg \
	--bn_decay \
	--xavier_init \
	--test_area ${area} \
	--scheduler step \
	--log_dir pointnet_area${area}_scratch ;
done

# fine tune pcn_semseg on 6-fold cv of S3DIS, using jigsaw pre-trained weights
for area in $(seq 1 1 6)
do
python train_semseg.py \
	--gpu 0,1 \
	--model pcn_semseg \
	--bn_decay \
	--test_area ${area} \
	--log_dir pcn_area${area}_jigsaw \
	--restore \
	--restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;
done

# fine tune dgcnn_semseg on 6-fold cv of S3DIS, using occo pre-trained weights
for area in $(seq 1 1 6)
do
python train_semseg.py \
	--gpu 0,1 \
	--test_area ${area} \
	--optimizer sgd \
	--scheduler cos \
	--model dgcnn_semseg \
	--log_dir dgcnn_area${area}_occo \
	--restore \
	--restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;
done

# test pointnet_semseg on 6-fold cv of S3DIS, from saved checkpoints
for area in $(seq 1 1 6)
do
python train_semseg.py \
	--gpu 0,1 \
	--mode test \
	--model pointnet_semseg \
	--test_area ${area} \
	--scheduler step \
	--log_dir pointnet_area${area}_scratch \
	--restore \
	--restore_path log/semseg/pointnet_area${area}_scratch/checkpoints/best_model.pth ;
done
Visualization:

We recommend using the relevant code snippets in RandLA-Net for visualization.
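
Alternatively, a minimal Open3D sketch that dumps predictions as a coloured point cloud readable by MeshLab or CloudCompare (the coordinates and predicted labels below are placeholders):

import numpy as np
import open3d as o3d

xyz = np.random.rand(4096, 3)                       # placeholder S3DIS block coordinates
pred = np.random.randint(0, 13, 4096)               # placeholder predictions (13 S3DIS classes)

palette = np.random.RandomState(0).rand(13, 3)      # one RGB colour per class
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz))
pcd.colors = o3d.utility.Vector3dVector(palette[pred])
o3d.io.write_point_cloud('semseg_pred.ply', pcd)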

6) Fine Tuning Task - Part Segmentation

Data Usage:

For details on the data setup, please see data/readme.md.

Training/Testing Scripts:

We unify the training and testing of all three models (PointNet, PCN and DGCNN) in train_partseg.py. We also provide bash templates for training each model from scratch or from JigSaw/OcCo pre-trained checkpoints; see bash_template/train_partseg_template.sh for details:

#!/usr/bin/env bash

cd ../

# training pointnet on ShapeNetPart, from scratch
python train_partseg.py \
	--gpu 0 \
	--normal \
	--bn_decay \
	--xavier_init \
	--model pointnet_partseg \
	--log_dir pointnet_scratch ;


# fine tuning pcn on ShapeNetPart, using jigsaw pre-trained checkpoints
python train_partseg.py \
	--gpu 0 \
	--normal \
	--bn_decay \
	--xavier_init \
	--model pcn_partseg \
	--log_dir pcn_jigsaw \
	--restore \
	--restore_path log/jigsaw/modelnet_pcn_vanilla/checkpoints/best_model.pth ;


# fine tuning dgcnn on ShapeNetPart, using occo pre-trained checkpoints
python train_partseg.py \
	--gpu 0,1 \
	--normal \
	--use_sgd \
	--xavier_init \
	--scheduler cos \
	--model dgcnn_partseg \
	--log_dir dgcnn_occo \
	--restore \
	--restore_path log/completion/modelnet_dgcnn_vanilla/checkpoints/best_model.pth ;


# test fine tuned pointnet on ShapeNetPart, using multiple votes
python train_partseg.py \
	--gpu 1 \
	--epoch 1 \
	--mode test \
	--num_votes 3 \
	--model pointnet_partseg \
	--log_dir pointnet_scratch \
	--restore \
	--restore_path log/partseg/pointnet_occo/checkpoints/best_model.pth ;

7) OcCo Data Generation (Create Your Own Dataset for OcCo Pre-Training)

For details on the self-occluded point cloud generation, please see render/readme.md.
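
For intuition, the sketch below approximates the idea with Open3D's hidden point removal: pick a viewpoint and keep only the points visible from it. This is an illustration only, not the actual generation pipeline in render/.

import numpy as np
import open3d as o3d

points = np.random.randn(2048, 3) * 0.5                         # placeholder object point cloud
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))

diameter = np.linalg.norm(pcd.get_max_bound() - pcd.get_min_bound())
camera = [0.0, 0.0, diameter]                                   # in practice, a random viewpoint
_, visible_idx = pcd.hidden_point_removal(camera, radius=diameter * 100)

partial = points[np.asarray(visible_idx)]                       # the self-occluded (visible) subset
print(points.shape, '->', partial.shape)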

8) Just Completion (Complete Your Own Data with Pre-Trained Model)

You can complete your own occluded point cloud data using our provided OcCo checkpoints.

9) Jigsaw Puzzle

We also provide our implementation (developed from scratch) of pre-training point cloud models by solving 3D jigsaw puzzle tasks, together with the data generation code. The method is described in this paper; its authors did not respond to our code request. The details of our implementation are reported in the appendix of our paper.

For the implementation details, please refer to the description in the appendix of our paper and the relevant code, i.e., train_jigsaw.py, utils/3DPC_Data_Gen.py and train_jigsaw_template.sh.
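
For intuition, a minimal sketch of the 3D jigsaw data generation on a k x k x k voxel grid (the function and its details are illustrative; see utils/3DPC_Data_Gen.py for the code we actually use): each point is labelled with its original voxel index, the voxels are randomly permuted, and the network learns to predict the original voxel label for each point.

import numpy as np

def make_jigsaw_sample(points, k=3):
    # points: (N, 3) in [-1, 1]; returns displaced points and per-point voxel labels
    edges = np.linspace(-1, 1, k + 1)
    cells = np.stack([np.clip(np.digitize(points[:, d], edges) - 1, 0, k - 1)
                      for d in range(3)], axis=1)                  # integer voxel coords per point
    labels = cells[:, 0] * k * k + cells[:, 1] * k + cells[:, 2]   # original voxel id in [0, k^3)

    centres_1d = (edges[:-1] + edges[1:]) / 2
    centres = np.array([[centres_1d[i], centres_1d[j], centres_1d[l]]
                        for i in range(k) for j in range(k) for l in range(k)])
    perm = np.random.permutation(k ** 3)                           # shuffle the k^3 voxels
    # move each point from its original voxel centre to the centre of the permuted voxel
    displaced = points - centres[labels] + centres[perm[labels]]
    return displaced.astype(np.float32), labels.astype(np.int64)

pts = np.random.uniform(-1, 1, (1024, 3))
x, y = make_jigsaw_sample(pts)
print(x.shape, y.shape, int(y.min()), int(y.max()))                # labels in [0, 26] for k = 3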

Results

Generated Dataset:

image

Completed Occluded Point Cloud:

-- PointNet:

image

-- PCN:

image

-- DGCNN:

image

-- Failure Examples:

image

Visualization of learned features:

image

Classification (linear SVM):

image

Classification:

image

Semantic Segmentation:

image

Part Segmentation:

image

Sample Efficiency:

image

Learning Efficiency:

image

For the description and discussion of these results, please refer to our paper. Thanks :)

Contributing

The code of this project is released under the MIT License.

We would like to thank and acknowledge the referenced code from the following repositories:

https://github.com/wentaoyuan/pcn

https://github.com/hansen7/NRS_3D

https://github.com/WangYueFt/dgcnn

https://github.com/charlesq34/pointnet

https://github.com/charlesq34/pointnet2

https://github.com/PointCloudLibrary/pcl

https://github.com/AnTao97/dgcnn.pytorch

https://github.com/HuguesTHOMAS/KPConv

https://github.com/QingyongHu/RandLA-Net

https://github.com/chrdiller/pyTorchChamferDistance

https://github.com/yanx27/Pointnet_Pointnet2_pytorch

https://github.com/AnTao97/UnsupervisedPointCloudReconstruction

We appreciate the help from the supportive technicians, Peter and Raf, from Cambridge Engineering :)
