PyTorch implementation of SQN based on CloserLook3D's encoder

Overview

SQN_pytorch

This repo is a PyTorch implementation of the Semantic Query Network (SQN) built on CloserLook3D's encoder. For a TensorFlow implementation, check our SQN_tensorflow repo.

Caution: this repo does not yet achieve results on par with those reported in the SQN paper. For details, check the Performance on S3DIS section.

The repo is still under development, with the aim of reaching the performance reported in the SQN paper. (Note: our SQN_tensorflow repo currently achieves slightly higher performance than this PyTorch repo.)

Please open an issue if you have any comments or suggestions for improving the model's performance.
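
For context, SQN learns from sparse labels by querying the encoder's point features at the locations of the weakly labeled points and classifying only those queried features. Below is a minimal PyTorch sketch of such a query step; it is an illustration with hypothetical names, not this repo's actual implementation (which uses custom trilinear interpolation CUDA ops, see init.sh):

import torch

def query_point_features(support_xyz, support_feats, query_xyz, k=3):
    # support_xyz: (B, N, 3) sub-sampled points carrying encoder features
    # support_feats: (B, N, C); query_xyz: (B, M, 3) weakly labeled points
    dist = torch.cdist(query_xyz, support_xyz)               # (B, M, N)
    knn_dist, knn_idx = dist.topk(k, dim=-1, largest=False)  # k nearest supports
    weights = 1.0 / (knn_dist + 1e-8)                        # inverse-distance weighting
    weights = weights / weights.sum(dim=-1, keepdim=True)
    idx = knn_idx.unsqueeze(-1).expand(-1, -1, -1, support_feats.size(-1))
    neigh = torch.gather(
        support_feats.unsqueeze(1).expand(-1, query_xyz.size(1), -1, -1), 2, idx)
    return (weights.unsqueeze(-1) * neigh).sum(dim=2)        # (B, M, C)

The queried features then go through a small point-wise classifier, and the training loss is computed only on the weakly labeled points.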

TODOs

  • Implement the training strategy mentioned in the appendix of the paper.
  • Conduct an ablation study.
  • Benchmark weak supervision.

Install Python packages

The latest code has been tested on two Ubuntu setups:

  • Ubuntu 18.04, Nvidia 1080, CUDA 10.1, PyTorch 1.4 and Python 3.6
  • Ubuntu 18.04, Nvidia 3090, CUDA 11.3, PyTorch 1.4 and Python 3.6

For details on setting up the development environment, check the CloserLook3D PyTorch version. For convenience, my own bash script (install.sh) is provided below to create a conda environment from scratch for this repo. (You may need to tailor this script to your own system.)

#!/bin/bash
# Create a conda environment from scratch for this repo
ENV_NAME='closerlook'
conda create -n $ENV_NAME python=3.6.10 -y
source activate $ENV_NAME
# Pinned dependency versions tested with this repo
conda install -c anaconda pillow=6.2 -y
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch -y
conda install -c conda-forge opencv -y
pip3 install termcolor tensorboard h5py easydict
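
Run it with bash install.sh, then activate the environment with source activate closerlook before running the steps below.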

Datasets

Take S3DIS as an example.

Scene Segmentation on S3DIS

You can download the S3DIS dataset from here (4.8 GB). You only need the file named Stanford3dDataset_v1.2.zip; unzip it and move (or link) it to data/S3DIS/Stanford3dDataset_v1.2 (the same setting as the CloserLook3D repo).

The file structure should look like:

<root>
├── cfgs
│   └── s3dis
├── data
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           └── Area_6
├── init.sh
├── datasets
├── function
├── models
├── ops
└── utils

Run prepare-s3dis-sqn.sh to preprocess the S3DIS dataset. The script generates a processed folder (structure shown below) containing five types of data:

  • raw point clouds,
  • sub-sampled point clouds for each area,
  • KD-trees for each sub-sampled area,
  • projection indices mapping each raw point onto its sub-sampled area,
  • weak labels for the raw and sub-sampled point clouds, at several weak-label proportions (e.g., 0.1, 0.01, 0.001).

For details, check datasets/S3DIS_sqn.py and my summary notes in that file. A small sketch of the weak-label sampling idea follows the folder listing below.

The processed folder is organized as follows:

<root>
├── data
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           ├── Area_6
│           └── processed
│               ├── weak_label_0.01
│               ├── weak_label_1.0
│               ├── Area_1_0.040_sub.pkl
│               ├── Area_1.pkl
│               └── ...(many other pkl files)
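
As an aside, here is a minimal sketch of the weak-label sampling idea mentioned above, assuming labels are hidden uniformly at random; the function name is hypothetical, and the repo's actual logic lives in prepare-s3dis-sqn.sh and datasets/S3DIS_sqn.py:

import numpy as np

def make_weak_label_mask(num_points, weak_ratio=0.01, seed=0):
    # Keep ground-truth labels for a `weak_ratio` fraction of points;
    # the rest are treated as unlabeled during training.
    rng = np.random.default_rng(seed)
    num_labeled = max(1, int(num_points * weak_ratio))
    mask = np.zeros(num_points, dtype=bool)
    mask[rng.choice(num_points, size=num_labeled, replace=False)] = True
    return mask

# e.g., keep labels for 1% of a one-million-point area
mask = make_weak_label_mask(1_000_000, weak_ratio=0.01)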

Compile custom CUDA operators

sh init.sh
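
This compiles the custom CUDA ops under ops/. Make sure the CUDA toolkit used for compilation matches the cudatoolkit version your PyTorch build was compiled against (10.1 in the environment above).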

Run

Use the run-sqn.sh script for training or evaluation.

The core training script is as follows:

python -m torch.distributed.launch \
--master_port 12345 \
--nproc_per_node ${num_gpu} \
function/train_s3dis_dist_sqn.py \
--dataset_name ${dataset_name} \
--cfg cfgs/${dataset_name}/pospool_xyz_avg_sqn.yaml \
--num_points ${num_points} \
--batch_size ${batch_size} \
--val_freq 20 \
--weak_ratio ${weak_ratio}
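
Here num_gpu, dataset_name, num_points, batch_size and weak_ratio are shell variables expected to be set in run-sqn.sh; judging from the performance table below, a weak_ratio of 10 should correspond to training with 1/10 of the labels.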

The core evaluation script is as follows:

python -m torch.distributed.launch \
--master_port 12346 \
--nproc_per_node 1 \
function/evaluate_s3dis_dist_sqn.py \
--cfg cfgs/s3dis/pospool_xyz_avg_sqn.yaml \
--load_path <checkpoint>
[--log_dir <log directory>]

Performance on S3DIS

The experiments are still in progress due to limited GPU resources.

| Model | Weak ratio | mIoU (%) | Description |
| --- | --- | --- | --- |
| Official RandLA-Net | 100% | 63.0 | Fully supervised method trained with full labels. |
| Official SQN | 1/1000 | 61.4 | The official SQN uses additional techniques to improve performance, which our replicated SQN does not investigate yet. The official SQN does not report S3DIS results at weak ratios of 1/10 or 1/100. |
| Our replicated SQN | 1/10 | 51.4 | Uses PosPool (s) as the encoder with width = 36 due to limited GPU memory; active learning is not used yet. |
| Our replicated SQN | 1/100 | 25.22 | Uses PosPool (s) as the encoder with width = 36 due to limited GPU memory; active learning is not used yet. |
| Our replicated SQN | 1/1000 | 21.10 | Uses PosPool (s) as the encoder with width = 36 due to limited GPU memory; active learning is not used yet. |

Acknowledgements

Our PyTorch code borrows heavily from CloserLook3D, and the custom trilinear interpolation CUDA ops are modified from erikwijmans's Pointnet2_PyTorch.

Citation

If you find our work useful in your research, please consider citing:

@article{pytorchpointnet++,
    Author = {YIN, Chao},
    Title = {SQN Pytorch implementation based on CloserLook3D's encoder},
    Journal = {https://github.com/PointCloudYC/SQN_pytorch},
    Year = {2021}
}

@article{hu2021sqn,
    title={SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds with 1000x Fewer Labels},
    author={Hu, Qingyong and Yang, Bo and Fang, Guangchi and Guo, Yulan and Leonardis, Ales and Trigoni, Niki and Markham, Andrew},
    journal={arXiv preprint arXiv:2104.04891},
    year={2021}
}