Overview

PlantStereo

This is the official implementation code for the paper "PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction".

Paper

PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction [preprint]

Qingyu Wang, Baojian Ma, Wei Liu, Mingzhao Lou, Mingchuan Zhou*, Huanyu Jiang and Yibin Ying

College of Biosystems Engineering and Food Science, Zhejiang University.

Example and Overview

We provide examples from our dataset, which covers spinach, tomato, pepper and pumpkin.

The dataset size and image resolution of each subset are listed below:

Subset    Train   Validation   Test   All   Resolution
Spinach   160     40           100    300   1046×606
Tomato    80      20           50     150   1040×603
Pepper    150     30           32     212   1024×571
Pumpkin   80      20           50     150   1024×571
All       470     110          232    812

Analysis

We evaluated the disparity distributions of different stereo matching datasets.

Format

The data is organized in the following structure, where the sub-pixel disparity maps are saved in .tiff format and the pixel-level disparity maps are saved in .png format.

PlantStereo
├── PlantStereo2021
│   ├── tomato
│   │   ├── training
│   │   │   ├── left_view
│   │   │   │   ├── 000000.png
│   │   │   │   ├── 000001.png
│   │   │   │   ├── ......
│   │   │   ├── right_view
│   │   │   │   ├── ......
│   │   │   ├── disp
│   │   │   │   ├── ......
│   │   │   ├── disp_high_acc
│   │   │   │   ├── 000000.tiff
│   │   │   │   ├── ......
│   │   ├── testing
│   │   │   ├── left_view
│   │   │   ├── right_view
│   │   │   ├── disp
│   │   │   ├── disp_high_acc
│   ├── spinach
│   ├── ......
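For illustration only (not part of the official code), one training sample can be loaded as follows, assuming OpenCV is installed; the paths follow the layout above, and the right_view and disp file names are assumed to mirror left_view:

# Illustrative sketch, not part of the official code: load one training sample of the
# tomato subset. Paths follow the layout above; the right_view and disp file names are
# assumed to mirror left_view.
import cv2

root = 'PlantStereo/PlantStereo2021/tomato/training'
left  = cv2.imread(root + '/left_view/000000.png')
right = cv2.imread(root + '/right_view/000000.png')

# IMREAD_UNCHANGED keeps the original bit depth: integer disparity from the .png,
# floating-point sub-pixel disparity from the .tiff
disp      = cv2.imread(root + '/disp/000000.png', cv2.IMREAD_UNCHANGED)
disp_high = cv2.imread(root + '/disp_high_acc/000000.tiff', cv2.IMREAD_UNCHANGED)

print(left.shape, right.shape, disp.dtype, disp_high.dtype)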

Download

You can use the following links to download our PlantStereo dataset.

Baidu Netdisk link
Google Drive link

Usage

  • sample.py

To construct the dataset, run sample.py in your terminal:

conda activate <your_anaconda_virtual_environment>
python sample.py --num 0

We register the images and transform the coordinates through the function mech_zed_alignment():

# Snippet from sample.py: project the Mech-Eye depth map into the ZED left view to obtain
# ground-truth disparity. K_MECH, K_ZED_LEFT, R_MECH_ZED, T_MECH_ZED, ZED_BASELINE and
# ZED_FOCAL_LENGTH are calibration constants defined elsewhere in sample.py.
import numpy as np

def mech_zed_alignment(depth, mech_height, mech_width, zed_height, zed_width):
    ground_truth = np.zeros(shape=(zed_height, zed_width), dtype=float)
    for v in range(0, mech_height):
        for u in range(0, mech_width):
            i_mech = np.array([[u], [v], [1]], dtype=float)  # homogeneous pixel, 3*1
            # back-project the Mech-Eye pixel to a 3-D point in the Mech-Eye frame
            p_i_mech = np.dot(np.linalg.inv(K_MECH), i_mech * depth[v, u])  # 3*1
            # transform the point into the ZED left-camera frame
            p_i_zed = np.dot(R_MECH_ZED, p_i_mech) + T_MECH_ZED  # 3*1
            # project into the ZED left image plane
            i_zed = np.dot(K_ZED_LEFT, p_i_zed) * (1 / p_i_zed[2])  # 3*1
            disparity = ZED_BASELINE * ZED_FOCAL_LENGTH * 1000 / p_i_zed[2]
            u_zed = i_zed[0]
            v_zed = i_zed[1]
            coor_u_zed = int(round(u_zed[0]))
            coor_v_zed = int(round(v_zed[0]))
            # keep only points that project inside the ZED image
            if 0 <= coor_u_zed < zed_width and 0 <= coor_v_zed < zed_height:
                ground_truth[coor_v_zed][coor_u_zed] = disparity
    return ground_truth
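As a hypothetical usage sketch (the calibration constants must already be defined in sample.py; the sizes below are only examples, with 1046×606 taken from the spinach resolution in the table above):

# Hypothetical usage sketch: a constant synthetic depth map stands in for a real
# Mech-Eye capture; the sizes below are examples, not the actual sensor resolutions.
import numpy as np

depth = np.full((512, 640), 500.0)  # toy Mech-Eye depth map
gt_disparity = mech_zed_alignment(depth,
                                  mech_height=depth.shape[0],
                                  mech_width=depth.shape[1],
                                  zed_height=606,
                                  zed_width=1046)
print(gt_disparity.shape, np.count_nonzero(gt_disparity))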
  • epipole_rectification.py

    After collecting the left, right and disparity images through sample.py, we can perform epipolar rectification on the left and right images through epipole_rectification.py:

    python epipole_rectification.py
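The actual parameters are handled inside epipole_rectification.py; purely as a non-authoritative sketch of what stereo (epipolar) rectification typically looks like with OpenCV, using placeholder calibration values:

# Rough sketch of epipolar rectification with OpenCV, for illustration only.
# K1, D1, K2, D2, R, T are placeholder calibration parameters, not the values used
# by epipole_rectification.py.
import cv2
import numpy as np

K1 = K2 = np.array([[1000.0, 0.0, 523.0],
                    [0.0, 1000.0, 303.0],
                    [0.0,    0.0,   1.0]])
D1 = D2 = np.zeros(5)                    # assume no lens distortion
R = np.eye(3)                            # rotation between the two cameras
T = np.array([0.12, 0.0, 0.0])           # baseline along x (placeholder)
image_size = (1046, 606)                 # (width, height), spinach resolution

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

# stand-ins for real left/right captures
left  = np.zeros((606, 1046, 3), dtype=np.uint8)
right = np.zeros((606, 1046, 3), dtype=np.uint8)
left_rect  = cv2.remap(left,  map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)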

Citation

If you use our PlantStereo dataset in your research, please cite this publication:

@misc{PlantStereo,
    title={PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction},
    author={Qingyu Wang and Baojian Ma and Wei Liu and Mingzhao Lou and Mingchuan Zhou and Huanyu Jiang and Yibin Ying},
    howpublished = {\url{https://github.com/wangqingyu985/PlantStereo}},
    year={2021}
}

Acknowledgements

This project is mainly based on:

zed-python-api

mecheye_python_interface

Contact

If you have any questions, please do not hesitate to contact us by e-mail or by opening an issue; we will reply as soon as possible.

[email protected] or [email protected]
