
Overview

Lidar-data-decode

In this project, you can decode your lidar data frames (pcap files) and make your own datasets (test datasets) on Windows, without any huge C++-based library or ROS under Ubuntu.

  1. In the lidar data frame decode part:
  • Supports only the LSC32 (LeiShen Intelligent System) at the moment (you can also change the parameters to fit other lidars such as Velodyne or RoboSense).
  • Takes a pcap file recorded by an LSC32 lidar as input.
  • Extracts all frames from the pcap file.
  • Saves data frames as point cloud files (.pcd) and/or as text files (.txt).
  • Can be parameterized by a yaml file.
  2. In the dataset preparation part:
  • File format conversion (txt to bin, if you want to make your datasets in KITTI format)
  • File renaming
  • Data frame visualization
Output

Below is a sample point from a point cloud text file.

All point cloud text files have the following fields: Time [musec], X [m], Y [m], Z [m], ID, Intensity, Latitude [Deg], Longitude [Deg], Distance [m]. For example:

2795827803, 0.032293, 5.781942, -1.549291, 0, 6, 0.320, -15.000, 5.986
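For example, one such row can be split into named fields with a few lines of Python (a minimal sketch; the field names simply mirror the list above):

```python
FIELDS = ["time_musec", "x", "y", "z", "id", "intensity",
          "latitude_deg", "longitude_deg", "distance_m"]

def parse_point(line):
    # Split one comma-separated row of the text output into a dict of named fields.
    values = [float(v) for v in line.split(",")]
    return dict(zip(FIELDS, values))

print(parse_point("2795827803, 0.032293, 5.781942, -1.549291, 0, 6, 0.320, -15.000, 5.986"))
```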

All point cloud PCD files have the following fields:

  1. X-Coordinate
  2. Y-Coordinate
  3. Z-Coordinate
  4. Intensity
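For reference, a minimal sketch of reading such an ASCII PCD file with numpy; this assumes DATA ascii and exactly the four fields above (a dedicated library such as open3d would also work):

```python
import numpy as np

def load_ascii_pcd(path):
    # Skip the PCD header (everything up to and including the DATA line),
    # then read the remaining rows as x, y, z, intensity.
    with open(path) as f:
        lines = f.readlines()
    start = next(i for i, line in enumerate(lines) if line.startswith("DATA")) + 1
    return np.loadtxt(lines[start:], dtype=np.float32)
```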
Dependencies
  1. For lidar frame decode: the decode part (based on Veloparser) has the following package dependencies:
  • dpkt
  • numpy
  • tqdm
  2. For lidar frame visualization:
  • mayavi
  • torch
  • opencv-python (using pip install opencv-python)
Run

Firstly, clone this project: "git clone https://github.com/hitxing/Lidar-data-decode.git"

Because empty folders cannot be uploaded to GitHub, after you clone this project please create the empty working folders referenced below (for example .\input, .\output, .\txt2bin\txt and .\txt2bin\bin).

a. for lidar frame decode:

  1. Make sure test.pcap is in dir .\input\test.pcap
  2. Check your parameters in params.yaml, then run: "python main.py --path=.\input\test.pcap --out-dir=.\output --config=.\params.yaml"
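Under the hood, the decode step essentially walks the pcap packet by packet and pulls out the lidar UDP payloads. Below is a minimal sketch of that idea using the dpkt dependency listed above; the UDP port and the function name are assumptions for illustration only, not the project's actual code (see main.py for the real decoder):

```python
import dpkt

def iter_lidar_payloads(pcap_path, data_port=2368):
    """Yield raw UDP payloads (one lidar data packet each) from a pcap file.

    The port and packet layout are assumptions; decoding LSC32 packets into
    points is handled by main.py according to params.yaml.
    """
    with open(pcap_path, "rb") as f:
        for _ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            ip = eth.data
            if not isinstance(ip, dpkt.ip.IP):
                continue
            udp = ip.data
            if isinstance(udp, dpkt.udp.UDP) and udp.dport == data_port:
                yield bytes(udp.data)
```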

After this operation, you can get your text files / PCD files as follows:

  1) Text files in .\output\velodynevlp16\data_ascii

  2) PCD files in .\output\velodynevlp16\data_pcl

b. for Format conversion and rename:

If you want to make your dataset in KITTI format (bin files), you should first convert your txt files to bin files. If you want to make a dataset like nuScenes (pcd files), just go to the next step and skip the conversion.

  1. Put all your txt files into dir .\txt2bin\txt and run "python txt2bin.py"

Then your txt files will be converted to bin format and saved in dir ./txt2bin/bin.
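For reference, a minimal sketch of what such a txt-to-bin conversion does, assuming comma-separated rows with the field order shown in the Output section; the column choices and file layout here are my assumptions, check txt2bin.py for the exact behavior:

```python
import numpy as np

def txt_to_bin(txt_path, bin_path):
    # Each text row: time, x, y, z, id, intensity, latitude, longitude, distance
    data = np.loadtxt(txt_path, delimiter=",")
    xyz = data[:, 1:4].astype(np.float32)          # x, y, z in meters
    intensity = data[:, 5].astype(np.float32)      # intensity column (assumed)
    points = np.hstack([xyz, intensity[:, None]])  # KITTI layout: x, y, z, intensity
    points.tofile(bin_path)                        # flat float32 binary, KITTI-style
```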


  2. To make a test dataset in KITTI format, the next step is to rename your files to names like 000000.bin. For bin files (this also works for pcd files; change the parameters in file_rename.py, line 31), run "python file_rename.py" and you will get your test dataset in dir .\txt2bin\bin.

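A minimal sketch of this kind of KITTI-style renaming, assuming all files sit in one folder and share one extension; file_rename.py is the script that actually ships with the project:

```python
import os

def rename_kitti_style(folder, ext=".bin"):
    # Rename files to 000000.bin, 000001.bin, ... in sorted order.
    files = sorted(f for f in os.listdir(folder) if f.endswith(ext))
    for idx, name in enumerate(files):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, f"{idx:06d}{ext}"))
```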

c. for visualizing your data frames (bin files only for now)

Please make sure that all of the visualization packages listed above are installed (via pip or conda).

  1. Copy your bin files from dir .\txt2bin\bin to your own dir (the default is .\visualization)

  2. run "python point_visul.py", the visual will like this:

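For reference, a minimal sketch of how such a bin frame can be displayed with the mayavi dependency listed above; the function name and color choices are assumptions, point_visul.py is the script actually used by the project:

```python
import numpy as np
from mayavi import mlab

def show_bin(bin_path):
    # KITTI-style bin: flat float32 array of x, y, z, intensity.
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    mlab.points3d(points[:, 0], points[:, 1], points[:, 2],
                  points[:, 3],          # color points by intensity
                  mode="point", colormap="spectral")
    mlab.show()
```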

Note that the lidar data in 000000.bin is not complete (the frames after 000000.bin are complete), which is why its visualization result looks incomplete; you can delete this frame when you make your own test dataset.


If you want to make a full dataset and label your data frames, I hope this will be helpful: https://github.com/Gltina/ACP-3Detection.

Note

Thanks a lot to ArashJavan for providing this fantastic project! The lidar data frame decode part of Lidar-data-decode is based on https://github.com/ArashJavan/veloparser, which supports the Velodyne VLP16. At the moment, Lidar-data-decode supports the LSC32-151A and LSC32-151C; in fact, this project can support any lidar as long as you change the parameters following the corresponding technical manual.

The reasons why I wrote this project: a. I could not find any simple way, without installing ROS (Robot Operating System) or another huge C++-based library, to 'just' extract the point clouds from a pcap file. b. To provide a reference for extending this project to fit your own lidar and make your own datasets.
