A low-power, ultra-lightweight, general-purpose object detection algorithm based on YOLO: only 250k parameters, reaching speeds of ~300 FPS and above on smartphone mobile CPUs.

Overview

Yolo-FastestV2


  • Simple, fast, compact, and easy to port
  • Low resource usage, excellent single-core performance, low power consumption
  • Faster and smaller: trades a 1% loss in accuracy for a 40% increase in inference speed, while reducing the parameter count by 25%
  • Fast training with low compute requirements: training needs only 3 GB of video memory, and one COCO epoch on a GTX 1660 Ti takes only 7 minutes

Evaluation metrics / Benchmark

Network            COCO mAP(0.5)   Resolution   Run Time (4xCore)   Run Time (1xCore)   FLOPs (G)   Params (M)
Yolo-FastestV2     23.56%          352x352      3.23 ms             4.5 ms              0.238       0.25
Yolo-FastestV1.1   24.40%          320x320      5.59 ms             7.52 ms             0.252       0.35
Yolov4-Tiny        40.2%           416x416      23.67 ms            40.14 ms            6.9         5.77
  • Test platform: Xiaomi Mi 11 (Snapdragon 888) CPU, based on NCNN
  • Reason for the inference speed increase: optimized model memory access
  • Suitable for hardware with extremely tight computing resources

How to use

Installing dependencies

  • PIP
pip3 install -r requirements.txt

Test

  • Image test
    python3 test.py --data data/coco.data --weights modelzoo/coco2017-epoch-0.235624ap-model.pth --img img/dog.jpg
    


How to train

Building the dataset (the dataset is constructed in the same way as for Darknet YOLO)

  • The dataset format is the same as Darknet YOLO's: each image corresponds to a .txt label file. The label format also follows Darknet YOLO's convention: "category cx cy w h", where category is the class index, cx and cy are the normalized center coordinates of the label box, and w and h are the normalized width and height of the label box (a conversion sketch follows this list). Example .txt label file content:

    11 0.344192634561 0.611 0.416430594901 0.262
    14 0.509915014164 0.51 0.974504249292 0.972
    
  • Each image and its corresponding label file share the same base name and are stored in the same directory. The directory structure is as follows:

    .
    ├── train
    │   ├── 000001.jpg
    │   ├── 000001.txt
    │   ├── 000002.jpg
    │   ├── 000002.txt
    │   ├── 000003.jpg
    │   └── 000003.txt
    └── val
        ├── 000043.jpg
        ├── 000043.txt
        ├── 000057.jpg
        ├── 000057.txt
        ├── 000070.jpg
        └── 000070.txt
    
  • Generate the dataset path .txt files, listing one image path per line (a generation sketch follows this list); example content is as follows:

    train.txt

    /home/qiuqiu/Desktop/dataset/train/000001.jpg
    /home/qiuqiu/Desktop/dataset/train/000002.jpg
    /home/qiuqiu/Desktop/dataset/train/000003.jpg
    

    val.txt

    /home/qiuqiu/Desktop/dataset/val/000070.jpg
    /home/qiuqiu/Desktop/dataset/val/000043.jpg
    /home/qiuqiu/Desktop/dataset/val/000057.jpg
    
  • Generate the .names category label file; sample content is as follows:

    category.names

    person
    bicycle
    car
    motorbike
    ...
    
    
  • The final directory structure of the constructed training dataset is as follows:

    .
    ├── category.names        # .names category label file
    ├── train                 # train dataset
    │   ├── 000001.jpg
    │   ├── 000001.txt
    │   ├── 000002.jpg
    │   ├── 000002.txt
    │   ├── 000003.jpg
    │   └── 000003.txt
    ├── train.txt              # train dataset path .txt file
    ├── val                    # val dataset
    │   ├── 000043.jpg
    │   ├── 000043.txt
    │   ├── 000057.jpg
    │   ├── 000057.txt
    │   ├── 000070.jpg
    │   └── 000070.txt
    └── val.txt                # val dataset path .txt file
    
    
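A minimal sketch of the two steps above, assuming the example layout and paths shown (the to_darknet_label helper is hypothetical, not part of the repository): it converts a pixel-space box to the normalized label format and writes the train.txt / val.txt path files.

    # Sketch: helpers for the Darknet-style dataset layout described above.
    # The dataset root is the example path; adjust it to your own dataset.
    import os

    def to_darknet_label(category, x_min, y_min, x_max, y_max, img_w, img_h):
        """Convert a pixel-space box to a normalized 'category cx cy w h' line."""
        cx = (x_min + x_max) / 2.0 / img_w
        cy = (y_min + y_max) / 2.0 / img_h
        w = (x_max - x_min) / float(img_w)
        h = (y_max - y_min) / float(img_h)
        return "%d %f %f %f %f" % (category, cx, cy, w, h)

    dataset_root = "/home/qiuqiu/Desktop/dataset"  # example path from above

    for split in ("train", "val"):
        split_dir = os.path.join(dataset_root, split)
        images = sorted(f for f in os.listdir(split_dir) if f.endswith(".jpg"))
        with open(os.path.join(dataset_root, split + ".txt"), "w") as out:
            for name in images:
                # Flag images that lack a matching label file.
                label = os.path.splitext(name)[0] + ".txt"
                if not os.path.exists(os.path.join(split_dir, label)):
                    print("missing label for %s" % name)
                out.write(os.path.join(split_dir, name) + "\n")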

Get anchor bias

  • Generate anchors from the current dataset
    python3 genanchors.py --traintxt ./train.txt
    
  • The anchors6.txt file will be generated in the current directory; sample content of anchors6.txt (a sketch for reading it back follows):
    12.64,19.39, 37.88,51.48, 55.71,138.31, 126.91,78.23, 131.57,214.55, 279.92,258.87  # anchor bias
    0.636158                                                                             # iou
    
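As a small companion sketch (assuming the two-line anchors6.txt layout shown above, with any trailing comments stripped), the generated anchors can be read back and formatted as the anchors= entry for the .data configuration file described next:

    # Sketch: read anchors6.txt and print the anchors= line for the
    # [model-configure] section of the .data file.
    with open("anchors6.txt") as f:
        lines = f.read().splitlines()

    anchor_line = lines[0].split("#")[0].strip()   # drop any trailing comment
    mean_iou = float(lines[1].split("#")[0].strip())

    print("anchors=" + anchor_line)
    print("mean IoU: %f" % mean_iou)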

Build the training .data configuration file

  • Reference: ./data/coco.data
    [name]
    model_name=coco           # model name
    
    [train-configure]
    epochs=300                # training epochs
    steps=150,250             # learning-rate decay steps (epochs)
    batch_size=64             # batch size
    subdivisions=1            # Same as the subdivisions of the darknet cfg file
    learning_rate=0.001       # learning rate
    
    [model-configure]
    pre_weights=None          # path of pretrained weights to load; if None, training starts from scratch
    classes=80                # Number of detection categories
    width=352                 # The width of the model input image
    height=352                # The height of the model input image
    anchor_num=3              # anchor num
    anchors=12.64,19.39, 37.88,51.48, 55.71,138.31, 126.91,78.23, 131.57,214.55, 279.92,258.87 #anchor bias
    
    [data-configure]
    train=/media/qiuqiu/D/coco/train2017.txt   # train dataset path .txt file
    val=/media/qiuqiu/D/coco/val2017.txt       # val dataset path .txt file 
    names=./data/coco.names                    # .names category label file
    
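The .data file is INI-style, so for illustration it can be read with Python's configparser; this is a minimal sketch under that assumption, not the repository's own loader:

    # Sketch: parse the INI-style .data configuration (illustration only).
    import configparser

    cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
    cfg.read("data/coco.data")

    classes = cfg.getint("model-configure", "classes")
    width = cfg.getint("model-configure", "width")
    height = cfg.getint("model-configure", "height")
    anchor_num = cfg.getint("model-configure", "anchor_num")
    anchors = [float(a) for a in cfg.get("model-configure", "anchors").split(",")]

    # Two output scales x anchor_num anchors x (w, h) = 4 * anchor_num values,
    # consistent with the six pairs listed above for anchor_num=3.
    assert len(anchors) == 4 * anchor_num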

Train

  • Run the training task
    python3 train.py --data data/coco.data
    

Evaluation

  • Compute the mAP evaluation
    python3 evaluation.py --data data/coco.data --weights modelzoo/coco2017-epoch-0.235624ap-model.pth
    

Deploy

NCNN
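
  • Export to ONNX, then convert with NCNN's onnx2ncnn tool; the commands below mirror those that appear in the issues further down (file names are the examples used on this page, and the flow is a sketch, not a guaranteed working pipeline):
    python pytorch2onnx.py --data ./data/coco.data --weights modelzoo/coco2017-epoch-0.235624ap-model.pth
    ./onnx2ncnn model.onnx fast.param fast.bin

    One issue below reports "Gather not supported yet!" during this conversion step.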

Comments
  • Low precision and recall

    Hello,

    I'm training with only one class from the COCO dataset; the data file is standard, with only the anchors and classes (set to 1) changed:

    [name]
    model_name=coco
    
    [train-configure]
    epochs=300
    steps=150,250
    batch_size=128
    subdivisions=1
    learning_rate=0.001
    
    [model-configure]
    pre_weights=model/backbone/backbone.pth
    classes=1
    width=352
    height=352
    anchor_num=3
    anchors=8.54,20.34, 25.67,59.99, 52.42,138.38, 103.52,235.28, 197.43,103.53, 238.02,287.40
    
    [data-configure]
    train=coco_person/train.txt
    val=coco_person/val.txt
    names=data/coco.names
    

    I get an AP of 0.41, but with a low precision of 0.53 and a recall of 0.41, the model's predictions contain lots of false positives.

    Why am I getting such low precision and recall?

    P.S. I checked the bbox annotations and they are correct.

    Thanks!

    opened by natxopedreira 1
  • Test sample: generated image file not found

    I downloaded the source code and ran the following command:
    python3 test.py --data data/coco.data --weights modelzoo/coco2017-0.241078ap-model.pth --img img/000139.jpg

    But test_result.png was nowhere to be found. Could you advise what the reason might be? Thanks!

    opened by lixiangMindSpore 1
  • Anchor Number

    I reduced the anchor number from 3 to 2, and there is a problem during training (evaluation):

    anchor_boxes[:, :, :, :2] = ((r[:, :, :, :2].sigmoid() * 2. - 0.5) + grid) * stride
    

    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 3

    The model configuration is:

    [model-configure]
    pre_weights=None
    classes=7
    width=320
    height=320
    anchor_num=2
    anchors=10.54,9.51, 45.60,40.45, 119.62,95.06, 253.71,138.37

    opened by Yuanye-F 1
  • onnx2ncnn error: Gather not supported yet!

    (base) ~/Yolo-FastestV2$ python pytorch2onnx.py --data ./data/coco.data --weights modelzoo/coco2017-epoch-0.235624ap-model.pth
    load param...
    /home/pc/Yolo-FastestV2/model/backbone/shufflenetv2.py:59: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert (num_channels % 4 == 0)

    ./onnx2ncnn model.onnx fast.param fast.bin
    Gather not supported yet!
      axis=0
    Gather not supported yet!
      axis=0
    Gather not supported yet!
      axis=0
    Gather not supported yet!

    opened by wavelet2008 1
  • Inference results differ between the exported ONNX model and the .pth model

    After converting to a new ONNX model with the conversion script in the repository, I tested with both the .pth and the ONNX model and found that the inference results differ. With onnxruntime, the ONNX outputs are (1,22,22,16) and (1,11,11,16), while the .pth model outputs (1,12,22,22), (1,3,22,22), (1,1,22,22) and (1,12,11,11), (1,3,11,11), (1,1,11,11). Even after post-processing, the final results still differ from those of the .pth file. Could anyone give some guidance?

    opened by ifdealer 0
  • An error occurs during training; the message is as follows

    Traceback (most recent call last):
      File "train.py", line 139, in <module>
        _, _, AP, _ = utils.utils.evaluation(val_dataloader, cfg, model, device)
      File "D:\competition\Yolo-FastestV2-main\utils\utils.py", line 367, in evaluation
        for imgs, targets in pbar:
      File "C:\anaconda\envs\fire\lib\site-packages\tqdm\std.py", line 1195, in __iter__
        for obj in iterable:
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
        data = self._next_data()
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
        return self._process_data(data)
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
        data.reraise()
      File "C:\anaconda\envs\fire\lib\site-packages\torch\_utils.py", line 434, in reraise
        raise exception
    Exception: Caught Exception in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
        data = fetcher.fetch(index)
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "C:\anaconda\envs\fire\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\competition\Yolo-FastestV2-main\utils\datasets.py", line 127, in __getitem__
        raise Exception("%s is not exist" % label_path)
    Exception: .txt is not exist

    opened by richardlotw 4