This is the source code of yolo3-tf2, which can be used to train your own model.

Overview

YOLOv3: An implementation of the You Only Look Once object detection model in TensorFlow 2


Contents

  1. Performance
  2. Environment
  3. Download
  4. How2train
  5. How2predict
  6. How2eval
  7. Reference

Performance

Training dataset | Weight file     | Test dataset | Input size | mAP 0.5:0.95 | mAP 0.5
COCO-Train2017   | yolo_weights.h5 | COCO-Val2017 | 416x416    | 38.1         | 66.8

Environment

tensorflow==2.2.0

Download

The yolo_weights.h5 file required for training can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/1URJJiPUYjiWzitU0nytBnA  Extraction code: qvxs

The VOC dataset can be downloaded from the links below:
VOC2007+2012 training set
Link: https://pan.baidu.com/s/16pemiBGd-P9q2j7dZKGDFA  Extraction code: eiw9

VOC2007 test set
Link: https://pan.baidu.com/s/1BnMiFwlNwIWG9gsd4jHLig  Extraction code: dsda

How2train

a. Dataset preparation

This repository uses the VOC format for training, so a dataset in that format must be prepared beforehand. If you do not have your own dataset, you can download the VOC07+12 dataset through the GitHub link and give it a try.
Before training, place the label files in the Annotations folder under VOCdevkit/VOC2007.
Before training, place the image files in the JPEGImages folder under VOCdevkit/VOC2007.
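
For reference, the expected layout is sketched below. Annotations and JPEGImages are as described above; ImageSets is part of the standard VOC structure and, as an assumption, its split lists are written by voc_annotation.py in the next step:

VOCdevkit/
    VOC2007/
        Annotations/      <- XML label files, one per image
        JPEGImages/       <- image files
        ImageSets/Main/   <- train/val/test split lists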

b. Dataset processing

After the dataset is in place, the next step is to process it with voc_annotation.py in the repository root in order to generate the 2007_train.txt and 2007_val.txt files used for training.
A few parameters in voc_annotation.py need to be set. For a first training run you only need to modify classes_path, which points to the txt file listing the detection classes.
When training on your own dataset, create a cls_classes.txt containing the classes you want to distinguish.
The content of model_data/cls_classes.txt would then be:

cat
dog
...
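
The exact format written by voc_annotation.py is documented in the repository itself; as an assumption based on the keras-yolo3 lineage listed under Reference, each line of the generated 2007_train.txt holds an image path followed by space-separated boxes of the form x_min,y_min,x_max,y_max,class_id. A minimal parsing sketch:

# Hedged sketch: parse one line of the generated annotation file.
# The line below is illustrative, not taken from a real file.
line = "VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,11 8,12,352,498,14"
parts = line.split()
image_path = parts[0]
boxes = [list(map(int, box.split(','))) for box in parts[1:]]
print(image_path)  # VOCdevkit/VOC2007/JPEGImages/000001.jpg
print(boxes)       # [[48, 240, 195, 371, 11], [8, 12, 352, 498, 14]]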

c. Start network training

voc_annotation.py has now generated 2007_train.txt and 2007_val.txt, so training can begin. There are many training parameters; read the comments carefully after downloading the repository. The most important one is again classes_path in train.py.
classes_path points to the txt file of detection classes, the same txt used in voc_annotation.py. It must be modified when training on your own dataset!
After modifying classes_path, run train.py to start training. After a number of epochs, the weights will be saved in the logs folder.
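
A minimal sketch of this step, assuming the classes_path variable name in train.py matches the one shown in the _defaults example later in this README:

# In train.py, point classes_path at the same txt used by voc_annotation.py
# (the path below is the illustrative cls_classes.txt from the previous step):
classes_path = 'model_data/cls_classes.txt'

# Then start training from the repository root:
#   python train.py
# Weight files appear in the logs/ folder as epochs complete.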

d. Predicting with the trained model

Prediction with the trained model involves two files, yolo.py and predict.py. First, modify model_path and classes_path in yolo.py; both parameters must be changed.
model_path points to the trained weight file in the logs folder.
classes_path points to the txt file of detection classes.

After making these changes, run predict.py to detect objects. Enter an image path after it starts.
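
A hedged sketch of the two entries to update in yolo.py's _defaults before prediction (the paths below are placeholders, not files shipped with the repository):

# Illustrative values only; use your own trained weights and class list.
_defaults_edits = {
    "model_path"   : 'logs/your_trained_weights.h5',  # weight file saved under logs/
    "classes_path" : 'model_data/cls_classes.txt',    # same txt used for training
}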

How2predict

a. Using pretrained weights

  1. After downloading and unzipping the repository, download yolo_weights.h5 from Baidu Netdisk, place it in model_data, run predict.py, and enter
img/street.jpg
  2. FPS testing and video detection can be enabled through the settings in predict.py.

b. Using your own trained weights

  1. Follow the training steps to train a model.
  2. In yolo.py, modify model_path and classes_path in the section below so that they correspond to your trained files; model_path points to the weight file in the logs folder, and classes_path points to the classes that model_path was trained to detect.
_defaults = {
    #--------------------------------------------------------------------------#
    #   To predict with your own trained model you must modify model_path
    #   and classes_path!
    #   model_path points to the weight file under the logs folder;
    #   classes_path points to the txt under model_data.
    #   If a shape mismatch occurs, also check that model_path and
    #   classes_path were set consistently during training.
    #--------------------------------------------------------------------------#
    "model_path"        : 'model_data/yolo_weight.h5',
    "classes_path"      : 'model_data/coco_classes.txt',
    #---------------------------------------------------------------------#
    #   anchors_path is the txt file of the anchor boxes; usually not changed.
    #   anchors_mask helps the code find the matching anchors; usually not changed.
    #---------------------------------------------------------------------#
    "anchors_path"      : 'model_data/yolo_anchors.txt',
    "anchors_mask"      : [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
    #---------------------------------------------------------------------#
    #   Input image size; must be a multiple of 32.
    #---------------------------------------------------------------------#
    "input_shape"       : [416, 416],
    #---------------------------------------------------------------------#
    #   Only predicted boxes whose score exceeds this confidence are kept.
    #---------------------------------------------------------------------#
    "confidence"        : 0.5,
    #---------------------------------------------------------------------#
    #   IoU threshold used for non-maximum suppression.
    #---------------------------------------------------------------------#
    "nms_iou"           : 0.3,
    "max_boxes"         : 100,
    #---------------------------------------------------------------------#
    #   Whether to use letterbox_image for a distortion-free resize of the
    #   input image. Repeated tests showed that disabling letterbox_image
    #   and resizing directly works better here (see the sketch after this
    #   list).
    #---------------------------------------------------------------------#
    "letterbox_image"   : False,
}
  3. Run predict.py and enter
img/street.jpg
  4. FPS testing and video detection can be enabled through the settings in predict.py.
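
The letterbox_image option shown in the _defaults above resizes the input while keeping its aspect ratio and pads the remainder, instead of stretching the image to the input size. A minimal sketch of the idea, assuming Pillow is installed (this is not the repository's exact implementation):

# Resize with unchanged aspect ratio, pad the rest with gray (letterbox).
from PIL import Image

def letterbox_image(image, target_size=(416, 416)):
    iw, ih = image.size
    w, h = target_size
    scale = min(w / iw, h / ih)
    nw, nh = int(iw * scale), int(ih * scale)
    resized = image.resize((nw, nh), Image.BICUBIC)
    canvas = Image.new('RGB', target_size, (128, 128, 128))
    canvas.paste(resized, ((w - nw) // 2, (h - nh) // 2))
    return canvas

# With letterbox_image=False the image is resized directly to target_size
# instead, which distorts the aspect ratio but was reported to work better here.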

How2eval

  1. This repository uses the VOC format for evaluation.
  2. If voc_annotation.py has already been run before training, the code automatically splits the dataset into training, validation and test sets. To change the proportion of the test set, modify trainval_percent in voc_annotation.py. trainval_percent specifies the ratio of (training set + validation set) to test set; by default (training set + validation set) : test set = 9 : 1. train_percent specifies the ratio of training set to validation set within (training set + validation set); by default training set : validation set = 9 : 1 (see the sketch after this list).
  3. After splitting off the test set with voc_annotation.py, go to get_map.py and modify classes_path, which points to the txt of detection classes; this is the same txt used for training. It must be modified when evaluating your own dataset.
  4. Run get_map.py to obtain the evaluation results, which are saved in the map_out folder.
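
A small sketch of the default split arithmetic described in step 2 (the total image count is an arbitrary illustrative number):

# Defaults stated above: (train + val) : test = 9 : 1 and train : val = 9 : 1.
num_images       = 1000                               # illustrative dataset size
trainval_percent = 0.9
train_percent    = 0.9
num_trainval = int(num_images * trainval_percent)     # 900 images for train + val
num_train    = int(num_trainval * train_percent)      # 810 training images
num_val      = num_trainval - num_train               # 90 validation images
num_test     = num_images - num_trainval              # 100 test images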

Reference

https://github.com/qqwweee/keras-yolo3
https://github.com/eriklindernoren/PyTorch-YOLOv3
https://github.com/BobLiu20/YOLOv3_PyTorch

Comments
  • W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]]

    f.write("%s %s %s %s %s %s\n" % (predicted_class, score[:6], str(int(left)), str(int(top)), str(int(right)), str(int(bottom))))

    OverflowError: cannot convert float infinity to integer
    2022-10-17 08:14:29.544536: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]]
    What does this error mean?

    opened by 11wswhl 2
  • Resource exhausted: OOM when allocating tensor with shape[255,256,256,69] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

    I ran into an error:

    2022-01-30 10:37:36.925076: I tensorflow/core/common_runtime/bfc_allocator.cc:923] total_region_allocated_bytes_: 1074790400 memory_limit_: 3059430195 available bytes: 1984639795 curr_region_allocation_bytes_: 1073741824
    2022-01-30 10:37:36.925383: I tensorflow/core/common_runtime/bfc_allocator.cc:929] Stats: Limit: 3059430195 InUse: 621716992 MaxInUse: 621716992 NumAllocs: 1595 MaxAllocSize: 33554432

    2022-01-30 10:37:36.925822: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **********************************************************__________________________________________
    2022-01-30 10:37:36.926021: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[255,256,256,69] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

    Could you please help me solve this problem?

    opened by Linamiao-1998 1
  • Hello, I may have found a bug in yolo_training.py

    My images are roughly 256x640, so I set the training input_shape to 256x640 as well, but the loss quickly became negative during training. When I trained and tested with a square input_shape everything was fine, and training with a square input_shape and then testing with a rectangular one was also fine. After tracking it down, the issue appears to be that the height/width order of grid_shapes[l] is reversed when the x, y of y_true are converted into x, y offsets. (screenshot omitted)

    Changing grid_shapes[l][:] to grid_shapes[l][::-1] made my training work normally. Because a square has equal height and width the bug has no effect there, but with a rectangle it produces offsets greater than 1, and when keras.binary_crossentropy computes the loss, a label greater than 1 yields a negative loss. The formula of keras.binary_crossentropy(label, logits, from_logits=True) is: with x = logits and z = labels, loss = max(x, 0) - x * z + log(1 + exp(-abs(x))) (see the numeric check after this list).

    opened by kill2013110 2
  • Has anyone tried saving this model in SavedModel format? When I save it that way and then load it, I get an error.

    I am using SavedModel because I plan to freeze the model afterwards. I used tf.keras.model.save_model() and tf.keras.model.load_model(). After some analysis, the error occurs because the model contains custom Lambda layers. I plan to convert the Lambda layers to tf.function, which should solve the problem. Do you have any better suggestions?

    opened by kill2013110 4
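
A quick numeric check of the binary cross-entropy point raised in the yolo_training.py comment above (illustrative values only; the formula is the one quoted in that comment):

import math

# loss = max(x, 0) - x*z + log(1 + exp(-abs(x))), with x = logits and z = label
def bce_with_logits(x, z):
    return max(x, 0.0) - x * z + math.log(1.0 + math.exp(-abs(x)))

print(bce_with_logits(5.0, 1.0))  # ~0.0067: a label in [0, 1] keeps the loss non-negative
print(bce_with_logits(5.0, 1.5))  # ~-2.49: a label greater than 1 makes the loss negative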
Releases(v3.2)
  • v3.2 (Jul 16, 2022)

    Major updates

    • Added evaluation during training; it can be switched on or off and its period adjusted in train.py.
    • Updated the evaluation code; the thresholds for computing Recall and Precision can now be set.
    • Added the computation of various network statistics to summary.py.
    • Added more ways of saving weights, e.g. lowest loss and most recent epoch.
    • Added a large number of comments.
  • v3.1 (Feb 23, 2022)

    Major updates

    • Rebalanced the loss so that the classification, objectness and regression terms are weighted appropriately.
    • Added support for step and cosine learning-rate schedules.
    • Added a choice between the adam and sgd optimizers.
    • The learning rate now adapts automatically to the batch_size.
    • Added multiple prediction modes: single image, folder, video, image cropping, heatmap, and per-class object counting.
    • Updated summary.py for inspecting the network structure.
    • Added multi-GPU training.
  • v2.0 (Feb 21, 2022)

    Major updates

    • Updated train.py, adding extensive comments and many more adjustable parameters.
    • Updated predict.py, adding extensive comments plus FPS testing, video prediction, batch prediction, and other features.
    • Updated yolo.py, adding extensive comments and parameters such as anchor selection, confidence, and non-maximum suppression.
    • Merged get_dr_txt.py, get_gt_txt.py and get_map.py so that dataset evaluation is handled by a single file.
    • Updated voc_annotation.py with more adjustable parameters.
    • Updated kmeans_for_anchors.py for computing anchor box sizes.
    • Updated summary.py for inspecting the network structure.
Owner
Bubbliiiing