Yolact-keras: An Implementation of the Yolact Instance Segmentation Model in Keras


Contents

  1. Performance
  2. Environment
  3. Download
  4. Training
  5. Prediction
  6. Evaluation
  7. Reference

Performance

Training dataset   Weights file             Test dataset   Input size   bbox mAP 0.5:0.95   bbox mAP 0.5   segm mAP 0.5:0.95   segm mAP 0.5
COCO-Train2017     yolact_weights_coco.h5   COCO-Val2017   544x544      30.3                51.8           27.1                47.2

Environment

keras==2.1.5
tensorflow-gpu==1.13.2

Download

The pretrained weights needed for training can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/1OIxe9w2t5nImstDEpjncnQ
Extraction code: eik3

The shapes dataset can be downloaded at the link below. It was annotated with labelme and has not been processed further; it is used to distinguish triangles from squares:
Link: https://pan.baidu.com/s/1hrCaEYbnSGBOhjoiOKQmig
Extraction code: jk44

Training

a. Training on the shapes dataset

  1. Dataset preparation
    Download the dataset from Baidu Netdisk as described in the Download section. After unpacking, place the images and their corresponding json files in the datasets/before folder in the repository root.

  2. Dataset processing
    Open coco_annotation.py. Its default parameters are set up for the shapes dataset, so running it directly generates the image files and label files in the datasets/coco folder and also splits the data into training and test sets.

  3. Start training
    The default parameters of train.py are set up for the shapes dataset and already point to the dataset folder in the repository root, so you can start training simply by running train.py.

  4. Predicting with the trained model
    Prediction uses two files: yolact.py and predict.py. First open yolact.py and modify model_path and classes_path; both parameters must be changed.
    model_path points to the trained weights file in the logs folder.
    classes_path points to the txt file listing the detection classes.

    After these changes, run predict.py and enter an image path to run detection.

b. Training on your own dataset

  1. Dataset preparation
    This project uses labelme for annotation. The annotated data consists of image files and json files; both go into the before folder. See the shapes dataset for the exact layout.
    When labeling, note that different instances of the same class must be distinguished with an underscore suffix.
    For example, to train the network to detect triangles and squares, two triangles in the same image would be labeled:
triangle_1
triangle_2
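
    The snippet below is a minimal sketch (not part of the repo) of how this naming convention maps back to class names: the class is everything before the last underscore. "shapes" and "label" are standard labelme JSON fields; the file name is illustrative.

import json

# Read one labelme annotation and recover the class name from each instance label,
# e.g. "triangle_1" and "triangle_2" both map to the class "triangle".
with open("datasets/before/example.json", "r", encoding="utf-8") as f:
    annotation = json.load(f)

for shape in annotation["shapes"]:
    label = shape["label"]                  # e.g. "triangle_1"
    class_name = label.rsplit("_", 1)[0]    # -> "triangle"
    print(label, "->", class_name)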
  2. Dataset processing
    Modify the parameters in coco_annotation.py. For a first training run you only need to change classes_path, which points to the txt file listing the detection classes.
    When training on your own dataset, create your own cls_classes.txt and list the classes you want to distinguish, one per line.
    The content of model_data/cls_classes.txt is:
cat
dog
...

Set classes_path in coco_annotation.py to this cls_classes.txt, then run coco_annotation.py.
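
A class file like this is simply read line by line into a class list. The helper below is a minimal sketch for reference; the function name is illustrative and the repository's own utility may differ in details.

# Minimal sketch of reading a classes txt such as model_data/cls_classes.txt.
def get_classes(classes_path):
    with open(classes_path, "r", encoding="utf-8") as f:
        class_names = [line.strip() for line in f if line.strip()]
    return class_names, len(class_names)

class_names, num_classes = get_classes("model_data/cls_classes.txt")
print(class_names, num_classes)   # e.g. ['cat', 'dog'] 2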

  3. Start training
    There are many training parameters, all of them in train.py; read the comments carefully after downloading the repository. The most important one is again classes_path in train.py.
    classes_path points to the txt file listing the detection classes, and it must be the same txt used in coco_annotation.py. It must be modified when training on your own dataset!
    After modifying classes_path, run train.py to start training. After several epochs, the weights will appear in the logs folder.

  4. Predicting with the trained model
    Prediction uses two files: yolact.py and predict.py. First open yolact.py and modify model_path and classes_path; both parameters must be changed.
    model_path points to the trained weights file in the logs folder.
    classes_path points to the txt file listing the detection classes.

    After these changes, run predict.py and enter an image path to run detection.

c. Training on the COCO dataset

  1. Dataset preparation
    COCO training images: http://images.cocodataset.org/zips/train2017.zip
    COCO validation images: http://images.cocodataset.org/zips/val2017.zip
    COCO training/validation annotations: http://images.cocodataset.org/annotations/annotations_trainval2017.zip

  2. Start training
    Unpack the training set, the validation set and their annotations. Open train.py and set classes_path to model_data/coco_classes.txt.
    Set train_image_path to the training image folder, train_annotation_path to the training annotation file, val_image_path to the validation image folder, and val_annotation_path to the validation annotation file (illustrative values are shown below).
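
    As an illustration, the relevant assignments in train.py would look roughly like this. The folder layout is an assumption; adjust the paths to wherever you unpacked the COCO files.

# Illustrative values only -- adapt to your own COCO folder layout.
classes_path          = 'model_data/coco_classes.txt'
train_image_path      = 'datasets/coco/train2017'
train_annotation_path = 'datasets/coco/annotations/instances_train2017.json'
val_image_path        = 'datasets/coco/val2017'
val_annotation_path   = 'datasets/coco/annotations/instances_val2017.json'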

  3. Predicting with the trained model
    Prediction uses two files: yolact.py and predict.py. First open yolact.py and modify model_path and classes_path; both parameters must be changed.
    model_path points to the trained weights file in the logs folder.
    classes_path points to the txt file listing the detection classes.

    After these changes, run predict.py and enter an image path to run detection.

Prediction

a. Using pretrained weights

  1. After downloading and unpacking the repository, download the weights from Baidu Netdisk, place them in model_data, run predict.py, and enter
img/street.jpg
  2. Settings inside predict.py also enable FPS testing and video detection.

b. Using your own trained weights

  1. Train as described in the training steps.
  2. In yolact.py, modify model_path and classes_path in the block below so that they match your trained files: model_path points to the weights file under the logs folder, and classes_path points to the txt listing the classes that model_path was trained on.
_defaults = {
    #--------------------------------------------------------------------------#
    #   To predict with your own trained model you must modify model_path and classes_path!
    #   model_path points to the weights file under the logs folder,
    #   classes_path points to the txt under model_data.
    #
    #   After training there will be several weights files in the logs folder;
    #   pick one with a lower validation loss.
    #   A lower validation loss does not guarantee a higher mAP; it only means the
    #   weights generalize better on the validation set.
    #   If a shape mismatch occurs, also check the model_path and classes_path
    #   settings used during training.
    #--------------------------------------------------------------------------#
    "model_path"        : 'model_data/yolact_weights_shape.h5',
    "classes_path"      : 'model_data/shape_classes.txt',
    #---------------------------------------------------------------------#
    #   Input image size
    #---------------------------------------------------------------------#
    "input_shape"       : [544, 544],
    #---------------------------------------------------------------------#
    #   Only predictions with a score above this confidence are kept
    #---------------------------------------------------------------------#
    "confidence"        : 0.5,
    #---------------------------------------------------------------------#
    #   IoU threshold used for non-maximum suppression
    #---------------------------------------------------------------------#
    "nms_iou"           : 0.3,
    #---------------------------------------------------------------------#
    #   Anchor (prior box) sizes
    #---------------------------------------------------------------------#
    "anchors_size"      : [24, 48, 96, 192, 384],
    #---------------------------------------------------------------------#
    #   Use traditional non-maximum suppression
    #---------------------------------------------------------------------#
    "traditional_nms"   : True
}
  3. Run predict.py and enter
img/street.jpg
  4. Settings inside predict.py also enable FPS testing and video detection.
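
For orientation, a typical single-image prediction loop looks roughly like the sketch below. The class and method names (Yolact, detect_image) are assumptions about how predict.py in this repository is structured; check predict.py itself for the exact interface and the FPS/video modes.

# Minimal sketch of a single-image prediction loop.
# Yolact and detect_image are assumed names; see predict.py for the real interface.
from PIL import Image
from yolact import Yolact

model = Yolact()   # reads model_path / classes_path from the _defaults shown above

while True:
    img_path = input('Input image filename: ')   # e.g. img/street.jpg
    try:
        image = Image.open(img_path)
    except Exception:
        print('Open error! Try again.')
        continue
    result = model.detect_image(image)
    result.show()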

Evaluation

a. Evaluating your own dataset

  1. This project uses the COCO format for evaluation.
  2. If you already ran coco_annotation.py before training, the code has automatically split the dataset into training, validation and test sets.
  3. To change the proportion of the test set, modify trainval_percent in coco_annotation.py. trainval_percent controls the ratio of (training set + validation set) to test set; by default (training set + validation set) : test set = 9:1. train_percent controls the ratio of training set to validation set within (training set + validation set); by default training set : validation set = 9:1. (A small sketch of the resulting split sizes follows after this list.)
  4. In yolact.py, modify model_path and classes_path. model_path points to the trained weights file in the logs folder; classes_path points to the txt file listing the detection classes.
  5. In eval.py, modify classes_path so it points to the same detection-class txt used for training; this must be changed when evaluating your own dataset. Then run eval.py to obtain the evaluation results.
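
To make the two ratios concrete, the sketch below computes the split sizes for a hypothetical dataset of 1000 annotated images using the default values:

# Illustrative computation of the dataset split with the default ratios.
# The total of 1000 images is hypothetical.
num_images       = 1000
trainval_percent = 0.9   # (train + val) vs. test
train_percent    = 0.9   # train vs. val inside (train + val)

num_trainval = int(num_images * trainval_percent)   # 900
num_test     = num_images - num_trainval            # 100
num_train    = int(num_trainval * train_percent)    # 810
num_val      = num_trainval - num_train             # 90
print(num_train, num_val, num_test)                 # 810 90 100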

b. Evaluating the COCO dataset

  1. Download the COCO dataset.
  2. In yolact.py, modify model_path and classes_path. model_path points to the COCO weights file in the logs folder; classes_path points to model_data/coco_classes.txt.
  3. In eval.py, set classes_path to model_data/coco_classes.txt, set Image_dir to the folder of evaluation images, and Json_path to the annotation file of the evaluation images. Then run eval.py to obtain the evaluation results.

Reference

https://github.com/feiyuhuahuo/Yolact_minimal
