Complete-IoU (CIoU) Loss and Cluster-NMS for Object Detection and Instance Segmentation (YOLACT)

Overview

Complete-IoU Loss and Cluster-NMS for Improving Object Detection and Instance Segmentation.

Our paper has been accepted by IEEE Transactions on Cybernetics (TCYB).

This repo is based on YOLACT++.

This is the code for our papers:

@Inproceedings{zheng2020diou,
  author    = {Zheng, Zhaohui and Wang, Ping and Liu, Wei and Li, Jinze and Ye, Rongguang and Ren, Dongwei},
  title     = {Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression},
  booktitle = {The AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2020},
}

@Article{zheng2021ciou,
  author    = {Zheng, Zhaohui and Wang, Ping and Ren, Dongwei and Liu, Wei and Ye, Rongguang and Hu, Qinghua and Zuo, Wangmeng},
  title     = {Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation},
  journal   = {IEEE Transactions on Cybernetics},
  year      = {2021},
}

Description of Cluster-NMS and Its Usage

An example diagram of our Cluster-NMS, where X denotes the IoU matrix, computed as X=jaccard(boxes,boxes).triu_(diagonal=1) > nms_thresh after the boxes are sorted by score in descending order. (Here 0/1 values are used for visualization.)

The inputs of NMS are boxes with size [n,4] and scores with size [80,n] (taking COCO's 80 classes as an example).

There are two ways to run NMS. In the first, all classes share the same number of boxes: we first select the top k=200 detections for every class, so boxes has shape [80,m,4] with m<=200, then run Cluster-NMS and keep the boxes with scores>0.01, and finally return the top 100 boxes across all classes.
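
As a rough illustration of this first route, here is a minimal single-class sketch of the Cluster-NMS iteration in PyTorch. The helper box_iou and all names below are ours, not the repository's; the actual implementation is in layers/functions/detection.py, and the batched [80,m,4] case runs the same matrix iteration with the class dimension treated as a batch dimension.

import torch

def box_iou(a, b):
    # Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) format.
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def cluster_nms(boxes, scores, iou_threshold=0.5, top_k=200, max_iter=200):
    # boxes: [n, 4], scores: [n]; all boxes are assumed to belong to one class (one cluster set).
    scores, order = scores.sort(descending=True)
    order = order[:top_k]
    boxes = boxes[order]
    # Upper-triangular IoU matrix: a box can only be suppressed by higher-scored boxes.
    iou = box_iou(boxes, boxes).triu_(diagonal=1)
    B = iou
    for _ in range(max_iter):
        A = B
        max_a, _ = A.max(dim=0)                  # strongest overlap with any currently kept, higher-scored box
        keep = (max_a <= iou_threshold).float()  # 1 = kept, 0 = suppressed in this iteration
        B = iou * keep.unsqueeze(1)              # suppressed boxes stop suppressing others
        if A.equal(B):                           # fixed point reached, no further changes
            break
    return order[max_a <= iou_threshold]         # indices of surviving boxes, sorted by score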

In the second approach, different classes may have different numbers of boxes. First, a score threshold (e.g. 0.01) filters out most low-score detection boxes, so the number of remaining boxes may differ across classes. Then all boxes are put together and sorted by score in descending order. (Note that the same box may appear more than once if its scores for multiple classes exceed the threshold 0.01.) Next, an offset is added to every box according to its class label (using torch.arange(0,80)). For example, since the coordinates (x1,y1,x2,y2) of all boxes lie in the interval (0,1), adding the offset maps a box of class 61 onto the interval (60,61). After that, the IoU between boxes of different classes is 0, so they are treated as separate clusters. Finally, run Cluster-NMS and return the top 100 boxes across all classes. (For this method, please refer to our other repository https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/detection/detection.py) A minimal sketch of the offset trick follows below.
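
Below is a minimal sketch of that offset trick, assuming boxes with coordinates in (0,1) and integer class labels in [0,80); the function name add_class_offsets is hypothetical, and the repository's version lives in the DIoU-SSD-pytorch detection.py linked above.

import torch

def add_class_offsets(boxes, classes):
    # boxes: [n, 4] in (x1, y1, x2, y2) with coordinates in (0, 1); classes: [n] integer labels.
    # Shifting each box by its class index pushes different classes into disjoint
    # coordinate ranges, so cross-class IoU becomes 0 and a single Cluster-NMS pass
    # treats every class as its own cluster.
    offsets = classes.to(boxes.dtype)       # e.g. values drawn from torch.arange(0, 80)
    return boxes + offsets[:, None]

After adding the offsets, the shifted boxes and their scores can be fed to a single call of the cluster_nms sketch above.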

Getting Started

1) Newly released: CIoU and Cluster-NMS

  1. YOLACT (See YOLACT)

  2. YOLOv3-pytorch https://github.com/Zzh-tju/ultralytics-YOLOv3-Cluster-NMS

  3. YOLOv5 (Supports batched Cluster-NMS, which speeds up NMS when test-time augmentation such as multi-scale testing is enabled.) https://github.com/Zzh-tju/yolov5

  4. SSD-pytorch https://github.com/Zzh-tju/DIoU-SSD-pytorch

2) DIoU and CIoU losses into Detection Algorithms

DIoU and CIoU losses are incorporated into state-of-the-art detection algorithms, including YOLO v3, SSD and Faster R-CNN. The implementation details and comparisons can be found at the following links.

  1. YOLO v3 https://github.com/Zzh-tju/DIoU-darknet

  2. SSD https://github.com/Zzh-tju/DIoU-SSD-pytorch

  3. Faster R-CNN https://github.com/Zzh-tju/DIoU-pytorch-detectron

  4. Simulation Experiment https://github.com/Zzh-tju/DIoU

YOLACT

Codes location and options

Please take a look at the ciou function in layers/modules/multibox_loss.py for our CIoU loss implementation in PyTorch.
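
For reference, the CIoU loss combines the IoU term with a normalized center-distance term and an aspect-ratio consistency term: L = 1 - IoU + rho^2(b, b_gt)/c^2 + alpha*v. The sketch below is a standalone PyTorch version written by us for illustration only; the repository's implementation is the ciou function mentioned above and may differ in details.

import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # pred, target: [n, 4] boxes in (x1, y1, x2, y2) format; returns a per-box loss of shape [n].
    # IoU term.
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # Distance term: squared center distance over squared diagonal of the smallest enclosing box.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    # Aspect-ratio consistency term v and its trade-off weight alpha (treated as a constant).
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v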

Currently, NMS supports two modes (see eval.py):

  1. Cross-class mode, which ignores classes. (cross_class_nms=True, faster than per-class mode but with a slight performance drop.)

  2. Per-class mode. (cross_class_nms=False)

Currently, NMS supports fast_nms, cluster_nms, cluster_diounms, spm, spm_dist, spm_dist_weighted.

See layers/functions/detection.py for our Cluster-NMS implementation in PyTorch.

Installation

In order to use YOLACT++, make sure you compile the DCNv2 code.

  • Clone this repository and enter it:
    git clone https://github.com/Zzh-tju/CIoU.git
    cd yolact
  • Set up the environment using one of the following methods:
    • Using Anaconda
      • Run conda env create -f environment.yml
    • Manually with pip
      • Set up a Python 3 environment (e.g., using virtualenv).
      • Install PyTorch 1.0.1 (or higher) and TorchVision.
      • Install some other packages:
        # Cython needs to be installed before pycocotools
        pip install cython
        pip install opencv-python pillow pycocotools matplotlib 
  • If you'd like to train YOLACT, download the COCO dataset and the 2014/2017 annotations. Note that this script will take a while and dump 21 GB of files into ./data/coco.
    sh data/scripts/COCO.sh
  • If you'd like to evaluate YOLACT on test-dev, download test-dev with this script.
    sh data/scripts/COCO_test.sh
  • If you want to use YOLACT++, compile deformable convolutional layers (from DCNv2). Make sure you have the latest CUDA toolkit installed from NVidia's Website.
    cd external/DCNv2
    python setup.py build develop

Evaluation

Here are our YOLACT models (released on May 5th, 2020) along with their FPS on a GTX 1080 Ti and mAP on COCO 2017 val:

Training is carried out on two GTX 1080 Ti GPUs with the command: python train.py --config=yolact_base_config --batch_size=8

| Image Size | Backbone | Loss | NMS | FPS | box AP | mask AP | Weights |
|---|---|---|---|---|---|---|---|
| 550 | Resnet101-FPN | SL1 | Fast NMS | 30.6 | 31.5 | 29.1 | SL1.pth |
| 550 | Resnet101-FPN | CIoU | Fast NMS | 30.6 | 32.1 | 29.6 | CIoU.pth |

To evaluate a model, put the corresponding weights file in the ./weights directory and run one of the following commands. The name of each config is everything before the numbers in the file name (e.g., yolact_base for yolact_base_54_800000.pth).

Quantitative Results on COCO

# Quantitatively evaluate a trained model on the entire validation set. Make sure you have COCO downloaded as above.

# Output a COCOEval json to submit to the website or to use the run_coco_eval.py script.
# This command will create './results/bbox_detections.json' and './results/mask_detections.json' for detection and instance segmentation respectively.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json

# You can run COCOEval on the files created in the previous command. The performance should match my implementation in eval.py.
python run_coco_eval.py

# To output a COCO json file for test-dev, make sure you have test-dev downloaded as above, then run:
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json --dataset=coco2017_testdev_dataset

Qualitative Results on COCO

# Display qualitative results on COCO. From here on I'll use a confidence threshold of 0.15.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --display

Benchmarking Cluster-NMS on COCO

python eval.py --trained_model=weights/yolact_base_54_800000.pth --benchmark

Hardware

  • 1 GTX 1080 Ti
  • Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz
| Image Size | Backbone | Loss | NMS | FPS | box AP | box AP75 | box AR100 | mask AP | mask AP75 | mask AR100 |
|---|---|---|---|---|---|---|---|---|---|---|
| 550 | Resnet101-FPN | CIoU | Fast NMS | 30.6 | 32.1 | 33.9 | 43.0 | 29.6 | 30.9 | 40.3 |
| 550 | Resnet101-FPN | CIoU | Original NMS | 11.5 | 32.5 | 34.1 | 45.1 | 29.7 | 31.0 | 41.7 |
| 550 | Resnet101-FPN | CIoU | Cluster-NMS | 28.8 | 32.5 | 34.1 | 45.2 | 29.7 | 31.0 | 41.7 |
| 550 | Resnet101-FPN | CIoU | SPM Cluster-NMS | 28.6 | 33.1 | 35.2 | 48.8 | 30.3 | 31.7 | 43.6 |
| 550 | Resnet101-FPN | CIoU | SPM + Distance Cluster-NMS | 27.1 | 33.2 | 35.2 | 49.2 | 30.2 | 31.7 | 43.8 |
| 550 | Resnet101-FPN | CIoU | SPM + Distance + Weighted Cluster-NMS | 26.5 | 33.4 | 35.5 | 49.1 | 30.3 | 31.6 | 43.8 |

The following table is evaluated using their pretrained YOLACT weights (yolact_resnet50_54_800000.pth).

| Image Size | Backbone | Loss | NMS | FPS | box AP | box AP75 | box AR100 | mask AP | mask AP75 | mask AR100 |
|---|---|---|---|---|---|---|---|---|---|---|
| 550 | Resnet50-FPN | SL1 | Fast NMS | 41.6 | 30.2 | 31.9 | 42.0 | 28.0 | 29.1 | 39.4 |
| 550 | Resnet50-FPN | SL1 | Original NMS | 12.8 | 30.7 | 32.0 | 44.1 | 28.1 | 29.2 | 40.7 |
| 550 | Resnet50-FPN | SL1 | Cluster-NMS | 38.2 | 30.7 | 32.0 | 44.1 | 28.1 | 29.2 | 40.7 |
| 550 | Resnet50-FPN | SL1 | SPM Cluster-NMS | 37.7 | 31.3 | 33.2 | 48.0 | 28.8 | 29.9 | 42.8 |
| 550 | Resnet50-FPN | SL1 | SPM + Distance Cluster-NMS | 35.2 | 31.3 | 33.3 | 48.2 | 28.7 | 29.9 | 42.9 |
| 550 | Resnet50-FPN | SL1 | SPM + Distance + Weighted Cluster-NMS | 34.2 | 31.8 | 33.9 | 48.3 | 28.8 | 29.9 | 43.0 |

The following table is evaluated using their pretrained YOLACT weights (yolact_base_54_800000.pth).

| Image Size | Backbone | Loss | NMS | FPS | box AP | box AP75 | box AR100 | mask AP | mask AP75 | mask AR100 |
|---|---|---|---|---|---|---|---|---|---|---|
| 550 | Resnet101-FPN | SL1 | Fast NMS | 30.6 | 32.5 | 34.6 | 43.9 | 29.8 | 31.3 | 40.8 |
| 550 | Resnet101-FPN | SL1 | Original NMS | 11.9 | 32.9 | 34.8 | 45.8 | 29.9 | 31.4 | 42.1 |
| 550 | Resnet101-FPN | SL1 | Cluster-NMS | 29.2 | 32.9 | 34.8 | 45.9 | 29.9 | 31.4 | 42.1 |
| 550 | Resnet101-FPN | SL1 | SPM Cluster-NMS | 28.8 | 33.5 | 35.9 | 49.7 | 30.5 | 32.1 | 44.1 |
| 550 | Resnet101-FPN | SL1 | SPM + Distance Cluster-NMS | 27.5 | 33.5 | 35.9 | 50.2 | 30.4 | 32.0 | 44.3 |
| 550 | Resnet101-FPN | SL1 | SPM + Distance + Weighted Cluster-NMS | 26.7 | 34.0 | 36.6 | 49.9 | 30.5 | 32.0 | 44.3 |

The following table is evaluated using their pretrained YOLACT++ weights (yolact_plus_base_54_800000.pth).

| Image Size | Backbone | Loss | NMS | FPS | box AP | box AP75 | box AR100 | mask AP | mask AP75 | mask AR100 |
|---|---|---|---|---|---|---|---|---|---|---|
| 550 | Resnet101-FPN | SL1 | Fast NMS | 25.1 | 35.8 | 38.7 | 45.5 | 34.4 | 36.8 | 42.6 |
| 550 | Resnet101-FPN | SL1 | Original NMS | 10.9 | 36.4 | 39.1 | 48.0 | 34.7 | 37.1 | 44.1 |
| 550 | Resnet101-FPN | SL1 | Cluster-NMS | 23.7 | 36.4 | 39.1 | 48.0 | 34.7 | 37.1 | 44.1 |
| 550 | Resnet101-FPN | SL1 | SPM Cluster-NMS | 23.2 | 36.9 | 40.1 | 52.8 | 35.0 | 37.5 | 46.3 |
| 550 | Resnet101-FPN | SL1 | SPM + Distance Cluster-NMS | 22.0 | 36.9 | 40.2 | 53.0 | 34.9 | 37.5 | 46.3 |
| 550 | Resnet101-FPN | SL1 | SPM + Distance + Weighted Cluster-NMS | 21.7 | 37.4 | 40.6 | 52.5 | 35.0 | 37.6 | 46.3 |

Note:

  • One thing we did that does not appear in the paper: SPM + Distance + Weighted Cluster-NMS. Here the weighted average of box coordinates is only performed for boxes with IoU > 0.8. We found that IoU > 0.5 does not work well for YOLACT, and IoU > 0.9 is almost the same as SPM + Distance Cluster-NMS. (Refer to CAD for the details of Weighted-NMS.)

  • The Original NMS implemented in YOLACT is faster than ours because it first applies a score threshold (0.05) to obtain the set of candidate boxes, which makes the subsequent NMS faster (taking YOLACT ResNet101-FPN as an example, 22~23 FPS with a slight performance drop). In order to produce the same result as our Cluster-NMS, we modified the Original NMS procedure.

  • Note that Torchvision NMS is the fastest, owing to its CUDA implementation and engineering accelerations (such as computing only the upper-triangular IoU matrix). However, our Cluster-NMS requires fewer iterations and can also be further accelerated by adopting such engineering tricks.

  • Currently, Torchvision NMS uses IoU as its criterion, not DIoU. If we directly replace IoU with DIoU in Original NMS, it costs much more time due to the sequential operations. Cluster-DIoU-NMS significantly speeds up DIoU-NMS while producing exactly the same result; see the sketch after this list.

  • Torchvision NMS is available as a function in Torchvision >= 0.3, while our Cluster-NMS can be applied to any project that uses an older version of Torchvision, or to other deep learning frameworks, as long as matrix operations are available. No extra imports, no compilation, fewer iterations, fully GPU-accelerated, and better performance.
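
As a sketch of what Cluster-DIoU-NMS changes, the only modification to the cluster_nms sketch above is to build the suppression matrix from DIoU instead of IoU, where DIoU = IoU - rho^2/c^2 (squared center distance over the squared diagonal of the smallest enclosing box). This is our illustrative code, reusing the hypothetical box_iou helper from earlier, not the repository's implementation.

import torch

def diou_matrix(boxes, eps=1e-7):
    # Pairwise DIoU between boxes in (x1, y1, x2, y2) format.
    iou = box_iou(boxes, boxes)                              # see the earlier sketch
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    rho2 = (cx[:, None] - cx[None, :]) ** 2 + (cy[:, None] - cy[None, :]) ** 2
    cw = torch.max(boxes[:, None, 2], boxes[None, :, 2]) - torch.min(boxes[:, None, 0], boxes[None, :, 0])
    ch = torch.max(boxes[:, None, 3], boxes[None, :, 3]) - torch.min(boxes[:, None, 1], boxes[None, :, 1])
    c2 = cw ** 2 + ch ** 2 + eps                             # squared diagonal of the enclosing box
    return iou - rho2 / c2

Replacing iou = box_iou(boxes, boxes).triu_(diagonal=1) with iou = diou_matrix(boxes).triu_(diagonal=1) in the cluster_nms sketch gives a Cluster-DIoU-NMS variant under the same iteration.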

Images

# Display qualitative results on the specified image.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=my_image.png

# Process an image and save it to another file.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=input_image.png:output_image.png

# Process a whole folder of images.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --images=path/to/input/folder:path/to/output/folder

Video

# Display a video in real-time. "--video_multiframe" will process that many frames at once for improved performance.
# If you want, use "--display_fps" to draw the FPS directly on the frame.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=my_video.mp4

# Display a webcam feed in real-time. If you have multiple webcams pass the index of the webcam you want instead of 0.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=0

# Process a video and save it to another file. This uses the same pipeline as the ones above now, so it's fast!
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=input_video.mp4:output_video.mp4

As you can tell, eval.py can do a ton of stuff. Run the --help command to see everything it can do.

python eval.py --help

Training

By default, we train on COCO. Make sure to download the entire dataset using the commands above.

  • To train, grab an imagenet-pretrained model and put it in ./weights.
    • For Resnet101, download resnet101_reducedfc.pth from here.
    • For Resnet50, download resnet50-19c8e357.pth from here.
    • For Darknet53, download darknet53.pth from here.
  • Run one of the training commands below.
    • Note that you can press ctrl+c while training and it will save an *_interrupt.pth file at the current iteration.
    • All weights are saved in the ./weights directory by default with the file name <config>_<epoch>_<iter>.pth.
# Trains using the base config with a batch size of 8 (the default).
python train.py --config=yolact_base_config

# Trains yolact_base_config with a batch_size of 5. For the 550px models, 1 batch takes up around 1.5 gigs of VRAM, so specify accordingly.
python train.py --config=yolact_base_config --batch_size=5

# Resume training yolact_base with a specific weight file and start from the iteration specified in the weight file's name.
python train.py --config=yolact_base_config --resume=weights/yolact_base_10_32100.pth --start_iter=-1

# Use the help option to see a description of all available command line arguments
python train.py --help

Multi-GPU Support

YOLACT now supports multiple GPUs seamlessly during training:

  • Before running any of the scripts, run: export CUDA_VISIBLE_DEVICES=[gpus]
    • Where you should replace [gpus] with a comma separated list of the index of each GPU you want to use (e.g., 0,1,2,3).
    • You should still do this if only using 1 GPU.
    • You can check the indices of your GPUs with nvidia-smi.
  • Then, simply set the batch size to 8*num_gpus with the training commands above. The training script will automatically scale the hyperparameters to the right values.
    • If you have memory to spare you can increase the batch size further, but keep it a multiple of the number of GPUs you're using.
    • If you want to allocate a specific number of images to each GPU, you can use --batch_alloc=[alloc] where [alloc] is a comma-separated list containing the number of images on each GPU. This must sum to batch_size.

Acknowledgments

Thank you to Daniel Bolya for YOLACT & YOLACT++, which are excellent works on real-time instance segmentation.
