CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.

Overview

CV Backbones

This repo includes GhostNet, TinyNet, and TNT (Transformer in Transformer), developed by Huawei Noah's Ark Lab.

News

2022/01/05 PyramidTNT: An improved TNT baseline is released.

2021/09/28 The paper of TNT (Transformer in Transformer) is accepted by NeurIPS 2021.

2021/09/18 The extended version of Versatile Filters is accepted by T-PAMI.

2021/08/30 The GhostNet paper is selected as one of the Most Influential CVPR 2020 Papers.

2021/08/26 The code of LegoNet and Versatile Filters has been merged into this repo.

2021/06/15 The code of TNT (Transformer in Transformer) has been released in this repo.

2020/10/31 GhostNet+TinyNet achieves better performance. See details in our NeurIPS 2020 paper on arXiv.

2020/06/10 GhostNet is included in PyTorch Hub.


GhostNet Code

This repo provides GhostNet pretrained models and inference code for TensorFlow and PyTorch.

For training, please refer to tinynet or timm.
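The GhostNet inference models can also be loaded directly from PyTorch Hub (see the news entry above). A minimal sketch, assuming the ghostnet_1x entry point published under huawei-noah/ghostnet:

    import torch

    # Entry point name assumed from the PyTorch Hub listing; adjust if the
    # repo or entry point has been renamed.
    model = torch.hub.load('huawei-noah/ghostnet', 'ghostnet_1x', pretrained=True)
    model.eval()

    # Dummy ImageNet-sized input; replace with a properly normalized image.
    with torch.no_grad():
        logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 1000]) -- ImageNet classes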

TinyNet Code

This repo provides TinyNet pretrained models and inference code for PyTorch.
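Since TinyNet training goes through timm (see the GhostNet section above), inference can go through timm as well. A minimal sketch, assuming the tinynet_a ... tinynet_e model names registered in recent timm releases (verify with timm.list_models('tinynet*')):

    import timm
    import torch

    # Model name assumed from timm's registry; TinyNet variants a-e trade
    # off input resolution, depth, and width.
    model = timm.create_model('tinynet_a', pretrained=True).eval()

    # Use the model's own default input resolution rather than hard-coding it.
    _, h, w = model.default_cfg['input_size']
    with torch.no_grad():
        logits = model(torch.randn(1, 3, h, w))
    print(logits.shape)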

TNT Code

This repo provides training code and pretrained models of TNT (Transformer in Transformer) for PyTorch.

The code of PyramidTNT is also released.
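TNT models are also registered in timm, which offers a quick way to try a pretrained backbone. A minimal sketch, assuming the tnt_s_patch16_224 model name (check timm.list_models('tnt*')):

    import timm
    import torch

    # TNT-Small at 224x224 with 16x16 outer patches; name assumed from
    # timm's registry.
    model = timm.create_model('tnt_s_patch16_224', pretrained=True).eval()
    with torch.no_grad():
        logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 1000])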

LegoNet Code

This repo provides the implementation of the paper LegoNet: Efficient Convolutional Neural Networks with Lego Filters (ICML 2019).

Versatile Filters Code

This repo provides the implementation of the paper Learning Versatile Filters for Efficient Convolutional Neural Networks (NeurIPS 2018).

Citation

@inproceedings{ghostnet,
  title={GhostNet: More Features from Cheap Operations},
  author={Han, Kai and Wang, Yunhe and Tian, Qi and Guo, Jianyuan and Xu, Chunjing and Xu, Chang},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{tinynet,
  title={Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets},
  author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
  booktitle={NeurIPS},
  year={2020}
}
@inproceedings{tnt,
  title={Transformer in transformer},
  author={Han, Kai and Xiao, An and Wu, Enhua and Guo, Jianyuan and Xu, Chunjing and Wang, Yunhe},
  booktitle={NeurIPS},
  year={2021}
}
@inproceedings{legonet,
  title={LegoNet: Efficient Convolutional Neural Networks with Lego Filters},
  author={Yang, Zhaohui and Wang, Yunhe and Liu, Chuanjian and Chen, Hanting and Xu, Chunjing and Shi, Boxin and Xu, Chao and Xu, Chang},
  booktitle={ICML},
  year={2019}
}
@inproceedings{wang2018learning,
  title={Learning versatile filters for efficient convolutional neural networks},
  author={Wang, Yunhe and Xu, Chang and Xu, Chunjing and Xu, Chao and Tao, Dacheng},
  booktitle={NeurIPS},
  year={2018}
}

Other versions of GhostNet

This repo provides the TensorFlow/PyTorch code of GhostNet. Other versions and applications can be found in the following:

  1. timm: code with pretrained model
  2. Darknet: cfg file, and description
  3. Gluon/Keras/Chainer: code
  4. Paddle: code
  5. Bolt inference framework: benchmark
  6. Human pose estimation: code
  7. YOLO with GhostNet backbone: code
  8. Face recognition: cavaface, FaceX-Zoo, TFace
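
For reference when reading the comments below (for example, the question about the primary convolution's kernel size), here is a condensed sketch of the Ghost module, the building block of GhostNet: a primary convolution produces a few intrinsic feature maps, a cheap depthwise convolution generates the remaining "ghost" maps, and the two are concatenated. This follows the official PyTorch implementation in spirit; the exact defaults here are assumptions:

    import math
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1):
            super().__init__()
            self.oup = oup
            init_channels = math.ceil(oup / ratio)      # intrinsic feature maps
            new_channels = init_channels * (ratio - 1)  # "ghost" feature maps

            # Primary convolution: 1x1 by default, but the kernel size can
            # be customized (the point raised in the comments below).
            self.primary_conv = nn.Sequential(
                nn.Conv2d(inp, init_channels, kernel_size, stride,
                          kernel_size // 2, bias=False),
                nn.BatchNorm2d(init_channels),
                nn.ReLU(inplace=True),
            )
            # Cheap operation: a depthwise convolution over the intrinsic maps.
            self.cheap_operation = nn.Sequential(
                nn.Conv2d(init_channels, new_channels, dw_size, 1,
                          dw_size // 2, groups=init_channels, bias=False),
                nn.BatchNorm2d(new_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x1 = self.primary_conv(x)
            x2 = self.cheap_operation(x1)
            out = torch.cat([x1, x2], dim=1)
            return out[:, :self.oup, :, :]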
Comments
  • TypeError: __init__() got an unexpected keyword argument 'bn_tf'

    Hello, I want to ask what caused the following error when running the train.py file? Thank you. "TypeError: __init__() got an unexpected keyword argument 'bn_tf'"

    opened by ModeSky 16
  • Counting ReLU vs HardSwish FLOPs

    Thank you very much for sharing the source code. I have a question about FLOPs counting for ReLU and HardSwish: I saw in the paper that the FLOPs are the same for ReLU and HardSwish. Can you explain this?

    opened by jahongir7174 10
  • kernel size in primary convolution of Ghost module

    Hi, it is said in your paper that the primary convolution in the Ghost module can have a customized kernel size, which is a major difference from existing efficient convolution schemes. However, it seems that in this code all the kernel sizes of the primary convolution in the Ghost module are set to [1, 1], and the kernels set in _CONV_DEFS_0 are only used in blocks with stride=2. Is this set intentionally?

    opened by YUHAN666 9
  • Replacing Conv2d with GhostModule: why does the loss decrease so slowly?

    I directly replaced the Conv2d inside EfficientNet's MBConvBlock with a GhostModule: Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False) became GhostModule(inp, oup), with all other parameters unchanged. Why does the loss now converge more slowly than before and never come down? Do I need to change any other parameters?

    opened by yc-cui 8
  • Training hyperparams on ImageNet

    Hi, thanks for sharing such wonderful work. I'd like to reproduce your results on ImageNet. Could you please specify the training hyperparameters, such as the initial learning rate, its decay schedule, and the batch size? It would be even better if you could share tricks used to train GhostNet, such as label smoothing and data augmentation. Thanks!

    good first issue 
    opened by sean-zhuh 8
  • Why did you exclude EfficientNetB0 from Accuracy-Latency chart?

    @iamhankai Hi,

    Great work!

    1. Why did you exclude EfficientNetB0 (0.390 BFlops - 76.3% Top1) from Accuracy-Latency chart?

    2. Also what mini_batch_size did you use for training GhostNet?

    opened by AlexeyAB 8
  • VIG pretrained weights

    @huawei-noah-admin Can you please share the ViG pretrained model on Google Drive or OneDrive, as Baidu is not accessible from our end?

    Thanks in advance

    opened by abhigoku10 7
  • The implementation of Isotropic architecture

    Hi, thanks for sharing this impressive work. The paper mentions two architectures, an isotropic one and a pyramid one. I noticed that in the code there is a reduce_ratios list, which is used by an avg_pooling operation before building the graph. I am wondering whether all I need to do to implement the isotropic architecture is to set reduce_ratios to [1, 1, 1, 1]. Thanks.

    self.n_blocks = sum(blocks)
    channels = opt.channels
    reduce_ratios = [4, 2, 1, 1]
    dpr = [x.item() for x in torch.linspace(0, drop_path, self.n_blocks)]
    num_knn = [int(x.item()) for x in torch.linspace(k, k, self.n_blocks)]

    opened by buptxiaofeng 6
  • Gradient overflow occurs while training tnt-ti model

    Train: 41 [   0/625 (  0%)]  Loss: 4.564162 (4.5642)  Time: 96.744s, 21.17/s (96.744s, 21.17/s)  LR: 8.284e-04  Data: 94.025 (94.025)
    Train: 41 [  50/625 (  8%)]  Loss: 4.395192 (4.4797)  Time: 2.742s, 746.96/s (7.383s, 277.38/s)  LR: 8.284e-04  Data: 0.057 (4.683)
    Train: 41 [ 100/625 ( 16%)]  Loss: 4.424296 (4.4612)  Time: 2.741s, 747.15/s (6.529s, 313.66/s)  LR: 8.284e-04  Data: 0.056 (3.831)
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0

    And the top-1 acc is only 0.2 after 40 epochs.

    Any tips available here? @iamhankai @yitongh

    opened by jimmyflycv 6
  • Bloated model

    Hi, I am using a GhostNet backbone to train a YOLOv3 model in TensorFlow, but I am getting a bloated model. The checkpoint data size is approximately 68 MB, while the checkpoint provided here is approximately 20 MB: https://github.com/huawei-noah/ghostnet/blob/master/tensorflow/models/ghostnet_checkpoint.data-00000-of-00001

    I am also training an EfficientNet model with YOLOv3, and that seems to work fine without any size bloat.

    Could anyone, or the author, please confirm whether this is the correct architecture or whether anything seems off? I have attached the GhostNet architecture file generated from the code.

    Thanks. ghostnet_model_arch.txt

    opened by ghost 6
  • Replace Conv2d in my network, however it becomes slower, why?

    Above all, thanks for your great work! It really inspires me a lot. But now I have a question.

    I replaced all the Conv2d operations in my network except the final ones, and the number of model parameters really does become much smaller. However, when testing, I found that inference actually gets slower after the replacement (from 428 FPS down to 354 FPS). Is this a normal phenomenon? Or is it because of the concat operation?

    opened by FunkyKoki 6
  • VIG for segmentation

    @iamhankai Thanks for open-sourcing the codebase. Can you please let me know how to use PViG for segmentation-related tasks? That would be really helpful.

    Thanks in advance

    opened by abhigoku10 0
  • higher performance of ViG

    I trained ViG-S on ImageNet and got 80.54% top-1 accuracy, which is higher than the 80.4% reported in the paper. Is 80.4% the average over multiple training runs? If so, how many repetitions did you use?

    opened by tdzdog 9