PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks".

Overview

Sharpness-aware Quantization for Deep Neural Networks

Recent Update

2021.11.23: We release the source code of SAQ.

Set up the environment

  1. Clone the repository locally:
git clone https://github.com/zhuang-group/SAQ
  2. Install PyTorch 1.8+, TensorBoard and PrettyTable:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install tensorboard
pip install prettytable
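
Optionally, a quick sanity check (not part of the repository) that the environment is set up as expected:

# Quick environment sanity check; not part of the repository.
import torch
import torchvision

print("torch:", torch.__version__)              # expect 1.8 or newer
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())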

Data preparation

ImageNet

  1. Download the ImageNet 2012 dataset from here, and prepare the dataset based on this script.

  2. Change the dataset path in link_imagenet.py and link ImageNet-100 by running:

python link_imagenet.py
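
link_imagenet.py is part of the repository and is not reproduced here. As a rough illustration of the idea only (all paths and the class-list file below are hypothetical), linking a 100-class subset out of the full ImageNet tree amounts to creating per-class symlinks:

# Hypothetical sketch of linking an ImageNet-100 subset; not the repo's script.
import os
from pathlib import Path

full_root = Path("/path/to/imagenet")        # full ImageNet 2012 with train/ and val/
subset_root = Path("dataset/imagenet100")    # hypothetical destination of the linked subset
wnids = Path("imagenet100_classes.txt").read_text().split()  # hypothetical list of 100 class IDs

for split in ("train", "val"):
    (subset_root / split).mkdir(parents=True, exist_ok=True)
    for wnid in wnids:
        dst = subset_root / split / wnid
        if not dst.exists():
            os.symlink(full_root / split / wnid, dst)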

CIFAR-100

Download the CIFAR-100 dataset from here.

After downloading ImageNet and CIFAR-100, the file structure should look like:

dataset
├── imagenet
│   ├── train
│   │   ├── class1
│   │   │   ├── img1.jpeg
│   │   │   ├── img2.jpeg
│   │   │   └── ...
│   │   ├── class2
│   │   │   ├── img3.jpeg
│   │   │   └── ...
│   │   └── ...
│   └── val
│       ├── class1
│       │   ├── img4.jpeg
│       │   ├── img5.jpeg
│       │   └── ...
│       ├── class2
│       │   ├── img6.jpeg
│       │   └── ...
│       └── ...
└── cifar100
    ├── cifar-100-python
    │   ├── meta
    │   ├── test
    │   ├── train
    │   └── ...
    └── ...
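
With this layout in place, both datasets can be read with standard torchvision loaders. The snippet below is only a sanity-check sketch; the transforms are illustrative, not the ones used by the training scripts:

# Sanity-check loading of the prepared datasets; transforms are illustrative only.
import torchvision.datasets as datasets
import torchvision.transforms as transforms

imagenet_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

imagenet_val = datasets.ImageFolder("dataset/imagenet/val", transform=imagenet_tf)
cifar100_train = datasets.CIFAR100("dataset/cifar100", train=True,
                                   transform=transforms.ToTensor(), download=False)

print(len(imagenet_val.classes), "ImageNet classes,", len(cifar100_train), "CIFAR-100 train images")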

Training

Fixed-precision quantization

  1. Download the pre-trained full-precision models from the model zoo.

  2. Train low-precision models.

To train low-precision ResNet-20 on CIFAR-100, run:

sh script/train_qsam_cifar_r20.sh

To train low-precision ResNet-18 on ImageNet, run:

sh script/train_qsam_imagenet_r18.sh
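
These scripts wrap the actual training entry points. For intuition only: the sharpness-aware update that SAQ builds on (in the spirit of SAM) first perturbs the weights along the gradient direction within a small radius and then descends using the gradient taken at that perturbed point. The sketch below is a generic SAM-style step on a toy model, not the repository's optimizer:

# Generic SAM-style sharpness-aware update (illustrative, not the repo's implementation).
import torch
import torch.nn as nn

model = nn.Linear(16, 10)                      # stand-in for a quantized network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
rho = 0.05                                     # perturbation radius

def sam_step(inputs, targets):
    # 1) gradient at the current point
    criterion(model(inputs), targets).backward()
    grads = [p.grad.clone() for p in model.parameters()]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    # 2) climb to the (approximate) worst-case point within radius rho
    with torch.no_grad():
        eps = [rho * g / (grad_norm + 1e-12) for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    optimizer.zero_grad()
    # 3) gradient at the perturbed point
    criterion(model(inputs), targets).backward()
    # 4) undo the perturbation and descend with the perturbed gradient
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()

sam_step(torch.randn(4, 16), torch.randint(0, 10, (4,)))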

Mixed-precision quantization

  1. Download the pre-trained full-precision models from the model zoo.

  2. Train the configuration generator.

To train the configuration generator of ResNet-20 on CIFAR-100, run:

sh script/train_generator_cifar_r20.sh

To train the configuration generator of ResNet-18 on ImageNet, run:

sh script/train_generator_imagenet_r18.sh
  3. After training the configuration generator, run the following commands to fine-tune the resulting models with the obtained bitwidth configurations on CIFAR-100 and ImageNet:
sh script/finetune_cifar_r20.sh
sh script/finetune_imagenet_r18.sh
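
The fine-tuning scripts pass the searched bitwidths to the training code as a comma-separated per-layer list (the repository exposes this through the --arch_bits flag shown in the Comments section below). As a loose illustration only, applying such a configuration amounts to walking the quantized layers and assigning their bitwidths; the module and attribute names below are assumptions, not the repository's exact API:

# Loose sketch of applying a per-layer bitwidth configuration.
# FakeQuantConv and its attributes are stand-ins, not the repository's modules.
import torch.nn as nn

class FakeQuantConv(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.bits_weights = 8.0        # assumed attributes for illustration
        self.bits_activations = 8.0

def apply_arch_bits(model, arch_bits):
    """Assign one bitwidth per quantized layer, in module order."""
    layers = [m for m in model.modules() if hasattr(m, "bits_weights")]
    assert len(layers) == len(arch_bits), "expect one bitwidth per quantized layer"
    for layer, bits in zip(layers, arch_bits):
        layer.bits_weights = bits
        layer.bits_activations = bits

net = nn.Sequential(FakeQuantConv(3, 8, 3), nn.ReLU(), FakeQuantConv(8, 8, 3))
apply_arch_bits(net, [float(b) for b in "4,3".split(",")])   # e.g. parsed from --arch_bits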

Results on CIFAR-100

Network Method Bitwidth BOPs (M) Top-1 Acc. (%) Top-5 Acc. (%)
ResNet-20 SAQ 4 674.6 68.7 91.2
ResNet-20 SAMQ MP 659.3 68.7 91.2
ResNet-20 SAQ 3 392.1 67.7 90.8
ResNet-20 SAMQ MP 374.4 68.6 91.2
MobileNetV2 SAQ 4 1508.9 75.6 93.7
MobileNetV2 SAMQ MP 1482.1 75.5 93.6
MobileNetV2 SAQ 3 877.1 74.4 93.2
MobileNetV2 SAMQ MP 869.5 75.5 93.7

Results on ImageNet

Network Method Bitwidth BOPs (G) Top-1 Acc. (%) Top-5 Acc. (%)
ResNet-18 SAQ 4 34.7 71.3 90.0
ResNet-18 SAMQ MP 33.7 71.4 89.9
ResNet-18 SAQ 2 14.4 67.1 87.3
MobileNetV2 SAQ 4 5.3 70.2 89.4
MobileNetV2 SAMQ MP 5.3 70.3 89.4
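
BOPs in the two tables count bit-operations. A common convention, and presumably the one used here (the paper gives the exact definition), weights each multiply-accumulate by the product of the weight and activation bitwidths, i.e. BOPs = MACs x b_w x b_a. A rough helper under that assumption:

# Rough per-layer BOPs estimate under the assumed convention BOPs = MACs * b_w * b_a.
def conv_bops(c_in, c_out, kernel_size, out_h, out_w, bits_w, bits_a, groups=1):
    macs = c_out * out_h * out_w * (c_in // groups) * kernel_size * kernel_size
    return macs * bits_w * bits_a

# Example: a 3x3 convolution, 64 -> 64 channels, 32x32 output, W4A4.
print(conv_bops(64, 64, 3, 32, 32, 4, 4) / 1e6, "MBOPs")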

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Acknowledgement

This repository adopts code from SAM, ASAM and ESAM. We thank the authors for open-sourcing their code.

Comments
  • Quantize_first_last_layer

    Hi! I noticed that in your code, you set bits_weights=8 and bits_activations=32 for the first layer by default, which is not what is claimed in your paper: "For the first and last layers of all quantized models, we quantize both weights and activations to 8-bit." I also see an accuracy drop if I set bits_activations to 8 for the first layer. Could you please explain the reason? Thanks!

    opened by mmmiiinnnggg 0
  • Request for help understanding the code

    Hi, the code is very well written, but there is a part I don't quite understand and would like to ask about:

    parser.add_argument(
        "--arch_bits",
        type=lambda s: [float(item) for item in s.split(",")] if len(s) != 0 else "",
        default=" ",
        help="bits configuration of each layer",
    )

    if len(args.arch_bits) != 0:
        if args.wa_same_bit:
            set_wae_bits(model, args.arch_bits)
        elif args.search_w_bit:
            set_w_bits(model, args.arch_bits)
        else:
            set_bits(model, args.arch_bits)
        show_bits(model)
        logger.info("Set arch bits to: {}".format(args.arch_bits))
        logger.info(model)

    What is arch_bits mainly used for? I have been stuck on this for a while.

    opened by LKAMING97 0

Releases

v0.1.1

Owner

Zhuang AI Group