Implements Gradient Centralization and makes it available as a Python package for TensorFlow.

Overview

This Python package implements Gradient Centralization in TensorFlow, a simple and effective optimization technique for deep neural networks suggested by Yong et al. in the paper Gradient Centralization: A New Optimization Technique for Deep Neural Networks. It can both speed up the training process and improve the final generalization performance of DNNs.
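
At its core, GC removes the mean from each gradient tensor whose rank is greater than one (weight matrices and convolution kernels, but not biases). A minimal sketch of that operation, with centralize_gradient as a hypothetical helper name:

import tensorflow as tf

def centralize_gradient(grad):
    # GC as described in the paper: for tensors with rank > 1, subtract the
    # mean taken over every axis except the last (output) axis; rank-1
    # tensors such as biases are left unchanged.
    if len(grad.shape) > 1:
        axes = list(range(len(grad.shape) - 1))
        grad = grad - tf.reduce_mean(grad, axis=axes, keepdims=True)
    return grad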

Installation

Run the following to install:

pip install gradient-centralization-tf

Usage

gctf.centralized_gradients_for_optimizer

Creates a centralized-gradients function for a specified optimizer.

Arguments:

  • optimizer: a tf.keras.optimizers.Optimizer object. The optimizer you are using.

Example:

>>> opt = tf.keras.optimizers.Adam(learning_rate=0.1)
>>> opt.get_gradients = gctf.centralized_gradients_for_optimizer(opt)
>>> model.compile(optimizer=opt, ...)

gctf.get_centralized_gradients

Computes the centralized gradients.

This function is ideally not meant to be used directly unless you are building a custom optimizer, in which case you could point get_gradients to this function; a sketch follows the returns section below. This is a modified version of tf.keras.optimizers.Optimizer.get_gradients.

Arguments:

  • optimizer: a tf.keras.optimizers.Optimizer object. The optimizer you are using.
  • loss: Scalar tensor to minimize.
  • params: List of variables.

Returns:

A list of centralized gradient tensors.
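
As a hedged sketch of the custom-optimizer case mentioned above (MyAdam is a hypothetical name, and overriding get_gradients assumes the TF 2.x Keras optimizer API):

import tensorflow as tf
import gctf

class MyAdam(tf.keras.optimizers.Adam):
    # Point get_gradients at gctf so every gradient is centralized before
    # the usual Adam update is applied.
    def get_gradients(self, loss, params):
        return gctf.get_centralized_gradients(self, loss, params)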

gctf.optimizers

Pre-built optimizers, updated to implement GC.

This module is specially built for testing out GC. In most cases you would use gctf.centralized_gradients_for_optimizer yourself; this module simply applies it to the standard optimizers for you, so you can directly use drop-in versions of the tf.keras.optimizers optimizers updated for GC.

Example:

>>> model.compile(optimizer = gctf.optimizers.adam(learning_rate = 0.01), ...)
>>> model.compile(optimizer = gctf.optimizers.rmsprop(learning_rate = 0.01, rho = 0.91), ...)
>>> model.compile(optimizer = gctf.optimizers.sgd(), ...)

Returns:

A tf.keras.optimizers.Optimizer object.
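
Since these are regular tf.keras.optimizers.Optimizer objects, they drop straight into a standard Keras workflow. A minimal end-to-end sketch (x_train and y_train are placeholder names for your own data):

import tensorflow as tf
import gctf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Compile with a GC-enabled Adam; everything else is unchanged.
model.compile(optimizer=gctf.optimizers.adam(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# x_train / y_train: your own training data (placeholder names).
model.fit(x_train, y_train, epochs=5)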

Developing gctf

To install gradient-centralization-tf, along with the tools you need to develop and test it, run the following in your virtualenv:

git clone git@github.com:Rishit-dagli/Gradient-Centralization-TensorFlow
# or clone your own fork
cd Gradient-Centralization-TensorFlow

pip install -e .[dev]

License

Copyright 2020 Rishit Dagli

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Comments
  • On Windows, TensorFlow 2.5 gives an error

    On Windows 10 with a Miniconda environment, TensorFlow 2.5 gives an error in the centralized_gradients.py file.

    The solution is to change import keras.backend as K to import tensorflow.keras.backend as K.

    bug 
    opened by mgezer 5
  • The results in the mnist example are wrong/misleading

Describe the bug: The results in your Colab notebook are misleading: https://colab.research.google.com/github/Rishit-dagli/Gradient-Centralization-TensorFlow/blob/main/examples/gctf_mnist.ipynb

    In this example, the model is first trained with a normal Adam optimizer:

    model.compile(optimizer = tf.keras.optimizers.Adam(),
                  loss = 'sparse_categorical_crossentropy',
                  metrics = ['accuracy'])
    
    history_no_gctf = model.fit(training_images, training_labels, epochs=5, callbacks = [time_callback_no_gctf])
    

Afterwards, the same model is recompiled with gctf.optimizers.adam(). However, recompiling a Keras model does not reset its weights, so the second fit call continues training the already-trained model, and of course the results then look better.

This can be fixed by recreating the model for the second run, adding these few lines:

    import gctf #import gctf
    
    time_callback_gctf = TimeHistory()
    
    # Model architecture
    model = tf.keras.models.Sequential([
                                        tf.keras.layers.Flatten(), 
                                        tf.keras.layers.Dense(512, activation=tf.nn.relu),
                                        tf.keras.layers.Dense(256, activation=tf.nn.relu),
                                        tf.keras.layers.Dense(64, activation=tf.nn.relu),
                                        tf.keras.layers.Dense(512, activation=tf.nn.relu),
                                        tf.keras.layers.Dense(256, activation=tf.nn.relu),
                                        tf.keras.layers.Dense(64, activation=tf.nn.relu), 
                                        tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
    
    model.compile(optimizer = gctf.optimizers.adam(),
                  loss = 'sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    
    history_gctf = model.fit(training_images, training_labels, epochs=5, callbacks=[time_callback_gctf])
    

    However, the results are then no better than without gctf:

    Type                   Execution time    Accuracy      Loss
    -------------------  ----------------  ----------  --------
    Model without gctf            24.7659    0.88825   0.305801
    Model with gctf               24.7881    0.889567  0.30812

    Could you please clarify what happens here? I tried this gctf.optimizers.adam() optimizer in my own research and it didn't change the results at all, and now I see that it doesn't work in the example constructed here either. This makes me question the results of this paper.

    To reproduce: Execute the Colab file given in the repository: https://colab.research.google.com/github/Rishit-dagli/Gradient-Centralization-TensorFlow/blob/main/examples/gctf_mnist.ipynb

    Expected behavior: The right comparison would be for both models to start from a random initialization, not for the second model to start with already pre-trained weights.

    Looking forward to a swift explanation.

    Best, Max

    question 
    opened by themasterlink 2
  • Wider dependency requirements

    The package as of now requires tensorflow ~= 2.4.0 and keras ~= 2.4.0 to install. It turns out that this is sometimes problematic for folks who have custom installations of TensorFlow, and a wider requirement could be set up.

    enhancement 
    opened by Rishit-dagli 1
  • Release 0.0.3

    This release includes some fixes and improvements

    βœ… Bug Fixes / Improvements

    • Allow wider version ranges for TensorFlow and Keras when installing the package (#14 )
    • Fixed the incorrect usage example in the docstrings and description for centralized_gradients_for_optimizer (#13 )
    • Added clear aims for each of the examples of using gctf (#15 )
    • Updated PyPI classifiers to clearly show the aims of this project; this changes nothing about how you use the package (#18 )
    • Added clear instructions for using this with custom optimizers, i.e. directly using get_centralized_gradients; a complete example has not been pushed due to the reasons mentioned in the issue (#16 )
    opened by Rishit-dagli 0
  • Add an "About The Examples" section

    Add an "About The Examples" section which contains a summary of the usage example notebooks and links to run it on Binder and Colab.


    Close #15

    opened by Rishit-dagli 0
  • Update relevant PyPI classifiers

    Add PyPI classifiers for:

    • Development status
    • Intended Audience
    • Topic

    Further, also added the Programming Language :: Python :: 3 :: Only classifier.


    Closes #18

    opened by Rishit-dagli 0
  • Update PyPI classifiers

    I am specifically thinking of adding three more categories of PyPI classifiers:

    • Development status
    • Intended Audience
    • Topic

    Apart from this, I also think it would be great to add the Programming Language :: Python :: 3 :: Only classifier to make sure the audience knows that this package is intended for Python 3 only.

    opened by Rishit-dagli 0
  • Add an "About the examples" section

    It would be great to write an "About the examples" section which briefly demonstrates what the example notebooks aim to achieve and show.

    documentation 
    opened by Rishit-dagli 0
  • Error in usage example for gctf.centralized_gradients_for_optimizer

    I noticed that the docstrings for gctf.centralized_gradients_for_optimizer have an error in the example usage section. The example creates an Adam optimizer instance and saves it to opt; however, centralized_gradients_for_optimizer is applied to optimizer, which does not exist, so running the example results in an error.

    documentation 
    opened by Rishit-dagli 0
  • [ImgBot] Optimize images

    opened by imgbot[bot] 0
  • [ImgBot] Optimize images

    opened by imgbot[bot] 0
Releases (v0.0.3)
  • v0.0.3(Mar 11, 2021)

    This release includes some fixes and improvements

    βœ… Bug Fixes / Improvements

    • Allow wider version ranges for TensorFlow and Keras when installing the package (#14 )
    • Fixed the incorrect usage example in the docstrings and description for centralized_gradients_for_optimizer (#13 )
    • Added clear aims for each of the examples of using gctf (#15 )
    • Updated PyPI classifiers to clearly show the aims of this project; this changes nothing about how you use the package (#18 )
    • Added clear instructions for using this with custom optimizers, i.e. directly using get_centralized_gradients; a complete example has not been pushed due to the reasons mentioned in the issue (#16 )
    Source code(tar.gz)
    Source code(zip)
  • v0.0.2(Feb 21, 2021)

    This release includes some fixes and improvements

    βœ… Bug Fixes / Improvements

    • Fix the issue of supporting multiple modules
    • Fix multiple typos.
    Source code(tar.gz)
    Source code(zip)
  • v0.0.1(Feb 20, 2021)

Owner
Rishit Dagli
High school student | TEDx and TED-Ed speaker | Mentor, TFUG Mumbai | International speaker | Microsoft Student Ambassador | #ExploreML Facilitator