DeepOBS: A Deep Learning Optimizer Benchmark Suite

Overview

DeepOBS is a benchmarking suite that drastically simplifies, automates and improves the evaluation of deep learning optimizers.

It can evaluate the performance of new optimizers on a variety of real-world test problems and automatically compare them with realistic baselines.

DeepOBS automates several steps when benchmarking deep learning optimizers:

  • Downloading and preparing data sets.
  • Setting up test problems consisting of contemporary data sets and realistic deep learning architectures.
  • Running the optimizers on multiple test problems and logging relevant metrics.
  • Reporting and visualizing the results of the optimizer benchmark.

(Figure: example DeepOBS output.)
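
For example, a minimal benchmark run of SGD on one of the test problems looks roughly like the sketch below. It follows the 1.2.0-beta documentation, but treat the exact names (StandardRunner, the hyperparameter-specification format) as assumptions while the beta evolves:

    # Minimal sketch of a DeepOBS run (PyTorch variant), following the
    # 1.2.0-beta docs; exact signatures may change during the beta.
    from torch.optim import SGD
    from deepobs.pytorch.runners import StandardRunner

    # Declare the optimizer's tunable hyperparameters and their types.
    hyperparams = {
        "lr": {"type": float},
        "momentum": {"type": float, "default": 0.99},
        "nesterov": {"type": bool, "default": False},
    }

    runner = StandardRunner(SGD, hyperparams)
    # Downloads and prepares the data, sets up the test problem, runs the
    # optimizer, and logs the relevant metrics to a .json output file.
    runner.run(testproblem="quadratic_deep", hyperparams={"lr": 0.01},
               num_epochs=10)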

This branch contains the beta of version 1.2.0 with TensorFlow and PyTorch support. It is currently in a pre-release state: not all features are implemented and, most notably, we currently provide no baselines for this version.

The full documentation of this beta version is available on readthedocs: https://deepobs-with-pytorch.readthedocs.io/

The paper describing DeepOBS has been accepted for ICLR 2019 and can be found here: https://openreview.net/forum?id=rJg6ssC5Y7

If you find any bugs in DeepOBS, or find it hard to use, please let us know. We are always interested in feedback and ways to improve DeepOBS.

Installation

pip install -e git+https://github.com/fsschneider/DeepOBS.git@develop#egg=DeepOBS

We tested the package with Python 3.6, TensorFlow 1.12, PyTorch 1.1.0, and Torchvision 0.3.0. Other versions might work, and we plan to expand compatibility in the future.
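
After installation, a quick sanity check is to import the package and print its version (the __version__ attribute is visible in the setup.py shown in the installation issue further below):

    import deepobs
    print(deepobs.__version__)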

Further tutorials and a suggested protocol for benchmarking deep learning optimizers can be found on https://deepobs-with-pytorch.readthedocs.io/

Comments
  • Request: Share the hyper-parameters found in the grid search

    To lessen the burden of re-running the benchmark, would it be possible to publish the optimal hyper-parameters somewhere?

    By reusing those hyper-parameters, one would avoid the most computationally demanding part of reproducing the results (by 1-2 orders of magnitude).

    opened by jotaf98 2
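
    Until such numbers are published, the best setting can at least be recovered mechanically from the .json outputs of a completed benchmark run. A minimal sketch (the directory layout and key names follow the output format shown in the KeyError issue below; treat them as assumptions, not a documented API):

    import glob
    import json

    def best_setting(optimizer_dir, metric="train_losses"):
        # Scan all .json run outputs under one optimizer's result directory
        # and return the run with the best (lowest) final value of `metric`.
        best = None
        for path in glob.glob(optimizer_dir + "/**/*.json", recursive=True):
            with open(path) as f:
                data = json.load(f)
            final = data[metric][-1]
            if best is None or final < best[0]:
                best = (final, data.get("learning_rate"), data.get("hyperparams"))
        return best

    print(best_setting("baselines/quadratic_deep/MomentumOptimizer"))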
  • Add functionality to skip existing runs, plotting modes, some refactoring

    • Adding parameter skip_if_exists to runner.run
      • Default value is set such that the current behavior is maintained
      • By setting to True, runs that already have a .json output file will not be executed again
    • Possible extensions
      • Make skip_if_exists arg-parsable
    opened by f-dangel 2
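
    Until skip_if_exists is merged, the same effect is easy to approximate on top of the runner API. A rough sketch (using an existing .json file as a "done" marker matches the proposal above; the output-directory layout and the output_dir keyword are assumptions):

    import glob
    import os

    def run_unless_done(runner, testproblem, output_dir="./results", **run_kwargs):
        # Approximates the proposed skip_if_exists=True: DeepOBS writes one
        # .json file per completed run, so any existing .json under this
        # test problem's output directory is treated as a finished run.
        pattern = os.path.join(output_dir, testproblem, "**", "*.json")
        if glob.glob(pattern, recursive=True):
            print("Skipping " + testproblem + ": existing .json output found.")
            return
        runner.run(testproblem=testproblem, output_dir=output_dir, **run_kwargs)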
  • KeyError: 'optimizer_hyperparams'

    (Apologies for creating multiple issues in a row -- it seemed more clean to keep them separate.)

    I downloaded the data from DeepOBS_Baselines, and attempted to run example_analyze_pytorch.py. Unfortunately DeepOBS seems to look for keys in the JSON files that don't exist:

    $ python example_analyze_pytorch.py
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
      default_metric), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
      .format(optimizer_path, testproblem_name), RuntimeWarning)
    {'Performance': 127.96759578159877, 'Speed': 'N.A.', 'Hyperparameters': {'lr': 0.01, 'momentum': 0.99, 'nesterov': False}, 'Training Parameters': {}}
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
      default_metric), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
      .format(optimizer_path, testproblem_name), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:150: RuntimeWarning: Cannot fallback to metric valid_losses for optimizer MomentumOptimizer on testproblem quadratic_deep. Will now fallback to metric test_losses
      testproblem_name), RuntimeWarning)
    /users/user/miniconda3/lib/python3.7/site-packages/numpy/core/_methods.py:193: RuntimeWarning: invalid value encountered in subtract
      x = asanyarray(arr - arrmean)
    /users/user/miniconda3/lib/python3.7/site-packages/numpy/lib/function_base.py:3949: RuntimeWarning: invalid value encountered in multiply
      x2 = take(ap, indices_above, axis=axis) * weights_above
    Traceback (most recent call last):
      File "example_analyze_pytorch.py", line 17, in <module>
        analyzer.plot_optimizer_performance(result_path, reference_path=base + '/deepobs/baselines/quadratic_deep/MomentumOptimizer')
      File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 514, in plot_optimizer_performance
        which=which)
      File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 462, in _plot_optimizer_performance
        optimizer_path, mode, metric)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 206, in create_setting_analyzer_ranking
        setting_analyzers = _get_all_setting_analyzer(optimizer_path)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 184, in _get_all_setting_analyzer
        setting_analyzers.append(SettingAnalyzer(sett_path))
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 260, in __init__
        self.aggregate = aggregate_runs(path)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 101, in aggregate_runs
        aggregate['optimizer_hyperparams'] = json_data['optimizer_hyperparams']
    KeyError: 'optimizer_hyperparams'
    

    One of the JSON files in question looks like this (data points snipped for brevity):

    {
        "train_losses": [353.9337594168527, 347.5994306291853, 331.35902622767856, 307.2468915666853, ... 97.28871154785156, 91.45470428466797, 96.45774841308594, 86.27237701416016],
        "optimizer": "MomentumOptimizer",
        "testproblem": "quadratic_deep",
        "weight_decay": null,
        "batch_size": 128,
        "num_epochs": 100,
        "learning_rate": 1e-05,
        "lr_sched_epochs": null,
        "lr_sched_factors": null,
        "random_seed": 42,
        "train_log_interval": 1,
        "hyperparams": {"momentum": 0.99, "use_nesterov": false}
    }
    

    The culprit seems to be the key hyperparams as opposed to optimizer_hyperparams; this occurs only for some JSON files.

    Edit: Having fixed this, there is a further KeyError on training_params. Perhaps these files were generated with different versions of the package.

    opened by jotaf98 3
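
    Until the analyzer tolerates both schemas, a one-off normalization pass over the affected files works around both KeyErrors. The key names below come straight from the traceback and the edit above; the rest is a sketch:

    import glob
    import json

    # Rename the old-style "hyperparams" key to "optimizer_hyperparams" and
    # add an empty "training_params", so the analyzer finds both keys.
    for path in glob.glob("baselines/**/*.json", recursive=True):
        with open(path) as f:
            data = json.load(f)
        changed = False
        if "optimizer_hyperparams" not in data and "hyperparams" in data:
            data["optimizer_hyperparams"] = data.pop("hyperparams")
            changed = True
        if "training_params" not in data:
            data["training_params"] = {}
            changed = True
        if changed:
            with open(path, "w") as f:
                json.dump(data, f, indent=4)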
  • Installation error / unmentioned dependency "bayes_opt"

    Attempting to install by following the documentation's instructions, after installing all the mentioned dependencies with conda, results in the following error:

    (base) user@host:~$ pip install -e git+https://github.com/abahde/DeepOBS.git@master#egg=DeepOBS
    Obtaining DeepOBS from git+https://github.com/abahde/DeepOBS.git@master#egg=DeepOBS
      Cloning https://github.com/abahde/DeepOBS.git (to revision master) to ./src/deepobs
      Running command git clone -q https://github.com/abahde/DeepOBS.git /users/user/src/deepobs
        ERROR: Complete output from command python setup.py egg_info:
        ERROR: Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/users/user/src/deepobs/setup.py", line 5, in <module>
            from deepobs import __version__
          File "/users/user/src/deepobs/deepobs/__init__.py", line 5, in <module>
            from . import analyzer
          File "/users/user/src/deepobs/deepobs/analyzer/__init__.py", line 2, in <module>
            from . import analyze
          File "/users/user/src/deepobs/deepobs/analyzer/analyze.py", line 12, in <module>
            from ..tuner.tuner_utils import generate_tuning_summary
          File "/users/user/src/deepobs/deepobs/tuner/__init__.py", line 4, in <module>
            from .bayesian import GP
          File "/users/user/src/deepobs/deepobs/tuner/bayesian.py", line 3, in <module>
            from bayes_opt import UtilityFunction
        ModuleNotFoundError: No module named 'bayes_opt'
        ----------------------------------------
    ERROR: Command "python setup.py egg_info" failed with error code 1 in /users/user/src/deepobs/
    

    Is this bayes_opt package really necessary? It seems a bit tangential to the package's purpose (or at most optional).

    Edit: It turns out that bayesian-optimization has relatively few requirements so this is not a big issue; perhaps just the docs need updating.

    As an aside, it might be possible to suggest a single conda command that installs everything: conda install -c conda-forge seaborn matplotlib2tikz bayesian-optimization.

    opened by jotaf98 0
  • Wall-clock time plots

    Optimizers can have very different runtimes per iteration, especially 2nd-order ones.

    This means that, despite promises of "faster" convergence, the wall-clock time to converge is sometimes disappointingly long.

    Is there any chance DeepOBS could implement wall-clock time plots, in addition to per-epoch ones? (E.g. X axis in minutes or hours.)

    opened by jotaf98 4
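
    One way to get such plots without touching DeepOBS internals is to record a per-epoch timestamp during training and re-plot the logged metrics against it. The wallclock_times field below is hypothetical; the standard DeepOBS output does not contain it:

    import json
    import matplotlib.pyplot as plt

    # Hypothetical: assumes the run output was extended with a per-epoch
    # "wallclock_times" list (seconds elapsed since the start of training).
    with open("run_output.json") as f:
        data = json.load(f)

    minutes = [t / 60.0 for t in data["wallclock_times"]]
    plt.plot(minutes, data["train_losses"])
    plt.xlabel("wall-clock time (minutes)")
    plt.ylabel("training loss")
    plt.show()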
  • Improve estimate_runtime()

    There are a couple of improvements that I suggest:

    • [ ] Return the results not as a string, but as a dict or an object.
    • [ ] (Maybe, think about that) Include the ability to test multiple optimizers simultaneously.
    • [ ] Report standard deviation and individual runtimes for SGD.
    • [ ] Add a function that generates a figure, similar to https://github.com/ludwigbald/probprec/blob/master/code/exp_perf_prec/analyze.py
    opened by ludwigbald 0
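
    For the first and third points, the structured return value could look like the following sketch (a hypothetical helper, not the current estimate_runtime() API):

    import time
    import numpy as np

    def estimate_runtime_dict(train_fn, n_repeats=5):
        # Time several repetitions of the training function and report the
        # individual and aggregate runtimes as a dict instead of a string.
        runtimes = []
        for _ in range(n_repeats):
            start = time.perf_counter()
            train_fn()
            runtimes.append(time.perf_counter() - start)
        return {
            "runtimes": runtimes,
            "mean": float(np.mean(runtimes)),
            "std": float(np.std(runtimes)),
        }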
  • Implement validation set split also for TensorFlow

    In PyTorch, we split a validation set randomly off the training set; it has the same size as the test set. The validation performance is used by the tuner and the analyzer to select the best instance. This split should be implemented for the TensorFlow data sets as well. We have already prepared the test problems and runner implementations for this change; the only remaining change to the runner is marked in the code with a ToDo flag. (A simplified sketch of the PyTorch behavior follows below.)

    bug enhancement 
    opened by abahde 0
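
    A simplified sketch of the PyTorch behavior described above (not the actual DeepOBS implementation): draw a validation set, sized like the test set, randomly from the training set with a fixed seed.

    import torch
    from torch.utils.data import random_split

    def split_train_valid(train_set, test_set, seed=42):
        # The validation set has the size of the test set and is drawn
        # randomly from the training set; seeding keeps the split
        # reproducible across runs and optimizers.
        n_valid = len(test_set)
        n_train = len(train_set) - n_valid
        torch.manual_seed(seed)
        return random_split(train_set, [n_train, n_valid])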
Releases (v1.2.0-beta)
  • v1.2.0-beta (Sep 17, 2019)

    Draft of release notes:

    • A PyTorch implementation (though not for all test problems yet)
    • A refactored Analyzer module (more flexibility and interpretability)
    • A Tuning module that automates the tuning process
    • Some minor improvements of the TensorFlow code (important bugfix: fmnist_mlp now really uses F-MNIST and not MNIST)
    • A validation set metric for each test problem in the PyTorch code (the TensorFlow code, so far, still comes without validation sets)
    • Runners now break from training if the loss becomes NaN.
    • Runners now return the output dictionary.
    • Additional training parameters can be passed as kwargs to the run() method.
    • Numpy is now also seeded.
    • Small and large benchmark sets are now global variables in DeepOBS.
    • Default test problem settings are now a global variable in DeepOBS.
    • JSON output is now dumped in a human-readable format.
    • Accuracy is now only printed if available.
    • Simplified Runner API.
    • Learning Rate Schedule Runner is now an extra class.
Owner
Aaron Bahde
Graduate student at the University of Tübingen, Methods of Machine Learning