B2EA: An Evolutionary Algorithm Assisted by Two Bayesian Optimization Modules for Neural Architecture Search

Overview


This is the official implementation of the aforementioned paper.

[Figure: Graphical Abstract]


Abstract

The early pioneering Neural Architecture Search (NAS) works were multi-trial methods applicable to any general search space. The subsequent works took advantage of the early findings and developed weight-sharing methods that assume a structured search space typically with pre-fixed hyperparameters. Despite the amazing computational efficiency of the weight-sharing NAS algorithms, it is becoming apparent that multi-trial NAS algorithms are also needed for identifying very high-performance architectures, especially when exploring a general search space. In this work, we carefully review the latest multi-trial NAS algorithms and identify the key strategies including Evolutionary Algorithm (EA), Bayesian Optimization (BO), diversification, input and output transformations, and lower fidelity estimation. To accommodate the key strategies into a single framework, we develop B2EA, a surrogate-assisted EA with two BO surrogate models and a mutation step in between. To show that B2EA is robust and efficient, we evaluate three performance metrics over 14 benchmarks with general and cell-based search spaces. Comparisons with state-of-the-art multi-trial algorithms reveal that B2EA is robust and efficient over the 14 benchmarks for three difficulty levels of target performance.

Citation

To be updated soon


Requirements

Prerequisite

This project was developed and tested on Linux. If you want to run it on Windows, we strongly suggest using the Windows Subsystem for Linux (WSL). To avoid conflicting dependencies, we recommend creating a new virtual environment; for this reason, installing an Anaconda distribution suitable for your OS is required before creating the virtual environment.

Package Installation

The following commands create a conda environment and install the required packages automatically:

(base) device:path/BBEA$ conda create -n bbea python=3.6
(base) device:path/BBEA$ conda activate bbea
(bbea) device:path/BBEA$ sh install.sh

Tabular Dataset Installation

Pre-evaluated (tabular) datasets make it possible to benchmark Hyper-Parameter Optimization (HPO) algorithm performance without the huge computational cost of DNN training.

HPO Benchmark

  • To run algorithms on the HPO-bench dataset, download the database files as follows:
(bbea) device:path/BBEA$ cd lookup
(bbea) device:path/BBEA/lookup$ wget http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
(bbea) device:path/BBEA/lookup$ tar xf fcnet_tabular_benchmarks.tar.gz

Note that the *.hdf5 files should be located under /lookup/fcnet_tabular_benchmarks.
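After extraction, you can sanity-check the download with a short script such as the following. This is a minimal sketch assuming h5py is installed; the file name is one of the task files shipped in the tarball.

import h5py

# Minimal sanity check: each top-level key of an HPO-bench file is one
# pre-evaluated configuration (the path assumes the tarball layout above).
path = 'lookup/fcnet_tabular_benchmarks/fcnet_protein_structure_data.hdf5'
with h5py.File(path, 'r') as f:
    print(len(f.keys()), 'pre-evaluated configurations')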

Two NAS Benchmarks

  • To run algorithms on the NAS-bench-101 dataset,
    • download the tfrecord file and save it into /lookup.
    • Note that the NAS-bench-101 API requires the CPU version of TensorFlow 1.12. A minimal query sketch follows the command below.
(bbea)device:path/BBEA/lookup$ wget https://storage.googleapis.com/nasbench/nasbench_full.tfrecord
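Once the tfrecord file is in place, the benchmark can be queried directly. The following is a minimal sketch based on the public NAS-bench-101 API; the cell below is the example 7-vertex cell from that project, not a B2EA-specific architecture.

from nasbench import api

# Load the full tabular benchmark (takes a few minutes and several GB of RAM).
nasbench = api.NASBench('lookup/nasbench_full.tfrecord')

# A cell is a 7x7 upper-triangular adjacency matrix plus per-vertex operations.
spec = api.ModelSpec(
    matrix=[[0, 1, 1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],
    ops=['input', 'conv1x1-bn-relu', 'conv3x3-bn-relu', 'conv3x3-bn-relu',
         'conv3x3-bn-relu', 'maxpool3x3', 'output'])

# Look up the pre-evaluated metrics for this cell.
data = nasbench.query(spec)
print(data['validation_accuracy'])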

  • To run algorithms on the NAS-bench-201 dataset,
    • download the NAS-Bench-201-v1_1-096897.pth file into /lookup according to this doc.
    • Note that the NAS-bench-201 API requires the CPU version of PyTorch; refer to the PyTorch installation guide. A minimal query sketch follows the command below.
(bbea)device:path/BBEA$ conda install pytorch torchvision cpuonly -c pytorch
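With the .pth file downloaded and PyTorch installed, the NAS-bench-201 records can be accessed through its API. This is a minimal sketch based on the public nas_201_api package:

from nas_201_api import NASBench201API as API

# Loading the full benchmark file can take a few minutes.
api = API('lookup/NAS-Bench-201-v1_1-096897.pth')
print(len(api), 'architectures in the search space')

# Pre-computed training metrics of one architecture on CIFAR-10.
info = api.query_meta_info_by_index(1)
print(info.get_metrics('cifar10', 'train'))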

DNN Benchmark

  • To run algorithms on the DNN benchmark, download the zip file from the link.
    • Unzip the downloaded file and copy the two directories into this project. Note that these folders already exist in the project.
    • Validate that the CSV files and the JSON files are located under /lookup and /hp_conf, respectively.

HPO Run

To run the B2EA algorithms

An experiment with the proposed method of the paper can be performed via the following runner:

  • bbea_runner.py
    • This runner conducts an experiment configured by its input arguments.
    • Specifically, the hyperparameter space configuration and the maximum runtime are the two mandatory arguments. By default, a search space configuration name refers to a JSON configuration file in /hp_conf. The runtime is given in seconds; for convenience, 'm', 'h', or 'd' can be appended to denote minutes, hours, or days.
    • Further options, such as the algorithm's hyperparameter settings and the run configuration (e.g., the number of repeated runs), are optional.
    • Refer to the help (-h) command-line option for details. An example invocation follows the usage message below.
usage: bbea_runner.py [-h] [-dm] [-bm BENCHMARK_MODE] [-nt NUM_TRIALS]
                      [-etr EARLY_TERM_RULE] [-hd HP_CONFIG_DIR]
                      hp_config exp_time

positional arguments:
  hp_config             Hyperparameter space configuration file name.
  exp_time              The maximum runtime when an HPO run expires.

optional arguments:
  -h, --help            show this help message and exit
  -dm, --debug_mode     Set debugging mode.
  -nt NUM_TRIALS, --num_trials NUM_TRIALS
                        The total number of repeated runs. The default setting
                        is "1".
  -etr EARLY_TERM_RULE, --early_term_rule EARLY_TERM_RULE
                        Early termination rule. A name of compound rule, such
                        as "PentaTercet" or "DecaTercet", can be used. The
                        default setting is DecaTercet.
  -hd HP_CONFIG_DIR, --hp_config_dir HP_CONFIG_DIR
                        Hyperparameter space configuration directory. The
                        default setting is "./hp_conf/"
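For example, a run of one day on a hypothetical search space configuration named nas101 (i.e., /hp_conf/nas101.json), repeated five times, could be launched as follows; substitute the name of any JSON file actually present in /hp_conf:

(bbea) device:path/BBEA$ python bbea_runner.py nas101 1d -nt 5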


Results

Experimental results will be saved as JSON files under the /results directory. While the JSON files are human-readable and easily interpretable, we also provide utility functions in the Python scripts under that directory to analyze the results and plot the figures shown in the paper.
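For a quick look without those utilities, each result file can also be inspected directly. This is a minimal sketch that makes no assumption about the JSON schema, only that each run produces a JSON file under /results:

import glob
import json

# Print the recorded top-level fields of every result file.
for path in glob.glob('results/*.json'):
    with open(path) as f:
        result = json.load(f)
    print(path, '->', list(result.keys()))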

Owner

SNU ADSL (Applied Data Science Lab.), Seoul National University