Milano

(This is a research project, not an official NVIDIA product.)

Documentation

https://nvidia.github.io/Milano

Milano (Machine learning autotuner and network optimizer) is a tool that enables machine learning researchers and practitioners to perform massive hyperparameter and architecture searches.

You can use it to automatically search over hyperparameters and architectures for your training script. The script can use any framework of your choice, for example TensorFlow, PyTorch, Microsoft Cognitive Toolkit, etc., or no framework at all. Milano only requires minimal changes to what your script accepts via the command line and what it prints to stdout.
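To illustrate that contract, here is a minimal sketch of what such a script might look like. The flag names, the metric, and the exact stdout line are assumptions for illustration only; the precise format Milano expects is described in the documentation linked above.

    # Hypothetical sketch of a Milano-compatible script (flag names, metric, and
    # output format are illustrative; see the documentation for the exact contract).
    import argparse


    def train(learning_rate: float, batch_size: int) -> float:
        """Placeholder for a real training loop, written in any framework or none."""
        return 1.0 / (learning_rate * batch_size)


    if __name__ == "__main__":
        # The hyperparameters being searched over arrive as command-line flags.
        parser = argparse.ArgumentParser()
        parser.add_argument("--learning_rate", type=float, required=True)
        parser.add_argument("--batch_size", type=int, required=True)
        args = parser.parse_args()

        benchmark = train(args.learning_rate, args.batch_size)
        # The benchmark value is reported back by printing it to stdout.
        print("valid perplexity: {:.4f}".format(benchmark))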

Currently supported backends:

  • Azkaban - on a single multi-GPU machine or server with Azkaban installed
  • AWS - Amazon cloud using GPU instances
  • SLURM - any cluster which is running SLURM

Prerequisites

  • Linux
  • Python 3.5 or later, with the packages listed in the requirements.txt file
  • A backend with NVIDIA GPUs

How to Get Started

  1. Install all dependencies: pip install -r requirements.txt (see the example session below).
  2. Follow the mini-tutorial for a local machine or the mini-tutorial for AWS.
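
For example, getting a working copy on a fresh machine might look like this (assuming git and a Python 3 environment are already available):

    git clone https://github.com/NVIDIA/Milano.git
    cd Milano
    pip install -r requirements.txt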

Visualize

We provide a script that converts the results CSV file into two kinds of graphs:

  • Graphs of each hyperparameter plotted against the benchmark (e.g. valid perplexity)
  • Color graphs that show the relationship between any two hyperparameters and the benchmark

To run the script, use:

python3 visualize.py --file [the name of the results csv file] \
                     --n [the number of samples to visualize] \
                     --subplots [the number of subplots to show in a plot] \
                     --max [the max value of benchmark you care about]
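
For example, assuming the search wrote its results to a file named results.csv (the file name and values below are purely illustrative):

    python3 visualize.py --file results.csv --n 100 --subplots 4 --max 200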
