Generic template to bootstrap your PyTorch project with PyTorch Lightning, Hydra, W&B, and DVC.

Overview

NN Template

PyTorch Lightning · Conf: hydra · Logging: wandb · Code style: black

Generic template to bootstrap your PyTorch project. Click on Use this Template and avoid writing boilerplate code for:

  • PyTorch Lightning, lightweight PyTorch wrapper for high-performance AI research.
  • Hydra, a framework for elegantly configuring complex applications.
  • DVC, track large files, directories, or ML models. Think "Git for data".
  • Weights and Biases, organize and analyze machine learning experiments. (educational account available)

nn-template is opinionated so you don't have to be. If you use this template, please add a reference to nn-template to your README.

Usage Examples

Check out the mwe branch to see a minimal working example on MNIST.

Structure

.
├── conf                # Hydra compositional config
│   ├── default.yaml    # current experiment configuration
│   ├── data
│   ├── hydra
│   ├── logging
│   ├── model
│   ├── optim
│   └── train
├── data                # datasets
├── experiments         # local logs
├── README.md
├── requirements.txt    # basic requirements
└── src
    ├── common          # common Python modules
    ├── pl_data         # PyTorch Lightning datamodules and datasets
    ├── pl_modules      # PyTorch Lightning modules
    └── run.py          # entry point to run current conf

Data Version Control

DVC runs alongside Git and uses the current commit hash to version the data.

Initialize the dvc repository:

$ dvc init

To start tracking a file or directory, use dvc add:

$ dvc add data/ImageNet

DVC stores information about the added file (or a directory) in a special .dvc file named data/ImageNet.dvc, a small text file with a human-readable format. This file can be easily versioned like source code with Git, as a placeholder for the original data (which gets listed in .gitignore):

$ git add data/ImageNet.dvc data/.gitignore
$ git commit -m "Add raw data"

Making changes

When you make a change to a file or directory, run dvc add again to track the latest version:

$ dvc add data/ImageNet

Switching between versions

The regular workflow is to use git checkout first (to switch a branch, or to check out a commit or a revision of a .dvc file), and then run dvc checkout to sync the data:

$ git checkout <...>
$ dvc checkout

Read more in the docs!
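If you need to read DVC-tracked data directly from Python (for example, pinned to a specific Git revision), the dvc.api module can be used. A minimal sketch, assuming a tracked file data/ImageNet/labels.txt and a tag v1.0 (both hypothetical):

# Read a DVC-tracked file from Python, pinned to a Git revision.
# The path "data/ImageNet/labels.txt" and the tag "v1.0" are hypothetical.
import dvc.api

with dvc.api.open("data/ImageNet/labels.txt", rev="v1.0") as f:
    labels = f.read().splitlines()
print(f"Loaded {len(labels)} labels")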

Weights and Biases

Weights & Biases helps you keep track of your machine learning projects. Use tools to log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues.

This is an example of a simple dashboard.

Quickstart

Log in to your wandb account by running wandb login once. Configure the logging in conf/logging/*.


Read more in the docs. Particularly useful is the log method, accessible from inside a PyTorch Lightning module via self.logger.experiment.log.
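As a hedged sketch (not the template's actual module), logging a scalar via self.log and richer media through the underlying wandb run could look like this; class and metric names are illustrative:

# Sketch: logging from inside a LightningModule, assuming the Trainer was
# created with a WandbLogger. Names below are illustrative, not the template's.
import torch
import wandb
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.net(x.flatten(1))

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        self.log("val/loss", loss)  # scalar metric via Lightning
        if batch_idx == 0:
            # arbitrary W&B objects via the raw wandb run
            self.logger.experiment.log({"val/example_input": wandb.Image(x[0])})
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)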

W&B is our logger of choice, but that is a purely subjective decision. Since we are using Lightning, you can replace wandb with the logger you prefer (you can even build your own). More about Lightning loggers here.

Hydra

Hydra is an open-source Python framework that simplifies the development of research and other complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line. The name Hydra comes from its ability to run multiple similar jobs - much like a Hydra with multiple heads.

The basic functionalities are intuitive: it is enough to change the configuration files in conf/* according to your preferences. Everything will be logged to wandb automatically.

Consider creating new root configurations, e.g. conf/myawesomeexp.yaml, instead of always using the default conf/default.yaml.
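To make the root-configuration idea concrete, here is a minimal sketch of a Hydra entry point; the decorator arguments mirror the usual Hydra pattern, but the exact code in src/run.py may differ:

# Sketch of a Hydra entry point in src/run.py; the root config is chosen by
# config_name and can be overridden, e.g. `--config-name myawesomeexp`.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="../conf", config_name="default")
def main(cfg: DictConfig) -> None:
    # cfg is the composed, hierarchical configuration
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()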

Sweeps

You can easily perform hyperparameter sweeps, which override the configuration defined in conf/*.

The easiest one is grid search: it runs the code with every possible combination of the specified hyperparameters:

PYTHONPATH=. python src/run.py -m optim.optimizer.lr=0.02,0.002,0.0002 optim.lr_scheduler.T_mult=1,2 optim.optimizer.weight_decay=0,1e-5

You can explore aggregate statistics or compare and analyze each run in the W&B dashboard.


We recommend going through at least the Basic Tutorial and the docs about Instantiating objects with Hydra.
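For example, a small sketch of object instantiation from a config node with a _target_ key; the values below are illustrative, not the template's defaults:

# Sketch: hydra.utils.instantiate builds an object from a config node whose
# _target_ points to a class; extra kwargs can be passed at call time.
import torch
import hydra
from omegaconf import OmegaConf

cfg = OmegaConf.create({"optimizer": {"_target_": "torch.optim.Adam", "lr": 1e-3}})

model = torch.nn.Linear(10, 2)
optimizer = hydra.utils.instantiate(cfg.optimizer, params=model.parameters())
print(type(optimizer).__name__)  # Adam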

PyTorch Lightning

Lightning makes coding complex networks simple. It is not a high-level framework like Keras, but it enforces neat code organization and encapsulation.

You should be somewhat familiar with PyTorch and PyTorch Lightning before using this template.
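As a rough sketch of the organization Lightning imposes (and which the pl_modules / pl_data / run.py layout mirrors), a module, a datamodule, and a trainer fit together like this; all class names and the toy data are illustrative, not the template's code:

# Sketch: wiring a LightningModule and a LightningDataModule into a Trainer.
# Names and data are illustrative; the template builds these from the Hydra config.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class RandomDataModule(pl.LightningDataModule):
    def train_dataloader(self):
        x = torch.randn(256, 1, 28, 28)
        y = torch.randint(0, 10, (256,))
        return DataLoader(TensorDataset(x, y), batch_size=32)

class MiniClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.net(x.flatten(1)), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
trainer.fit(MiniClassifier(), datamodule=RandomDataModule())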

Environment Variables

System-specific variables (e.g. absolute paths to datasets) should not be under version control; otherwise, there will be conflicts between different users.

The best way to handle system-specific variables is through environment variables.

You can define new environment variables in a .env file in the project root. A copy of this file (e.g. .env.template) can be under version control to ease new project configurations.

To define a new variable, write inside .env:

export MY_VAR=/home/user/my_system_path

You can dynamically resolve the variable name from Python code with:

get_env('MY_VAR')

and in the Hydra .yaml configuration files with:

${env:MY_VAR}
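A minimal sketch of what such a helper could look like, using python-dotenv; the actual get_env implementation shipped with the template may differ:

# Sketch of a get_env helper backed by python-dotenv; assumes the .env file
# lives in the project root. The real helper in the template may behave differently.
import os
from typing import Optional
from dotenv import load_dotenv

load_dotenv()  # read .env, if present, into the process environment

def get_env(name: str, default: Optional[str] = None) -> str:
    value = os.environ.get(name, default)
    if value is None:
        raise KeyError(f"Environment variable '{name}' is not defined")
    return value

# Usage: data_root = get_env("MY_VAR")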
Comments
  • Curious if you checked out DAGsHub

    Hi @lucmos, this looks like an awesome repo. I stumbled on it while doing some research on project templates for ML projects. I'm one of the creators of DAGsHub which is a platform built on Git, DVC, and MLflow. It integrates with GitHub and provides a free DVC remote and MLflow server so that you can track experiments and share your data & models in one UI.

    Here's an example project to showcase the abilities: https://dagshub.com/OperationSavta/SavtaDepth

    It seems really in line with what you're creating here, and I would love to hear your thoughts about it.

    opened by deanp70 4
  • Streamlit UI - Weights and Biases login

    The template is really awesome.

    I had a small issue. When I ran the Streamlit UI without being logged in to Weights and Biases, the UI just hung with the loading status, without giving me any feedback about what was happening, so I had to log in to wandb first. I had to log in manually from the console. Is there any way to solve this issue? For example, to get feedback from the UI if I'm not logged in.

    Thanks!

    opened by andreim14 1
  • load_model

    Hi, this issue concerns the function from nn_core.serialization import load_model. Suppose I train a PyTorch model with class MyLightningModule, and that I saved the checkpoint in model_path. Suppose now that the class MyLightningModule has received some minor changes, e.g. a new class variable has been added. Let's call this version MyLightningModuleV2. When I load a model using this function, like:

    self.model = load_model(module_class=MyLightningModuleOld, checkpoint_path=Path(model_path), map_location=self.device).to(self.device).eval()

    I get an error because the checkpoint refers to the model of class MyLightningModule and therefore the new variable is (obviously) missing. To make it work, I need to load the model with the old version of the class, that is, MyLightningModule, and then manually set "model.new_variable" to the value I want, like the following:

    self.model = load_model(module_class=MyLightningModuleOld, checkpoint_path=Path(model_path), map_location=self.device).to(self.device).eval()
    self.model.new_variable = False
    

    It would be nice to have this option in the load_model function to avoid creating multiple versions of the same class.

    opened by framolfese 0
  • PyTorch Lightning EcoCI integration to check for compatibility with latest & upcoming releases

    Hey Valentino & Luca,

    I am just catching up with some bookmarks and remembered your repo here :). As someone who constantly fusses about the ideal project structure, that's actually pretty cool. I have been using an adapted version of the data science cookiecutter for generic ML projects, but nothing sophisticated like this here with code stubs.

    Haven't thoroughly played with it yet, though, besides creating an example folder and looking at the pl_module.py and datamodule.py files, which look good to me!

    In any case, long story short, I was wondering if you'd be interested in PyTorch Lightning's ecosystem CI to make sure that the template stays fresh and relevant with respect to upcoming version releases (comes with free CPU and multi-GPU CI tests): https://devblog.pytorchlightning.ai/stay-ahead-of-breaking-changes-with-the-new-lightning-ecosystem-ci-b7e1cf78a6c7

    If you are interested in that, I am sure my colleague @Borda would be happy to assist with questions & technical details -- he built this thing, so he probably knows best :)

    opened by rasbt 4
Releases (0.2.3)
  • 0.2.3 (Dec 15, 2022)

    What's Changed

    • Bump dependency versions by @lucmos in https://github.com/grok-ai/nn-template/pull/79
    • Version 0.2.3 by @lucmos in https://github.com/grok-ai/nn-template/pull/80

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.2...0.2.3

  • 0.2.2 (Jun 13, 2022)

    What's Changed

    • Update README.md by @Flegyas in https://github.com/grok-ai/nn-template/pull/70
    • Improve documentation by @Flegyas in https://github.com/grok-ai/nn-template/pull/71
    • Update documentation by @Flegyas in https://github.com/grok-ai/nn-template/pull/72
    • Add asciinema gif in the README and docs by @lucmos in https://github.com/grok-ai/nn-template/pull/74
    • Add papers by @lucmos in https://github.com/grok-ai/nn-template/pull/76
    • Update precommits versions by @lucmos in https://github.com/grok-ai/nn-template/pull/75
    • Version 0.2.2 by @lucmos in https://github.com/grok-ai/nn-template/pull/77

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.1...0.2.2

  • 0.2.1 (Mar 1, 2022)

    Changelog for nn-template 0.2.1 (2022-03-01)

    What's Changed

    • Fix status badge in the documentation by @lucmos in https://github.com/grok-ai/nn-template/pull/64
    • Minor fixes post release by @lucmos in https://github.com/grok-ai/nn-template/pull/65
    • Fix typos in the documentation by @mikcnt in https://github.com/grok-ai/nn-template/pull/67
    • Fix broken relative links due to mike root folder by @lucmos in https://github.com/grok-ai/nn-template/pull/68
    • Version 0.2.1 by @lucmos in https://github.com/grok-ai/nn-template/pull/69

    New Contributors

    • @mikcnt made their first contribution in https://github.com/grok-ai/nn-template/pull/67

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.0...0.2.1

  • 0.2.0 (Mar 1, 2022)

    We are very pleased to present NN Template 0.2.0!

    Changelog for nn-template 0.2.0 (2022-03-01)

    Summary

    • Cookiecutter parametrization
    • CI/CD Integration via GitHub Actions
    • Automate testing of your projects
    • Logic decoupling thanks to nn-template-core
    • Advanced restore options for trainings
    • Documentation website
    • Support for Python logging (with colors!)

    What's Changed

    • Refactor configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/8
    • Refactor project to a python package by @lucmos in https://github.com/grok-ai/nn-template/pull/10
    • Add tooling configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/9
    • Refactor codebase to be compliant to the pre-commits by @lucmos in https://github.com/grok-ai/nn-template/pull/11
    • Refactor the project root management by @lucmos in https://github.com/grok-ai/nn-template/pull/12
    • Added wandb to .gitignore by @Flegyas in https://github.com/grok-ai/nn-template/pull/14
    • Refactor logging by @lucmos in https://github.com/grok-ai/nn-template/pull/15
    • Enable pin-memory if not on CPU by @lucmos in https://github.com/grok-ai/nn-template/pull/16
    • Factor our PyTorch Module from the Lightning Module by @lucmos in https://github.com/grok-ai/nn-template/pull/17
    • Force the .cache folder to be in the PROJECT_ROOT by @lucmos in https://github.com/grok-ai/nn-template/pull/19
    • Add the configuration to the Lightning checkpoints by @lucmos in https://github.com/grok-ai/nn-template/pull/20
    • Use extend-ignore instead of ignore in .flake8 by @lucmos in https://github.com/grok-ai/nn-template/pull/21
    • Fix formatting by @lucmos in https://github.com/grok-ai/nn-template/pull/22
    • Log the code used in the current experiment to wandb by @lucmos in https://github.com/grok-ai/nn-template/pull/18
    • Functionalities decoupling via external library (nn-core). by @Flegyas in https://github.com/grok-ai/nn-template/pull/23
    • Add tests by @lucmos in https://github.com/grok-ai/nn-template/pull/24
    • Implement resuming behaviour by @lucmos in https://github.com/grok-ai/nn-template/pull/25
    • Refactor NNLogger usages by @lucmos in https://github.com/grok-ai/nn-template/pull/27
    • Add CI on pre-commits and tests by @lucmos in https://github.com/grok-ai/nn-template/pull/26
    • Remove some trigger from the Test Suite workflow by @lucmos in https://github.com/grok-ai/nn-template/pull/28
    • Overwrite Lightning logging configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/29
    • Ensure tags are defined asking interactively for them by @lucmos in https://github.com/grok-ai/nn-template/pull/30
    • Introduce the seed index concept by @lucmos in https://github.com/grok-ai/nn-template/pull/31
    • Force execution of init.py on direct execution by @lucmos in https://github.com/grok-ai/nn-template/pull/33
    • Move functions from template to core by @lucmos in https://github.com/grok-ai/nn-template/pull/34
    • Add functionality to upload the run files in the storage to wandb by @lucmos in https://github.com/grok-ai/nn-template/pull/35
    • Move ui_utils entirely to nn-core by @lucmos in https://github.com/grok-ai/nn-template/pull/36
    • Add dynamic parametrized badges for the Test Suite and docs by @lucmos in https://github.com/grok-ai/nn-template/pull/45
    • Fix files hashing in workflow cache keys by @lucmos in https://github.com/grok-ai/nn-template/pull/46
    • Add seed_index determinism test by @lucmos in https://github.com/grok-ai/nn-template/pull/44
    • Refactor references to organization name into grok-ai by @lucmos in https://github.com/grok-ai/nn-template/pull/48
    • Push the default version in mike on release by @lucmos in https://github.com/grok-ai/nn-template/pull/49
    • Improve docs status badge to monitor the github-pages environment by @lucmos in https://github.com/grok-ai/nn-template/pull/50
    • Fix mike rebasing and pushing logic on release by @lucmos in https://github.com/grok-ai/nn-template/pull/51
    • Add a DAG in the post hook interactive setup by @lucmos in https://github.com/grok-ai/nn-template/pull/47
    • Skip test if no dataset is provided by @Flegyas in https://github.com/grok-ai/nn-template/pull/52
    • Fix remote parametrization in the README by @lucmos in https://github.com/grok-ai/nn-template/pull/53
    • Fix type hint in dataset.py by @lucmos in https://github.com/grok-ai/nn-template/pull/55
    • Improve the "add git remote" message in the post hook by @lucmos in https://github.com/grok-ai/nn-template/pull/54
    • Update nn-template-core dependency to 0.0.7 by @lucmos in https://github.com/grok-ai/nn-template/pull/56
    • Update docs by @lucmos in https://github.com/grok-ai/nn-template/pull/57
    • Add custom collate function by @Flegyas in https://github.com/grok-ai/nn-template/pull/58
    • Set metadata as a cached property in DataModule by @Flegyas in https://github.com/grok-ai/nn-template/pull/59
    • Pass run tags to the WandbLogger by @Flegyas in https://github.com/grok-ai/nn-template/pull/60
    • Feature/bump core by @Flegyas in https://github.com/grok-ai/nn-template/pull/61
    • Version 0.2.0 by @Flegyas in https://github.com/grok-ai/nn-template/pull/62

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.1.0...0.2.0

Owner
Luca Moschella
PhD student at University of Rome La Sapienza in Computer Science.