Overview

classy logo

A PyTorch-based library for fast prototyping and sharing of deep neural network models.




Getting Started using classy

If this is your first time meeting classy, don't worry! We have plenty of resources to help you learn how it works and what it can do for you.

For starters, have a look at our amazing website and our documentation!

If you want to get your hands dirty right away, have a look at our base classy template. Also, we have a few examples that you can look at to get to know classy!

Installation

For a more in-depth installation guide (covering also installing from source and through docker), please visit our installation page.

If you are using one of our templates, there is a handy setup.sh script that runs the commands to create the environment and install classy for you.

Installing via pip

Setting up a virtual environment

We strongly recommend using Conda as your environment manager for deep learning / data science / machine learning work. We also recommend installing the PyTorch ecosystem before installing classy, following the instructions on pytorch.org.

If you already have a Python 3 environment you want to use, you can skip to the installing via pip section.

  1. Download and install Conda.

  2. Create a Conda environment with Python 3.7-3.9:

    conda create -n classy python=3.7
  3. Activate the Conda environment:

    conda activate classy

Installing the library and dependencies

Simply execute

pip install classy-core

and voilà! You're all set.

Looking for some adventures? Install nightly releases directly from PyPI! You will not regret it :)
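As a minimal sketch, assuming nightly builds are published as pre-release versions on PyPI (check the project's release notes for the exact channel):

pip install --pre classy-core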

Running classy

Once installed, classy is available as a command line tool. It offers a wide variety of subcommands, all listed below. Detailed guides and references for each command are available in the documentation. Every classy subcommand has a -h|--help flag detailing the arguments and options you can use (e.g., classy train -h).

classy train

In its simplest form, classy train lets you train a transformer-based neural network for one of the tasks supported by classy (see the documentation).

classy train sentence-pair path/to/dataset/folder-or-file -n my-model

The command above will train a model to predict a label given a pair of sentences as input (e.g., Natural Language Inference or NLI) and save it under experiments/my-model. This same model can be further used by all other classy commands which require a classy model (predict, evaluate, serve, demo, upload).
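For instance, a sentence-pair dataset can be provided as a .tsv (or .jsonl) file. A rough sketch of a tab-separated file, assuming one example per line with the two sentences followed by the label (check the input-format documentation for the exact columns your version expects):

A man is playing a guitar.	A person is playing an instrument.	entailment
A man is playing a guitar.	Nobody is making any sound.	contradiction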

classy predict

classy predict actually has two subcommands: interactive and file.

The first loads the model in memory and lets you try it out through the shell directly, so that you can test the model you trained and see what it predicts given some input. It is particularly useful when your machine cannot open a port for classy demo.

The second, instead, works on a file and produces an output where, for each input, it associates the corresponding predicted label. It is very useful when doing pre-processing or when you need to evaluate your model (although we offer classy evaluate for that).
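For example (a sketch; the exact arguments may differ across versions, so run classy predict -h to check):

classy predict interactive my-model

classy predict file my-model path/to/input/file -o path/to/output/file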

classy evaluate

classy evaluate lets you evaluate your model on standard metrics for the task your model was trained on. Simply run

classy evaluate my-model path/to/file -o path/to/output/file

and it will dump the evaluation at path/to/output/file.

classy serve

classy serve loads the model in memory and spawns a REST API you can use to query your model with any REST client.
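A quick sketch (the port option shown here is an assumption; see classy serve -h for the actual flags):

classy serve my-model -p 8000

Once the server is running, you can query it with curl, Python, or any other HTTP client; the documentation describes the exposed routes and the expected request format.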

classy demo

classy demo spawns a Streamlit interface which lets you quickly show and query your model.
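For example, reusing the model trained above (see classy demo -h for additional options):

classy demo my-model

Streamlit prints a local URL that you can open in your browser to interact with the model.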

classy describe

classy describe --dataset path/to/dataset runs some common metrics on a file formatted for the specific task. Great tool to run before training your model!

classy upload

classy upload lets you upload your classy-trained model to the HuggingFace Hub so that other users can download and use it. (NOTE: you need a HuggingFace Hub account in order to upload to their hub.)

Models uploaded via classy upload will be available for download by other classy users by simply executing classy download <username>@<model-name>.
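For example (a sketch; run classy upload -h for the full list of options):

classy upload my-model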

classy download

classy download downloads a previously uploaded classy-trained model from the HuggingFace Hub and stores it on your machine so that it is usable with any other classy command which requires a trained model (predict, evaluate, serve, demo, upload).

You can find SunglassesAI's list of pre-trained models here.

Models uploaded via classy upload are available via classy download <username>@<model-name>.

Enabling Shell Completion

To install shell completion, activate your conda environment and then execute

classy --install-autocomplete

From now on, whenever you activate your conda environment with classy installed, you are going to have autocompletion when pressing [TAB]!

Issues

You are more than welcome to file issues with feature requests, bug reports, or general questions. If you have already found a solution to your problem, don't hesitate to share it. Suggestions for new best practices and tricks are always welcome!

Contributions

We warmly welcome contributions from the community. If it is your first time as a contributor, we recommend you start by reading our CONTRIBUTING.md guide.

Small contributions can be made directly in a pull request. For major features, we recommend you first create an issue proposing a design so that it can be discussed before you risk wasting time.

Pull requests (PRs) must have one approving review and no requested changes before they are merged. As classy is primarily driven by SunglassesAI, we reserve the right to reject or revert contributions that we don't think are good additions or might not fit into our roadmap.

Comments
  • Documentation overhaul for release

    We need to restructure the documentation.

    • Intro (feature overview: supported tasks and commands)
    • Getting Started (big classy tutorial):
      • Installation (maybe move before "getting started"? right after Intro)
      • Basic (everything that does not require touching code):
        • Intro (explanation of raw dataset and end goal)
        • Formatting the data (convert data to tsv / jsonl)
        • Choosing a profile (high level - the profile list goes in the reference manual)
        • Train (quick overview of how to train from the CLI with classy train - no frills)
        • Inference (quick overview of inference commands + ref to CLI sections)
      • Customizing things
        • Template (we introduce the template as the go-to way to work with classy)
        • Changing profile
        • Custom data format
        • Custom model
        • [...] all other customizations
    • Reference manual:
      • CLI (detailed description) of classy commands
        • train
        • predict
        • other inference commands (evaluate, serve, demo)
        • up/download
        • describe (maybe as first?)
      • Tasks and input formats (joined together, for each task we show both formats - currently they're separated)
      • Profiles
      • Mixins? (what should we say about them?)
    documentation priority: high 
    opened by Valahaar 5
  • How to fine-tune the existing model

    Hi, I'm always frustrated when trying to fine-tune an existing checkpoint. There is no way to access it, let alone continue training the SOTA model with a different loss or use it as a submodule. I would be very grateful if you could provide this functionality.

    opened by yyq63728198 4
  • Boundaries of adjacent entities

    Hello, using the current token annotation format, I can't set the boundaries of the following entities (they all have the same label):

    • name → personal_data
    • address → personal_data
    • identity card number → personal_data

    ...in the following example please tell me your name address and identity card number

    The resulting labeled sentence would be O O O O PERSONAL_DATA PERSONAL_DATA O PERSONAL_DATA PERSONAL_DATA PERSONAL_DATA

    With this format, it is impossible to know if name and address are one or two different entities. Did you consider this issue?

    Thanks!

    opened by JuanFF 2
  • Cannot modify from command line a parameter introduced in a custom Profile

    Describe the bug If you try to start a training run while overriding from the command line a parameter that was introduced in a custom profile (i.e., it was not in the default profile for the task you are doing), the process crashes.

    To Reproduce Create a custom model (CustomModel) with a custom parameter (custom_parameter). Then, create your own profile for the task (custom_profile). The model section of your profile will be something like this:

    model:
      _target_: my_project.my_models.CustomModel
      transformer_model: bert-base-cased
      custom_parameter: custom_value
    

    Then run from the command line:

    classy train my_task my_data -n my_model --profile custom_profile -c model.custom_parameter=new_value
    

    Expected behaviour The model should start with the model.custom_parameter set to the new value (new_value).

    Actual behaviour The training process crashes with the following hydra error:

    Could not override 'model.custom_parameter'.
    To append to your config use +model.custom_parameter=new_value
    Key 'custom_parameter' is not in struct
        full_key: model.custom_parameter
        object_type=dict
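
    A possible workaround, suggested by the hydra error message itself (not verified here), is to add the parameter with the append syntax:

    classy train my_task my_data -n my_model --profile custom_profile -c +model.custom_parameter=new_value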
    

    Desktop (please complete the following information):

    • OS: Ubuntu Server 20.04
    • PyTorch Version: 1.9.0
    • Classy Version: 0.1.0
    bug priority: high 
    opened by edobobo 1
  • classy export / import

    Is your feature request related to a problem? Please describe. How do we easily move specific runs across machines (e.g., train on a docker instance, run a demo on a local machine)? At the moment, this has to be done manually, which is painful considering that we have to include everything under a certain folder and then recreate the same structure on the target machine.

    Describe the solution you'd like classy export <experiment> should produce a zip/tgz containing everything necessary to move experiments (or single runs) across machines. We sort of do this already with classy upload / download, but just rely on the hub. Instead of uploading, we can just create the zip in the local folder (or a path specified by the user).

    Then, we should be able to run classy import <classy-exported-file.zip> to find our experiment in our local folder (better yet, every classy inference command could work with the export! :) )

    enhancement 
    opened by Valahaar 0
  • Profiles must have yaml extension

    Custom profiles are only recognized if their extension is yaml (and the error message was completely unrelated and impossible to debug easily).

    We should allow profiles to end in yml as well :)

    bug 
    opened by Valahaar 0
  • Fixed resume from checkpoint

    Fixes the bug: the trainer parameters were not loaded correctly when the model was loaded from a checkpoint. E.g., val_check_interval was not set, so validation callbacks were not called correctly.

    opened by andreim14 0
  • Resume bug

    Describe the bug The trainer parameters were not loaded correctly when the model was loaded from a checkpoint. E.g., val_check_interval was not set, so validation callbacks were not called correctly.

    To Reproduce Steps to reproduce the behaviour:

    1. Train a model for x steps
    2. Stop the training
    3. Restore the training from the checkpoint using resume_from parameter

    Expected behaviour The trainer parameters should be preserved. E.g if val_check_interval is set, the validation callbacks should be called accordingly.

    Actual behaviour The validation callbacks are not called and val_check_interval is set to 0.

    Desktop (please complete the following information):

    • OS: Ubuntu 20.04
    • PyTorch Version: 1.11
    bug 
    opened by andreim14 0
  • Token offsets computation fails when input is truncated

    Describe the bug Title. In classy/data/dataset/hf/classification.py#L89 we invoke self.tokenize (#L109) which correctly truncates the input.

    The issue arises in tuple(tok_encoding.word_to_tokens(wi)) for wi in range(len(tokens)): when a token is not included in the input due to truncation, word_to_tokens returns None, and tuple(None) raises a TypeError. This triggers the catch condition and makes the function return None, which then cannot be unpacked in input_ids, token_offsets = self.tokenize(token_sample.tokens), resulting in another unhandled exception that finally crashes classy.

    To Reproduce In the token classification setting, input a sentence that has too many tokens (or reduce truncation to obtain the same effect).

    Expected behaviour I think there is a way to know how many of the original tokens were kept, and we can iterate over them instead of len(tokens), otherwise we can just iterate until word_to_tokens(wi) is not None. Comments?

    bug 
    opened by Valahaar 0
  • Allow users to choose which evaluation keys to log to progress bar

    Should be pretty easy to implement: we could employ two methods / properties / classmethods for Evaluation, one for a white list of keys to log and the other for a black list, with the white list taking precedence over the black list (so if the user specifies both, we first take the subset in the white list and then remove the keys that appear in the black list). By default, everything is logged to the progress bar (as it is now).

    opened by Valahaar 0
  • Support pre-trained Transformers models at inference time

    Is your feature request related to a problem? Please describe. Huggingface Transformers has several task-pretrained models in its hub (e.g. facebook/bart-large-xsum). classy automatically supports using them at training time for further fine-tuning, but doesn't support using them directly at inference time. The current way to go about it is essentially weight porting.

    Rather, classy should support something like classy demo ... facebook/bart-large-xsum ...

    Describe the solution you'd like Personally, I'd like a solution that's symmetric with classy train. So, if you train bart with classy train generation data/xsum --profile bart-large -c transformer_model=facebook/bart-large-xsum, you could do inference tasks such as classy demo with something like classy demo "generation data/xsum --profile bart-large -c transformer_model=facebook/bart-large-xsum".

    I am unsure whether this would be friendly (same syntax) or unfriendly (overkill syntax), and suggestions/help are welcome.

    enhancement priority: low 
    opened by poccio 0
Releases(v0.3.2)
  • v0.3.2(Sep 8, 2022)

  • v0.3.1(Apr 25, 2022)

    What's Changed

    • README update to version 0.3.0 + bug fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/74
    • Hotfix for classy v0.3.0 by @poccio in https://github.com/sunglasses-ai/classy/pull/75

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.3.0...v0.3.1

  • v0.3.0(Apr 22, 2022)

    What's Changed

    • Feature/multi dataset implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/65
    • training coordinates fix by @edobobo in https://github.com/sunglasses-ai/classy/pull/68
    • implement choice of evaluation output in classy evaluate by @Valahaar in https://github.com/sunglasses-ai/classy/pull/67
    • [feature] changed profile behavior by @poccio in https://github.com/sunglasses-ai/classy/pull/70
    • classy import/export by @Valahaar in https://github.com/sunglasses-ai/classy/pull/66
    • T5 support and bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/71

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.2.1...v0.3.0

  • v0.2.1(Mar 2, 2022)

    What's Changed

    • Feature/adam implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/60
    • Misc/v0.2.1 by @edobobo in https://github.com/sunglasses-ai/classy/pull/63

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.2.0...v0.2.1

  • v0.2.0(Jan 26, 2022)

    What's Changed

    • Prediction output reification by @edobobo in https://github.com/sunglasses-ai/classy/pull/52
    • Feature/interfaces enhancements by @poccio in https://github.com/sunglasses-ai/classy/pull/53
    • Devops/black precommit by @poccio in https://github.com/sunglasses-ai/classy/pull/54
    • profile can now be provided as a path by @poccio in https://github.com/sunglasses-ai/classy/pull/57
    • Feature: Reference API by @Valahaar in https://github.com/sunglasses-ai/classy/pull/58

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.2...v0.2.0

  • v0.1.2(Dec 3, 2021)

    What's Changed

    We now support specifying an entity (a team for wandb), a group and tags in logger's config.

    • Hotfix logging by @edobobo in https://github.com/sunglasses-ai/classy/pull/51

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.1...v0.1.2

  • v0.1.1(Nov 22, 2021)

    At some point setuptools must have broken; we have updated it and packaging should now work properly. Release v0.1.0 is being removed from PyPI to avoid problems.

    What's Changed

    • Release 0.1.1: fix to packaging by @Valahaar in https://github.com/sunglasses-ai/classy/pull/50

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.0...v0.1.1

  • v0.1.0(Nov 19, 2021)

    What's Changed

    • Sentence pair classification by @Valahaar in https://github.com/sunglasses-ai/classy/pull/1
    • Classy cli + setup.py by @Valahaar in https://github.com/sunglasses-ai/classy/pull/2
    • Serve by @poccio in https://github.com/sunglasses-ai/classy/pull/3
    • Evaluate + Demo + Bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/4
    • Wandb support added by @edobobo in https://github.com/sunglasses-ai/classy/pull/5
    • resume-train implemented by @edobobo in https://github.com/sunglasses-ai/classy/pull/6
    • Dataset improvement by @edobobo in https://github.com/sunglasses-ai/classy/pull/7
    • Dataset shuffling by @edobobo in https://github.com/sunglasses-ai/classy/pull/8
    • Small functionalities and bug fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/9
    • Extractive qa by @edobobo in https://github.com/sunglasses-ai/classy/pull/10
    • profiles implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/11
    • Optimizer by @edobobo in https://github.com/sunglasses-ai/classy/pull/13
    • Describe by @edobobo in https://github.com/sunglasses-ai/classy/pull/14
    • Small features by @edobobo in https://github.com/sunglasses-ai/classy/pull/15
    • Demo UI by @poccio in https://github.com/sunglasses-ai/classy/pull/17
    • Autocomplete experiments by @Valahaar in https://github.com/sunglasses-ai/classy/pull/12
    • Consec by @Valahaar in https://github.com/sunglasses-ai/classy/pull/16
    • classy download implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/18
    • Describe update by @edobobo in https://github.com/sunglasses-ai/classy/pull/19
    • Profiles by @edobobo in https://github.com/sunglasses-ai/classy/pull/22
    • bunch of bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/21
    • fixes for pip packaging by @Valahaar in https://github.com/sunglasses-ai/classy/pull/20
    • Bump nltk from 3.4.5 to 3.6.5 by @dependabot in https://github.com/sunglasses-ai/classy/pull/25
    • updates and bug fixes on notebooks datasets by @edobobo in https://github.com/sunglasses-ai/classy/pull/23
    • Generation profiles by @poccio in https://github.com/sunglasses-ai/classy/pull/24
    • typing fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/27
    • qa fixes + special tokens by @poccio in https://github.com/sunglasses-ai/classy/pull/28
    • Qa bug fix by @poccio in https://github.com/sunglasses-ai/classy/pull/29
    • Develop by @edobobo in https://github.com/sunglasses-ai/classy/pull/30
    • Demo update by @Valahaar in https://github.com/sunglasses-ai/classy/pull/31
    • Develop by @edobobo in https://github.com/sunglasses-ai/classy/pull/32
    • Helpers added to every parameter in the cli by @edobobo in https://github.com/sunglasses-ai/classy/pull/37
    • Bringing (updated) docs into develop by @Valahaar in https://github.com/sunglasses-ai/classy/pull/41
    • Profile cli issue bug fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/42
    • Batch size and evaluation by @poccio in https://github.com/sunglasses-ai/classy/pull/43
    • release 0.1.0 by @Valahaar in https://github.com/sunglasses-ai/classy/pull/44
    • develop rebase + ci fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/45
    • ci fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/46

    Full Changelog: https://github.com/sunglasses-ai/classy/commits/v0.1.0
