A Python library for self-supervised learning on images.

Overview



Lightly is a computer vision framework for self-supervised learning.

We, at Lightly, are passionate engineers who want to make deep learning more efficient. We want to help popularize the use of self-supervised methods to understand and filter raw image data. Our solution can be applied before any data annotation step and the learned representations can be used to analyze and visualize datasets as well as for selecting a core set of samples.

Tutorials

Want to jump to the tutorials and see lightly in action?

Benchmarks

Currently implemented models and their accuracy on CIFAR-10. All models have been evaluated using kNN. We report the maximum test accuracy over the epochs as well as the peak GPU memory consumption. All models in this benchmark use the same augmentations and the same ResNet-18 backbone. Training precision is set to FP32, and SGD with a cosine learning-rate schedule is used as the optimizer.

Model   | Epochs | Batch Size | Test Accuracy | Peak GPU Usage
--------|--------|------------|---------------|---------------
MoCo    | 200    | 128        | 0.83          | 2.1 GB
SimCLR  | 200    | 128        | 0.78          | 2.0 GB
SimSiam | 200    | 128        | 0.73          | 3.0 GB
MoCo    | 200    | 512        | 0.85          | 7.4 GB
SimCLR  | 200    | 512        | 0.83          | 7.8 GB
SimSiam | 200    | 512        | 0.81          | 7.0 GB
MoCo    | 800    | 512        | 0.90          | 7.2 GB
SimCLR  | 800    | 512        | 0.89          | 7.7 GB
SimSiam | 800    | 512        | 0.91          | 6.9 GB
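
For reference, a kNN evaluation of frozen embeddings boils down to a few lines. The sketch below uses scikit-learn, with random arrays standing in for the CIFAR-10 features and labels produced by a frozen backbone; it is illustrative only, not the exact benchmark code.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder arrays standing in for embeddings of a frozen backbone.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(5000, 512)).astype(np.float32)
train_labels = rng.integers(0, 10, size=5000)
test_emb = rng.normal(size=(1000, 512)).astype(np.float32)
test_labels = rng.integers(0, 10, size=1000)

# Cosine distance is a common choice for contrastive embeddings.
knn = KNeighborsClassifier(n_neighbors=200, metric="cosine")
knn.fit(train_emb, train_labels)
print(f"kNN test accuracy: {knn.score(test_emb, test_labels):.2f}")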

Terminology

Below you can see a schematic overview of the different concepts present in the lightly Python package. The terms in bold are explained in more detail in our documentation.

Overview of the lightly pip package

Quick Start

Lightly requires Python 3.6+. We recommend installing Lightly in a Linux or macOS environment.

Requirements

  • hydra-core>=1.0.0
  • numpy>=1.18.1
  • pytorch_lightning>=0.10.0
  • requests>=2.23.0
  • torchvision
  • tqdm

Installation

You can install Lightly and its dependencies from PyPI with:

pip3 install lightly

We strongly recommend that you install Lightly in a dedicated virtualenv, to avoid conflicting with your system packages.

Command-Line Interface

Lightly is also accessible through a command-line interface (CLI). To train a SimCLR model on a folder of images, you can simply run the following command:

lightly-train input_dir=/mydataset

To create an embedding of a dataset you can use:

lightly-embed input_dir=/mydataset checkpoint=/mycheckpoint

The embeddings, together with the corresponding filenames, are stored in a human-readable .csv file.
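
As a sketch, the file can be inspected with pandas. The file name and column layout below are assumptions; check the generated file for the exact header.

import pandas as pd

# "embeddings.csv" is an example path; use the file produced by lightly-embed.
df = pd.read_csv("embeddings.csv")
print(df.columns)  # inspect the exact header (filenames, embedding dimensions, labels)
print(df.head())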

Next Steps

Head to the documentation and see the things you can achieve with Lightly!

Development

To install dev dependencies (for example to contribute to the framework) you can use the following command:

pip3 install -e ".[dev]"

For more information about how to contribute have a look here.

Running Tests

Unit tests are within the tests folder, and we recommend running them using pytest. There are two test configurations available. By default, only a subset of the tests is run; this is faster and should take less than a minute. You can run it using

python -m pytest -s -v

To run all tests (including the slow ones) you can use the following command.

python -m pytest -s -v --runslow

Code Linting

We provide a Pylint config following the Google Python Style Guide.

You can run the linter from your terminal either on a folder

pylint lightly/

or on a specific file

pylint lightly/core.py

Further Reading

Self-supervised Learning:

FAQ

  • Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

    • Self-supervised learning has become increasingly popular among scientists over the last year because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on your dataset, you can make sure that the representations have all the necessary information about your images.
  • How can I contribute?

    • Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute code are in our contribution guide.
  • Is this framework free?

    • Yes, this framework is completely free to use, and we provide the code. We believe that we need to make training deep learning models more data-efficient to achieve widespread adoption. One step towards this goal is leveraging self-supervised learning. The company behind Lightly is committed to keeping this framework open-source.
  • If this framework is free, how is the company behind lightly making money?

    • Training self-supervised models is only part of the solution. The company behind Lightly focuses on processing and analyzing embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. This framework acts as an interface for our platform to easily upload and download datasets, embeddings, and models. While the platform will charge for additional features, this framework will always remain free of charge (even for commercial use).
Comments
  • Add Masked Autoencoder implementation

    The paper Masked Autoencoders Are Scalable Vision Learners (https://arxiv.org/abs/2111.06377) suggests that a masked autoencoder (similar to pre-training in NLP) works very well as a pretext task for self-supervised learning. Let's add it to Lightly.

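    As a rough illustration of the pretext task (a minimal sketch with toy linear layers standing in for the ViT encoder/decoder, not a proposal for the Lightly API): mask most of the patches and reconstruct only the masked ones.

    import torch
    import torch.nn as nn

    def patchify(images, patch_size=16):
        # (B, 3, H, W) -> (B, num_patches, patch_size * patch_size * 3)
        B, C, H, W = images.shape
        h, w = H // patch_size, W // patch_size
        x = images.reshape(B, C, h, patch_size, w, patch_size)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(B, h * w, patch_size * patch_size * C)

    images = torch.randn(8, 3, 224, 224)
    patches = patchify(images)  # (8, 196, 768)
    B, N, D = patches.shape

    # Randomly mask 75% of the patches per image.
    num_masked = int(N * 0.75)
    masked_idx = torch.rand(B, N).argsort(dim=1)[:, :num_masked]
    mask = torch.zeros(B, N, dtype=torch.bool).scatter_(1, masked_idx, True)

    # Toy stand-ins; a real MAE encodes only the visible patches with a ViT
    # and reconstructs the masked ones with a lightweight decoder.
    encoder, decoder = nn.Linear(D, 128), nn.Linear(128, D)
    reconstruction = decoder(encoder(patches))

    # The loss is computed on the masked patches only.
    loss = ((reconstruction - patches) ** 2)[mask].mean()
    loss.backward()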

    type: idea 
    opened by IgorSusmelj 19
  • Feature proposals

    Hi; first of all thanks for making this library; it looks very useful.

    There are two papers that I'd like to try and implement in your repo:

    Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere

    This paper introduces a contrastive loss function with some quite different properties from the more cited ones. Out of all the loss functions I've tried for my projects, this is the one I had the most success with, and also the one that a priori seems the most sensible to me of what's currently been published. The paper provides source code, and it's really a trivially implemented additional loss function. I've got experience implementing it, plus some generalizations I came up with that I could provide. It should be an easy addition to this package. My motivation for contributing is a selfish one, as having it included here in the benchmarks would help me better understand the relative strengths and weaknesses on different types of datasets.
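
    For reference, the two losses from the paper are only a few lines each; a rough sketch following the formulas in the paper (alpha=2 and t=2 are the defaults suggested there):

    import torch
    import torch.nn.functional as F

    def align_loss(x, y, alpha=2):
        # x, y: L2-normalized embeddings of positive pairs, shape (N, D)
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        # x: L2-normalized embeddings, shape (N, D)
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

    x = F.normalize(torch.randn(32, 128), dim=1)
    y = F.normalize(torch.randn(32, 128), dim=1)
    loss = align_loss(x, y) + (uniform_loss(x) + uniform_loss(y)) / 2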

    Scaling Deep Contrastive Learning Batch Size with Almost Constant Peak Memory Usage

    I also recently came across this paper. It also provides (rather ugly, imo) torch code. Implementing it in this package first would be easier for me than implementing it in my private repos, since it'd be easier to properly test the implementation using the infrastructure provided here. The title is quite descriptive; given the importance of batch size for within-batch mining techniques, and the fact that many of us are working with single GPUs, being able to scale batch sizes arbitrarily is super useful, and I think this paper has the right idea of how to go about it.
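
    The core trick in that paper, as I understand it (a sketch of the idea, not their code): compute all embeddings without a graph, cache the loss gradients with respect to the embeddings, then re-encode chunk by chunk and backpropagate the cached gradients, so memory scales with the chunk size instead of the batch size.

    import torch

    def grad_cache_step(model, batch, chunk_size, loss_fn, optimizer):
        chunks = batch.split(chunk_size)

        # 1) Forward pass without a graph to collect all embeddings.
        with torch.no_grad():
            embeddings = torch.cat([model(c) for c in chunks])

        # 2) Compute the loss on the full batch and cache the gradients
        #    with respect to the embeddings.
        embeddings = embeddings.detach().requires_grad_()
        loss_fn(embeddings).backward()
        grad_chunks = embeddings.grad.split(chunk_size)

        # 3) Re-encode each chunk with a graph and backpropagate the cached
        #    gradients; peak memory now depends on chunk_size, not the batch size.
        for c, g in zip(chunks, grad_chunks):
            model(c).backward(gradient=g)

        optimizer.step()
        optimizer.zero_grad()

    # Toy usage with a stand-in model and loss.
    model = torch.nn.Linear(16, 8)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    grad_cache_step(model, torch.randn(64, 16), chunk_size=8,
                    loss_fn=lambda e: e.pow(2).mean(), optimizer=optimizer)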

    The contribution guide mentions discussing such additions first, so tell me what you think!

    type: idea 
    opened by EelcoHoogendoorn 14
  • Adding DINO to lightly

    First of all, kudos on creating this amazing library ❤️.

    I think all of us in the self-supervised learning community have heard of DINO. For the past couple of weeks, I have been trying to port Facebook's DINO implementation to PL. I have implemented it, at least for my use case, which is a Kaggle competition. I initially looked at lightly for an implementation but did not see any, so I borrowed and adapted code from the original implementation and converted it into Lightning.

    Honestly it was a tedious task. I was wondering if you guys would be interested in adding DINO to lightly.

    Here's how I think we should structure the implementation:

    1. Augmentations: DINO heavily relies on the multi-crop strategy. Since lightly already has a collate_fn for implementing augmentations, the DINO augmentations can be implemented in the same way.
    2. Model forward pass and model heads: The forward pass is a bit tricky since we have to deal with local (multi-crop) and global crops, so this needs to be implemented as an nn.Module like the other heads in lightly.
    3. Loss functions: I am not sure how this should be implemented for lightly, although FB has a custom class for it; a rough sketch follows below this list.
    4. Utility functions: FB used some tricks for stable training and results, so these need to be included as well.
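
    For point 3, a rough sketch of the loss as described in the paper (cross-entropy between centered, sharpened teacher outputs and the student outputs across crops; not a proposal for the final Lightly API):

    import torch
    import torch.nn.functional as F

    def dino_loss(student_out, teacher_out, center,
                  student_temp=0.1, teacher_temp=0.04):
        # student_out: list of logits, one tensor per crop (global + local)
        # teacher_out: list of logits for the global crops only
        total, n_terms = 0.0, 0
        for t_idx, t in enumerate(teacher_out):
            # Teacher: centering + sharpening, no gradient.
            t = F.softmax((t - center) / teacher_temp, dim=-1).detach()
            for s_idx, s in enumerate(student_out):
                if s_idx == t_idx:
                    continue  # skip the crop the teacher already saw
                s = F.log_softmax(s / student_temp, dim=-1)
                total += -(t * s).sum(dim=-1).mean()
                n_terms += 1
        return total / n_terms

    # Toy usage: 2 global + 4 local crops, batch of 8, 256-dim head output.
    center = torch.zeros(256)
    teacher_out = [torch.randn(8, 256) for _ in range(2)]
    student_out = [torch.randn(8, 256, requires_grad=True) for _ in range(6)]
    loss = dino_loss(student_out, teacher_out, center)
    # In the paper, the center itself is updated as an EMA of the teacher outputs.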

    I have used Lightning to implement all this, and so far, at least for my use case, I was able to train vit-base-16 given my hardware constraints.

    It goes without saying that I would personally like to work on the PR :heart:

    Please let me know.

    opened by Atharva-Phatak 11
  • Queue for SwaV

    Aims to solve https://github.com/lightly-ai/lightly/issues/1006

    Implements a queue for SwaV with minimal code changes in the training loop, as per the details from the paper:

    Rationale


    A trick for small batch sizes: start using the queue prototypes at a later epoch.
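
    Roughly, the queue is a FIFO of past (detached) projections that is only consulted once it is full and a start epoch has been reached; a sketch of the concept (not this PR's code):

    import torch

    class SwaVQueue:
        def __init__(self, size=3840, dim=128, start_epoch=15):
            self.features = torch.zeros(size, dim)
            self.start_epoch = start_epoch  # only use the queue after this epoch
            self.num_seen = 0

        @torch.no_grad()
        def add(self, batch_features):
            n = batch_features.shape[0]
            self.features = torch.roll(self.features, shifts=n, dims=0)
            self.features[:n] = batch_features.detach()
            self.num_seen += n

        def get(self, epoch):
            # Returns extra features for the assignment step, or None if the
            # queue is not ready yet.
            if epoch < self.start_epoch or self.num_seen < len(self.features):
                return None
            return self.features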

    opened by ibro45 10
  • Malte lig 1262 advanced selection external docs

    Description

    • adds docs for selection using the new config
    • adapts existing docs (getting started, docker active learning)
    • already included: DIVERSIFY->DIVERSITY

    Tabs vs. Dropdowns

    Here, tabs are used for the SelectionStrategy and dropdowns for the configuration examples.

    opened by MalteEbner 8
  • MSN: Masked Siamese Networks for Label-Efficient Learning

    Hello, I would like to share this new interesting SSL method proposed by Meta that achieves SOTA results and shows interesting generalization properties.

    It is based on ViT, and I think it could be worth integrating it into this codebase. Also, the official implementation is available :)

    MSN paper, MSN GitHub code

    opened by masc-it 8
  • recipe for using the Lightly Docker as API worker

    opened by MalteEbner 8
  • Guarin lig 364 add download urls function to lightlyapi

    Draft for downloading samples with readurl. I included an iterable dataset for completeness.

    The following code takes 20s to load 1k samples:

    import torch
    from tqdm import tqdm
    
    from lightly.data.iterable_dataset import ImageIterableDataset
    from lightly.api import ApiWorkflowClient
    
    client = ApiWorkflowClient(
        token="token",
        dataset_id="dataset id"
    )
    
    samples = client.download_raw_samples(client.dataset_id)
    dataset = ImageIterableDataset(samples)
    dataloader = torch.utils.data.DataLoader(
        dataset,
        num_workers=8,
        batch_size=8
    )
    
    for image, filename, frame_idx in tqdm(dataloader):
        pass
    

    The same code for videos runs in 10s for 5 videos with 300 frames each using only 2 workers.

    opened by guarin 8
  • enhance the upload speed

    Using lightly-upload and lightly-magic is not very fast and does not utilise all the cores. There seems to be a bottleneck somewhere.

    Tasks

    • [ ] figure out where most of the time goes when processing
      • [ ] split the images/s number into images/s for processing and images/s for uploading
    • [ ] ensure that when num_workers is set, we really process with that number of workers
    type: enhancement 
    opened by japrescott 8
  • [Active Learning] Compute Active learning scores for object detection

    Create a scorer for object detection that can compute scores like prediction entropy, ...

    Input of predictions: for each of the N samples, there are B bounding boxes, each giving a probability distribution over the C classes.
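
    One possible per-image score from such predictions is the entropy of the class probabilities per box, aggregated over boxes (a sketch, not the final scorer interface):

    import numpy as np

    def entropy_scores(predictions):
        # predictions: list of N arrays, each of shape (B_i, C) with class
        # probabilities per bounding box; B_i may differ per image.
        scores = []
        for probs in predictions:
            if len(probs) == 0:
                scores.append(0.0)  # assumption: no detections -> score 0
                continue
            entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
            scores.append(float(entropy.max()))  # e.g. take the most uncertain box
        return scores

    # Toy example: 2 images, 3 classes.
    preds = [np.array([[0.8, 0.1, 0.1], [0.4, 0.3, 0.3]]),
             np.array([[0.34, 0.33, 0.33]])]
    print(entropy_scores(preds))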

    • [x] Find a good way to represent the model predictions of an object detection model
    • [x] Decide which kind of models to support (Yolo, SSD, ...), see also https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3
    • [x] Decide which kinds of scores to compute and find a good way to compute them, e.g. see https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3 or https://arxiv.org/pdf/2004.04699.pdf
    • [x] Implement it
    type: enhancement 
    opened by MalteEbner 8
  • NNMemoryBank not working with DataParallel

    I have been using the NNMemoryBank as a component in my module and noticed that at each forward pass NNMemoryBank.bank is equal to None. This only occurs when my module is wrapped in DataParallel. As a result, throughout training my NN pairs are always random noise (surprisingly, this only hurt the contrastive learning performance by a few percentage points on linear probing??).

    Here is a simple test case that highlights the issue:

    import torch
    from lightly.models.modules import NNMemoryBankModule
    memory_bank = NNMemoryBankModule(size=1000)
    print(memory_bank.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.bank)
    
    memory_bank = NNMemoryBankModule(size=1000)
    memory_bank = torch.nn.DataParallel(memory_bank, device_ids=[0,1])
    print(memory_bank.module.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.module.bank)
    

    The output of the first block is None and then a torch.Tensor, as expected. The output of the second block is None both times.

    opened by kfallah 7
  • Malte lig 2288 download corrupted files as artifact pip

    Description

    • Generated new openapi code
    • Adds the download_compute_worker_run_corruptness_check_information method to download the new corruptness check artifacts

    How was it tested?

    Manually running the example code.

    Why no unit tests?

    The other artifact download functions are not tested either, because the code is really tiny. Thus I kept it consistent.

    ONLY MERGE AFTER https://github.com/lightly-ai/lightly-core/pull/2212

    opened by MalteEbner 1
  • MemoryBankModule - no distributed all gather prior to queueing a batch

    If I'm not mistaken, the batch is not all_gathered before the queue is updated. I was just comparing the code against the one it was based on (MoCo's memory bank) and noticed this: https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L55

    Was this done on purpose?

    Is it because different processes would use the same queued batches, unlike the DistributedDataLoader batches, which are indexed in a DDP-aware way?
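
    For reference, the MoCo code linked above gathers the batch across all processes before enqueueing, roughly like this (a sketch; only meaningful when torch.distributed is initialized):

    import torch
    import torch.distributed as dist

    @torch.no_grad()
    def concat_all_gather(tensor):
        # Collect the batch from every process so each rank enqueues the
        # full distributed batch, as in the MoCo reference implementation.
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.cat(gathered, dim=0)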

    opened by ibro45 0
  • how can I reconstruct images (MAE)

    Hi and thank you for writing this great library.

    I'm trying to figure out if my MAE training worked, beyond looking at MSE numbers. I input an image of shape 256x256 (RGB) and then get predictions of shape b x 38 x 3072. I'm using the default torchvision.models.vit_b_32.

    I am not sure how to reshape this back into an image. It also just doesn't make sense based on the numbers: shouldn't there be 64 patches of 3072?
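
    Assuming one flattened 32x32x3 patch per token in row-major order (3072 = 32*32*3, which should indeed give 8*8 = 64 tokens for a 256x256 input), a reconstruction sketch could look like the snippet below; with 38 tokens the sequence likely contains something other than plain image patches, so the token layout is worth checking first.

    import torch

    def unpatchify(patches, patch_size=32, image_size=256):
        # patches: (B, num_patches, patch_size * patch_size * 3), row-major order
        B, N, D = patches.shape
        h = w = image_size // patch_size
        x = patches.reshape(B, h, w, patch_size, patch_size, 3)
        x = x.permute(0, 5, 1, 3, 2, 4)  # (B, 3, h, patch, w, patch)
        return x.reshape(B, 3, h * patch_size, w * patch_size)

    images = unpatchify(torch.randn(2, 64, 3072))
    print(images.shape)  # torch.Size([2, 3, 256, 256])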

    opened by DanTaranis 7
  • SSL Online Evaluator Callback with Custom Data (i.e., different than pre-training data)

    Hi,

    Is it possible to run the online evaluation of the SSL encoder with a linear classifier on different data than the one used for pre-training?

    opened by aqibsaeed 2