FLSim is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API

Overview

Federated Learning Simulator (FLSim)

Federated Learning Simulator (FLSim) is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API. FLSim is domain-agnostic and accommodates many use cases such as computer vision and natural language text. Currently, FLSim supports cross-device FL, where millions of client devices (e.g. phones) train a model collaboratively.

FLSim is scalable and fast. It supports differential privacy (DP), secure aggregation (secAgg), and a variety of compression techniques.

In FL, a model is trained collaboratively by multiple clients that each have their own local data, and a central server moderates training, e.g. by aggregating model updates from multiple clients.

In FLSim, developers only need to define a dataset, model, and metrics reporter. All other aspects of FL training are handled internally by the FLSim core library.

FLSim

Library Structure

FLSim core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator accumulates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages exchanged between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.
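
The round structure described above can be summarized with the following conceptual sketch. This is illustrative pseudocode only, not FLSim's actual API; all names (selector, channel, aggregator, and so on) are placeholders standing in for the corresponding FLSim components:

# Conceptual sketch of one synchronous FL round; not FLSim's actual API.
def run_round(server_model, clients, selector, channel, aggregator,
              server_optimizer, users_per_round):
    # The selector picks a cohort of clients for this round.
    cohort = selector.select(clients, users_per_round)
    for client in cohort:
        # The channel delivers (and may compress) the global model to the client.
        local_model = channel.server_to_client(server_model)
        # The client trains locally on its own dataset with its local optimizer
        # (e.g. SGD or FedProx) and produces a model update.
        update = client.local_train(local_model)
        # The channel carries the (possibly compressed) update back to the server,
        # where the aggregator accumulates it.
        aggregator.add(channel.client_to_server(update))
    # Once the round is complete, the server optimizer applies the aggregated
    # update (a pseudo-gradient) to the global model.
    server_optimizer.step(server_model, aggregator.aggregate())
    return server_model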

Installation

The latest release of FLSim can be installed via pip:

pip install flsim

You can also install directly from source for the latest features (along with their quirks and the occasional bug):

git clone https://github.com/facebookresearch/FLSim.git
cd FLSim
pip install -e .
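
As an optional sanity check, the installed package should import cleanly:

python -c "import flsim"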

Getting started

To implement a central training loop in the FL setting using FLSim, a developer simply performs the following steps:

  1. Build their own data pipeline to assign individual rows of training data to client devices (to simulate how data is distributed across client devices)
  2. Create a corresponding nn.Module model and wrap it in an FL model.
  3. Define a custom metrics reporter that computes and collects metrics of interest (e.g., accuracy) throughout training.
  4. Set the desired hyperparameters in a config.

Usage Example
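
As a minimal sketch of step 4 above, a training config can be built from a JSON dictionary. This mirrors the snippet discussed in the issues below; the empty "trainer" section is a placeholder for the desired hyperparameters, and note that flsim.configs must be imported before calling fl_config_from_json:

import flsim.configs  # this import is required before building a config
from flsim.utils.config_utils import fl_config_from_json

json_config = {
    "trainer": {
        # desired hyperparameters go here
    }
}
cfg = fl_config_from_json(json_config)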

Tutorials

For details, please refer to the tutorials we have prepared.

Examples

We have prepared runnable examples for two of the tutorials above:

Contributing

See the CONTRIBUTING file for how to contribute to this library.

License

This code is released under Apache 2.0, as found in the LICENSE file.

Comments
  • Bug Fix#36: fix imports in tests.

    Bug Fix#36: fix imports in tests.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    Bug Fix#36: fix imports in tests.

    How Has This Been Tested (if it applies)

    pytest -ra is able to discover all tests now.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by ghaccount 8
  • Vr

    Vr

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by JohnlNguyen 6
  • Move optimizer_test_utils to optimizers directory

    Move optimizer_test_utils to optimizers directory

    Summary: it is currently located in the top-level tests directory. However, the top-level tests directory does not really make sense, as each component is organized into its own dedicated directory; in that sense, optimizer_test_utils.py belongs in the optimizer directory. In this diff, we move the file to the optimizer directory and fix the references.

    Differential Revision: D32241821

    CLA Signed fb-exported Merged 
    opened by jessemin 3
  • Does the backend handle Federated learning asynchronously?

    Does the backend handle Federated learning asynchronously?

    I found this repo from this blog: https://ai.facebook.com/blog/asynchronous-federated-learning/ However, I do not find any mention of it in this repo, and I also cannot decipher from the code examples whether this is the synchronous or asynchronous version of federated learning. Can you please clarify this for me? Also, if this is the asynchronous version, how can I dive deeper into the library and look at the code implementing the async handling mechanism?

    Thank you

    opened by 111Kaushal 2
  • Remove test_pytorch_local_dataset_factory

    Remove test_pytorch_local_dataset_factory

    Summary: This test had been very flaky about a year ago and has never been revived since then. Deleting it from the codebase.

    Differential Revision: D32415979

    CLA Signed fb-exported Merged 
    opened by jessemin 2
  • FedSGD with virtual batching

    FedSGD with virtual batching

    🚀 Feature

    Motivation

    Create a memory-efficient client to run FedSGD. If a client has many examples, running FedSGD (taking the gradient of the model based on all of the client's data) can lead to OOM. In this PR, we fix this problem by still calling optimizer.step once at the end of local training to simulate the effect of FedSGD.
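
    A generic PyTorch-style sketch of the idea (an illustration only, not the actual FLSim change): accumulate gradients over mini-batches and call optimizer.step() a single time, so the update matches full-batch FedSGD without materializing all of the client's examples at once.

    # Illustrative sketch of memory-efficient FedSGD on one client (not FLSim's code).
    def fedsgd_local_update(model, loss_fn, dataloader, optimizer):
        optimizer.zero_grad()
        num_examples = sum(len(x) for x, _ in dataloader)
        for x, y in dataloader:
            loss = loss_fn(model(x), y)
            # Scale so the accumulated gradient equals the full-batch gradient.
            (loss * len(x) / num_examples).backward()
        optimizer.step()  # a single step, as in full-batch FedSGD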

    opened by JohnlNguyen 0
  • Add Fednova as a benchmark

    Add Fednova as a benchmark

    Summary:

    What?

    Adding FedNova as a benchmark

    Why?

    FedNova is a well-known paper that fixes the objective inconsistency problem

    Differential Revision: D34668291

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • Having to `import flsim.configs`  before creating config from json is unintuitive

    Having to `import flsim.configs` before creating config from json is unintuitive

    🚀 Feature

    This code works

    import flsim.configs  # <-- this import is required
    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    This code doesn't work

    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    Motivation

    Having to import flsim.configs is unintuitive and not clear from the user perspective

    Pitch

    Alternatives

    Additional context

    opened by JohnlNguyen 0
  • Fix sent140 example

    Fix sent140 example

    Summary:

    What?

    Fix the tutorial's word embedding to resolve the poor accuracy problem

    Why?

    https://github.com/facebookresearch/FLSim/issues/34

    Differential Revision: D34149392

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • low test accuracy in Sentiment classification with LEAF's Sent140 tutorial?

    low test accuracy in Sentiment classification with LEAF's Sent140 tutorial?

    ❓ Questions and Help

    Until we move the questions to another medium, feel free to use this as your question:

    Question

    I tried this tutorial https://github.com/facebookresearch/FLSim/blob/main/tutorials/sent140_tutorial.ipynb and the accuracy is less than a random guess (50%)!

    Any suggestions or approaches to improve accuracy for this tutorial?

    From the tutorial:

    Running (epoch = 1, round = 1, global round = 1) for Test
    (epoch = 1, round = 1, global round = 1), Loss/Test: 0.8683878255035598
    (epoch = 1, round = 1, global round = 1), Accuracy/Test: 49.61439588688946
    {'Accuracy': 49.61439588688946}

    opened by ghaccount 0
Releases (v0.1.0)
  • v0.0.1(Dec 9, 2021)

    We are excited to announce the release of FLSim 0.0.1.

    Introduction

    How does one train a machine learning model without access to user data? Federated Learning (FL) is the technology that answers this question. In a nutshell, FL is a way for many users to collaboratively learn a machine learning model without sharing data. There are two scenarios for FL: cross-silo and cross-device. Cross-silo provides technologies for collaborative learning between a few large organizations with massive silo datasets. Cross-device provides collaborative learning between many small user devices with small local datasets. Cross-device FL, where millions or even billions of users cooperate on learning a model, is a much more complex problem and has attracted less attention from the research community. We designed FLSim to address the cross-device FL use case.

    Federated Learning at Scale

    Large-scale cross-device Federated Learning (FL) is a federated learning paradigm with several challenges that differentiate it from cross-silo FL: millions of clients coordinating with a central server, and training instability due to the significant cohort problem. With these challenges in mind, we built FLSim to be scalable yet easy to use; it can scale to thousands of clients per round using only one GPU. We hope FLSim will equip researchers to tackle problems in federated learning at scale.

    FLSim

    Library Structure

    FLSim core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator accumulates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages exchanged between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.

    Included Datasets

    Currently, FLSim supports all datasets from LEAF including FEMNIST, Shakespeare, Sent140, CelebA, Synthetic and Reddit. Additionally, we support MNIST and CIFAR-10.

    Included Algorithms

    FLSim supports standard FedAvg and other federated learning methods such as FedAdam, FedProx, FedAvgM, FedBuff, FedLARS, and FedLAMB.

    What’s next?

    We hope FLSim will foster large-scale cross-device FL research. We plan to add support for personalization in early 2022. Throughout 2022, we will gather feedback, improve usability, and continue to grow our collection of algorithms, datasets, and models.

    Source code(tar.gz)
    Source code(zip)
Owner
Meta Research