
Breaching - A Framework for Attacks against Privacy in Federated Learning

This PyTorch framework implements a number of gradient inversion attacks that breach privacy in federated learning scenarios, covering examples with both small and large aggregation sizes, in both the vision and the text domains.

The framework includes implementations of recent work as well as a range of other attacks, from optimization-based attacks (such as "Inverting Gradients" and "See through Gradients") to recent analytic and recursive attacks. Jupyter notebook examples for these attacks can be found in the examples/ folder.

Overview:

This repository implements two main components: a list of modular attacks under breaching.attacks and a list of relevant use cases (including server threat model, user setup, model architecture and dataset) under breaching.cases. All attacks and scenarios are highly modular and can be customized and extended through the configuration at breaching/config.

Installation

Either download this repository (including notebooks and examples) directly using git clone or install the python package via pip install breaching for easy access to key functionality.

Because this framework covers several use cases across vision and language, it accumulates a kitchen sink of dependencies. The full list of dependencies can be found in environment.yml (and installed with conda by calling conda env create --file environment.yml), but not all dependencies are installed by default. Install them as necessary (for example, install the huggingface packages only if you are interested in language applications).

You can verify your installation by running python simulate_breach.py dryrun=True. This tests the simplest reconstruction setting with a single iteration.

Usage

You can load any use case by

cfg_case = breaching.get_case_config(case="1_single_imagenet")
user, server, model, loss = breaching.cases.construct_case(cfg_case)

and load any attack by

cfg_attack = breaching.get_attack_config(attack="invertinggradients")
attacker = breaching.attacks.prepare_attack(model, loss, cfg_attack)

This is a good point to print an overview of the loaded threat model and setting, in case you want to change some settings:

breaching.utils.overview(server, user, attacker)

To evaluate the attack, you can then simulate an FL exchange:

shared_user_data, payloads, true_user_data = server.run_protocol(user)

And then run the attack (which consumes only the user update and the server state):

reconstructed_user_data, stats = attacker.reconstruct(payloads, shared_user_data)

For more details, have a look at the notebooks in the examples/ folder, the command-line script simulate_breach.py, or the minimal examples in minimal_example.py and minimal_example_robbing_the_fed.py.

What is this framework?

This framework is a modular collection of attacks against federated learning that breach privacy by recovering user data from the updates sent to a central server. The framework covers gradient updates as well as updates from multiple local training steps, and evaluates datasets and models in both language and vision. Requirements and variations in the threat model for each attack (such as the availability of labels or the number of data points) are made explicit. Modern initializations and label recovery strategies are also included.
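As a toy illustration of why shared gradients can leak user data, consider the classic analytic attack on a single linear layer (a self-contained numpy sketch, not code from this framework): for y = Wx + b, the gradient dL/dW equals (dL/dy) xᵀ and dL/db equals dL/dy, so any row of dL/dW divided by the matching entry of dL/db recovers the input x exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Private user data and a single linear layer y = W x + b with squared-error loss.
x = rng.normal(size=4)                # private user input
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

y = W @ x + b
dL_dy = 2 * (y - target)              # gradient of ||y - target||^2 w.r.t. y

# The gradients a user would share in federated learning:
grad_W = np.outer(dL_dy, x)           # dL/dW = (dL/dy) x^T
grad_b = dL_dy                        # dL/db = dL/dy

# Server-side reconstruction: divide any row of grad_W by the matching
# entry of grad_b to recover x exactly.
i = int(np.argmax(np.abs(grad_b)))    # pick a row with a nonzero bias gradient
x_reconstructed = grad_W[i] / grad_b[i]

assert np.allclose(x_reconstructed, x)
```

The attacks in this framework generalize this leakage to deep networks, larger batches and multiple local steps, where recovery is no longer a one-line division.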

We especially focus on clarifying the threat model of each attack and constraining the attacker to act only on the shared_user_data objects generated by the user. All attacks are kept as use-case agnostic as possible, based only on these limited transmissions of data, so implementing a new attack requires no knowledge of any particular use case. Likewise, implementing a new use case is entirely separate from the attack portion. Everything is highly configurable through hydra configuration syntax.

What does this framework not do?

This framework focuses only on attacks, implementing no defenses aside from user-level differential privacy and aggregation. We wanted to focus on attack evaluations and investigate the questions "where do these attacks currently work" and "where are their limits". Accordingly, the FL simulation is "shallow": no model is actually trained here; we investigate fixed checkpoints (which can be generated elsewhere). Other great repositories, such as https://github.com/Princeton-SysML/GradAttack, focus on defenses and their performance during a full simulation of an FL protocol.

Attacks

A list of all included attacks with references to their original publications can be found at examples/README.md.

Datasets

Many of the vision attack examples use ImageNet. For these to work, you need to download the ImageNet ILSVRC2012 dataset manually. However, almost all attacks require only the small validation set, which can easily be downloaded onto a laptop; the whole training set is not needed. If this is not an option for you, the Birdsnap dataset is a reasonable drop-in replacement for ImageNet. By default, we further only show examples from ImageNetAnimals, which comprises the first 397 classes of ImageNet. This substantially reduces the number of pictures of actual people. CIFAR10 and CIFAR100 are, of course, also available.

For these vision datasets there are several options in the literature on how to partition them for an FL simulation. We implement a range of such partitions via data.partition, ranging from random (replicable, with no repetition of data across users), through balanced (each class split equally across users), to unique-class (every user owns data from a single class). When changing the partition you may also have to adjust the expected number of clients, data.default_clients (for example, for unique_class there can be at most len(classes) users).
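The partition schemes above can be sketched in plain Python as follows (a toy sketch only; the framework's actual data.partition implementation may differ in detail):

```python
from collections import defaultdict

# Toy labeled dataset: (sample_id, class_label) pairs; 12 samples, 3 classes.
data = [(i, i % 3) for i in range(12)]
num_users = 2

# Group sample ids by class first.
by_class = defaultdict(list)
for sid, label in data:
    by_class[label].append(sid)

# "balanced": spread every class equally across all users.
balanced = defaultdict(list)
for label, sids in by_class.items():
    for k, sid in enumerate(sids):
        balanced[k % num_users].append(sid)

# "unique-class": every user owns the data of exactly one class,
# so there can be at most len(by_class) users.
unique_class = {label: list(sids) for label, sids in by_class.items()}
```

With 12 samples over 3 classes and 2 users, the balanced partition gives each user 6 samples (2 per class), while the unique-class partition supports at most 3 users.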

For language data, you can load wikitext which we split into separate users on a per-article basis, or the stackoverflow and shakespeare FL datasets from tensorflow federated, which are already split into users (installing tensorflow-cpu is required for these tensorflow-federated datasets).

Further, nothing stops you from skipping the breaching.cases sub-module and using your own code to load a model and dataset. An example can be found in minimal_example.py.

Metrics

We implement a range of metrics which can be queried through breaching.analysis.report. Several metrics (such as CW-SSIM and R-PSNR) require additional packages to be installed - they will warn about this. For language data we hook into a range of huggingface metrics. Overall though, we note that most of these metrics give only a partial picture of the actual severity of a breach of privacy, and are best handled with care.
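PSNR is a good example of such a partial metric: it measures only average pixel-wise error between the reconstruction and the original (a self-contained sketch; the framework's own implementation may differ):

```python
import numpy as np

def psnr(img_a, img_b, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((np.asarray(img_a) - np.asarray(img_b)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * np.log10(data_range**2 / mse)

original = np.zeros((8, 8))
reconstruction = np.full((8, 8), 0.5)
print(psnr(original, reconstruction))  # 10 * log10(1 / 0.25) ≈ 6.02 dB
```

A high PSNR does not necessarily mean a semantically faithful reconstruction (and vice versa), which is one reason the report aggregates several metrics.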

Additional Topics

Benchmarking

A script to benchmark attacks is included as benchmark_breaches.py. This script iterates over the first valid num_trials users, attacks each separately and averages the resulting metrics, which is useful for quantitative analysis of these attacks. The default case takes about a day to benchmark on a single RTX 2080 GPU for optimization-based attacks, and less than 30 minutes for analytic attacks. The default scripts for benchmarking and command-line execution also include a number of conveniences, mostly based on hydra: a separate sub-folder is created for each experiment in outputs/, containing logs, metrics and optionally recovered data for each run, and summary tables are written to tables/.

System Requirements

All attacks can be run on both CPU and GPU (any torch.device, actually). However, the optimization-based attacks are very compute-intensive, and using a GPU is highly advised. The other attacks are cheap enough to run on CPUs (the Decepticon attack, for example, does most of its heavy lifting by solving assignment problems on the CPU anyway).

Options

It is probably best to have a look into breaching/config to see all possible options.

Citation

For now, please cite the respective publications for each attack and use case.

License

We integrate several snippets of code from other repositories and refer to the licenses included in those files for more info. We're especially thankful for related projects such as https://www.tensorflow.org/federated, https://github.com/NVlabs/DeepInversion, https://github.com/JunyiZhu-AI/R-GAP, https://github.com/facebookresearch/functorch, https://github.com/ildoonet/pytorch-gradual-warmup-lr and https://github.com/nadavbh12/VQ-VAE from which we incorporate components.

For the license of our code, refer to LICENCE.md.

Authors

This framework was built by me (Jonas Geiping), Liam Fowl and Yuxin Wen while working at the University of Maryland, College Park.

Contributing

If you have an attack that you are interested in implementing in this framework, or a use case that is interesting to you, don't hesitate to contact us or open a pull-request.

Contact

If you have any questions, also don't hesitate to open an issue here on GitHub or write us an email.
