Security evaluation module with ONNX, PyTorch, and SecML.

Overview

🚀 🐼 🔥 PandaVision

Integrate and automate security evaluations with ONNX, PyTorch, and SecML!

Installation

Starting the server without Docker

If you want to run the server with Docker, skip to the next section.

This project uses Redis-RQ for handling the queue of requested jobs. Please install Redis if you plan to run this Flask server without using Docker.

Then, install the Python requirements by running the following command in your shell:

pip install -r requirements.txt

Make sure your Redis server is running on your local machine. Test the Redis connection with the following command:

redis-cli ping

The response PONG should appear in the shell.

If the database server is down, check the linked docs to find out how to restart it on your system.

Note: the code is expected to connect to the database through its default port (6379 for Redis).
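If you prefer checking the connection from Python, a minimal sketch using the redis-py client (the library RQ relies on; the client name and defaults below are standard redis-py, not project-specific code) could look like this:

import redis

# Connect to the local Redis instance on its default port (6379).
r = redis.Redis(host="localhost", port=6379)

# ping() returns True if the server answers, and raises
# redis.exceptions.ConnectionError if Redis is unreachable.
print(r.ping())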

Now we are ready to start the server. Don't forget that this system uses external workers to process long-running tasks, so the workers must be started along with the server. Run the following commands from the app folder:

python app/worker.py

Now open another shell and run the server:

python app/runserver.py

Starting the server with Docker

If you already started the server locally, you can skip to the next section.

If you already started the server locally but want to start it with Docker instead, you should stop the running services. On Linux, press CTRL + C to stop the server and the worker, then stop the Redis service on the machine:

sudo service redis stop

In order to use the docker-compose file provided, install Docker and start the Docker service.

Since this project uses several interconnected containers, it is recommended to install and use Docker Compose.

Once installed, Docker Compose will take care of the setup process automatically. Just run the following command in your shell, from the app path:

docker build . -t pandavision && docker-compose build && docker-compose up

If you want to use more workers, run the following command (replace the number 2 with the number of workers you want to set up):

docker-compose up --scale worker=2

Usage

Quick start

For a demo example, you can download a sample containing a few images from the ImageNet dataset and a pretrained ResNet-50 model from the ONNX Model Zoo.

Download the files and place them in a known directory.

Supported models

You can export your own pretrained model to ONNX from the library of your choice and pass it to the module. This project uses onnx2pytorch as a dependency to load ONNX models. Check out the supported operations if you encounter problems when importing your model. A list of pretrained models is also available on the main page.
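For reference, a minimal sketch of how onnx2pytorch converts a model (the module performs this step internally; the file name model.onnx below is just a placeholder):

import onnx
from onnx2pytorch import ConvertModel

# Load the ONNX graph and convert it into an equivalent PyTorch module.
onnx_model = onnx.load("model.onnx")  # placeholder path
pytorch_model = ConvertModel(onnx_model)
pytorch_model.eval()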

Data preparation

The module accepts HDF5 files as data sources. The file should contain the samples in NCHW format (batch, channels, height, width).

Note that, while standardization can be performed through the APIs themselves (preferred), preprocessing steps such as resizing, reshaping, rotation, and normalization should be applied at this stage.

An example that creates a subset of the ImageNet dataset can be found in this gist.
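As an illustration of the expected layout, the following sketch writes a handful of preprocessed samples to an HDF5 file with h5py. The dataset key names ("data" and "labels") are assumptions for illustration only; refer to the gist above for the exact layout expected by the module:

import h5py
import numpy as np

# Placeholder data: 10 preprocessed samples in NCHW format
# (batch, channels, height, width) with their class indices.
images = np.random.rand(10, 3, 224, 224).astype("float32")
labels = np.random.randint(0, 1000, size=10)

with h5py.File("samples.hdf5", "w") as f:
    f.create_dataset("data", data=images)    # assumed key name
    f.create_dataset("labels", data=labels)  # assumed key name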

How to start a security evaluation job

The easy way

You can access the APIs through the web interface by connecting to http://localhost:8080. You will land on the home page of the service. Then click the "Try it out!" button, and you will see a form to configure the security evaluation. Upload the model and dataset of your choice, then select the parameters. Finally, click "Submit" and wait for the evaluation to finish. As soon as the worker finishes processing the data, the security evaluation curve will appear on the interface.

You can follow this video tutorial (click for YouTube video) for configuring the security evaluation:

Demo PandaVision

Coming soon ➡️ download data in CSV format.

The nerdy way

A security evaluation job can be enqueued with a POST request to /security_evaluations. The API returns the job's unique ID, which can be used to access the job status and results. Running workers wait for new jobs in the queue and consume them in FIFO order.

The request should specify the following parameters in its body:

  • dataset (string): the path of the dataset to be loaded (the validation dataset should be used; otherwise, see the "indexes" parameter).
  • trained-model (string): the path of the ONNX trained model.
  • performance-metric (string): the performance metric used to evaluate the adversarial robustness of the system. Currently, only the classification-accuracy metric is implemented.
  • evaluation-mode (string): one of 'fast', 'complete'. A fast evaluation performs the experiment on a subset of the whole dataset (100 samples). For more info on the fast evaluation, see this paper.
  • task (string): the type of task the model is supposed to perform. This determines the attack scenario (available: "classification"; support for more use cases will be provided in the future).
  • perturbation-type (string): the type of perturbation to apply (available: "max-norm" or "random").
  • perturbation-values (Array of floats): the values used for crafting the adversarial examples. These are specified as a fraction of the input range, in [0, 1] (e.g., a value of 0.05 applies a perturbation of at most 5% of the input scale).
  • indexes (Array of ints): if specified, this list of indexes is used to select a specific subset of samples from the dataset.
  • preprocessing (dict): a dictionary with keys "mean" and "std" for defining custom preprocessing. The values should be expressed as lists. If not set, standard ImageNet preprocessing is applied; specify an empty dict for no preprocessing.
{
  "dataset": "<dataset-path>.hdf5",
  "trained-model": "<model_path>.onnx",
  "performance-metric": "classification-accuracy",
  "evaluation-mode": "fast",
  "task": "classification",
  "perturbation-type": "max-norm",
  "perturbation-values": [
    0, 0.01, 0.02, 0.03, 0.04, 0.05
  ]
}
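For instance, the request could be submitted from Python with the requests library. This is a sketch that assumes the server is running on localhost:8080; check the response of your instance for the exact format in which the job ID is returned:

import requests

payload = {
    "dataset": "<dataset-path>.hdf5",
    "trained-model": "<model_path>.onnx",
    "performance-metric": "classification-accuracy",
    "evaluation-mode": "fast",
    "task": "classification",
    "perturbation-type": "max-norm",
    "perturbation-values": [0, 0.01, 0.02, 0.03, 0.04, 0.05],
}

# Enqueue the security evaluation job.
response = requests.post("http://localhost:8080/security_evaluations", json=payload)
print(response.text)  # contains the job ID to use in the follow-up requests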

The API can also be tested with Postman (it is configured already to get the ID and use it for fetching results):

Run in Postman

Job status API

Job status can be retrieved by sending a GET request to /security_evaluations/{id}, where {id} should be replaced with the job ID returned when the job was enqueued. A GET to /security_evaluations will return the status of all jobs found in the queues and in the finished-job registries.
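For example, a quick status check from Python (a sketch under the same localhost:8080 assumption as above; <job-id> is the ID returned when the job was enqueued):

import requests

job_id = "<job-id>"  # returned by the POST request above
status = requests.get(f"http://localhost:8080/security_evaluations/{job_id}")
print(status.text)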

Job results API

Job results can be retrieved, once the job has entered the finished state, with a GET request to /security_evaluations/{id}/output. A request to this path with a job ID that is not yet in the finished state will redirect to the job status API.
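A corresponding sketch for fetching the results, under the same assumptions as above:

import requests

job_id = "<job-id>"
result = requests.get(f"http://localhost:8080/security_evaluations/{job_id}/output")
# If the job is not finished yet, the request is redirected to the
# job status API; otherwise the security evaluation results are returned.
print(result.text)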

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

If you don't have time to contribute yourself, feel free to open an issue with your suggestions.

License

This project is licensed under the terms of the MIT license. See LICENSE for more information.

Credits

Based on the Security evaluation module - ALOHA.eu project

Comments
  • Adv examples api (PGD support)

    Changelog

    • [x] Add caching for PGD attack

    • [x] Add curve visualization for PGD attack

    • [x] Add adversarial example visualization for PGD attack

    • [x] Extend to other attacks

    • [x] Fix min-distance attacks and PGD caching

    • [x] Document the changes

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Updates - attack logging, adversarial example inspection, debugging.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) Major changes.

    • Other information: (does the PR fix some issues? Tag them with #)

      Fixes #6.

    opened by maurapintor 0
  • Fix ram problems

    Changelog

    • Fixed CW attack memory problem
    • Efficient computation of adversarial examples in maximum-norm case

    What kind of change does this PR introduce?

    • Clear cache for CW attack (temporary fix until secml is updated to support optional caching).

    • PGD attack is run, for each value of perturbation, only in the cases that were not found adversarial for smaller norms.

    • Other information:

      Fixes #21

    opened by maurapintor 0
  • Memory problems when running complete evaluation

    Evaluation fails with some particular configuration of parameters. The reason seems to be related to cached adversarial examples.

    Expected Behavior

    The attack should not exhaust the RAM.

    Current Behavior

    The RAM fills up, then the swap memory, then everything freezes.

    Possible Solution

    Possibly free unused data, such as the attack paths.

    Steps to Reproduce

    The evaluation fails with the following set of parameters:

    • resnet 50 net
    • imagenet data from the demo data
    • L2 CW attack

    Context (Environment)

    • OS: Ubuntu 20.04 LTS
    • Python Version: 3.8
    • Pandavision Version: 0.3
    • Browser: Mozilla Firefox
    bug enhancement 
    opened by maurapintor 0
  • fixed conflict for picker

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix

    • What is the new behavior? The GUI now updates the attack selection and the perturbation size choices simultaneously.

    • Other information: Fixes #19

    opened by maurapintor 0
  • Attack selector bug

    Attack choices not shown.

    Expected Behavior

    On the GUI, when a perturbation type is picked, the attack selector should display the attack choices for the specified perturbation model.

    Current Behavior

    The attack choices are not updated.

    Possible Solution

    Possible conflict with the jquery call that updates the perturbation values.

    bug 
    opened by maurapintor 0
  • Fix docker compose version

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix for docker container. Feature: picker for perturbation size.

    • What is the new behavior?

    • docker-compose should now be at least v1.16, as it supports the YAML file format used in this repo for building the PandaVision architecture.
    • The GUI now allows picking the perturbation sizes for the evaluation.
    • Other information: Fixes #14 Fixes #17
    opened by maurapintor 0
  • Docker compose problem with services key

    Docker compose file format is incompatible with old versions.

    Expected Behavior

    The command:

    docker build . -t pandavision && docker-compose build && docker-compose up
    

    should build the container and run smoothly.

    Current Behavior

    The command produces, with some Docker-compose versions, the following output:

    Successfully tagged pandavision:latest ERROR: The Compose file './docker-compose.yml' is invalid because: Unsupported config option for services: 'worker'

    Possible Solution

    The problem seems related to the docker-compose versions that have incompatible specifications for the expected yaml: https://docs.docker.com/compose/compose-file/compose-versioning/#versioning

    A suggested solution, from this StackOverflow question, is to upgrade the docker-compose version and specify the version number at the top of the YAML file.

    Possible Implementation

    1. Add a line to the YAML file stating version: "3" in the header.
    2. Suggest the minimum version required for docker-compose, i.e. at least 1.6, in the README file.
    opened by maurapintor 0
  • Chart x-axis based on eps values rather than order

    The sec-eval curve currently presents results at evenly spaced ("linspace") positions on the x-axis. Support for scatter values should be added, so that the list of eps values can be dynamically adjusted to arbitrary ranges.

    bug enhancement 
    opened by maurapintor 0
  • GUI for security evaluations

    Add a visual interface for testing the APIs. It should display at least the model and data selection, plus the results of the security evaluation once completed.

    enhancement 
    opened by maurapintor 0
  • Sequential attacks

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    A multi-attack interface should be used. The interface should allow specifying a sequence of attacks to be used for testing the robustness of a model. The sequence will run the first attack on the whole dataset, then run the next attack in the sequence only on the points for which no adversarial example was found under the given perturbation model.

    enhancement 
    opened by maurapintor 0
  • RobustBench models

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    Models from RobustBench should be available through the interface. The choice should be displayed next to the model upload button, as a dropdown menu.

    enhancement 
    opened by maurapintor 0
  • Dataset samples

    I'm submitting a ...

    [x] feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    The interface should allow for selecting subsamples of commonly-used datasets without uploading them to the server. At least a sample from the following datasets should be included:

    • [ ] MNIST
    • [ ] CIFAR10
    • [ ] CIFAR100
    • [ ] ImageNet
    enhancement 
    opened by maurapintor 0
  • Feature request: other tasks

    I'm submitting a ...

    • Feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    More use cases could be supported, as in https://gitlab.com/aloha.eu/security_evaluation. Possible use cases are:

    • detection
    • segmentation
    enhancement 
    opened by maurapintor 0
  • GPU support for container

    Currently, the GPU can only be used by running the server and worker locally. A container that also works with the GPU might be beneficial for speedups and easier installation.

    enhancement help wanted 
    opened by maurapintor 0