apple-neuralhash-attack

Overview

Demonstrates an iterative FGSM (Fast Gradient Sign Method) attack on Apple's NeuralHash model.

TL;DR: It is possible to apply noise to a CSAM image so that the NeuralHash model sees it as a regular image. The noise does visibly degrade the image (see samples), but this result was achieved without tuning the learning rate, and more refined attacks are available.

Example

Here is an example that uses a Grumpy Cat image in place of a CSAM image. The attack adds noise to the Grumpy Cat image and makes the model see it as a Doge image.

As a result, both images produce the same neural hash, 11d9b097ac960bd2c6c131fa, computed via ONNX Runtime using the script from AsuharietYgvar/AppleNeuralHash2ONNX.

(Images: the original Doge image and the adversarial Grumpy Cat image.)

More generally, because the attack optimizes the model output, the adversarial image will generate largely the same hash as the good image, regardless of the seed.
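
For context, the hash is a binary projection of the model's 128-dimensional output against the 96x128 matrix stored in the seed file, which is why matching the output also matches the hash for any given seed. Below is a minimal sketch of that computation with ONNX Runtime, paraphrasing the hashing script in AsuharietYgvar/AppleNeuralHash2ONNX from memory; the preprocessing constants and seed-file layout are assumptions, not verified against that script.

import numpy as np
import onnxruntime
from PIL import Image

# Assumed seed-file layout: a 128-byte header followed by 96*128 float32 values.
seed = np.frombuffer(open("neuralhash_128x96_seed1.dat", "rb").read()[128:],
                     dtype=np.float32).reshape(96, 128)

# Assumed preprocessing: 360x360 RGB, scaled to [-1, 1], NCHW layout.
img = Image.open("samples/doge.png").convert("RGB").resize((360, 360))
arr = (np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0).astype(np.float32)
arr = arr.transpose(2, 0, 1)[None, ...]

session = onnxruntime.InferenceSession("model.onnx")
out = session.run(None, {session.get_inputs()[0].name: arr})[0].flatten()  # 128-dim output

bits = "".join("1" if v >= 0 else "0" for v in seed.dot(out))  # 96 sign bits
print("{:024x}".format(int(bits, 2)))                          # 24 hex chars = 96 bits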

Instructions

Get ONNX model

Obtain the ONNX model from AsuharietYgvar/AppleNeuralHash2ONNX. You should have a path to a model.onnx file.

Convert ONNX model to TF model

Next, convert the ONNX model to a TensorFlow model. First install the onnx_tf library from onnx/onnx-tensorflow, then run the following:

python3 convert.py -o /path/to/model.onnx

This will save a TensorFlow model to the current directory as model.pb.
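
For reference, convert.py presumably boils down to the standard onnx_tf calls below. This is a sketch under that assumption, not necessarily the repo's exact code; note that newer versions of onnx-tensorflow export a SavedModel directory rather than a single .pb file.

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("/path/to/model.onnx")  # load the ONNX NeuralHash model
tf_rep = prepare(onnx_model)                   # build a TensorFlow representation of the graph
tf_rep.export_graph("model.pb")                # export; output format depends on the onnx-tensorflow version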

Run adversarial attack

Finally, run the adversarial attack with the following:

python3 nnhash_attack.py --seed /path/to/neuralhash_128x96_seed1.dat

Other arguments:

-m           Path to TensorFlow model (defaults to "model.pb")
--good       Path to good image (defaults to "samples/doge.png")
--bad        Path to bad image (defaults to "samples/grumpy_cat.png")
--lr         Learning rate (defaults to 3e-1)
--save_every Interval, in iterations, between saved images (defaults to 2000)

This will save generated images to samples/iteration{i}.png.

Note that the hash similarity may decrease initially before increasing again.

For the sample images with the default parameters, the two hashes became identical after 28000 iterations.
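
For intuition, the loop behind nnhash_attack.py presumably looks something like the sketch below: the good image's model output is taken as a fixed target, and the bad image is nudged with signed-gradient (FGSM-style) steps until its output, and therefore its hash, matches. This is a hedged illustration rather than the repo's exact code; the model signature, pixel range, and loss details are assumptions.

import tensorflow as tf

def iterative_fgsm(model, good_img, bad_img, lr=3e-1, steps=30000):
    # `model` is assumed to map a [1, 3, 360, 360] image in [-1, 1]
    # to the 128-dimensional NeuralHash embedding.
    target = tf.stop_gradient(model(good_img))   # fixed target: the good image's output
    adv = tf.Variable(bad_img)                   # start from the bad image
    for i in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(model(adv) - target))  # L2 loss, as in the log
        grad = tape.gradient(loss, adv)
        # Signed-gradient step, clipped back to the valid pixel range.
        adv.assign(tf.clip_by_value(adv - lr * tf.sign(grad), -1.0, 1.0))
    return adv

Matching the 128-dimensional output, rather than the hash bits directly, is what makes the result seed-independent, as noted above.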

Terminal output:

# Some Tensorflow boilerplate...
Iteration #2000: L2-loss=134688, Hash Similarity=0.2916666666666667
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20f1089728150af2ca2de49a
Saving image to samples/iteration2000.png...
Iteration #4000: L2-loss=32605, Hash Similarity=0.41666666666666677
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20d9b097ac170ad2cfe170da
Saving image to samples/iteration4000.png...
Iteration #6000: L2-loss=18547, Hash Similarity=0.4166666666666667
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20d9b097ac170ad2c7c1f0de
Saving image to samples/iteration6000.png...

Credit

Lim Swee Kiat