Official implementation of the paper "Steganographer Detection via a Similarity Accumulation Graph Convolutional Network"

Overview

SAGCN - Official PyTorch Implementation

Paper | Project Page

This is the official implementation of the paper "Steganographer detection via a similarity accumulation graph convolutional network". NOTE: We are refactoring this project to follow engineering best practices.

Abstract

Steganographer detection aims to identify guilty users who conceal secret information in a number of images for the purpose of covert communication in social networks. Existing steganographer detection methods focus on designing discriminative features but do not explore the relationships between image features or effectively represent users based on those features. In these methods, each image is treated as equivalent, and each user is regarded as the distribution of all images shared by that user. However, the nuances between guilty users and innocent users are difficult to recognize with this flattened approach. In this paper, the steganographer detection task is formulated as a multiple-instance learning problem in which each user is considered a bag, and the shared images are the instances in the bag. Specifically, we propose a similarity accumulation graph convolutional network that represents each user as a complete weighted graph, in which each node corresponds to the features extracted from an image and the weight of an edge is the similarity between a pair of images. The constructed unit in the network can take advantage of the relationships between instances so that common patterns of positive instances can be enhanced via similarity accumulation. Instead of operating on a fixed original graph, we propose a novel strategy for reconstructing and pooling graphs based on node features so that multiple convolutions can be applied iteratively. This strategy effectively addresses the oversmoothing problem that renders nodes indistinguishable even though they have different instance-level labels. Compared with the state-of-the-art method and other representative graph-based models, the proposed framework demonstrates its effectiveness and reliability across image domains, even in large-scale social media scenarios. Moreover, the experimental results also indicate that the proposed network can be generalized to other multiple-instance learning problems.
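To make the core construction above concrete, the snippet below sketches a single graph convolution over a complete weighted similarity graph built from one user's image features. It is only an illustrative outline: the cosine similarity, softmax normalization, layer sizes, and class name are assumptions, not the official SAGCN architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityGraphConv(nn.Module):
    # One graph convolution over a complete weighted graph whose edge weights
    # are pairwise feature similarities. Illustrative only; not the official SAGCN layer.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (num_images, in_dim) features of the images shared by one user.
        z = F.normalize(x, dim=1)                # cosine similarity is an assumed choice
        adj = torch.softmax(z @ z.t(), dim=1)    # (N, N) row-normalized complete weighted graph
        return F.relu(self.linear(adj @ x))      # accumulate similar neighbors, then transform

# Example: one "bag" of 100 images, each described by a 320-D feature vector.
features = torch.randn(100, 320)
layer = SimilarityGraphConv(320, 128)
node_embeddings = layer(features)                # (100, 128)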

Roadmap

After many rounds of revision, the current code implementation is not as clean as we would like. To help readers reproduce the experimental results of this paper quickly, we will open-source our work following this roadmap:

  • refactor and open-source all the model files, training files, and test files of the proposed method for comparison experiments.
  • refactor and open-source the visualization experiments.
  • refactor and open-source the APIs for real-world steganographer detection in an out-of-the-box fashion.

Quick Start

Dataset and Pre-processing

We use the MDNNSD model to extract a 320-D feature from each image and save the extracted features in separate .mat files. Check ./data/train and ./data/test to confirm that the dataset is ready before running experiments. For example, cover.mat and suniward_01.mat should be placed in the ./data/train and ./data/test folders.
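To sanity-check the extracted features, you can load a .mat file with a short script like the one below. The variable name stored inside each file is an assumption here, so inspect the keys in your own copies; if the files were saved in MATLAB v7.3 format, they must be read with h5py instead of scipy.io.

import os
import numpy as np
import scipy.io as sio

def load_features(mat_path, key=None):
    # Load a (num_images, 320) feature matrix from a .mat file.
    # The variable name inside the file is an assumption; inspect the keys
    # if your copy of the dataset uses a different name.
    mat = sio.loadmat(mat_path)
    if key is None:
        key = next(k for k in mat if not k.startswith("__"))  # skip loadmat metadata entries
    return np.asarray(mat[key])

cover = load_features(os.path.join("data", "train", "cover.mat"))
stego = load_features(os.path.join("data", "train", "suniward_01.mat"))
print(cover.shape, stego.shape)  # expected: (num_images, 320) each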

We then provide a dataset tool that distributes the image features and constructs innocent users and guilty users as described in the paper, for example:

python preprocess_dataset.py --target suniward_01_100 --guilty_file suniward_01 --is_train --is_test --is_reset --mixin_num 0
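Conceptually, this step groups per-image features into user "bags" for multiple-instance learning. The sketch below illustrates the idea only; the function name, mixing scheme, and parameters are assumptions, and it is not a re-implementation of preprocess_dataset.py.

import numpy as np

def build_user_bags(cover_feats, stego_feats, num_users, bag_size, guilty_ratio=0.5, seed=0):
    # Group per-image features into user "bags": a guilty user's bag holds
    # stego-image features, an innocent user's bag holds only cover-image
    # features. This is a rough stand-in for preprocess_dataset.py.
    rng = np.random.RandomState(seed)
    bags, labels = [], []
    for u in range(num_users):
        guilty = u < int(num_users * guilty_ratio)
        pool = stego_feats if guilty else cover_feats
        idx = rng.choice(len(pool), size=bag_size, replace=False)  # requires len(pool) >= bag_size
        bags.append(pool[idx])
        labels.append(int(guilty))
    return np.stack(bags), np.array(labels)  # (num_users, bag_size, 320), (num_users,)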

Train the proposed SAGCN

To obtain the proposed model for detecting steganographers, we provide an entry script with flexible command-line options and arguments for training SAGCN on the desired dataset under various experimental settings, for example:

python main.py --epochs 80 --batch_size 100 --model_name SAGCN --folder_name suniward_01_100 --parameters_name=sagcn_suniward_01_100 --mode train --learning_rate 1e-2 --gpu 1

Test the proposed SAGCN

To reproduce the reported experimental results, simply pass the command-line options for the corresponding experimental setting, such as:

python main.py --batch_size 100 --model_name SAGCN --parameters_name sagcn_suniward_01_100 --folder_name suniward_01_100 --mode test --gpu 1

Visualize

If you set summary to True during training, you can use TensorBoard to visualize the training process.

tensorboard --logdir logs --host 0.0.0.0 --port 8088

Requirements

  • Hardware: NVIDIA Tesla V100-PCIE GPUs (our setup)
  • Software:
    • h5py==2.7.1 (our version)
    • scipy==1.1.0 (our version)
    • tqdm==4.25.0 (our version)
    • numpy==1.14.3 (our version)
    • torch==0.4.1 (our version)

Contact

If you have any questions, please feel free to open an issue.

Contribution

We thank everyone who has contributed to this project:

  • Zhi ZHANG
  • Mingjie ZHENG
  • Shenghua ZHONG
  • Yan LIU

Citation Information

If you find the project useful, please cite:

@article{zhang2021steganographer,
  title={Steganographer detection via a similarity accumulation graph convolutional network},
  author={Zhang, Zhi and Zheng, Mingjie and Zhong, Sheng-hua and Liu, Yan},
  journal={Neural Networks},
  volume={136},
  pages={97--111},
  year={2021}
}