Official repository of "DeepMIH: Deep Invertible Network for Multiple Image Hiding", TPAMI 2022.

Overview

DeepMIH: Deep Invertible Network for Multiple Image Hiding (TPAMI 2022)

This repo is the official code for the paper, published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI 2022). @ Beihang University.

1. Prerequisites

1.1 Dependencies and Installation

1.2 Dataset

  • In this paper, we use the commonly used datasets DIV2K, COCO, and ImageNet.
  • To train or test on your own data, set the dataset paths in config.py (see the example after this list):
    line50: TRAIN_PATH_DIV2K = ''
    line51: VAL_PATH_DIV2K = ''
    line54: VAL_PATH_COCO = ''
    line55: TEST_PATH_COCO = ''
    line57: VAL_PATH_IMAGENET = ''
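    For example, the relevant block of config.py might look like this after editing (the directories below are placeholders for illustration; point them at wherever your copies of the datasets live):

    # config.py -- dataset paths (example values, replace with your own directories)
    TRAIN_PATH_DIV2K = '/data/DIV2K/train/'       # DIV2K training images
    VAL_PATH_DIV2K = '/data/DIV2K/valid/'         # DIV2K validation images
    VAL_PATH_COCO = '/data/COCO/val2017/'         # COCO validation images
    TEST_PATH_COCO = '/data/COCO/test2017/'       # COCO test images
    VAL_PATH_IMAGENET = '/data/ImageNet/val/'     # ImageNet validation images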

2. Test

  1. Here we provide a trained model.
  2. Download it, then update MODEL_PATH and the file name suffix before testing with the trained model (see the sketch after this list).
    For example, if the model files are model_checkpoint_03000_1.pt, model_checkpoint_03000_2.pt, and model_checkpoint_03000_3.pt,
    and their path is /home/usrname/DeepMIH/model/,
    set:
    PRETRAIN_PATH = '/home/usrname/DeepMIH/model/',
    PRETRAIN_PATH_3 = '/home/usrname/DeepMIH/model/',
    file name suffix = 'model_checkpoint_03000'.
  3. Check that the dataset paths are correct.
  4. Create a directory to save the generated images and update TEST_PATH accordingly.
  5. Run test_oldversion.py.
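
  The sketch below shows how the pretrained path and file name suffix combine into the three checkpoint files listed above. The actual loading code lives in test_oldversion.py; load_checkpoints here is only a hypothetical helper for illustration.

  import torch

  # Illustrative only: PRETRAIN_PATH(_3) + suffix + '_<k>.pt' gives the three checkpoints.
  # The real loading logic is in test_oldversion.py.
  PRETRAIN_PATH = '/home/usrname/DeepMIH/model/'
  PRETRAIN_PATH_3 = '/home/usrname/DeepMIH/model/'
  suffix = 'model_checkpoint_03000'

  def load_checkpoints(path, path_3, suffix):
      """Return the state dicts of the three sub-models (net1, net2, net3)."""
      state_1 = torch.load(path + suffix + '_1.pt', map_location='cpu')
      state_2 = torch.load(path + suffix + '_2.pt', map_location='cpu')
      state_3 = torch.load(path_3 + suffix + '_3.pt', map_location='cpu')
      return state_1, state_2, state_3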

3. Train

  1. Create a path to save the trained models and update MODEL_PATH.
  2. Check that the optimizer parameters in config.py are correct, and make sure the sub-models (net1, net2, net3, ...) you want to train are selected correctly.
  3. Run train_old_version.py, following Algorithm 1 in the paper to train the model.
  4. Note: DeepMIH may be hard to train, and the loss can explode. Our solution is to stop training at a stable checkpoint, reduce the learning rate, and then resume training from that checkpoint (see the sketch after this list).
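
  A minimal sketch of this stop-and-resume recipe is below, assuming a checkpoint that stores the model and optimizer state dicts under the keys 'net' and 'opt'; the actual checkpoint format and resume logic in train_old_version.py may differ.

  import torch

  # Sketch only: reload a stable checkpoint and lower the learning rate before resuming.
  # The checkpoint keys ('net', 'opt') and this helper are hypothetical.
  def resume_with_lower_lr(net, optimizer, checkpoint_path, lr_decay=0.5):
      state = torch.load(checkpoint_path, map_location='cpu')
      net.load_state_dict(state['net'])
      optimizer.load_state_dict(state['opt'])
      for group in optimizer.param_groups:
          group['lr'] *= lr_decay   # abate the learning rate
      return net, optimizer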

4. Further explanation

In train_old_version.py, at line 223:
rev_secret_dwt_2 = rev_dwt_2.narrow(1, 4 * c.channels_in, 4 * c.channels_in) # channels = 12
the recovered secret image_2 is obtained by splitting out the middle 12 channels of the variable rev_dwt_2. However, in forward process_2 the input is obtained by concatenating (stego, imp, secret_2). This means the original code in train_old_version.py has a bug in the recovery process: the last 12 channels of rev_dwt_2 should be split out as the recovered secret image_2, not the middle 12. We found that the network is still able to converge in this way, so we keep this setting in the test process.
We also offer a corrected version, train.py (see line 225), and a matching test.py. You can also train your own model in this way.
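
Concretely, with c.channels_in = 3, rev_dwt_2 has 36 channels in the DWT domain, ordered as [stego | imp | secret_2] with 12 channels each. torch.Tensor.narrow(dim, start, length) takes length channels starting at start, so the original call reads the middle block, while the corrected split (the idea behind train.py, line 225) starts at the last block. A self-contained sketch:

import torch

channels_in = 3                                        # RGB, so 4 * channels_in = 12 DWT channels per image
rev_dwt_2 = torch.randn(1, 12 * channels_in, 64, 64)   # [stego | imp | secret_2]

# original (buggy): channels 12..23, i.e. the imp block
buggy = rev_dwt_2.narrow(1, 4 * channels_in, 4 * channels_in)

# corrected: channels 24..35, i.e. the secret_2 block
fixed = rev_dwt_2.narrow(1, 8 * channels_in, 4 * channels_in)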

Feel free to contact: [email protected].

Citation

If you find this repository helpful, you may cite:

@ARTICLE{9676416,
  author={Guan, Zhenyu and Jing, Junpeng and Deng, Xin and Xu, Mai and Jiang, Lai and Zhang, Zhou and Li, Yipeng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={DeepMIH: Deep Invertible Network for Multiple Image Hiding}, 
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TPAMI.2022.3141725}}