Python library for invisible image watermarking (blind image watermarking)

Overview

invisible-watermark


invisible-watermark is a Python library and command line tool for creating invisible watermarks over images (aka blind image watermarking, digital image watermarking). The algorithms do not rely on the original image for decoding.

Note that this library is still experimental and does not support GPU acceleration, so deploy it to production with care. The default method dwtDct (a variant of the frequency-domain methods) is fast enough for on-the-fly embedding; the other methods are too slow in a CPU-only environment.

supported algorithms

speed

  • The default embedding method dwtDct is fast and suitable for on-the-fly embedding
  • dwtDctSvd is about 3x slower and rivaGan about 10x slower; for large images they are not suitable for on-the-fly embedding (a small benchmarking sketch follows this list)
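
Actual timings vary with image size and hardware. Below is a small benchmarking sketch, assuming a local test.png and using only the API calls shown in the Library API section below; rivaGan is omitted because it needs extra model files.

# Rough latency check for the two frequency-domain methods on your own hardware.
# Assumes 'test.png' exists in the working directory.
import time

import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread('test.png')

for method in ('dwtDct', 'dwtDctSvd'):
    encoder = WatermarkEncoder()
    encoder.set_watermark('bytes', 'test'.encode('utf-8'))  # 32-bit watermark

    t0 = time.perf_counter()
    encoded = encoder.encode(bgr, method)
    t1 = time.perf_counter()

    decoder = WatermarkDecoder('bytes', 32)
    decoder.decode(encoded, method)
    t2 = time.perf_counter()

    print(f'{method}: encode {(t1 - t0) * 1000:.0f} ms, decode {(t2 - t1) * 1000:.0f} ms')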

accuracy

  • The algorithms cannot guarantee to decode the original watermark with 100% accuracy, even when no attack is applied.
  • Known defects: tests show that none of the algorithms performs well on web page screenshots or posters with a homogeneous background color.

Supported Algorithms

  • dwtDct: DWT + DCT transform; embeds each watermark bit into the largest non-trivial coefficient of a block's DCT coefficients (a conceptual sketch of this frequency-domain idea follows this list)

  • dwtDctSvd: DWT + DCT transform followed by an SVD of each block; embeds each watermark bit into the singular values

  • rivaGan: an encoder/decoder model with an attention mechanism that embeds the watermark bits as a vector
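
To make the frequency-domain idea concrete, here is a minimal, self-contained sketch of parity-based (quantization-index style) embedding of a single bit into one DCT coefficient of a small block. It illustrates the general technique only and is not the library's exact implementation; the block size, coefficient position, and step size are assumptions chosen for the example.

# Illustrative sketch of frequency-domain bit embedding (not the library's code).
# A watermark bit is encoded in the parity of a quantized DCT coefficient.
import cv2
import numpy as np

SCALE = 36      # quantization step (assumed; echoes the scale=36 used in the tests below)
POS = (2, 2)    # which DCT coefficient to modify (illustrative choice)

def embed_bit(block, bit):
    """Embed one bit (0 or 1) into a float32 block, e.g. 4x4 pixels."""
    coeffs = cv2.dct(block)
    c = coeffs[POS]
    q = int(abs(c) / SCALE)
    if q % 2 != bit:                         # force the bin's parity to match the bit
        q += 1
    sign = 1.0 if c >= 0 else -1.0
    coeffs[POS] = sign * (q + 0.5) * SCALE   # move to the centre of the chosen bin
    return cv2.idct(coeffs)

def extract_bit(block):
    """Read the bit back from a (possibly slightly distorted) block."""
    c = cv2.dct(block)[POS]
    return int(abs(c) / SCALE) % 2

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    block = rng.uniform(0, 255, size=(4, 4)).astype(np.float32)
    for bit in (0, 1):
        assert extract_bit(embed_bit(block, bit)) == bit
    print('round-trip ok')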


How to install

pip install invisible-watermark

Library API

Embed watermark

  • Example: embed a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkEncoder

bgr = cv2.imread('test.png')
wm = 'test'

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', wm.encode('utf-8'))  # 4 bytes -> 32 bits
bgr_encoded = encoder.encode(bgr, 'dwtDct')         # embed with the default method

cv2.imwrite('test_wm.png', bgr_encoded)

Decode watermark

  • Example: decode a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

decoder = WatermarkDecoder('bytes', 32)    # expect a 32-bit watermark
watermark = decoder.decode(bgr, 'dwtDct')  # returns the raw watermark bytes
print(watermark.decode('utf-8'))
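
Because decoding is not guaranteed to be bit-exact (see the accuracy notes above), a result that is not valid UTF-8 will raise UnicodeDecodeError. A small defensive follow-up, reusing the watermark bytes from the example above:

# Compare the raw decoded bytes against the expected watermark instead of
# assuming the result is valid UTF-8; fall back to a hex dump otherwise.
expected = 'test'.encode('utf-8')
if watermark == expected:
    print('watermark verified:', watermark.decode('utf-8'))
else:
    print('watermark mismatch or absent:', watermark.hex())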

CLI Usage

embed watermark:  ./invisible-watermark -v -a encode -t bytes -m dwtDct -w 'hello' -o ./test_vectors/wm.png ./test_vectors/original.jpg

decode watermark: ./invisible-watermark -v -a decode -t bytes -m dwtDct -l 40 ./test_vectors/wm.png
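
The -l value is the watermark length in bits; a bytes watermark uses 8 bits per character, which is why the 5-character 'hello' embedded above is decoded with -l 40:

# bits needed for a bytes watermark
len('hello'.encode('utf-8')) * 8  # -> 40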

positional arguments:
  input                 The path of input

optional arguments:
  -h, --help            show this help message and exit
  -a ACTION, --action ACTION
                        encode|decode (default: None)
  -t TYPE, --type TYPE  bytes|b16|bits|uuid|ipv4 (default: bits)
  -m METHOD, --method METHOD
                        dwtDct|dwtDctSvd|rivaGan (default: maxDct)
  -w WATERMARK, --watermark WATERMARK
                        embedded string (default: )
  -l LENGTH, --length LENGTH
                        watermark bits length, required for bytes|b16|bits
                        watermark (default: 0)
  -o OUTPUT, --output OUTPUT
                        The path of output (default: None)
  -v, --verbose         print info (default: False)

Test Result

For readability, the images on this page are compressed; the tests were run on the original 1920x1080 images.

The methods are not robust to resizing or aspect-ratio-changing crops, but they are robust to noise, color filters, brightness changes, and JPEG compression.

rivaGan outperforms the default method under crop attacks.

Only the default method is fast enough for on-the-fly embedding.

Input

  • Input image: a 1920x1080 image
  • Watermark:
    • For the frequency methods, we embed 64 bits (the string "qingquan")
    • For the rivaGan method, we embed 32 bits (the string "qing")
  • Parameters: only the U channel is modified to preserve image quality; scale=36

Attack Performance

Watermarked Image

(watermarked image: wm)

Attack              Image            Freq Method   RivaGan
JPG Compress        wm_jpg           Pass          Pass
Noise               wm_noise         Pass          Pass
Brightness          wm_darken        Pass          Pass
Overlay             wm_overlay       Pass          Pass
Mask                wm_mask_large    Pass          Pass
Crop 7x5            wm_crop_7x5      Fail          Pass
Resize 50%          wm_resize_half   Fail          Fail
Rotate 30 degrees   wm_rotate        Fail          Fail

Running Speed (CPU Only)

Image       Method      Encoding      Decoding
1920x1080   dwtDct      300-350 ms    150-200 ms
1920x1080   dwtDctSvd   1.5-2 s       ~1 s
1920x1080   rivaGan     ~5 s          4-5 s
600x600     dwtDct      70 ms         60 ms
600x600     dwtDctSvd   185 ms        320 ms
600x600     rivaGan     1 s           600 ms

RivaGAN Experimental

We plan to release a 64-bit rivaGan model and to evaluate its performance in a GPU environment.

Detail: https://github.com/DAI-Lab/RivaGAN

Zhang, Kevin Alex; Xu, Lei; Cuesta-Infante, Alfredo; Veeramachaneni, Kalyan. Robust Invisible Video Watermarking with Attention. MIT EECS, September 2019.


Comments
  • Potentially performance issues

    When using an image larger than 1 MB, performance degrades quickly. What we observe is that with a 1920x1080 image the performance is great, but with a 9504x6336 image inside a container with 20 GB of RAM, after ~40 minutes the Flask service we put on top of the library crashes because the container is OOM. Is there a way to improve performance in this sense?

    opened by luca-simonetti 0
  • CLI decode doesn't work if the output image is JPG

    I'm trying to use the CLI and a Python script to generate a watermarked JPG, but the watermark decode doesn't show anything:

    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341.jpg" -v -a encode -t bytes -m dwtDct -w '1234' -o "F:\JPEG\_DSC5341-w.jpg"
    watermark length: 48
    encode time ms: 2819.3318843841553
    
    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341-w.jpg" -v -a decode -t bytes -m dwtDct -l 48
    decode time ms: 1944.9546337127686
    

    It's like there is no watermark embedded in it, unless I use a PNG as the output. I posted the images I'm using for test purposes.

    (attached: raw image test, watermarked image test_wm)

    opened by TheNemus 0
  • Example code not working

    Encoded the image, then decoding returned nothing, not "test" as expected.

    edit: tried with a different PNG image:

    Traceback (most recent call last):
      File "/home/me/whisper/decode.py", line 8, in <module>
        print(watermark.decode('utf-8'))
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

    opened by ClashSAN 0
  • How does this work?

    I think a quick blurb about how the watermarks implemented by this package work would be helpful. Is it the pixel rounding that I can read about here? https://invisiblewatermark.net/how-invisible-watermarks-work.html

    opened by kevinlinxc 0