
Only For You: Anti-Forwarding Stamps Trigger Data Privacy


Only For You: Deep Neural Anti-Forwarding Watermark Preserves Image Privacy

Xinghua Qu¹, Alvin Chan², Yew-Soon Ong³˒⁴, Pengfei Wei¹, Xiang Yin¹, Caishun Chen⁴, Zhu Sun⁴, Zejun Ma¹

¹ByteDance AI Lab  ²MIT  ³NTU  ⁴A*STAR

Introduction

In recent decades, messaging apps (e.g., Facebook Messenger, WhatsApp, WeChat, Snapchat) have grown exponentially, and a huge volume of private image sharing takes place on them daily. However, within these apps, unauthorised or malicious image forwarding among users poses significant threats to personal image privacy. In specific situations, we want to send private and confidential images (e.g., personal selfies) in an "only for you" manner. Given the limited existing studies on this topic, we propose, for the first time, the Deep Neural Anti-Forwarding Watermark (DeepRAFT), which enables media platforms to check and block any unauthorised forwarding of protected images by injecting non-fragile and invisible watermarks.



To this end, we jointly train a DeepRAFT encoder and scanner: the encoder embeds a confidentiality stamp into images as a watermark, and the scanner learns to detect it. To make the watermark robust and tamper-resistant, we employ a series of data augmentations (mounted on a stochastic concatenation process) and randomized smoothing (a scalable and certified defense) against both common image corruptions (e.g., rotation, cropping, color jitter, defocus blur, perspective warping, pixel noise, JPEG compression) and adversarial attacks (under both black-box and white-box settings). The training pipeline is shown below.
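The stochastic concatenation idea, i.e., applying a randomly chosen subset of augmentations in a random order, can be sketched as follows. This is an illustrative, self-contained toy (simple numeric functions stand in for the real differentiable image transforms); it is not the repository's actual implementation.

```python
import random

# Illustrative augmentation pool. In DeepRAFT these would be differentiable
# image transforms (rotation, cropping, color jitter, JPEG compression, ...);
# here each "augmentation" is a simple numeric function so the sketch runs
# standalone.
AUGMENTATIONS = [
    lambda x: x + 1,   # stand-in for e.g. pixel noise
    lambda x: x * 2,   # stand-in for e.g. color jitter
    lambda x: x - 3,   # stand-in for e.g. cropping
]

def stochastic_concat(x, pool=AUGMENTATIONS, keep_prob=0.5, rng=random):
    """Apply a random subset of augmentations in a random order."""
    chosen = [aug for aug in pool if rng.random() < keep_prob]
    rng.shuffle(chosen)  # randomize the composition order
    for aug in chosen:
        x = aug(x)
    return x

random.seed(0)
print(stochastic_concat(10.0))
```

Because both the subset and the order are resampled per image, the scanner sees a much wider distribution of corruptions than any fixed augmentation chain would produce.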



Installation

Install all dependencies via:

pip3 install -r requirements.txt
pip3 install git+https://github.com/fra31/auto-attack

DiffJPEG provides differentiable JPEG compression for data augmentation; AutoAttack is used for adversarial robustness evaluation.

Datasets

We use two datasets at 400×400 resolution. They can be downloaded via the commands below.

wget http://press.liacs.nl/mirflickr/mirflickr25k.v3b/mirflickr25k.zip

The MetFaces dataset can be obtained from GitHub. Note: the original MetFaces images are 1024×1024; you can use our Image_Processing.ipynb to process them into 400×400.



Run

Baseline training

python3 train.py --run_name your_name --max_step 200000

Train a smoothed scanner with randomized smoothing

python3 train_rs.py --run_name your_name --max_step 200000 --std 0.5
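Randomized smoothing turns the scanner's decision into a majority vote over Gaussian-perturbed copies of the input (the `--std` flag above sets the noise level). A minimal, self-contained sketch of the idea, with a toy 1-D "scanner" standing in for the real network:

```python
import random

def base_scanner(x):
    """Toy stand-in for the watermark scanner: 1 = watermark detected."""
    return 1 if x > 0.0 else 0

def smoothed_scanner(x, std=0.5, n_samples=100, rng=random):
    """Majority vote of the base scanner over Gaussian-perturbed inputs.

    This is the core of randomized smoothing; the full method additionally
    derives a certified robustness radius from the vote statistics.
    """
    votes = sum(base_scanner(x + rng.gauss(0.0, std)) for _ in range(n_samples))
    return 1 if votes * 2 > n_samples else 0

random.seed(0)
print(smoothed_scanner(2.0))  # an input far from the decision boundary stays detected
```

The smoothed prediction is stable under small input perturbations precisely because a single adversarial nudge cannot flip the majority of the noisy votes.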

Train an augmented and smoothed scanner, combining randomized smoothing with stochastic concatenation of data augmentations.

python3 train_rs_aug.py --run_name your_name --max_step 200000 --std 0.5 --aug_start 5000 --aug_end 100000

Watermark Invisibility Demos




Results

Table 1: Anti-forwarding watermark detection accuracy



Table 2: Subjective evaluation of watermark imperceptibility.



Table 3: Adversarial robustness evaluation against Auto-PGD, Square Attack, and adaptive AutoAttack.



Pre-trained models

We provide three types of pretrained models under the folder pretrained_models/.

pretrained_models/baseline/

pretrained_models/rs/

pretrained_models/rs_aug/