The Just Noticeable Robust Attack (JNRA) crafts adversarial perturbations that are both imperceptible to human observers and robust to image purification defenses. It uses a simple Just Noticeable Difference (JND) model to find the regions of an image where strong perturbations are least likely to be noticed, and concentrates the perturbation budget there.
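The JND weighting is only summarized above; as a purely illustrative sketch, the snippet below scales a signed-gradient step per pixel by a classic luminance-adaptation JND threshold. The JND formula, the helper names (`luminance_jnd`, `jnd_weighted_step`), and the step size are assumptions made for illustration, not the repository's actual implementation (see `jnra_protect.py` for that).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_jnd(gray):
    """Rough luminance-adaptation JND map (illustrative choice, not the repo's model).

    Visibility thresholds are higher in very dark and very bright regions, so the
    threshold is a U-shaped function of the local mean luminance (values in [0, 255]).
    """
    bg = uniform_filter(gray, size=5)                     # local background luminance
    return np.where(
        bg <= 127,
        17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,         # dark side of the curve
        3.0 / 128.0 * (bg - 127.0) + 3.0,                 # bright side of the curve
    )

def jnd_weighted_step(image, grad, base_step=1.0):
    """One perturbation step whose magnitude is scaled per pixel by the JND map."""
    gray = image.mean(axis=-1)                            # H x W luminance proxy
    jnd = luminance_jnd(gray)[..., None]                  # broadcast over RGB channels
    perturbed = image + base_step * jnd * np.sign(grad)   # bigger steps where less visible
    return np.clip(perturbed, 0.0, 255.0)
```

In the full attack this step would presumably sit inside an iterative optimization with a robustness objective; the JND weighting is the part that keeps the stronger perturbations in regions where they stay below the visibility threshold.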
- Clone the repo:
git clone https://github.com/simone-dotolo/jnra.git
- Create the conda environment:
conda env create -n jnra -f environment.yml
- To protect the artworks with JNRA:
python3 jnra_protect.py
- To purify the protected artworks (a JPEG purification sketch follows these steps):
python3 apply_purification.py --data_path data/wikiart_zdzislaw_beksinki/protected/jnra --purification jpeg --device cuda
- To finetune the generative model on the purified protected artworks:
python3 finetune.py --in_dir data/wikiart_zdzislaw_beksinki/protected_purified/jnra/jpeg/train --out_dir models/wikiart_zdzislaw-beksinki/protected_purified/jnra/jpeg
- To generate artworks with the finetuned model (a generation sketch follows these steps):
python3 generate.py --in_dir models/wikiart_zdzislaw-beksinki/protected_purified/jnra/jpeg --out_dir generated_images/wikiart_rene-magritte/protected_purified/jnra/jpeg --prompts prompts/wikiart_zdzislaw-beksinki.txt
- To evaluate the protection (an evaluation sketch follows these steps):
python3 evaluate.py --original_path generated_images/wikiart_rene-magritte/original --generated_path generated_images/wikiart_rene-magritte/protected_purified/jnra/jpeg --device cuda:0
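For orientation on the purification step: JPEG purification amounts to lossy re-encoding of each protected image, which removes much of the high-frequency adversarial signal. The sketch below is a minimal Pillow-based stand-in; the paths and quality value are placeholders, and `apply_purification.py` is the authoritative implementation.

```python
from pathlib import Path
from PIL import Image

def jpeg_purify(in_dir, out_dir, quality=75):
    """Re-encode every image in in_dir at a fixed JPEG quality (illustrative sketch)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(in_dir).iterdir()):
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        img = Image.open(path).convert("RGB")
        # Lossy re-encoding discards much of the high-frequency adversarial signal.
        img.save(out / f"{path.stem}.jpg", format="JPEG", quality=quality)

# Hypothetical usage with placeholder paths:
# jpeg_purify("protected/jnra", "protected_purified/jnra/jpeg", quality=75)
```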
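For the generation step: generate.py comes from the robust-style-mimicry scripts credited below. Assuming the finetuned model is stored in the Hugging Face diffusers format (an assumption, not stated in this README), prompt-file-driven generation looks roughly like this sketch:

```python
import os
import torch
from diffusers import StableDiffusionPipeline

def generate_from_prompts(model_dir, prompts_file, out_dir, device="cuda"):
    """Generate one image per prompt line with a finetuned diffusers checkpoint."""
    os.makedirs(out_dir, exist_ok=True)
    pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
    pipe = pipe.to(device)
    with open(prompts_file) as f:
        prompts = [line.strip() for line in f if line.strip()]
    for i, prompt in enumerate(prompts):
        image = pipe(prompt).images[0]                    # default sampler and step count
        image.save(os.path.join(out_dir, f"{i:04d}.png"))
```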
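For the evaluation step: the acknowledgments point to CMMD (CLIP Maximum Mean Discrepancy), which compares CLIP image embeddings of two image sets with an MMD statistic; evaluate.py presumably reports this or related metrics. The sketch below shows only that core comparison, with an illustrative CLIP checkpoint and kernel bandwidth rather than the values the actual tools use.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def clip_embeddings(paths, model, processor, device="cuda"):
    """L2-normalized CLIP image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def gaussian_mmd2(x, y, sigma=10.0):
    """Gaussian-kernel MMD^2 between two sets of embeddings (biased estimate)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()).item()

# Hypothetical usage (checkpoint choice is illustrative):
# model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to("cuda")
# processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# score = gaussian_mmd2(clip_embeddings(original_paths, model, processor),
#                       clip_embeddings(generated_paths, model, processor))
```

Loosely, a larger distance between generations from the clean-finetuned model and generations from the protected-and-purified-finetuned model suggests the protection still degrades style mimicry after purification.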
- Finetuning and generation scripts: https://github.com/ethz-spylab/robust-style-mimicry/
- DiffJPEG: https://github.com/mlomnitz/DiffJPEG
- CMMD: https://github.com/sayakpaul/cmmd-pytorch
Simone Dotolo - sim.dotolo@gmail.com



