[CVPR 2025] Official PyTorch Code for Spatial Transport Optimization by Repositioning Attention Map for Training-Free Text-to-Image Synthesis
Spatial Transport Optimization by Repositioning Attention Map for Training-Free Text-to-Image Synthesis (CVPR 2025)
Woojung Han, Yeonkyung Lee, Chanyoung Kim, Kwanghyun Park, Seong Jae Hwang
Yonsei University
Diffusion-based text-to-image (T2I) models have recently excelled in high-quality image generation, particularly in a training-free manner, enabling cost-effective adaptability and generalization across diverse tasks. However, while existing methods have focused on challenges such as "missing objects" and "mismatched attributes," another critical issue remains: "mislocated objects," where the spatial positions of generated objects fail to align with the text prompt. Surprisingly, ensuring such seemingly basic functionality remains challenging in popular T2I models due to the inherent difficulty of imposing explicit spatial guidance via text forms. To address this, we propose STORM (Spatial Transport Optimization by Repositioning Attention Map), a novel training-free approach for spatially coherent T2I synthesis. STORM employs Spatial Transport Optimization (STO), rooted in optimal transport theory, to dynamically adjust object attention maps for precise spatial adherence, supported by a Spatial Transport (ST) Cost function that enhances spatial understanding. Our analysis shows that integrating spatial awareness is most effective in the early denoising stages, while later phases refine details. Extensive experiments demonstrate that STORM surpasses existing methods, effectively mitigating mislocated objects while improving missing and mismatched attributes, setting a new benchmark for spatial alignment in T2I synthesis.
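To convey the core idea, below is a minimal, illustrative sketch of a transport-style cost on an attention map. This is not the repository's implementation: the function name `spatial_transport_cost`, the single-point target, the example coordinates, and the commented guidance step are assumptions for illustration only. It treats a token's cross-attention map as a mass distribution and measures the cost of transporting that mass to a target location implied by the prompt; the gradient of this cost can then steer the latents during the early denoising steps, where the paper finds spatial guidance most effective.

```python
import torch

def spatial_transport_cost(attn, target_xy):
    """Illustrative transport cost: the expected squared distance the
    attention mass must travel to reach a single target point (for a
    Dirac target, the optimal-transport cost has this closed form)."""
    H, W = attn.shape
    attn = attn / (attn.sum() + 1e-8)                # normalize to a probability mass
    ys = torch.linspace(0.0, 1.0, H, device=attn.device)
    xs = torch.linspace(0.0, 1.0, W, device=attn.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")   # normalized pixel coordinates
    ty, tx = target_xy
    dist2 = (gy - ty) ** 2 + (gx - tx) ** 2          # squared distance to the target
    return (attn * dist2).sum()

# Hypothetical guidance step inside the denoising loop (early steps only):
# loss = spatial_transport_cost(attn_cat, (0.5, 0.25)) \
#      + spatial_transport_cost(attn_dog, (0.5, 0.75))   # "cat left, dog right"
# latents = latents - step_size * torch.autograd.grad(loss, latents)[0]
```

The point-target case shown here is only the simplest instance; please refer to the paper for the actual ST Cost formulation and how targets are derived from the prompt.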
Our code builds on the requirements of the official Attend-and-Excite repository. To set up the environment, please run:
```bash
conda env create -f environment/environment.yaml
conda activate storm
python -m spacy download en_core_web_sm
```
On top of these, we add several requirements, which can be found in environment/requirements.txt. These requirements are installed by the command above.
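Optionally, you can sanity-check the environment with a quick, minimal script (it only touches packages installed in the steps above):

```python
# Minimal environment check: verifies the spaCy model from the setup step
# and reports whether CUDA is visible to PyTorch.
import spacy
import torch

nlp = spacy.load("en_core_web_sm")
print("spaCy model loaded:", nlp.meta["name"])
print("CUDA available:", torch.cuda.is_available())
```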
Example generations produced by Stable Diffusion with STORM.
To generate an image, you can simply run the run.py script. For example:

```bash
python run.py --prompt "a cat to the left of a blue dog" --seeds [0] --token_indices [[2,9],[None,8]]
```
Notes:
- To apply STORM on Stable Diffusion 2.1, specify `--sd_2_1 True`.
- You may run multiple seeds by passing a list of seeds, e.g., `--seeds [0,1,2,3]`.
- If you do not provide a list of token indices to alter via `--token_indices`, we will split the text according to Stable Diffusion's tokenizer and display the index of each token; you can then input the indices you wish to alter (see the tokenizer sketch after this list).
- If you wish to run the standard Stable Diffusion model without STORM, pass `--run_standard_sd True`.
- All parameters are defined in `config.py` and are set to their defaults according to the official paper.
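For reference, you can inspect the token indices for a prompt yourself. Below is a minimal sketch using the Hugging Face tokenizer of the v1-4 checkpoint (run.py performs an equivalent tokenization step internally):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="tokenizer"
)
prompt = "a cat to the left of a blue dog"
ids = tokenizer(prompt).input_ids
for idx, token in enumerate(tokenizer.convert_ids_to_tokens(ids)):
    print(idx, token)
# 0 <|startoftext|>, 1 a, 2 cat, ..., 8 blue, 9 dog, 10 <|endoftext|>
```

Here index 2 is "cat", index 9 is "dog", and index 8 is "blue", which matches the `--token_indices [[2,9],[None,8]]` in the example above.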
All generated images will be saved to `{config.output_path}/{prompt}/{seed}`. In the case of multiple seeds, we will also save a grid of all images under `config.output_path`.
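For example, assuming `config.output_path` defaults to `outputs` (see `config.py` for the actual value), running the example above with `--seeds [0,1]` would yield a layout along these lines (file names are illustrative):

```
outputs/
└── a cat to the left of a blue dog/
    ├── 0.png
    └── 1.png
```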
When loading the Stable Diffusion model, you can use torch.float16 in order to use less memory and attain faster inference:

```python
stable = StormPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to(device)
```

Note that this may result in a slight degradation of results in some cases.
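For completeness, here is a hypothetical end-to-end call. The import path and the `token_indices` keyword are assumptions that mirror the CLI flags and the Attend-and-Excite-style pipelines this code builds on; see run.py for the actual interface:

```python
import torch
from run import StormPipeline  # hypothetical import path; check the repository layout

device = "cuda"  # float16 inference generally requires a GPU
stable = StormPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to(device)

g = torch.Generator(device).manual_seed(0)  # reproducible seed, as with --seeds [0]
image = stable(
    prompt="a cat to the left of a blue dog",
    token_indices=[[2, 9], [None, 8]],      # hypothetical keyword; mirrors the CLI flag
    generator=g,
).images[0]
image.save("cat_left_of_dog.png")
```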
We provide Jupyter notebooks to reproduce the image-generation results from the paper:
- `notebooks/demo.ipynb` enables image generation using a free-form text prompt, with and without STORM.
This code builds on the diffusers library as well as the Prompt-to-Prompt codebase.
If you find this code useful, please cite the following paper:
```bibtex
@InProceedings{2025storm,
    author    = {Han, Woojung and Lee, Yeonkyung and Kim, Chanyoung and Park, Kwanghyun and Hwang, Seong Jae},
    title     = {Spatial Transport Optimization by Repositioning Attention Map for Training-Free Text-to-Image Synthesis},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2025}
}
```
