DDPM-Loc introduces a novel approach to wireless localization using Denoising Diffusion Probabilistic Models (DDPMs) for uncertainty-aware position estimation in challenging noise environments.
- To-Do: real-world experiments
- To-Do: acceleration techniques for diffusion model
- To-Do: noise distribution shift (generalization)
- 2025-12-21: Updated several experiments based on the reviewers' comments
Wireless localization is a critical technology for IoT systems, robotics, and mobile networks. Traditional methods often lack uncertainty quantification and degrade under high-noise conditions. This project presents DDPM-Loc, a diffusion-based approach that leverages the generative capabilities of DDPMs to provide robust, uncertainty-aware localization.
- Novel DDPM Architecture: First application of denoising diffusion models to wireless localization problems
- Uncertainty Quantification: Probabilistic localization with uncertainty estimates through multiple sampling
- Robust Performance: Superior accuracy across various noise conditions compared to traditional methods
- Comprehensive Evaluation: Extensive comparison with MLE, GCN, MLP, and U-Net baselines
- Multiple Noise Models: Support for Gaussian, Rayleigh, and mixture noise distributions (see the sketch after this list)
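For concreteness, here is a minimal sketch of how noisy distance measurements might be generated under the three noise types. It is numpy-based and purely illustrative: the function and parameter names are not the repository's API, and the mixture weights are assumed to be 0.5/0.5.

```python
import numpy as np

def noisy_distances(agents, anchors, noise_type="rayleigh", sigma=0.1, rng=None):
    """Illustrative generator of noisy agent-anchor distance measurements.

    agents:  (N, 2) agent positions
    anchors: (M, 2) anchor positions
    Returns an (N, M) matrix of noisy distances.
    """
    rng = rng or np.random.default_rng()
    # True Euclidean distances between every agent and every anchor
    d = np.linalg.norm(agents[:, None, :] - anchors[None, :, :], axis=-1)
    if noise_type == "normal":        # zero-mean Gaussian range error
        noise = rng.normal(0.0, sigma, d.shape)
    elif noise_type == "rayleigh":    # positively biased, multipath-like error
        noise = rng.rayleigh(sigma, d.shape)
    elif noise_type == "mixture":     # Gaussian/Rayleigh mixture (assumed 50/50)
        mask = rng.random(d.shape) < 0.5
        noise = np.where(mask,
                         rng.normal(0.0, sigma, d.shape),
                         rng.rayleigh(sigma, d.shape))
    else:
        raise ValueError(f"unknown noise type: {noise_type}")
    return d + noise
```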
DDPM-Loc treats localization as a generative problem: the model learns to denoise position estimates conditioned on noisy distance measurements (see the sampling sketch after this list). The diffusion process allows for:
- Iterative Refinement: Progressive denoising from random noise to accurate positions
- Conditional Generation: Position estimation conditioned on measurement constraints
- Uncertainty Modeling: Multiple sampling provides uncertainty estimates
- Noise Robustness: Better handling of complex noise distributions
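A minimal sketch of the reverse (sampling) process, assuming a trained noise-prediction network `eps_model(x_t, t, cond)` and a standard linear beta schedule; all names and hyperparameters here are illustrative, not the repository's API:

```python
import torch

@torch.no_grad()
def sample_positions(eps_model, cond, num_agents, T=1000, device="cpu"):
    """Minimal DDPM reverse process for 2-D position estimation.

    eps_model(x_t, t, cond) -> predicted noise; `cond` encodes the noisy
    distance measurements that the sampler is conditioned on.
    """
    # Standard linear beta schedule (illustrative hyperparameters)
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(num_agents, 2, device=device)  # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((num_agents,), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch, cond)          # predict the added noise
        # DDPM posterior mean for x_{t-1}
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean
    return x  # one plausible set of positions; resample for uncertainty
```

Running this sampler multiple times with the same `cond` yields a cloud of position estimates whose spread serves as the uncertainty estimate.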
| Method | Description |
|---|---|
| DDPM | Denoising Diffusion Probabilistic Model |
| GCN | Graph Convolutional Network (transductive) |
| GraphSAGE | Graph SAmple and aggreGatE (inductive) |
| GAT | Graph Attention Network |
| MLP | Multi-Layer Perceptron |
| U-Net | U-Net Architecture |
| MLE | Maximum Likelihood Estimation |
| MDS | Multi-dimensional Scaling |
| SMILE | Sparse Matrix Inference and Linear Embedding |
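For concreteness, the MLE baseline above reduces, under i.i.d. Gaussian range noise, to a nonlinear least-squares problem over the agent position. A minimal sketch (assuming scipy; not the repository's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def mle_localize(anchors, dists, x0=None):
    """Range-based MLE for a single agent under i.i.d. Gaussian noise:
    minimize sum_i (||x - a_i|| - d_i)^2 over the 2-D position x.

    anchors: (M, 2) anchor positions; dists: (M,) noisy range measurements.
    """
    def residuals(x):
        return np.linalg.norm(anchors - x, axis=1) - dists
    x0 = x0 if x0 is not None else anchors.mean(axis=0)  # crude initial guess
    return least_squares(residuals, x0).x
```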
- Python 3.10 or higher
- CUDA-compatible GPU (recommended for training)
```bash
# Clone the repository
git clone https://github.com/juntaowang99/ddpm-localization.git
cd ddpm-localization

# Create conda environment (recommended)
conda create -n ddpm-loc python=3.10
conda activate ddpm-loc

# Install dependencies
pip install -r requirements.txt

# Verify installation
python -c "import ddpm_loc; print('DDPM-Loc installed successfully')"
```

Default configuration: 1000 agents, 50 anchors, Rayleigh noise ($\sigma=0.1$), 2000 training epochs, and a learning rate of 1e-3. For complete parameter details, see `evaluate.py` or run `python evaluate.py --help`.
Evaluate any localization method by replacing `METHOD_NAME` with one of `ddpm`, `gcn`, `graphsage`, `gat`, `mlp`, `unet`, `mle`, `mds`, or `smile`:
```bash
python evaluate.py --method METHOD_NAME
```

To plot and save figures (including dataset visualization, training curves, testing results, and the sampling process for DDPM-Loc), run:

```bash
python evaluate.py --method METHOD_NAME --plot --save-plots
```

Example:
```bash
python evaluate.py \
    --method ddpm \
    --num-samples 5000 \
    --num-anchors 100 \
    --noise-level 0.5 \
    --noise-type mixture \
    --epochs 3000 \
    --learning-rate 5e-4
```

For probabilistic localization using DDPM-Loc, run:
```bash
python evaluate.py --method ddpm --prob-loc
```

To save the figure showing localization results with uncertainty ellipses, run:
```bash
python evaluate.py --method ddpm --prob-loc --plot --save-plots
```

Since the default configuration uses 1000 testing agents, the saved figure with uncertainty ellipses may appear dense. To create clearer visualizations, you can specify a custom number of testing agents (half the total number of samples); a sketch of how such ellipses can be computed follows the example below:

```bash
# 50 testing agents
python evaluate.py --method ddpm --num-samples 100 --prob-loc --plot --save-plots
```
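As referenced above, here is a minimal sketch of how an uncertainty ellipse can be derived from repeated DDPM samples for one agent (numpy-based and illustrative; not the repository's plotting code):

```python
import numpy as np

def uncertainty_ellipse(samples, confidence=0.95):
    """Fit a Gaussian confidence ellipse to repeated DDPM position samples.

    samples: (K, 2) array of K sampled positions for one agent.
    Returns (mean, full axis lengths, rotation angle in radians).
    """
    mean = samples.mean(axis=0)
    cov = np.cov(samples.T)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    # Chi-square quantile with 2 DoF: -2 ln(1 - p); 5.991 at 95% confidence
    chi2 = -2.0 * np.log(1.0 - confidence)
    widths = 2.0 * np.sqrt(chi2 * np.clip(eigvals, 0.0, None))
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis orientation
    return mean, widths, angle
```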
To run comprehensive comparison experiments across all methods and noise levels, replace `NOISE_TYPE` with `normal`, `rayleigh`, or `mixture` and run:

```bash
python run_comparison.py --noise-type NOISE_TYPE --num-runs 10
```

Note: this comparison runs 10 trials by default (configurable via `--num-runs`) and may take a long time to complete (~50 minutes on an NVIDIA V100).
The following table shows RMSE performance across different noise levels (averaged over 10 runs):
We also evaluate alternative layouts of anchors and agents (a layout-generation sketch follows the table):
| | Anchors in Grid Layout | Anchors in Random Layout |
|---|---|---|
| Agents in Spiral Layout | ![]() | ![]() |
| Agents in Circle Layout | ![]() | ![]() |
| Agents in Cluster Layout | ![]() | ![]() |
| Agents in Grid Layout | ![]() | ![]() |
| Agents in Random Layout | ![]() | ![]() |
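As referenced above, one plausible way such agent layouts could be generated on the unit square (illustrative only; the repository's generators may differ):

```python
import numpy as np

def agent_layout(kind, n, rng=None):
    """Illustrative generators for the agent layouts shown above."""
    rng = rng or np.random.default_rng()
    if kind == "random":
        return rng.random((n, 2))
    if kind == "grid":
        side = int(np.ceil(np.sqrt(n)))
        g = np.linspace(0.05, 0.95, side)
        return np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)[:n]
    if kind == "circle":
        theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
        return 0.5 + 0.4 * np.stack([np.cos(theta), np.sin(theta)], -1)
    if kind == "spiral":
        theta = np.linspace(0, 4 * np.pi, n)
        r = 0.45 * theta / theta.max()               # radius grows with angle
        return 0.5 + np.stack([r * np.cos(theta), r * np.sin(theta)], -1)
    if kind == "cluster":
        centers = rng.random((4, 2))                 # e.g. four Gaussian clusters
        idx = rng.integers(0, 4, n)
        return np.clip(centers[idx] + 0.05 * rng.standard_normal((n, 2)), 0, 1)
    raise ValueError(f"unknown layout: {kind}")
```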
The following table shows average runtimes over 10 runs, measured on an Intel Xeon Gold 6278C @ 2.60GHz (CPU) and an NVIDIA Tesla V100 (GPU):
| Method | Avg Train Time (s) | Avg Inference Time (ms/agent) |
|---|---|---|
| MLE | 0.0000 | 3.1593 |
| MDS | 0.0000 | 8.7886 |
| SMILE | 0.0000 | 26.1705 |
| MLP | 6.3996 | 0.0012 |
| U-Net | 7.2226 | 0.0014 |
| GCN | 7.1624 | 0.0542 |
| GraphSAGE | 7.0163 | 0.0601 |
| GAT | 35.7039 | 0.0684 |
| DDPM | 8.3139 | 0.9393 |
While DDPM-Loc (0.9393 ms/agent) is slower than single-pass networks (e.g., MLP at 0.0012 ms/agent, GCN at 0.0542 ms/agent) due to its 1000-step denoising process, it is roughly 3x to 28x faster than the traditional methods (e.g., MLE at 3.1593 ms/agent, MDS at 8.7886 ms/agent, SMILE at 26.1705 ms/agent). We believe ~1 ms/agent is feasible for many applications.
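One standard way to cut this cost, in line with the acceleration item in the To-Do list above, is to sample on a strided subsequence of timesteps (DDIM-style, deterministic with eta = 0). At the table's ~1 us per network call (0.9393 ms / 1000 steps), 50 steps would project to roughly 0.05 ms/agent. A hedged sketch reusing the illustrative `eps_model` interface from the sampling sketch above; this is not the repository's sampler:

```python
import torch

@torch.no_grad()
def strided_sample(eps_model, cond, num_agents, T=1000, steps=50, device="cpu"):
    """DDIM-style deterministic sampling on a strided timestep subsequence."""
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    ts = torch.linspace(T - 1, 0, steps, device=device).long()

    x = torch.randn(num_agents, 2, device=device)
    for i, t in enumerate(ts):
        ab_t = alpha_bars[t]
        ab_prev = (alpha_bars[ts[i + 1]] if i + 1 < steps
                   else torch.tensor(1.0, device=device))
        eps = eps_model(x, t.expand(num_agents), cond)
        # Predict x_0, then jump directly to the previous kept timestep
        x0 = (x - torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(ab_t)
        x = torch.sqrt(ab_prev) * x0 + torch.sqrt(1.0 - ab_prev) * eps
    return x
```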
If you find this work useful in your research, please consider citing it:

```bibtex
@misc{ddpm-loc,
  author       = {Wang, Juntao and Li, Mengyuan and Yin, Feng and Pu, Wenqiang},
  title        = {Uncertainty-Aware Wireless Localization with Diffusion Models},
  howpublished = {\url{https://github.com/juntaowang99/ddpm-localization/}},
  year         = {2025},
}
```

This project is licensed under the MIT License; see the LICENSE file for details.