
LenslessInfoDesign: Designing lensless imaging systems to maximize information capture

arXiv: https://arxiv.org/abs/2506.08513

About

This repository contains code for the paper "Designing lensless imaging systems to maximize information capture."

All Jupyter notebooks can be viewed without executing them, so readers who cannot or prefer not to run the code can still follow the results.

Installation

Please recursively clone this Git repository to include the EncodingInformation submodule:

git clone --recurse-submodules https://github.com/lakabuli/LenslessInfoDesign.git

Alternatively, EncodingInformation can be manually cloned from https://github.com/Waller-Lab/EncodingInformation.
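
If the repository was already cloned without the --recurse-submodules flag, the submodule can also be initialized in place. A minimal sketch using standard Git commands, run from the repository root:

cd LenslessInfoDesign
git submodule update --init --recursive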

See the EncodingInformation Installation Guide for environment setup instructions via pip.
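
As a rough sketch of the environment setup (assuming a standard editable pip install of the submodule; the package name and exact steps may differ, so defer to the EncodingInformation Installation Guide for the authoritative instructions and dependency versions):

# Hypothetical setup sketch; follow the official guide if it differs.
cd EncodingInformation
pip install -e .
cd ..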

System Requirements: All experiments were run on a Linux server with a single Nvidia RTX A6000 GPU.

Repository Structure

LenslessInfoDesign/
├── design_IDEAL/                  # Sec. 4: Information-Optimal Encoder Design
│   ├── optimization_scripts/      # PSF optimization scripts
│   ├── extended_fov_scripts/      # Extended field-of-view reconstruction
│   └── data/                      # Optimized PSFs, models, and estimates
├── experimental_eval/             # Sec. 5: Experimental Information Evaluation
│   └── data/                      # Reconstructions and estimates
├── tradeoff_analysis/             # Sec. 3: Quantifying Sparsity and Multiplexing Tradeoffs
│   ├── mi_sweep_scripts/          # Scripts for mutual information estimation sweeps
│   ├── tc_sweep_scripts/          # Scripts for Tamura coefficient sweeps
│   ├── mi_estimates/              # Mutual information estimates
│   └── tc_values/                 # Tamura coefficient values
└── figures/                       # Generated figure components

Model Availability

Sec. 5 (Experimental Information Evaluation) uses a data-driven reconstruction algorithm, specifically a ConvNeXt model from Ponomarenko et al. [1], generously provided by Vasilisa Ponomarenko.

[1] V. Ponomarenko, L. Kabuli, E. Markley, C. Hung, and L. Waller, "Phase-mask-based lensless image reconstruction using self-attention," Proc. SPIE PC13333 (2024). https://doi.org/10.1117/12.3042497

The trained models for RML and DiffuserCam reconstructions using ConvNeXt are available from Google Drive. Download scripts are available in the relevant directories.
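
As an illustration only, a manual download might look like the following, assuming the gdown utility; the Google Drive file ID and output path below are placeholders, and the actual values are set by the provided download scripts:

# Hypothetical example; replace <GOOGLE_DRIVE_FILE_ID> and the output path
# with the values used by the repository's download scripts.
pip install gdown
gdown <GOOGLE_DRIVE_FILE_ID> -O experimental_eval/data/convnext_model.pt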

Data Availability

Sec. 5 (Experimental Information Evaluation) uses parallel lensless imaging system measurements captured for images from the MIRFLICKR-25000 dataset. The training data are available at the parallel lensless dataset website.

Photon count calibration data for Fig. 4c are also available at the parallel lensless dataset website.

Mutual information estimates and other data necessary for reproducing figures are available in the corresponding directories within this repository.

Paper

@article{kabuli2025lensless,
  doi = {10.48550/arXiv.2506.08513},
  url = {https://arxiv.org/abs/2506.08513},
  author = {Kabuli, Leyla A. and Pinkard, Henry and Markley, Eric and Hung, Clara S. and Waller, Laura},
  title = {Designing lensless imaging systems to maximize information capture},
  journal = {arXiv preprint arXiv:2506.08513},
  publisher = {arXiv},
  year = {2025}
}

Contact

Please reach out to lakabuli@berkeley.edu with any questions or concerns.
