[WACV'26] Inpaint360GS: Efficient Object-Aware 3D Inpainting via Gaussian Splatting for 360° Scenes


Shaoxiang Wang · Shihong Zhang · Christen Millerdurai · Rüdiger Westermann · Didier Stricker · Alain Pagani

WACV 2026

Logo

Inpaint360GS performs flexible, object-aware 3D inpainting in 360° unbounded scenes — not only for individual objects, but also for complex multi-object environments.

📅 News

- 2026.01.19 Dataset, results, and code released.

- 2025.09.05 Accepted at WACV 2026 in the Round 1 Algorithm Track.

Overall Running Steps

First, download and unzip the crowd sequence of the Inpaint360GS dataset, then follow the three stages below:

1. Training Object-aware Gaussians

2. Selecting and Removing Objects

3. Generating 2D Inpainted Color and Depth & 3D Inpainting
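The three stages map onto the shell scripts used throughout this README (`run_seg.sh`, `run_remove.sh`, `run_inpaint.sh`). As a sketch, a hypothetical Python driver (the `dry_run` wrapper is illustrative, not part of the released code) could chain them:

```python
import subprocess

# The three pipeline stages, in order (script names as used in this README).
PIPELINE = [
    ["bash", "run_seg.sh"],      # 1. train object-aware Gaussians
    ["bash", "run_remove.sh"],   # 2. select and remove objects
    ["bash", "run_inpaint.sh"],  # 3. 2D inpainting + 3D Gaussian optimization
]

def run_pipeline(dry_run=False):
    """Run the three stages in sequence; with dry_run, only return the commands."""
    commands = [" ".join(cmd) for cmd in PIPELINE]
    if dry_run:
        return commands
    for cmd in PIPELINE:
        subprocess.run(cmd, check=True)  # abort on the first failing stage
    return commands
```

Each stage must be run with matching scene settings; see the per-stage arguments below.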

📂 Dataset Structure

1. The Inpaint360GS dataset has the following structure:

  - data/
    - {inpaint360}/
      - {scene_name_1}/
        - images/
          - IMG_0001.JPG
          - IMG_0002.JPG
          - ...
          - IMG_0050.JPG
          - test_IMG_0051.JPG
          - test_IMG_0052.JPG
          - ...
          - test_IMG_0100.JPG
        - sparse/0
      - {scene_name_2}

All images in a scene share the same camera intrinsics and extrinsics. Images named test_IMG_xxxx.JPG were captured after object removal and serve as input for the inpainting evaluation.
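The naming convention above can be used to split a scene into training and evaluation views. A minimal sketch (the helper name `split_views` is hypothetical, not part of the released code):

```python
from pathlib import Path

def split_views(image_dir):
    """Split scene images into training and evaluation views.

    Images prefixed with ``test_`` are post-removal captures used only
    for inpainting evaluation; all other images are training views.
    """
    train, test = [], []
    for path in sorted(Path(image_dir).glob("*.JPG")):
        if path.name.startswith("test_"):
            test.append(path.name)
        else:
            train.append(path.name)
    return train, test
```

Sorting by filename keeps the views in capture order within each split.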

2. Run on your own dataset:

Your scene should be structured as follows:
  - data/
    - {your_scene_name}/
      - train_and_test/
        - input/
          - IMG_0001.JPG
          - IMG_0002.JPG
          - ...
          - IMG_0050.JPG
          - test_IMG_0051.JPG
          - test_IMG_0052.JPG
          - ...
          - test_IMG_0100.JPG

Follow the image naming convention described above, then run:

bash scripts/run_data_prepare.sh

Installation

You can refer to the install document to build the Python environment.

🚀 Quick Start (Demo)

Download dataset

First, download the inpaint360gs dataset (and, optionally, the others datasets) into the ./data folder.

bash scripts/download_inpaint360gs_dataset.sh

# Optional: download the precomputed Inpaint360GS results
bash scripts/download_inpaint360gs_result.sh

1. Training Object-aware Gaussians

Run the segmentation and initial training script:

bash run_seg.sh [scene configuration arguments]

The list of optional arguments is provided below:

| Argument | Values | Default | Description |
| --- | --- | --- | --- |
| `--dataset_name` | `"inpaint360gs"` or `"others"` | `"inpaint360gs"` | — |
| `--scene` | `"doppelherz"`, `"toys"`, `"fruits"`, ... | `"doppelherz"` | — |
| `--resolution` | 1, 2, 4, 8 | 2 | Adjust according to GPU memory. |
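The argument set above can be sketched as an argparse parser. This is a hypothetical reconstruction for illustration; the released scripts may parse these options differently:

```python
import argparse

def build_parser():
    """Argument set from the table above (sketch; the released code may differ)."""
    p = argparse.ArgumentParser(description="Stage 1: train object-aware Gaussians")
    p.add_argument("--dataset_name", choices=["inpaint360gs", "others"],
                   default="inpaint360gs")
    p.add_argument("--scene", default="doppelherz",
                   help='scene name, e.g. "doppelherz", "toys", "fruits"')
    p.add_argument("--resolution", type=int, choices=[1, 2, 4, 8], default=2,
                   help="image downsampling factor; increase if GPU memory is tight")
    return p
```

Restricting `--resolution` to the listed choices makes invalid values fail fast instead of producing a confusing training error.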

2. Selecting and Removing Objects

Remove the target objects (and, in multi-object scenes, their surrounding objects), then generate virtual camera poses:

bash run_remove.sh [scene configuration arguments]

Note: This stage must use the same dataset_name, scene, and resolution as the run_seg.sh stage. Refer to the images_{resolution}_num folder to identify the object IDs.

The list of specific arguments for removal is provided below:

| Argument | Values | Description |
| --- | --- | --- |
| `--target_id` | e.g., `"1,2,3"` | IDs of objects to be permanently removed. |
| `--target_surronding_id` | e.g., `"7,9"` | IDs of nearby objects that occlude the target in 2D views; they are removed temporarily and restored automatically during inpainting. |
| `--select_obj_id` | (automatic) | Does not need to be given; the union of `target_id` and `target_surronding_id`. |
| `--removal_thresh` | 0.0–1.0 | Default: 0.7. Probability threshold for removal; lower this value if object edges are not cleanly removed. |
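To illustrate the semantics of these arguments: `select_obj_id` is the union of the two ID lists, and objects are removed only when their removal probability exceeds `removal_thresh`. A sketch, with hypothetical function names:

```python
def parse_ids(s):
    """Parse a comma-separated ID string like "1,2,3" into a sorted list of ints."""
    return sorted(int(tok) for tok in s.split(",") if tok.strip())

def select_obj_ids(target_id, target_surronding_id):
    """Union of target and surrounding object IDs (what --select_obj_id holds)."""
    return sorted(set(parse_ids(target_id)) | set(parse_ids(target_surronding_id)))

def to_remove(obj_probs, ids, removal_thresh=0.7):
    """Keep only object IDs whose removal probability exceeds the threshold.

    Lowering removal_thresh removes more of each object, which helps
    when object edges are not cleanly removed.
    """
    return [i for i in ids if obj_probs.get(i, 0.0) > removal_thresh]
```

With `target_id="1,2,3"` and `target_surronding_id="7,9"`, the selected set is `[1, 2, 3, 7, 9]`.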

Workflow Note: At the end of this stage, we utilize Segment-and-Track-Anything to manually select/refine the specific regions for inpainting.

Demo video: SAMT_demo.mp4

3. Generating 2D Inpainted Color and Depth & 3D Inpainting

# This step performs 2D inpainting using LaMa and subsequently optimizes the 3D Gaussian Splatting model to fill the missing regions.
bash run_inpaint.sh [scene configuration arguments]

This stage must use the same dataset_name, scene, and resolution as the run_seg.sh stage.

FAQ: What should I do if the LaMa inpainting result is poor (e.g., the inpainted area appears too dark or inconsistent)? This usually happens because LaMa relies on context from the pixels surrounding the mask: if the mask is too tight, the model lacks sufficient context to reproduce proper texture and lighting. Dilate the mask to include more surrounding context: in tools/prepare_lama_data.py, increase the expand_pixels parameter (default: 10) of the enlarge function, e.g., to 20 or 30.
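The dilation described above can be sketched in pure Python. The actual tools/prepare_lama_data.py presumably uses an image-processing library; only the `expand_pixels` name is taken from this README:

```python
def dilate_mask(mask, expand_pixels):
    """Grow a binary mask (list of 0/1 rows) by expand_pixels, 4-neighbour dilation.

    A larger mask gives LaMa more surrounding context to inpaint from,
    which helps when results look too dark or inconsistent.
    """
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for _ in range(expand_pixels):  # one iteration grows the mask by one pixel
        cur = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if cur[y][x]:
                    continue
                # mark this pixel if any 4-neighbour is inside the mask
                if ((y > 0 and cur[y - 1][x]) or (y + 1 < h and cur[y + 1][x])
                        or (x > 0 and cur[y][x - 1]) or (x + 1 < w and cur[y][x + 1])):
                    out[y][x] = 1
    return out
```

With `expand_pixels=1`, a single masked pixel grows into a cross of five pixels; each additional iteration extends the mask boundary by one more pixel.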

Contact

You can contact the author by email: shaoxiang.wang@dfki.de.

📖 Citing

If you find our work useful, please consider citing:

@misc{wang2025inpaint360gsefficientobjectaware3d,
      title={Inpaint360GS: Efficient Object-Aware 3D Inpainting via Gaussian Splatting for 360{\deg} Scenes}, 
      author={Shaoxiang Wang and Shihong Zhang and Christen Millerdurai and Rüdiger Westermann and Didier Stricker and Alain Pagani},
      year={2025},
      eprint={2511.06457},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.06457}, 
}

Limitations and Future Work

An important limitation of the current framework is the absence of explicit environmental light modeling. Integrating ray tracing-based illumination could improve visual fidelity, especially in scenes with shallow or complex depth distributions. In addition, future work includes investigating depth-guided feed-forward inpainting methods to support faster and more interactive 3D scene editing, potentially enabling real-time applications.

💖 Acknowledgement

We adapted code from several awesome repositories, including Gaussian Grouping, InFusion, Gaga, and Uni-SLAM. We sincerely thank the authors for releasing their implementations to the community.

This work has been partially supported by the EU projects CORTEX2 (GA No. 101070192) and LUMINOUS (GA No. 101135724), as well as by the German Research Foundation (DFG, GA No. 564809505). Special thanks to Shihong Zhang for his contributions during his Master's thesis at DFKI!

