diff --git a/README.md b/README.md
index b9ced4a..c0ee580 100755
--- a/README.md
+++ b/README.md
@@ -40,6 +40,7 @@ _Our method introduces a novel differentiable mesh extraction framework that ope
- ⬛ Implement a simple training viewer using the GraphDeco viewer.
- ⬛ Add the mesh-based rendering evaluation scripts in `./milo/eval/mesh_nvs`.
+- ✅ Add DTU training and evaluation scripts.
- ✅ Add low-res and very-low-res training for light output meshes (under 50MB and under 20MB).
- ✅ Add T&T evaluation scripts in `./milo/eval/tnt/`.
- ✅ Add Blender add-on (for mesh-based editing and animation) to the repo.
@@ -244,6 +245,9 @@ with `--sampling_factor 0.1`, for instance.
Please refer to the DepthAnythingV2 repo to download the `vitl` checkpoint required for Depth-Order regularization. Then, move the checkpoint file to `./submodules/Depth-Anything-V2/checkpoints/`.
+You can also use the `train_regular_densification.py` script instead of `train.py` to replace the fast densification from Mini-Splatting2 with a more traditional densification strategy for Gaussians, as used in [Gaussian Opacity Fields](https://github.com/autonomousvision/gaussian-opacity-fields/tree/main) and [RaDe-GS](https://baowenz.github.io/radegs/).
+By default, this script uses its own dedicated config, equivalent to passing `--mesh_config default_regular_densification`.
+
### Example Commands
Basic training for indoor scenes with logging:
@@ -271,6 +275,11 @@ Training with depth-order regularization:
python train.py -s -m --imp_metric indoor --rasterizer radegs --depth_order --depth_order_config strong --log_interval 200 --data_device cpu
```
+Training with a traditional, slower densification strategy for Gaussians:
+```bash
+python train_regular_densification.py -s -m --imp_metric indoor --rasterizer radegs --log_interval 200 --data_device cpu
+```
+
## 3. Extracting a Mesh after Optimization
@@ -329,6 +338,19 @@ python mesh_extract_integration.py \
The mesh will be saved at either `/mesh_integration_sdf.ply` or `/mesh_depth_fusion_sdf.ply` depending on the SDF computation method.
+### 3.3. Using a regular, non-scalable TSDF
+
+We also provide a script to extract a mesh using traditional TSDF fusion on a regular voxel grid. This script is heavily inspired by the excellent [2D Gaussian Splatting](https://github.com/hbb1/2d-gaussian-splatting). Note that this extraction process does not scale to unbounded real scenes with background geometry.
+```bash
+python mesh_extract_regular_tsdf.py \
+ -s \
+ -m \
+ --rasterizer radegs \
+ --mesh_res 1024
+```
+
+The mesh will be saved at `/mesh_regular_tsdf_res.ply`. A cleaned version of the mesh will be saved at `/mesh_regular_tsdf_res_post.ply`, following 2DGS's postprocessing.
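At its core, TSDF fusion on a regular grid is a per-voxel weighted running average of truncated signed-distance observations. A minimal pure-Python sketch of that update (parameter names are illustrative, not the script's actual API):

```python
def tsdf_update(tsdf, weight, observed_sdf, trunc):
    """Fuse one new signed-distance observation into a voxel using the
    standard weighted running average (Curless & Levoy, 1996).
    `tsdf` is the current truncated SDF in [-1, 1], `weight` the
    accumulated observation count, `trunc` the truncation distance."""
    # Clamp the observed distance to the truncation band [-1, 1].
    d = max(-1.0, min(1.0, observed_sdf / trunc))
    new_weight = weight + 1
    new_tsdf = (tsdf * weight + d) / new_weight
    return new_tsdf, new_weight
```

Observations from opposite sides of a surface average toward a TSDF of zero, which is the level set that marching cubes later extracts as the mesh.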
+
## 4. Using our differentiable Gaussians-to-Mesh pipeline in your own 3DGS project
@@ -562,6 +584,8 @@ If you get artifacts in the rendering, you can try to play with the various foll
Click here to see content.
+### Tanks and Temples
+
For evaluation, please start by downloading [our COLMAP runs for the Tanks and Temples dataset](https://drive.google.com/drive/folders/1Bf7DM2DFtQe4J63bEFLceEycNf4qTcqm?usp=sharing), and make sure to move all COLMAP scene directories (Barn, Caterpillar, _etc._) inside the same directory.
Then, please download ground truth point cloud, camera poses, alignments and cropfiles from [Tanks and Temples dataset](https://www.tanksandtemples.org/download/). The ground truth dataset should be organized as:
@@ -637,6 +661,36 @@ python render.py \
python metrics.py -m # Compute error metrics on renderings
```
+### DTU
+
+MILo is designed for maximum scalability, allowing the reconstruction of full scenes including background elements. We optimized our method and hyperparameters to strike a balance between performance and scalability.
+
+However, we also evaluate MILo on small object-centric scenes from the DTU dataset, to verify that our mesh-in-the-loop regularization does not hurt performance in highly controlled scenarios.
+
+For these smaller scenes, the aggressive densification strategy from Mini-Splatting2 is unnecessary. Instead, we use the traditional progressive densification strategy proposed in [GOF](https://github.com/autonomousvision/gaussian-opacity-fields/tree/main) and [RaDe-GS](https://baowenz.github.io/radegs/), which is better suited for highly controlled scenarios.
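The traditional densify-and-split rule referenced above can be illustrated with a minimal pure-Python sketch (the threshold values follow the original 3DGS paper and are assumptions, not the exact values used by this repo):

```python
def densify_decision(grad_norm, scale,
                     grad_threshold=0.0002, scale_threshold=0.01):
    """Classic 3DGS densification rule: Gaussians with a high view-space
    positional gradient are cloned if small, or split if large."""
    if grad_norm < grad_threshold:
        return "keep"
    return "split" if scale > scale_threshold else "clone"


# Applied over a batch of (gradient, scale) pairs between densification steps:
decisions = [densify_decision(g, s)
             for g, s in [(0.001, 0.02), (0.001, 0.005), (0.00005, 0.02)]]
```

Unlike Mini-Splatting2's aggressive scheme, this rule grows the Gaussian set gradually over many iterations, which suits small, well-constrained captures.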
+
+Similarly, since DTU scans focus on small objects of interest without background reconstruction, we employ a regular grid for mesh extraction after training (similar to GOF and RaDe-GS) rather than our scalable extraction method.
+
+We use the preprocessed DTU dataset from [2D Gaussian Splatting](https://github.com/hbb1/2d-gaussian-splatting) for training. Please refer to that repo for downloading instructions.
+Evaluation scripts are adapted from [GOF](https://github.com/autonomousvision/gaussian-opacity-fields/tree/main) and [RaDe-GS](https://baowenz.github.io/radegs/).
+
+Please run the following commands to evaluate MILo on a single DTU scan:
+```bash
+# Training with regular densification
+python train_regular_densification.py -s -m